---
abstract: 'Manifest $N=2$ supersymmetric Toda systems are constructed from the $sl(n,n+1)$ superalgebras by taking into account their complex structure. In the $n\rightarrow \infty$ continuum limit an $N=2$ extension of the $(2+1)$-dimensional heavenly equation is obtained. The integrability is guaranteed by the existence of a supersymmetric Lax pair. We further analyze the properties of the $(1+1)$-dimensionally reduced system. Its bosonic sector is of hydrodynamical type. This is not the case for the whole supersymmetric system which, however, is super-hydrodynamical when properly expressed in terms of a supergeometry involving superfields and fermionic derivatives.'
author:
- |
Ziemowit Popowicz${}^a$[^1] and Francesco Toppan${}^b$[^2]\
\
${}^a$[*Institute for Theoretical Physics, University of Wroc[ł]{}aw,*]{}\
[*50-204 Wroc[ł]{}aw, pl. Maxa Borna 9, Poland*]{}\
\
${}^b$[*CBPF, CCP, Rua Dr.*]{} [*Xavier Sigaud 150,*]{}\
[*cep 22290-180 Rio de Janeiro (RJ), Brazil*]{}
title: |
The $N=2$ Supersymmetric Heavenly Equation\
and Its Super-Hydrodynamical Reduction
---
Introduction
============
The so-called “Heavenly Equation" in $(1+2)$ dimensions was first introduced to describe solutions of the complexified Einstein equations [@ple]. It has received a lot of attention [@sav] for its remarkable integrability properties. Although, to our knowledge, no attempt has so far been made in the literature to analyze the corresponding situation in the case of supergravity, the supersymmetric heavenly equation has been introduced and discussed in the context of superintegrable models. It appears [@saso] as the continuum limit of a system of equations known as the “supersymmetric Toda lattice hierarchy", introduced in [@opeh]. Such systems of equations and their hidden $N=2$ supersymmetric structures have been vastly investigated [@leso]. On the other hand, it has recently been pointed out that the dimensional reduction of the $(1+2)$-dimensional heavenly equation to a $(1+1)$-dimensional system is related to Benney-like integrable hierarchies of hydrodynamical equations [@cdp]. Another very recent reference discussing the hydrodynamical properties of the reduced heavenly equation is [@fp].
In this paper we introduce the $N=2$ supersymmetric heavenly equation, clarifying its integrability properties and its algebraic origin as the $n\rightarrow \infty$ limit of a class of $N=2$ supersymmetric Toda equations. Further, we investigate its $(1+1)$-dimensional reduction, which provides the supersymmetric hydrodynamical equations extending the Benney-like hierarchy of reference [@cdp]. The integrability properties of these systems are induced by the integrability of the $N=2$ superheavenly equation and its Lax representation discussed below. This Lax representation is not in the form of a supersymmetric dispersionless Lax operator, as we discuss at length later.
The plan of the paper is as follows. In the next section we introduce the discretized $N=2$ Toda lattice hierarchy (as well as its continuum limit, the $N=2$ superheavenly equation) as a supersymmetric Toda system based on the $sl(n|n+1)$ superalgebra. It is worth remarking that these are the same superalgebras originally employed in the construction of [@saso]. On the other hand, the superalgebras of the $sl(n|n+1)$ series admit a complex structure, allowing the construction of superToda systems based on $N=2$ superfields, according to the scheme of [@ivto]. For this reason, the results of the next section provide a generalization of those of [@saso]. The supersymmetric Lax pairs guaranteeing the integrability of the systems (for any given $n$ and in the limit $n\rightarrow \infty$) are explicitly constructed. They provide a dynamical formulation in the $x_\pm$ plane (without involving the extra time direction $\tau$ of the superheavenly equation). In the following, we investigate the dimensional reduction of both the discretized and continuum systems from $(1+2)$ to $(1+1)$ dimensions. We obtain a supersymmetric system of equations with an interesting and subtle property. Unlike its purely bosonic subsector, the whole system involving fermions is not of hydrodynamical type. However, the same system, once expressed in terms of a supergeometry involving superfields and fermionic derivatives, satisfies a graded extension of the hydrodynamical property, which can naturally be called a “super-hydrodynamical type of equation". By this expression we mean that the given $(1+1)$-dimensional supersymmetric equations can be recast into a system of non-linear superfield equations involving only first-order derivations w.r.t. the supersymmetric fermionic derivatives. This super-hydrodynamical system furnishes the supersymmetrization of the system introduced in [@cdp].
The $N=2$ Superheavenly equation.
=================================
The construction of a continuum limit (for $n\rightarrow\infty$) of a discretized superToda system requires a presentation of the system in terms of a specific Cartan matrix. The symmetric choice in [@kac] for the Cartan matrix of the superalgebra $sl(n|n+1)$ does not allow one to do so. On the other hand [@fss], the Cartan matrix $a_{ij}$ of $sl(n|n+1)$ can be chosen to be antisymmetric, with the only non-vanishing entries given by $a_{ij}
=\delta_{j,i+1}-\delta_{j,i-1}$.
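As a quick sanity check (our own illustration, not part of the original construction), the Cartan matrix described above can be built explicitly and its antisymmetry verified; the $2n$ rows correspond to the $2n$ simple fermionic roots of $sl(n|n+1)$:

```python
import numpy as np

def cartan_sl_n_np1(n):
    """Antisymmetric Cartan matrix with a[i, i+1] = 1, a[i, i-1] = -1
    (i.e. a_ij = delta_{j,i+1} - delta_{j,i-1}) for sl(n|n+1);
    the rank, hence the matrix size, is 2n."""
    size = 2 * n
    a = np.zeros((size, size), dtype=int)
    for i in range(size):
        if i + 1 < size:
            a[i, i + 1] = 1
        if i - 1 >= 0:
            a[i, i - 1] = -1
    return a

a = cartan_sl_n_np1(3)
assert (a == -a.T).all()        # antisymmetric, as claimed
assert (np.diag(a) == 0).all()  # vanishing diagonal
```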
The Cartan generators $H_i$ and the fermionic simple roots $F_{\pm i}$ satisfy $$\begin{aligned}
\relax [H_i, F_{\pm j}] &=& \pm a_{ij}F_{\pm j}, \nonumber\\
\{F_i, F_{-j} \} &=& \delta_{ij}.\end{aligned}$$ The continuum limit of the [@saso] construction could have been performed for any superalgebra admitting an $n\rightarrow
\infty$ limit, such as $sl(n|n)$, etc. On the other hand, the superalgebras of the series $sl(n|n+1)$ are special because they admit a complex structure and therefore the possibility of defining an $N=2$ manifestly supersymmetric Toda system, following the prescription of [@ivto]. This is the content of the present section.
At first we introduce the $N=2$ fermionic derivatives $D_\pm$, ${\overline D}_\pm$, acting on the $x_\pm$ $2D$ spacetime ($\theta_\pm$ and ${\overline\theta}_\pm$ are Grassmann coordinates). The $2D$ spacetime can be either Euclidean ($x_\pm = x\pm t$) or Minkowskian ($x_\pm = x\pm i t$).
We have $$\begin{aligned}
D_\pm &=& \frac{\partial}{\partial\theta_\pm}
-i{\overline\theta}_\pm \partial_\pm,\nonumber\\ {\overline D}_\pm
&=&
-\frac{\partial}{\partial{\overline\theta}_\pm}+i\theta_{\pm}\partial_\pm
.\end{aligned}$$ They satisfy the anticommutator algebra $$\begin{aligned}
\{ D_\pm, {\overline D}_\pm \}&=& 2i\partial_\pm\end{aligned}$$ and are vanishing otherwise.
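The anticommutation relation can be checked component by component. The sketch below (our own illustration) realizes $D$ and ${\overline D}$ for a single chirality on the component expansion $F = a + \theta\, b + {\overline\theta}\, c + \theta{\overline\theta}\, d$, encoding a superfield as its 4-vector of components, and verifies $\{D, {\overline D}\} = 2i\partial$:

```python
import sympy as sp

x = sp.symbols('x')
I = sp.I
a, b, c, d = [sp.Function(n)(x) for n in 'abcd']

# superfield F = a + theta*b + thetabar*c + theta*thetabar*d,
# stored as the component list [a, b, c, d]
def D(F):
    # D = d/dtheta - i*thetabar*d/dx acting on components
    a_, b_, c_, d_ = F
    return [b_, 0, d_ - I*sp.diff(a_, x), I*sp.diff(b_, x)]

def Dbar(F):
    # Dbar = -d/dthetabar + i*theta*d/dx acting on components
    a_, b_, c_, d_ = F
    return [-c_, d_ + I*sp.diff(a_, x), 0, I*sp.diff(c_, x)]

F = [a, b, c, d]
anti = [u + v for u, v in zip(D(Dbar(F)), Dbar(D(F)))]   # {D, Dbar} F
target = [2*I*sp.diff(u, x) for u in F]                  # 2i dF/dx
assert all(sp.simplify(u - v) == 0 for u, v in zip(anti, target))
```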
Chiral ($\Phi$) and antichiral ($\overline \Phi$) $N=2$ superfields are respectively constrained to fulfill the conditions $$\begin{aligned}
{\overline D}_\pm {\Phi}&=& 0, \nonumber\\ D_\pm {\overline \Phi}
&=&0.\end{aligned}$$ Accordingly, a generic chiral superfield ${\Phi}$ is expanded in its bosonic ${\varphi}$, ${F}$ (the latter auxiliary) and fermionic component fields (${\psi}_{+},{\psi}_{-}$) as $$\begin{aligned}
{\Phi }({\hat x}_\pm, { \theta}_\pm) &=& {\varphi} +{\theta}_{+}
{\psi}_{+} +{\theta}_{-}\psi_{-} + {\theta}_+{\theta}_-{ F},\end{aligned}$$ with ${\varphi}$, ${ \psi}_\pm$ and ${ F}$ evaluated in ${\hat
x}_\pm = x_\pm+i{\overline\theta}_\pm\theta_\pm$.
Similarly, the antichiral superfield ${{\overline\Phi}}$ is expanded as $$\begin{aligned}
{\overline\Phi }({\overline x}_\pm, {\overline \theta}_\pm) &=&
{\overline\varphi} +{\overline\theta}_{+} {\overline\psi}_{+}+
{\overline\theta}_{-}{\overline\psi}_{-}
+{\overline\theta}_+{\overline\theta}_-{\overline F},\end{aligned}$$ with all component fields evaluated in ${\overline x}_\pm = x_\pm
-i{\overline \theta}_\pm \theta_\pm$.
Due to the complex structure of $sl(n|n+1)$, its Cartan and its simple (positive and negative) root sector can be split into its conjugated parts $$\begin{aligned}
&
\begin{array}{ll}
{\cal H} \equiv \{ H_{2k-1}\}, & \quad {\overline {\cal
H}}\equiv \{ H_{2k}\}, \\
{\cal F}_+ \equiv\{ F_{2k-1}\}, & \quad {\cal F}_- \equiv \{ F_{-(2k-1)}\},\\
{\overline {\cal F}_+} \equiv\{
F_{-{2k}} \}, &\quad {\overline{\cal F}_-} \equiv \{ F_{2k}\},
\end{array}&\end{aligned}$$ for $k=1,2,\ldots, n$.
Following [@ivto], we can introduce the $sl(n|n+1)$ $N=2$ superToda dynamics, defined for the Cartan-valued chiral (${\bf
\Phi}$) and antichiral (${\bf {\overline\Phi}}$ ) $N=2$ superfields, $$\begin{aligned}
{\bf \Phi} &=&
\sum_{k=1}^n\Phi_{k}H_{2k-1},\nonumber\\ {\bf {\overline\Phi}} &=&
\sum_{k=1}^n{\overline\Phi}_k H_{2k},\end{aligned}$$ through the Lax operators ${\cal L}_\pm$ and ${\overline{\cal
L}_\pm}$, given by $$\begin{aligned}
{\cal L}_+ &=& D_+{\bf \Phi} + e^{\bf{\overline\Phi}} {{F}_+}
e^{-\bf{\overline{\Phi}}},\nonumber\\ {\cal L}_- &=&- {F}_-\end{aligned}$$ and $$\begin{aligned}
{\overline{\cal L}}_+ &=& {\overline
F}_+,\nonumber\\ {\overline {\cal L}}_- &=& {\overline D}_-{\bf
{\overline \Phi}} + e^{\bf{\Phi}} {\overline F}_-e^{-\bf{\Phi}},\end{aligned}$$ where $$\begin{aligned}
&
\begin{array}{ll}
F_+ = \sum_k F_{2k-1}, & \quad F_- =\sum_k F_{-(2k-1)},\\
{\overline F}_+ = \sum_k F_{-(2k)},& \quad
{\overline F}_- = \sum_k F_{2k},
\end{array}
&\end{aligned}$$ (as before, the sum is over the positive integers up to $n$).
Explicitly, we have $$\begin{aligned}
{\cal L}_+ &=& \sum_k (D_+\Phi_{k} H_{2k-1} + e^{{\overline
\Phi}_{k-1}-{\overline \Phi}_{k}}F_{2k+1}),\nonumber\\ {\cal L}_-
&=& -\sum_kF_{-(2k-1)},\nonumber\\ {\overline{\cal L}}_+ &=&
\sum_k F_{-2k},\nonumber\\ {\overline{\cal L}}_- &=&
\sum_k({\overline D}_-{\overline \Phi}_k H_{2k} +
e^{\Phi_k-\Phi_{k+1}}F_{2k}).\end{aligned}$$ Notice that here and in the following we have formally set ${\overline\Phi}_0\equiv 0$.
The zero-curvature equations are given by $$\begin{aligned}
D_+ {\cal L}_- +D_-{\cal L}_+ +\{{\cal L}_+,{\cal L}_-\} &=&
0,\nonumber\\ {\overline D}_+{\overline {\cal L}}_- +{\overline
D}_-{\overline{\cal L}}_+ + \{{\overline{\cal
L}}_+,{\overline{\cal L}}_-\} &=& 0,\end{aligned}$$ so that the following set of equations for the constrained (anti)chiral $N=2$ superfields is obtained $$\begin{aligned}
D_+D_- \Phi_k &=& -e^{ {\overline
\Phi}_{k-1}-{{\overline\Phi}_k} },\nonumber\\ {\overline
D}_+{\overline D}_-{\overline\Phi}_k &=& -e^{\Phi_k-\Phi_{k+1}},\end{aligned}$$ for the positive integers $k=1,2,\ldots , n$.
By setting, $$\begin{aligned}
B_k = \Phi_k-\Phi_{k+1},&\quad& {\overline B}_k =
{\overline\Phi}_k -{\overline\Phi}_{k+1},\end{aligned}$$ we get the two systems of equations $$\begin{aligned}
D_+D_- B_k = e^{{\overline B}_k} -e^{{\overline B}_{k-1}},&\quad&
{\overline D}_+{\overline D}_- {\overline B}_k = e^{B_{k+1}} -
e^{B_{k}},\end{aligned}$$ for $k=1,2,\ldots, n$.
By identifying $k$ as a discretized extra time-like variable $\tau$ we obtain, in the continuum limit for $n\rightarrow \infty$, $$\begin{aligned}
\label{2shev}
D_+D_- {B} =
\partial_\tau e^{{\overline B}},&\quad & {\overline
D}_+{\overline D}_- {{\overline B}}=
\partial_\tau e^{B},\end{aligned}$$ which corresponds to the $N=2$ extension of the superheavenly equation.
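The continuum limit replaces the nearest-neighbour difference by a $\tau$-derivative: writing ${\overline B}_k = {\overline B}(k\epsilon)$, one has $e^{{\overline B}_k} - e^{{\overline B}_{k-1}} \approx \epsilon\, \partial_\tau e^{\overline B}$. A minimal numerical illustration of this bookkeeping (the smooth profile below is an arbitrary choice of ours):

```python
import numpy as np

# discrete -> continuum: with tau = k*eps, the rescaled lattice difference
# (e^{Bbar_k} - e^{Bbar_{k-1}})/eps approximates d/dtau e^{Bbar}
eps = 1e-4
tau = np.linspace(0.5, 1.5, 101)
F = lambda s: np.exp(np.sin(s))          # e^{Bbar} for Bbar(tau) = sin(tau)
diff_k = (F(tau) - F(tau - eps)) / eps   # rescaled nearest-neighbour difference
dF = np.cos(tau) * F(tau)                # exact derivative d/dtau e^{sin(tau)}
assert np.max(np.abs(diff_k - dF)) < 1e-3
```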
Indeed, the presence in the previous equations of the first derivative in $\tau$ is an artifact of the $N=2$ superfield formalism. Once the equations are solved at the level of the component fields and the auxiliary fields are eliminated through their equations of motion, we are left with a system of second-order equations.
We have, in terms of the component fields, $$\begin{aligned}
B&=& \Big (1+i{\overline\theta}_+\theta_+\partial_{+}
+i{\overline\theta}_-\theta_-\partial_{-}-{\overline\theta}_+\theta_+
{\overline\theta}_-\theta_-\partial_+\partial_-\Big ) {\cal C},
\nonumber \\ {\overline B}&=& \Big
(1-i{\overline\theta}_+\theta_+\partial_+
-i{\overline\theta}_-\theta_-\partial_- -{\overline\theta}_+
\theta_+{\overline\theta}_-\theta_-\partial_+\partial_-\Big
){\overline {\cal C}},\end{aligned}$$ where $$\begin{aligned}
{\cal C} &=& \Big (b + \theta_+\psi_{+}+\theta_-\psi_{-} +
\theta_+\theta_-a\Big ),\nonumber \\ {\overline {\cal C}}&=& \Big
({\overline
b}+{\overline\theta}_+{\overline\psi}_++{\overline\theta}_
-{\overline\psi}_-+{\overline\theta}_+{\overline\theta}_-{\overline
a} \Big ),\end{aligned}$$ with $a$, ${\overline a}$ bosonic auxiliary fields. All component fields are evaluated in $x_{\pm}$ only.
The equations of motion of the $N=2$ superheavenly equation are explicitly given in components by $$\begin{aligned}
-a &=& (e^{\overline b})_\tau ,\nonumber\\
2i\partial_-\psi_+&=&({\overline \psi}_-e^{\overline b})_\tau ,
\nonumber\\ -2 i\partial_+\psi_- &=& ({\overline
\psi}_+e^{\overline b})_\tau ,\nonumber\\
-4\partial_+\partial_-b&=& ({\overline a}e^{\overline
b}-{\overline \psi}_+{\overline\psi}_-e^{\overline b})_\tau ,\end{aligned}$$ and $$\begin{aligned}
- {\overline a}&=&(e^b)_\tau,\nonumber\\
2i\partial_-{\overline\psi}_+&=&(\psi_-e^b)_\tau,\nonumber\\ -2
i\partial_+{\overline\psi}_-&=& (\psi_+e^b)_\tau,\nonumber\\
-4\partial_+\partial_-{\overline b}&=&
(ae^b-\psi_+\psi_-e^b)_\tau.\end{aligned}$$ Eliminating the auxiliary fields we obtain the systems $$\begin{aligned}
\label{shev1}
2i\partial_-\psi_+&=&({\overline \psi}_-e^{\overline b})_\tau ,\nonumber\\
-2 i\partial_+\psi_- &=& ({\overline
\psi}_+e^{\overline b})_\tau ,\nonumber\\ 4\partial_+\partial_-b
&=& \Big ( (e^{b})_{\tau} e^{\overline b}+{\overline
\psi}_+{\overline\psi}_-e^{\overline b} \Big )_{\tau} ,\end{aligned}$$ and $$\begin{aligned}
\label{shev2}
2i\partial_-{\overline\psi}_+&=&(\psi_-e^b)_\tau,\nonumber\\ -2
i\partial_+{\overline\psi}_-&=& (\psi_+e^b)_{\tau},\nonumber\\
4\partial_+\partial_-{\overline b} &=& \Big (
(e^{\overline b})_{\tau}e^{b} + \psi_+\psi_-e^b\Big )_\tau.\end{aligned}$$ The bosonic component fields $b$, ${\overline b}$, as well as the fermionic ones $\psi_\pm$, ${\overline\psi}_\pm$, are all independent. The equations (\[shev1\], \[shev2\]) are a manifestly $N=2$ supersymmetric extension of the system introduced in [@saso].
Super-hydrodynamical reductions of the superheavenly equation.
==============================================================
The equations (\[shev1\]) and (\[shev2\]) correspond to a $(1+2)$-dimensional system, manifestly relativistic and $N=2$ supersymmetric w.r.t. the two-dimensional subspace spanned by the $x_\pm$ coordinates, while possessing an extra bosonic time-like dimension denoted as $\tau$. A very interesting example of an integrable system in $(1+1)$ dimensions, currently under intense investigation, can be recovered by applying a dimensional reduction to, say, the bosonic sector of the $(1+2)$-dimensional heavenly equation. We can refer to such a system, perhaps a bit pedantically, as the $(1+1)$-dimensionally reduced heavenly equation. It can be obtained by setting the fermionic variables to zero, $\psi_\pm, {\overline \psi}_\pm\equiv 0$, while $b$, ${\overline b}$ can be consistently constrained as ${\overline b}=b$. The $x_\pm$ coordinates are identified, i.e. $x_+=x_-=x$.
The resulting equation, by changing the normalizations (setting $f=2b$ and $t=2\tau$) can be conveniently written as $$\label{zhev}
f_{tt} = (e^f)_{xx}.$$ This equation has recently received a lot of attention in the literature. It is an example of a completely integrable equation of hydrodynamical type. It admits a multihamiltonian structure and possesses an infinite number of conserved charges in involution. It has recently been shown that it can be recovered via a dispersionless Lax representation given by $$L:=p^{-1}\Big ( 1+gp^2+hp^4 \Big )^{\frac{1}{2}},$$ with $g,h$ functions of $x,t$, while $p$ is the classical momentum. The equations of motion are read from $$\label{poiss}
\frac{\partial L}{\partial t} = \frac{p}{2} \Big \{ L^{2}_{\leq
0}, L \Big \}$$ where $ \{\ast , \ast \}$ denotes the usual Poisson brackets and $L^{2}_{\leq 0}$, defined in [@cdp], is explicitly given by $L^{2}_{\leq 0} = p^{-2}+g$.
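This representation can be verified symbolically. The sketch below is our own check, with the bracket sign convention $\{A,B\} = A_x B_p - A_p B_x$ (an assumption chosen so as to reproduce the signs of the resulting two-component system $h_t = hg_x$, $g_t = h_x$):

```python
import sympy as sp

x, t, p = sp.symbols('x t p')
g = sp.Function('g')(x, t)
h = sp.Function('h')(x, t)

S = 1 + g*p**2 + h*p**4
L = sp.sqrt(S) / p                # dispersionless Lax operator
M = p**-2 + g                     # L^2 truncated at powers p^0 and below

def pb(A, B):
    # Poisson bracket convention {A, B} = A_x B_p - A_p B_x (sign assumption)
    return sp.diff(A, x)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, x)

# impose h_t = h g_x and g_t = h_x in dL/dt, then compare with (p/2){M, L}
lhs = sp.diff(L, t).subs({sp.Derivative(g, t): sp.diff(h, x),
                          sp.Derivative(h, t): h*sp.diff(g, x)})
rhs = p/2 * pb(M, L)
residue = sp.expand((lhs - rhs) * sp.sqrt(S))
assert sp.simplify(residue) == 0
```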
The equation (\[poiss\]) leads to $$\begin{aligned}
\label{2comp}
\frac{\partial h}{\partial t} &=& hg_x \\ \nonumber \frac{\partial
g}{\partial t} &=& h_x \label{hydro}\end{aligned}$$ The equation (\[zhev\]) is recovered after eliminating $g$ from the previous system and setting $h=e^{f}$. In the (\[hydro\]) form, the “$(1+1)$-dimensionally reduced heavenly equation" is shown to be a hydrodynamical type of equation.
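As an elementary consistency check (the particular scaling solution below is our own illustration, not taken from the text), one can verify that the pair $g=-2x/t$, $h=x^2/t^2$ solves the two-component system, and that $f=\log h$ then satisfies (\[zhev\]):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
g = -2*x/t                        # trial scaling solution (our illustration)
h = x**2/t**2

# the two-component hydrodynamical system: h_t = h g_x, g_t = h_x
assert sp.simplify(sp.diff(h, t) - h*sp.diff(g, x)) == 0
assert sp.simplify(sp.diff(g, t) - sp.diff(h, x)) == 0

# eliminating g with h = e^f reproduces f_tt = (e^f)_xx
f = sp.log(h)
assert sp.simplify(sp.diff(f, t, 2) - sp.diff(sp.exp(f), x, 2)) == 0
```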
An interesting question, whose solution as we will see is non-trivial, is whether the $(1+1)$-dimensional reduction (for $x_+=x_-=x$) of the full $N=2$ supersymmetric heavenly equation is also of hydrodynamical type. The answer is subtle. The introduction of the fermionic fields $\psi_\pm$, ${\overline\psi}_\pm$, whose equations of motion are of first order in the extra bosonic time, does not allow one to represent the system dimensionally reduced from (\[shev1\], \[shev2\]) in hydrodynamical form, due to a mismatch with the second-order equations satisfied by the bosonic fields $b$, ${\overline b}$. On the other hand, it is quite natural to expect that important properties are not spoiled by the supersymmetrization. This is indeed the case with the hydrodynamical reduction, when properly understood. The key point is that the nice features of supersymmetry are best grasped in the proper context of a supergeometry, which must be expressed through a superfield formalism. It is in this framework that a super-hydrodynamical reduction of the system dimensionally reduced from (\[shev1\], \[shev2\]) becomes possible. Indeed, for $x_+=x_-=x$, we can write down the (\[2shev\]) system through the following set of [*superfield*]{} equations, of first order in the fermionic derivatives, $$\begin{aligned}
\label {comp1}
{\cal D}_{-} {\cal B} & = & N_{\tau}, \\ \nonumber {\cal D}_{+} N
& = & e^{\overline B}\end{aligned}$$ and $$\begin{aligned}
\label {comp2}
{\overline {\cal D}}_{-} {\overline {\cal B}} & = & {\overline
N}_{\tau}, \\ \nonumber {\overline {\cal D}}_{+} {\overline N} & =
& e^B\end{aligned}$$ with $N$ and ${\overline N}$ subsidiary fermionic superfields. Taking into account the following expansions $$\begin{aligned}
e^{\overline B}&=&(1-i{\overline\theta}_+\theta_+\partial_+
-i{\overline\theta}_-\theta_-\partial_-
-{\overline\theta}_+\theta_+{\overline\theta}_-\theta_-\partial_+\partial_-)
e^{\overline {\cal C} } = \nonumber \\
&& {\cal D}_{+}\theta_{+}
\Big (1-i{\overline {\theta_{-}}}\theta_{-}\partial_-\Big )e^{\overline {\cal C}}\nonumber \\
e^{B}&=&(1+i{\overline\theta}_+\theta_+\partial_+
+i{\overline\theta}_-\theta_-\partial_-
-{\overline\theta}_+\theta_+{\overline\theta}_-\theta_-\partial_+\partial_-)
e^{{\cal C} } = \nonumber \\
&& -
{\overline {\cal D}}_{+}{\overline\theta}_+
\Big ( 1+i{\overline\theta_-}\theta_-\partial_-\Big )e^{\cal C}\end{aligned}$$ we easily obtain the following solutions for $N$, ${\overline N}$ $$\begin{aligned}
N &=&
\Big (\theta_{+}-i\theta_{+}
{\overline\theta}_{-}\theta_{-}\partial_{-} \Big ) e^{\overline {\cal C}}
+ {\cal D}_{+}\Omega_1 \nonumber \\
{\overline N} &=&
- \Big ({\overline\theta}_{+}+
i{\overline\theta}_{+}{\overline\theta}_{-}\theta_{-}\partial_{-} \Big )
e^{\cal C} + {\overline {\cal D}}_{+}{\overline\Omega}_1
\label{nnbar}\end{aligned}$$ in terms of arbitrary bosonic superfunctions $\Omega_1$, ${\overline\Omega}_1$.
If we set, as we are free to choose, $$\begin{aligned}
\Omega_1 &=& \theta_{+} \Big
(\Psi_{-}-2i{\overline\theta}_{-}b_{-} -i{\overline\theta}_{-}
\theta_{-} \Psi_{-,-}\Big ) \nonumber \\ {\overline\Omega}_1 &=&
{\overline\theta_{+}} \Big
({\overline\Psi}_{-}-2i\theta_{-}{\overline b}_{-}
+i{\overline\theta}_{-}\theta_{-} {\overline\Psi}_{-,-}\Big )\end{aligned}$$ and substitute the values of $N$, ${\overline N}$ given in (\[nnbar\]) back into (\[comp1\]) and (\[comp2\]), we obtain a system of equations equivalent to the $x_+=x_-=x$ dimensional reduction of the equations (\[shev1\]) and (\[shev2\]). This proves the existence of a super-hydrodynamical reduction expressed in a superfield formalism. If we express this set of equations in terms of the component fields, as mentioned before, we do not obtain an equation of hydrodynamical type. This fact should not be regarded as a defect of our system, but rather as a virtue of the supersymmetry. In several examples, this being one, the introduction of the super-formalism allows extending both the properties of the systems and the techniques used to investigate them to cases for which the ordinary methods are of no help. In a somewhat related area we can cite, e.g., the derivation of bosonic integrable hierarchies associated with the bosonic sector of super-Lie algebras [@anp]; they lie outside the standard classification and cannot be produced from ordinary Lie algebras alone.
Let us finally discuss the restriction $N=2\rightarrow N=1$, i.e. down to $N=1$ supersymmetry. It is recovered by setting $$\begin{aligned}
{\overline\theta}_+=-\theta_{+},
&\quad&{\overline\theta}_-=-\theta_-, \nonumber\\ {\overline {\cal
D}}_+= -{\cal D}_+, &\quad& {\overline {\cal D}}_-=-{\cal D}_-.\end{aligned}$$
The equations (\[shev1\]) and (\[shev2\]) now read $$\begin{aligned}
\label{sushev}
{\cal D}_+{\cal D}_-{\cal C} = \partial_{\tau}e^{\overline{\cal
C}}, &\quad& {\cal D}_+{\cal D}_-{\overline {\cal C}} =
\partial_{\tau}e^{\cal C},\end{aligned}$$ with $${\cal D}_\pm = \frac{\partial}{\partial\theta_\pm}
+i\theta_\pm \partial_\pm.$$ The equations in (\[sushev\]) have a structure in components similar to that of the equations (\[shev1\]) and (\[shev2\]). We have indeed $$\begin{aligned}
\partial_-\psi_+&=&i({\overline \psi}_-e^{\overline b})_\tau ,\nonumber\\
\partial_+\psi_- &=& -i({\overline\psi}_+e^{\overline b})_\tau ,\nonumber\\
\partial_+\partial_-b &=& \Big ( (e^{b})_{\tau} e^{\overline b}+{\overline
\psi}_+{\overline\psi}_-e^{\overline b} \Big )_{\tau} ,\end{aligned}$$ and $$\begin{aligned}
\partial_-{\overline\psi}_+&=& i(\psi_-e^b)_\tau,\nonumber\\
\partial_+{\overline\psi}_-&=& -i(\psi_+e^b)_{\tau},\nonumber\\
\partial_+\partial_-{\overline b} &=& \Big (
(e^{\overline b})_{\tau}e^{b} + \psi_+\psi_-e^b\Big )_\tau.\end{aligned}$$
If we further dimensionally reduce our $N=1$ system to $(1+1)$ dimensions, by constraining the variables $x_{\pm}$ ($x_+=-x_-=it$) and setting ${\overline {\cal C}}={\cal C}$, we obtain the system $$\begin{aligned}
\partial_t\psi_+ &=& \partial_{\tau}\Big ( e^b\psi_- \Big ) \nonumber \\
\partial_t\psi_- &=& \partial_{\tau} \Big ( e^b\psi_+ \Big ) \nonumber \\
\partial_t^2 b &=& \partial_{\tau} \Big ( \frac{1}{2} (e^{2b})_{\tau} +e^b\psi_+\psi_-\Big
).\end{aligned}$$ Like its $N=2$ counterpart, this system is not of hydrodynamical type. However, a straightforward modification of the procedure previously discussed for the $N=2$ case easily shows that it is super-hydrodynamical when expressed in terms of superfields and superderivatives.
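In the purely bosonic sector ($\psi_\pm = 0$), the last equation of the system reduces to $\partial_t^2 b = \partial_\tau\big(\frac{1}{2}(e^{2b})_\tau\big)$, again of heavenly type. As a quick consistency check (the scaling solution below is our own illustration, not taken from the text), $b = \log(\tau/t)$ solves it:

```python
import sympy as sp

tau, t = sp.symbols('tau t', positive=True)
b = sp.log(tau/t)                 # trial scaling solution (our illustration)

# bosonic sector (psi_+ = psi_- = 0):  b_tt = (1/2) (e^{2b})_{tau tau}
lhs = sp.diff(b, t, 2)
rhs = sp.diff(sp.exp(2*b), tau, 2) / 2
assert sp.simplify(lhs - rhs) == 0
```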
Conclusions.
============
In this paper we have first introduced manifest $N=2$ supersymmetric discrete Toda equations in association with the complex-structure superalgebras of the $sl(n,n+1)$ series. We proved that in the $n\rightarrow \infty$ limit the discrete variable can be regarded as a continuum extra bosonic time $\tau$. The corresponding system is a manifestly $N=2$ supersymmetric extension of the standard heavenly equation in $(2+1)$ dimensions, the supersymmetry being associated with the plane spanned by the bosonic $x_\pm$ coordinates and their fermionic counterparts, which are naturally expressed in an $N=2$ superspace formalism. A supersymmetric Lax pair, proving the integrability of the system, was constructed. It expresses the dynamics w.r.t. the $\pm$ superspace coordinates ($\tau$ in this respect can be considered either as an auxiliary parameter or as the discrete label of the original formulation).
We further investigated the dimensional reduction of the above systems from the $(2+1)$-dimensional to the $(1+1)$-dimensional case. In the purely bosonic case the reduced systems of equations are of a special type: they are hydrodynamical systems of non-linear equations. The supersymmetric case is much subtler. This system is not of hydrodynamical type, since the presence of the fermions spoils this property. However, when properly expressed in terms of a supergeometry (essentially, superfields and fermionic derivatives), it is of super-hydrodynamical type (the precise meaning of this term has been explained in the Introduction). Rather than a vice, this can be considered a virtue of the supersymmetry: it allows extending the notion of hydrodynamical equations beyond the realm of the systems ordinarily allowed.
Finally, it is worth mentioning a formal, yet challenging, problem concerning the supersymmetrization. While the integrability of the systems under consideration is automatically guaranteed by the supersymmetric and relativistic Lax pairs in the $\pm$ plane mentioned above, one can wonder whether a dispersionless Lax operator directly expressing the $\tau$ dynamics (i.e., in terms of the extra bosonic time) could be found. The answer is indeed positive for the purely bosonic sector. In the supersymmetric case, however, no closed dispersionless Lax operator is known at present. This repeats an analogous situation already encountered for the polytropic gas systems [@dp]. The problem of constructing supersymmetric dispersionless Lax operators for these related systems is still open.\
\
Z.P. is grateful for the hospitality at CBPF, where this work was initiated, while F.T. is grateful for the hospitality at the Institute of Theoretical Physics of the University of Wroc[ł]{}aw, where the paper has been finished.
[99]{}
J.F. Plebański, [*J. Math. Phys.*]{} [**16**]{} (1975) 2395.
J.F. Plebański and H. Garcia-Compean, [*Acta Physica Polon.*]{} [**B 26**]{} (1995) 3; [*Int. J. Mod. Phys.*]{} [**A 10**]{} (1995) 3371; E. Alifanto, G. Soliani and L. Solombrino, [*Lett. Math. Phys.*]{} [**41**]{} (1997) 379; M.V. Saveliev, [*Comm. Math. Phys.*]{} [**121**]{} (1989) 283; M.V. Saveliev and A.M. Vershik, [*Comm. Math. Phys.*]{} [**126**]{} (1989) 367; [*Phys. Lett.*]{} [**A 143**]{} (1990) 121.
M.V. Saveliev and P. Sorba, [*Lett. Math. Phys.*]{} [**22**]{} (1991) 119.
M. Olshanetsky, [*Comm. Math. Phys.*]{} [**88**]{} (1983) 63; Z. Popowicz, [*J. Phys.*]{} [**A 19**]{} (1986) 1495; J. Evans and T. Hollowood, [*Nucl. Phys.*]{} [**B 352**]{} (1991) 723.
O. Lechtenfeld and A. Sorin, [*J. Nonlin. Math. Phys.*]{} [**8**]{} (2001) 183; [*ibid.*]{} [**7**]{} (2000) 433.
A. Constandache, A. Das and Z. Popowicz, “A New Benney-like Lattice and a New Dispersionless Toda Hierarchy", nlin.SI/0204053.
E.V. Ferapontov and M.V. Pavlov, “Hydrodynamic reductions of the heavenly equation", nlin.SI/0301048.
E. Ivanov and F. Toppan, [*Phys. Lett.*]{} [**B 309**]{} (1993) 28.
V.G. Kac, [*Comm. Math. Phys.*]{} [**53**]{} (1977) 31.
L. Frappat, P. Sorba and A. Sciarrino, “Dictionary on Lie Superalgebras", hep-th/9607161.
H. Aratyn, E. Nissimov and S. Pacheva, [*Phys. Lett.*]{} [**A 201**]{} (1995) 293.
A. Das and Z. Popowicz, [*Phys. Lett.*]{} [**A 296**]{} (2002) 15.
[^1]: [*e-mail: [email protected]*]{}
[^2]: [*e-mail: [email protected]*]{}
---
abstract: 'When a resonance associated with electromagnetically induced transparency (EIT) in an atomic ensemble is modulated by an off-resonant standing light wave, a band of frequencies can appear for which light propagation is forbidden. We show that dynamic control of such a bandgap can be used to coherently convert a propagating light pulse into a stationary excitation with non-vanishing photonic component. This can be accomplished with high efficiency and negligible noise even at a level of few-photon quantum fields, thereby facilitating possible applications in quantum nonlinear optics and quantum information.'
address: 'Physics Department and ITAMP, Harvard University, Cambridge, Massachusetts 02138'
author:
- 'A. André and M. D. Lukin'
title: Manipulating Light Pulses via Dynamically controlled Photonic Bandgap
---
Techniques for coherent control of light-matter interaction are now actively explored for storing and manipulating quantum states of photons. In particular, using electromagnetically induced transparency (EIT) [@EIT1; @EIT2] and adiabatic following of “dark-state polaritons” [@polaritons], the group velocity of light pulses can be dramatically decelerated and their quantum state can be mapped onto metastable collective states of atomic ensembles [@storage].
In contrast to such a coherent absorption process, the present Letter describes how a propagating light pulse can be converted into a stationary excitation with non-vanishing photonic component. This is accomplished via controlled modification of the photonic density of states in EIT media by modulating the refractive index with an off-resonant standing light wave. By varying the properties of the resulting photonic band structure in time, the original light pulse can be converted into an excitation inside the bandgap, where its propagation is forbidden. Long storage of excitations with non-vanishing photonic component may open interesting prospects for the enhancement of nonlinear optical interactions [@nlo1; @nlo3]. In particular, an intriguing and practically important [@Q-comp; @Q-info] application of this effect for interactions between few-photon fields is discussed in the concluding paragraph of this Letter.
Before proceeding, we note that there exists a substantial literature on photonic bandgap [@PBG] materials. Recently, photonic bandgap structures have been investigated theoretically [@mabuchi] for strong coupling of single atoms with photons. Photonic bandgaps based on the interaction with atoms in an optical lattice were also investigated [@deutsch]. We also note other related work on EIT-based control of the propagation properties of light in atomic media [@EIT-prop].
The key idea of the present approach can be qualitatively understood by first considering a medium consisting of stationary atoms with a level structure shown in Fig. 1a. The atoms are interacting with a weak signal field and two strong fields. The running wave control field $\Omega_c$ is tuned to resonant frequency of the $|b\rangle\rightarrow|c\rangle$ transition. In the absence of the field $\Omega_s$, this situation corresponds to the usual EIT: in the vicinity of a frequency corresponding to two-photon resonance the medium becomes transparent for a signal field. This transparency is accompanied by a steep variation of the refractive index.
The dispersion relation can be further manipulated by applying an off-resonant standing wave field with Rabi frequency $\Omega_s(z)=2\Omega_s\cos(k_sz)$ and a frequency detuning $\Delta$. This field induces an effective shift of the resonant frequency (light shift) that varies periodically in space, resulting in a spatial modulation of the index of refraction according to $\delta n(z) = (c/v_g)
4\frac{\Omega_s^2}{\Delta}\cos^2(k_sz)$, where $c/v_g$ is the ratio of speed of light in vacuum to group velocity in the medium. When the modulation depth is sufficiently large, signal light propagating near atomic resonance in the forward $z$ direction with wavenumber $k$ near $k_s$ may undergo Bragg scattering into the backward propagating mode with wavenumber $-k$. In direct analogy to e.g., optical interferometers, the scattering of the counterpropagating fields into each other can modify the photonic density of states. In particular, a range of frequencies (“photonic bandgap”) can appear for which light propagation is forbidden [@yariv]. According to a standard technique to analyze the resulting band structure, Bloch’s theorem can be applied so that the propagating solutions obey $E(z+a)=e^{i K a}E(z)$, where $K$ is the Bloch wave vector. Imposing this condition and assuming that the wave vectors of the fields are close ($k\simeq k_s$), we can solve for the band structure and obtain near two-photon resonance $$\cos(Ka)=\cosh\left(\frac{g^2N}{\Omega_c^2}a\sqrt{\Delta_s^2-
(\omega-\omega_{ba})^2}\right),$$ where $g=\wp\sqrt{\frac{\nu}{2\hbar\epsilon_0 V}}$ is the atom-field coupling constant, $N$ is the number of atoms, $\Delta_s=\Omega_s^2/\Delta$ is the amplitude of the light shift modulation, $\wp$ is the dipole moment of the $a-b$ transition, $V$ the quantization volume and the factor $g^2N/\Omega_c^2$ corresponds to $c/v_g$. For frequencies such that $|\omega-\omega_{ba}|\leq|\Delta_s|$ a [*bandgap*]{} is created: the Bloch wavevector acquires an imaginary part and the propagation of waves in the medium is forbidden. For an outside observer such a medium can be viewed as a mirror: an incident wave with frequency inside the bandgap would undergo almost perfect reflection. Calculations in Fig. 2 indicate that this qualitative result remains valid even for realistic EIT conditions, including a finite transparency bandwidth and finite ground-state decoherence rate $\gamma_{bc}$.
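As a numerical illustration of this bandgap condition, one can evaluate the Bloch relation directly. The sketch below is not part of the original analysis, and all parameter values are arbitrary, chosen only to exhibit the effect:

```python
import numpy as np

# Evaluate cos(K a) = cosh((g^2 N / Omega_c^2) a sqrt(Delta_s^2 - (w - w_ba)^2))
# and recover the (generally complex) Bloch wave vector K.
a = 1.0          # period of the index modulation (arbitrary units)
slowdown = 5.0   # g^2 N / Omega_c^2 = c / v_g (illustrative value)
Delta_s = 0.2    # amplitude of the light-shift modulation (illustrative)

def bloch_K(delta_w):
    """Bloch wave vector K for the detuning delta_w = omega - omega_ba."""
    arg = np.emath.sqrt(Delta_s**2 - delta_w**2)  # real inside the gap
    rhs = np.cosh(slowdown * a * arg)             # cosh(i x) = cos(x) outside
    return np.emath.arccos(rhs) / a               # complex arccos when |rhs| > 1

inside = bloch_K(0.1 * Delta_s)   # |omega - omega_ba| < |Delta_s|
outside = bloch_K(5.0 * Delta_s)  # far outside the gap

print(abs(inside.imag) > 1e-6)    # True: K complex, propagation forbidden
print(abs(outside.imag) < 1e-6)   # True: K real, propagating Bloch wave
```

Inside the gap the imaginary part of $K$ makes the wave evanescent, which is the mirror-like reflection described above.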
A specific, distinguishing feature of the present scheme is the possibility of [*dynamically*]{} changing the properties of the medium by switching in time the fields $\Omega_s(t)$ and $\Omega_c(t)$ on and off. In particular, by combining the techniques of [@storage] with the present idea, the following scenario can be implemented: first, with the standing wave turned off $\Omega_s=0$, a forward propagating pulse is stored in the medium as a Raman coherence between levels $|b\rangle$ and $|c\rangle$. Then, by switching on both the “control” field $\Omega_c$ and the standing wave field $\Omega_s$, the pulse can be released into the bandgap and initially propagates in the forward direction. In the presence of a bandgap, the forward ($\hat{{\mathcal E}}_+$) and backward ($\hat{{\mathcal E}}_-$) components are coupled due to Bragg scattering off the index grating, so that amplitude in the forward mode $+k$ is converted into amplitude in the backward mode $-k$ and vice versa. In this case, the pulse can be effectively trapped in the photonic bandgap.
We now turn to a detailed description of the dynamic trapping procedure. We are interested here in the propagation of fields with possibly non-trivial statistics, such as single photon fields, so that a quantum description is used. In the presence of the standing wave it is convenient to decompose the propagating signal fields into two slowly varying components $\hat{{\mathcal E}}_+$ (propagating forward) and $\hat{{\mathcal E}}_-$ (propagating backward) so that the electric field is $\hat{E}(z,t) = \sqrt{\frac{\hbar \nu}{2\epsilon_0
V}}\left[\sum_{\sigma=\pm}\hat{{\mathcal
E}}_\sigma(z,t)e^{i(\nu/c)(\sigma z-ct)}+{\rm h.c.}\right]$ and couples resonantly to the transition between the ground state $|a\rangle$ and the excited state $|b\rangle$, the carrier frequency of the optical field being $\nu=\omega_{ab}$. Two time-dependent classical driving fields with Rabi frequencies $\Omega_s(t)$ and $\Omega_c(t)$ are used to control the propagation as shown in Fig. 1a.
To describe the quantum properties of the medium, we use collective slowly varying atomic operators [@fleisch1] $\hat{\sigma}_{\mu\nu}(z,t)=\frac{1}{N_z}\sum_{j=1}^{N_z}|\mu_j\rangle
\langle\nu_j|e^{-i\omega_{\mu\nu}t}$ where the sum is performed over a small but macroscopic volume containing $N_z\gg 1$ atoms around position $z$. The interaction Hamiltonian is then, in a rotating frame, $$\begin{aligned}
\hat{H}&=&\frac{N}{L}\int
dz\left\{\Delta\hat{\sigma}_{dd}-\left[g(\hat{{\mathcal E}}_+e^{ik_0z}
+\hat{{\mathcal E}}_-e^{-ik_0z})\hat{\sigma}_{ab}\right.\right. \nonumber
\\
&+& \left.\left. \Omega_c e^{ik_cz}\hat{\sigma}_{ac}
+2\Omega_s \cos(k_sz)\hat{\sigma}_{dc}+{\rm h.c.}\right]\right\},\end{aligned}$$ where $k_0=\nu/c$ and $N$ is the number of atoms.
Since the two propagating fields $\hat{{\mathcal E}}_+$ and $\hat{{\mathcal E}}_-$ interact with the atoms, we expect an optical coherence $\hat{\sigma}_{ba}$ to appear as well as a Raman coherence $\hat{\sigma}_{bc}$. Moreover these two fields will give rise to coherences with distinct spatial variations, i.e., varying as $e^{ik_0z}$ for the component of $\hat{\sigma}_{ba}$ induced by $\hat{{\mathcal E}}_+$ while that due to $\hat{{\mathcal E}}_-$ will vary as $e^{-ik_0z}$. We therefore decompose the optical and Raman coherences according to these two distinct spatial variations $\hat{\sigma}_{ba}(z,t)=\hat{\sigma}_{ba}^+(z,t)e^{ik_0z}+
\hat{\sigma}_{ba}^-(z,t)e^{-ik_0z}$ and $\hat{\sigma}_{bc}(z,t)=\hat{\sigma}_{bc}^+(z,t)e^{i(k_0-k_c)z}+
\hat{\sigma}_{bc}^-(z,t)e^{-i(k_0+k_c)z}$. Using slowly varying envelopes, we then have the equations of motion for the forward and backward modes $$\begin{aligned}
\left(\frac{\partial}{\partial t}\pm c\frac{\partial}{\partial
z}\right)\hat{{\mathcal E}}_\pm(z,t)=igN\hat{\sigma}_{ba}^\pm(z,t). \end{aligned}$$ Assuming weak quantum fields and solving perturbatively, we find to lowest order in the weak fields and in an adiabatic approximation (assuming $\Omega_s(t)$ and $\Omega_c(t)$ change in time slowly enough [@polaritons]) $$\begin{aligned}
\hat{\sigma}_{ba}^\pm(z,t)&=&-\frac{i}{\Omega_c}\left[\frac{\partial}{\partial
t}\hat{\sigma}_{bc}^\pm(z,t)-i\Delta_se^{\pm 2i\Delta kz}
\hat{\sigma}_{bc}^\mp(z,t)\right] \\
\hat{\sigma}_{bc}^\pm(z,t)&=&-\frac{g\hat{{\mathcal E}}_\pm(z,t)}{\Omega_c}
-i\frac{\hat{F}_{ba}^\pm(t)}{\Omega_c},\end{aligned}$$ where $\Delta k=k_s-k_0$, $\Delta_s=|\Omega_s|^2/\Delta$ is the amplitude of the spatially modulated light shift caused by the standing wave field $\Omega_s$ and $\hat{F}_{ba}^\pm(t)$ are $\delta$-correlated noise forces. Note that in the adiabatic limit the noise forces are negligible [@polaritons]. The propagation equations are thus $$\begin{aligned}
\left(\frac{\partial}{\partial t}\pm c\frac{\partial}{\partial
z}\right)
\hat{{\mathcal E}}_\pm(z,t)=
&-&\frac{g^2N}{\Omega_c}\frac{\partial}{\partial t}
\frac{\hat{{\mathcal E}}_\pm(z,t)}{\Omega_c}\nonumber
\\
&+&i\frac{g^2N}{\Omega_c}\Delta_s
\frac{\hat{{\mathcal E}}_\mp(z,t)}{\Omega_c}e^{\pm 2i\Delta kz},\end{aligned}$$ which indicates that the forward and backward slowly propagating modes become coupled. Specifically, the first term on the right-hand side gives rise to propagation at the group velocity $v_g=c/(1+\frac{g^2N}{\Omega_c^2})$ [@slowlight] while the second term gives rise to coupling between the forward and backward propagating modes. This coupling is optimum when the effective phase matching $\Delta k=k_s-k_0=0$ is achieved [@yariv]. Note that both the “control” field $\Omega_c$ and the standing wave amplitude $\Omega_s$ can be time-dependent and that as long as changes are slow enough (adiabatic limit [@polaritons]) the above equations describe the correct dynamics of the coupled modes.
To obtain a solution in the case of time-dependent fields $\Omega_c(t)$ and $\Omega_s(t)$, we introduce new quantum fields $\hat{\Psi}_+(z,t)$ and $\hat{\Psi}_-(z,t)$ (forward and backward propagating dark-state polaritons [@polaritons]) $\hat{\Psi}_\pm(z,t)=\cos\theta(t)\hat{{\mathcal E}}_\pm(z,t)
-\sin\theta(t)\sqrt{N}\hat{\sigma}_{bc}^\pm(z,t)$, where $\tan^2\theta(t)=\frac{g^2N}{\Omega_c(t)^2}$ is the mixing angle between the photon and matter components of the polariton. The polaritons then obey the coupled equations $$\begin{aligned}
\left(\frac{\partial}{\partial\tau}
\pm c\frac{\partial}{\partial z}\right)
\hat{\Psi}_\pm &=& i\Delta_s\tan^2\theta(t)\hat{\Psi}_\mp,
\label{propeqn}\end{aligned}$$ where $\tau(t)=\int^{t}dt'\;\cos^2\theta(t')$. Eq. (\[propeqn\]) describes propagation with velocity $v_g(t)=c\cos^2\theta(t)$ of the two polaritons (traveling in opposite directions) and coupling with rate $\Delta_s(t)\sin^2\theta(t)$. Note that in the limit $c\gg v_g$, the photonic component $\hat{{\mathcal E}}_\pm\simeq (\Omega_c/g\sqrt{N})\hat{\Psi}_\pm$ is finite for non-zero control field $\Omega_c$.
We consider now the scenario in which the standing wave beams are initially off and the control field is on, with Rabi frequency $\Omega_c^{in}$ (corresponding to a group velocity $v_g^{in}$). A forward propagating photon wavepacket can then be stored in the medium in the form of a Raman coherence $\hat{\sigma}_{bc}^+(z,t)$ and subsequently released [@storage]. We consider the case when the standing wave field is first switched on, establishing the bandgap, followed by the control field (with Rabi frequency $\Omega_c^0$ corresponding to a group velocity $v_g^0$, possibly different from $v_g^{in}$), releasing the pulse in the bandgap medium. For simplicity we consider the case when the standing wave is switched on before or simultaneously with the control field, so that, the coupling rate $\Delta_s(\tau)\tan^2\theta(\tau)$ does not depend on $\tau$. In this case, we solve (\[propeqn\]) by Fourier transforming $\hat{\Psi}_\pm(z)=\frac{1}{2\pi}\int{dk\;e^{ikz}\hat{\Psi}_\pm(k)}$ to obtain $$\begin{aligned}
\hat{\Psi}_+(k,\tau) &=&
\left[\cos(\zeta\tau)-i\frac{kc}{\zeta}\sin(\zeta\tau)\right]\hat{\Psi}_+(k,0)
\nonumber \\
\hat{\Psi}_-(k,\tau) &=&
i\frac{\chi}{\zeta}\sin(\zeta\tau)\hat{\Psi}_+(k,0),
\label{solrabi}\end{aligned}$$ where $\chi\equiv\Delta_s(\tau)\tan^2\theta(\tau)$ and $\zeta=\sqrt{(kc)^2+\chi^2}$. According to (\[solrabi\]), the various Fourier components (wavenumber $k$) of the pulse cycle back and forth between the corresponding forward and backward modes at a rate which depends weakly on $k$. In particular, when the spatial extent of the pulse inside the medium is large enough that the relevant range of wavenumbers is negligible compared to the strength of the coupling between the forward and backward modes, pulse distortion is negligible and the spatial envelopes have the time dependence $\hat{\Psi}_+(z,\tau)=\cos(\chi\tau)\hat{\Psi}_+(z,0)$ and $\hat{\Psi}_-(z,\tau)=i\sin(\chi\tau)\hat{\Psi}_+(z,0)$. The wavepacket periodically cycles between a forward and backward propagating component, the result of which is [*trapping*]{} of the pulse in the medium as shown in Fig. 3. The wavepacket is trapped as a combination of light pulse and Raman coherence.
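This cycling can be checked by integrating the $k\simeq 0$ limit of Eq. (\[propeqn\]) numerically. The sketch below uses arbitrary illustrative values and compares only the mode populations $|\hat{\Psi}_\pm|^2$, so it is insensitive to the phase convention of the backward mode:

```python
import numpy as np

# k ~ 0 limit of the coupled polariton equations:
# d(Psi+)/dtau = i*chi*Psi-,  d(Psi-)/dtau = i*chi*Psi+.
chi = 2.0                            # coupling rate (illustrative)
dtau = 1e-4
psi_p, psi_m = 1.0 + 0j, 0.0 + 0j    # pulse initially all in the forward mode

taus = np.arange(0, 1.0, dtau)
for _ in taus:
    # RK2 midpoint step for the 2x2 linear system
    kp1, km1 = 1j * chi * psi_m, 1j * chi * psi_p
    kp2 = 1j * chi * (psi_m + 0.5 * dtau * km1)
    km2 = 1j * chi * (psi_p + 0.5 * dtau * kp1)
    psi_p += dtau * kp2
    psi_m += dtau * km2

tau = taus[-1] + dtau
print(abs(abs(psi_p)**2 - np.cos(chi * tau)**2) < 1e-3)   # True
print(abs(abs(psi_p)**2 + abs(psi_m)**2 - 1.0) < 1e-3)    # True
```

The populations cycle as $\cos^2(\chi\tau)$ and $\sin^2(\chi\tau)$ while their sum stays constant, i.e., the excitation is trapped rather than lost.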
The above analysis involves an adiabatic approximation and ignores the decay of the Raman coherence. In order to neglect the motion compared to the coupling we require $\chi\gg kc$; since the maximum $k$ can be estimated from the initial length of the pulse in the medium, this is equivalent to requiring $\Delta_sT\gg \frac{v_g^0}{v_g^{in}}$, where $T$ is the duration of the initial pulse. As seen from (\[solrabi\]), the effect of non-zero values of $k$ is that the trapped pulse becomes spatially distorted. Expanding $\sqrt{\chi^2+(kc)^2}\tau\simeq
[\chi+(kc)^2/(2\chi)]\tau$ we need $\tau\ll\chi/(kc)^2$, which gives, after expressing $\tau$ in terms of real time $t$, that $\frac{t_{int}}{T}\ll\Delta_sT(v_g^{in}/v_g^0)^2$, where $t_{int}$ is the maximum time during which the pulse may be trapped without suffering distortion. Furthermore, taking into account the limits imposed by adiabaticity (i.e., the modulation of the index occurs within the transparency window, $\Delta_s\ll(\Omega_c^0)^2/\gamma$) and the fact that the trapped pulse must fit inside the medium when travelling at the reduced group velocity, we find that the trapping or interaction time is limited by $$t_{int}\lesssim\min\left(\frac{g^2N}{\gamma_{ab}(c/L)}\frac{v_g^{in}}{v_g^0}T
,\;\frac{1}{\gamma_{bc}}\right),$$ where $L$ is the length of the medium. The first limiting quantity corresponds to the density–length product and can be rather large for an optically dense medium.
To summarize, we have shown that by spatially modulating the dispersive feature of the EIT resonance it is possible to induce a photonic bandgap. By dynamically controlling the resulting band structure, a propagating light pulse can be converted into a stationary excitation which is effectively trapped in the medium.
To conclude, we note some interesting avenues opened by this work. First, we note that the present work is not restricted to the use of stationary or cold atoms; for example, a Doppler-free configuration involving pairs of copropagating fields is shown in Fig. 1b. In this case, the two polaritons are associated with distinct atomic states $|c\rangle$ and $|c'\rangle$. Each polariton corresponds to a Doppler-free Raman configuration and they are coupled by a Doppler-free two-photon transition. Second, this work may open interesting prospects for nonlinear optics. For example, a trapped photonic excitation can be used to induce a light shift via interaction with another atom-like polariton. Large nonlinear phase shifts at the single-photon level can be expected and open up the way for possible applications in quantum non-linear optics and quantum information without the limitations associated with traveling wave configurations [@nlo3; @imamoglu] and without invoking cavity QED techniques [@cqed]. Finally, it is intriguing to consider extensions of these ideas to manipulate photonic bandgaps in condensed matter.
This work was supported by the NSF through ITR program and the grant to the ITAMP.
[99]{}
M. O. Scully and M. S. Zubairy, [*Quantum Optics*]{}, (Cambridge University Press, Cambridge, England, 1997).
S. E. Harris, Phys. Today [**50**]{}, No. 7, 36 (1997).
M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. [**84**]{}, 5094 (2000); M. D. Lukin, S. F. Yelin and M. Fleischhauer, Phys. Rev. Lett. [**84**]{}, 4232 (2000).
C. Liu [*et al.*]{}, Nature [**409**]{}, 490 (2001); D. Phillips [*et. al.*]{}, Phys. Rev. Lett. [**86**]{}, 783 (2001).
S. E. Harris and Y. Yamamoto, Phys. Rev. Lett. [**81**]{}, 3611 (1998).
S. E. Harris and L. V. Hau, Phys. Rev. Lett. [**82**]{}, 4611 (1999).
D. Bouwmeester, A. K. Ekert, A. Zeilinger (eds.), [*The Physics of Quantum Information*]{}, (Springer , New York, 2000).
M. A. Nielsen and I. L. Chuang., [*Quantum Computation and Quantum Information*]{}, (Cambridge University Press, New York, 2000).
J. D. Joannopoulos, R. D. Meade and J. N. Winn, [*Photonic Crystals: Molding the Flow of Light*]{}, (Princeton University Press, Princeton, 1995); J. P. Dowling and H. Everitt, Photonic and Sonic Bandgap Bibliography, URL http://home.earthlink.net/~jpdowling/pbgbib.html
J. Vučković, M. Lončar, H. Mabuchi and A. Scherer, Phys. Rev. E [**65**]{}, 016608 (2001).
I. H. Deutsch, R. J. C. Spreeuw, S. L. Rolston and W. D. Phillips, Phys. Rev. A [**52**]{}, 1394 (1995).
O. Kocharovskaya, Y. Rostovtsev and M. O. Scully, Phys. Rev. Lett. [**86**]{}, 628 (2001); U. Leonhardt, Nature [**415**]{}, 406 (2002).
A. Yariv and P. Yeh, [*Optical Waves in Crystals*]{}, (Wiley, New-York, 1984).
M. Fleischhauer and Th. Richter, Phys. Rev. A [**51**]{}, 2430 (1995).
L. V. Hau [*et al.*]{}, Nature **397**, 594 (1999); M. Kash [*et al.*]{} Phys. Rev. Lett. [**82**]{}, 5229 (1999); D. Budker [*et al.*]{}, [*ibid*]{} [**83**]{}, 1767 (1999).
M. D. Lukin and A. Imamoğlu, Phys. Rev. Lett. [**84**]{}, 1419 (2000).
A. Kuhn, M. Hennrich and G. Rempe, e-print quant-ph/0204147; C.J. Hood, H.J. Kimble and J. Ye, Phys. Rev. A [**64**]{}, 033804 (2001).
---
abstract: 'We prove that if an involution in a ring is the sum of an idempotent and a nilpotent then the idempotent in this decomposition must be $1$. As a consequence, we completely characterize weakly nil-clean rings introduced recently in \[Breaz, Danchev and Zhou, Rings in which every element is either a sum or a difference of a nilpotent and an idempotent, J. Algebra Appl., DOI: 10.1142/S0219498816501486\].'
author:
- |
Janez Šter\
Faculty of Mechanical Engineering\
University of Ljubljana\
bibliography:
- 'References.bib'
date: 'December 7, 2015'
title: Nil Clean Involutions
---
In this note rings are unital. $U(R)$, $\operatorname{Id}(R)$, $\operatorname{Nil}(R)$ and $\operatorname{Nil}^*(R)$ stand for the set of units, the set of idempotents, the set of nilpotents and the upper nilradical of a ring $R$, respectively. $\mathbb{Z}_n$ stands for the set of integers modulo $n$. An *involution* in a ring means an element $a$ satisfying $a^2=1$.
Following [@diesl], we say that an element in a ring is *nil clean* if it is the sum of an idempotent and a nilpotent, and a ring is nil clean if every element is nil clean. The main result in this note is the following:
\[glavna\] Let $R$ be a ring with an involution $a\in R$. If $a$ is the sum of an idempotent $e$ and a nilpotent $q$ then $e=1$. In particular, every nil clean involution in a ring is unipotent (i.e. $1$ plus a nilpotent).
Write $a=e+q$ with $e\in\operatorname{Id}(R)$ and $q\in\operatorname{Nil}(R)$, and denote $f=1-e\in\operatorname{Id}(R)$ and $r=q(1+q)\in\operatorname{Nil}(R)$. From $fq=f(a-e)=fa$ we compute $fr=fq(1+q)=fa(1+a-e)=fa(f+a)=faf+fa^2=faf+f$, and similarly $rf=faf+f$. Hence $fr=rf$, so that $r$ is a nilpotent which commutes with $f$, $e$, $q$ and $a$. Accordingly, $$f=fa^2=fqa=fr(1+q)^{-1}a=f(1+q)^{-1}a\cdot r$$ is a nilpotent and hence $f=0$, as desired.
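As an independent sanity check (not part of the note), the proposition can be verified exhaustively in the small noncommutative ring $M_2(\mathbb{Z}_2)$:

```python
import itertools

# Brute-force check in the 2x2 matrix ring over Z_2: whenever an involution
# a satisfies a = e + q with e idempotent and q nilpotent, then e = 1.
def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def mat_add(A, B):
    return tuple(tuple((A[i][j] + B[i][j]) % 2 for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
ring = [((b[0], b[1]), (b[2], b[3]))
        for b in itertools.product((0, 1), repeat=4)]

idempotents = [e for e in ring if mat_mul(e, e) == e]
nilpotents = [q for q in ring if mat_mul(q, q) == ((0, 0), (0, 0))]  # q^2 = 0 in 2x2
involutions = [a for a in ring if mat_mul(a, a) == I]

for e in idempotents:
    for q in nilpotents:
        if mat_add(e, q) in involutions:
            assert e == I   # the proposition: the idempotent must be 1
print("verified over M_2(Z_2)")
```

The assertion never fails: the only idempotent appearing in a nil clean decomposition of an involution is the identity matrix.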
Following [@breazdanchevzhou], we say that a ring is *weakly nil-clean* if every element is either a sum or a difference of a nilpotent and an idempotent.
\[gllema\] If $R$ is a weakly nil-clean ring with $2\in U(R)$ then $R/\operatorname{Nil}^*(R)\cong\mathbb{Z}_3$.
Choose any idempotent $e\in\operatorname{Id}(R)$, and set $a=1-2e$. By assumption, either $a$ or $-a$ is nil clean. If $a$ is nil clean then, since $a^2=1$, Proposition \[glavna\] gives that $a-1=-2e$ is a nilpotent, so that $e$ is a nilpotent and hence $e=0$. Similarly, if $-a$ is nil clean then, since $(-a)^2=1$, Proposition \[glavna\] gives that $-a-1=-2(1-e)$ is a nilpotent, so that $1-e$ is a nilpotent and hence $e=1$. This proves that $R$ has only trivial idempotents. Accordingly, since $R$ is weakly nil clean, every element of $R$ must be either $q$ or $1+q$ or $-1+q$ for some $q\in\operatorname{Nil}(R)$. From this, one quickly obtains that $\operatorname{Nil}(R)$ must actually form an ideal in $R$, so that $R/\operatorname{Nil}^*(R)$ can have only $3$ elements and hence $R/\operatorname{Nil}^*(R)\cong\mathbb{Z}_3$, as desired. (Alternatively, considering that $R$ is abelian, $R/\operatorname{Nil}^*(R)\cong\mathbb{Z}_3$ can be also obtained from [@breazdanchevzhou Theorem 12].)
Using the above lemma, we have:
A ring is weakly nil-clean if and only if it is either nil clean or isomorphic to $R_1\times R_2$ where $R_1$ is nil clean and $R_2/\operatorname{Nil}^*(R_2)\cong\mathbb{Z}_3$.
Follows from Lemma \[gllema\] together with [@breazdanchevzhou Theorem 5].
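For a quick illustration of the characterization (a sketch with a hypothetical helper name, not from the note), one can test the rings $\mathbb{Z}_n$ by brute force:

```python
# Decide weak nil-cleanness of Z_n by enumerating idempotents and nilpotents.
def weakly_nil_clean_Zn(n):
    nil = [q for q in range(n) if any(pow(q, k, n) == 0 for k in range(1, n + 1))]
    idem = [e for e in range(n) if (e * e) % n == e]
    # every element must be a sum or a difference of a nilpotent and an idempotent
    reachable = {(e + q) % n for e in idem for q in nil} | \
                {(q - e) % n for e in idem for q in nil}
    return all(a in reachable for a in range(n))

print([n for n in range(2, 13) if weakly_nil_clean_Zn(n)])   # [2, 3, 4, 6, 8, 9, 12]
```

The moduli found are exactly the $n \le 12$ with $\mathbb{Z}_n$ nil clean ($n=2,4,8$) or of the form covered by the theorem ($n=3,6,9,12$).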
Proposition \[glavna\] can be generalized to arbitrary algebraic elements of order $2$ as follows. Let $R$ be an algebra over a commutative ring $k$, and let $a\in R$ be an element satisfying $\alpha a^2+\beta a+\gamma=0$, with $\alpha,\beta,\gamma\in k$, and suppose that $a=e+q$ with $e\in\operatorname{Id}(R)$ and $q^n=0$. Then one can show that $r=q(\alpha q+\alpha+\beta)$ is a nilpotent commuting with $e$, which yields, similarly as in Proposition \[glavna\], that $$(\alpha+\beta)^ne+(\alpha+\beta)^{n-1}\gamma$$ is also a nilpotent. Note that this result indeed generalizes Proposition \[glavna\] (taking $\alpha=1$, $\beta=0$ and $\gamma=-1$ yields that $e-1$ is a nilpotent, so that $e=1$). However, for orders of algebraicity higher than $2$ this argument no longer seems to work.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The author is indebted to Professor T.Y. Lam for a helpful discussion on the previous version of this work.
---
abstract: 'The integrable Davey–Stewartson system is a linear combination of the two elementary flows that commute: $\mathrm{i} q_{t_1} + q_{xx} + 2q\partial_y^{-1}\partial_x (|q|^2) =0$ and $\mathrm{i} q_{t_2} + q_{yy} + 2q\partial_x^{-1}\partial_y (|q|^2) =0$. In the literature, each elementary Davey–Stewartson flow is often called the Fokas system because it was studied by Fokas in the early 1990s. In fact, the integrability of the Davey–Stewartson system dates back to the work of Ablowitz and Haberman in 1975; the elementary Davey–Stewartson flows, as well as another integrable $(2+1)$-dimensional nonlinear Schrödinger equation $\mathrm{i} q_{t} + q_{xy} + 2 q\partial_y^{-1}\partial_x (|q|^2) =0$ proposed by Calogero and Degasperis in 1976, appeared explicitly in Zakharov’s article published in 1980. By applying a linear change of the independent variables, an elementary Davey–Stewartson flow can be identified with a $(2+1)$-dimensional generalization of the integrable long wave–short wave interaction model, called the Yajima–Oikawa system: $\mathrm{i} q_{t} + q_{xx} + u q=0$, $u_t + c u_y = 2(|q|^2)_x$. In this paper, we propose a new integrable semi-discretization (discretization of one of the two spatial variables, say $x$) of the Davey–Stewartson system by constructing its Lax-pair representation; the two elementary flows in the semi-discrete case indeed commute. By applying a linear change of the continuous independent variables to an elementary flow, we also obtain an integrable semi-discretization of the $(2+1)$-dimensional Yajima–Oikawa system.'
author:
- 'Takayuki <span style="font-variant:small-caps;">Tsuchida</span>'
title: ' Integrable semi-discretizations of the Davey–Stewartson system and a $(2+1)$-dimensional Yajima–Oikawa system. I '
---
Introduction
============
The Davey–Stewartson system [@DS74] (also known as the Benney–Roskes system [@Benney69]) is a $(2+1)$-dimensional generalization of the nonlinear Schrödinger equation; its mathematical form can be written as [@Nizh82]
\[continuousDS\] $$\mathrm{i} q_{t} + a \left( q_{xx} + 2F q \right) + b \left( q_{yy} + 2G q \right) =0.
$$ Here, $a$ and $b$ are real constants, the subscripts denote the partial differentiation and $F$ (in the case of $a \neq 0$) and $G$ (in the case of $b \neq 0$) are nonlocal real-valued potentials defined as $$F_y := (|q|^2)_x, \hspace{5mm} G_x := (|q|^2)_y.$$
The integrability of the Davey–Stewartson system (\[continuousDS\]) can be traced back to the work of Ablowitz and Haberman in 1975 [@Hab75] (also see [@Morris77; @Anker; @Ab78; @Cornille]), who gave its Lax-pair representation [@Lax] up to a coordinate transformation; note that the Lax-pair representation in $2+1$ dimensions expressed in operator form is often referred to as the Manakov triad representation [@Manakov_triad].
The Davey–Stewartson system (\[continuousDS\]) is a linear combination of the two elementary flows: $$\mathrm{i} q_{t_1} + q_{xx} + 2F q =0, \hspace{5mm} F_y= (|q|^2)_x,
\label{element1}$$ and $$\mathrm{i} q_{t_2} + q_{yy} + 2G q =0, \hspace{5mm} G_x = (|q|^2)_y.
\label{element2}$$ The elementary Davey–Stewartson flow (\[element1\]) (or (\[element2\])) is often referred to as the Fokas system because it appeared in his paper [@Fokas94] published in 1994. Note, however, that the elementary Davey–Stewartson flow (\[element1\]), as well as another integrable $(2+1)$-dimensional generalization of the nonlinear Schrödinger equation $\mathrm{i} q_{t} + q_{xy} + 2 q\partial_y^{-1}\partial_x (|q|^2) =0$, originally proposed by Calogero and Degasperis [@Calo76] in 1976, appeared explicitly in Zakharov’s article [@Zakh] published in 1980. In addition, a linear change of the independent variables: $$\widetilde{t} = t_1 + y, \hspace{5mm} \widetilde{x}=x, \hspace{5mm} \widetilde{y}= c y,
\label{Galilean-like}$$ with an arbitrary real constant $c$, converts the elementary Davey–Stewartson flow (\[element1\]) to a $(2+1)$-dimensional generalization of an integrable long wave–short wave interaction model (known as the Yajima–Oikawa system [@YO76]): $$\mathrm{i} q_{t} + q_{xx} + u q =0, \hspace{5mm} u_t + c u_y= 2 (|q|^2)_x,
\label{2DYO}$$ which was proposed by Mel’nikov [@Mel83] in 1983. Here, $u := 2F$ and the tilde is omitted for notational brevity. The $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]) with $c=1$ is often referred to as the Maccari system [@Maccari1].
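The effect of the change of variables (\[Galilean-like\]) on the nonlocal constraint can be verified mechanically. The following sketch (illustrative only, using a concrete test function) checks the underlying chain rule $\partial_y = \partial_{\widetilde{t}} + c\,\partial_{\widetilde{y}}$, which is what converts $F_y = (|q|^2)_x$ into $u_t + c u_y = 2(|q|^2)_x$ with $u := 2F$:

```python
import sympy as sp

# Under t~ = t1 + y, x~ = x, y~ = c*y, a y-derivative of any composed
# function F(t~, x~, y~) equals F_t~ + c*F_y~ by the chain rule.
x, y, t1, c, T, X, Y = sp.symbols('x y t1 c T X Y')
F = sp.sin(T)*X + sp.cos(Y)                       # arbitrary smooth test function
composed = F.subs({T: t1 + y, X: x, Y: c*y})      # F after the substitution
chain = (sp.diff(F, T) + c*sp.diff(F, Y)).subs({T: t1 + y, X: x, Y: c*y})
print(sp.simplify(sp.diff(composed, y) - chain) == 0)   # True
```

Applied to $F$ with $F_y=(|q|^2)_x$, this is exactly the second equation of (\[2DYO\]) after setting $u = 2F$.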
From (\[element1\]) and (\[element2\]), we obtain the relation $\left( F_{t_2} \right)_y = \mathrm{i} \left( q_y q^\ast - q q^\ast_y \right)_{xy}$, which implies that $$F_{t_2} = \mathrm{i} \left( q_y q^\ast - q q^\ast_y \right)_x + \xi, \hspace{5mm} \xi_y=0.
\label{F_{t_2}}$$ Here, the asterisk denotes the complex conjugate. Similarly, we also obtain the relation $\left( G_{t_1} \right)_x = \mathrm{i} \left( q_x q^\ast - q q^\ast_x \right)_{yx}$, which implies that $$G_{t_1} = \mathrm{i} \left( q_x q^\ast - q q^\ast_x \right)_y + \eta, \hspace{5mm} \eta_x=0.
\label{G_{t_1}}$$ With the aid of (\[F\_[t\_2]{}\]) and (\[G\_[t\_1]{}\]), we can show that the two elementary flows (\[element1\]) and (\[element2\]) commute [@Fokas94; @Kaji90], i.e., $q_{t_1 t_2}=q_{t_2 t_1}$ if and only if $\xi=\eta$; in this case, we can set $\xi=\eta=0$ without loss of generality by a change of variables. In short, the commutativity of the elementary flows[^1] is guaranteed only if the “constants” of integration in the nonlocal potentials $F$ and $G$ are chosen appropriately. More specifically, the two elementary flows commute if and only if the $y$-independent value of $F_{t_2} - \mathrm{i} \left( q_y q^\ast - q q^\ast_y \right)_x$ evaluated at any fixed value of $y$ is equal to the $x$-independent value of $G_{t_1} - \mathrm{i} \left( q_x q^\ast - q q^\ast_x \right)_y$ evaluated at any fixed value of $x$, which is thus $(x,y)$-independent and can be set equal to zero by redefining the dependent variable $q$.
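The differential identity behind (\[F\_[t\_2]{}\]) can be confirmed symbolically. In the sketch below (a formal check, treating $q$ and $q^\ast$ as independent functions obeying (\[element2\]) and its conjugate with real $G$), the two sides agree identically:

```python
import sympy as sp

# If i q_{t_2} + q_{yy} + 2 G q = 0 with G real, then
# (|q|^2)_{t_2} = i (q_y q* - q q*_y)_y.  Check this formally, with qc
# standing for the complex conjugate q*.
y = sp.symbols('y')
q = sp.Function('q')(y)
qc = sp.Function('qc')(y)
G = sp.Function('G')(y)
qt  = sp.I*(sp.diff(q, y, 2) + 2*G*q)      # q_{t_2} from the flow
qct = -sp.I*(sp.diff(qc, y, 2) + 2*G*qc)   # conjugate equation
lhs = qt*qc + q*qct                        # (|q|^2)_{t_2}
rhs = sp.I*sp.diff(sp.diff(q, y)*qc - q*sp.diff(qc, y), y)
print(sp.expand(lhs - rhs) == 0)   # True
```

Differentiating $F_y = (|q|^2)_x$ with respect to $t_2$ and inserting this identity then yields (\[F\_[t\_2]{}\]) upon integration in $y$.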
In this and the next paper, we propose integrable semi-discretizations of the Davey–Stewartson system (\[continuousDS\]) and the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]) by constructing their Lax-pair representations. We consider the discretization of only one of the spatial variables $x$ and $y$, which is referred to as a “semi-discretization” in this and the next paper. This is in contrast to our previous work on an integrable discretization of the Davey–Stewartson system [@TD11] (also see some preceding studies in [@Hu06; @Hu07]) wherein both spatial variables are discretized. The semi-discrete Davey–Stewartson system proposed in this paper is not obtainable by taking a continuous limit of one spatial variable in the discrete Davey–Stewartson system proposed in our previous paper [@TD11].
This paper is organized as follows. In section 2, we propose an integrable semi-discretization for each of the elementary Davey–Stewartson flows (\[element1\]) and (\[element2\]) by constructing its Lax-pair representation. In section 3, we demonstrate that these semi-discretizations commute under a suitable choice of the “constants” of integration as in the continuous case. Then, we consider a linear combination of the semi-discrete elementary Davey–Stewartson flows to obtain an integrable semi-discretization of the full Davey–Stewartson system (\[continuousDS\]). Moreover, by applying the linear change of the independent variables like (\[Galilean-like\]) to one of the semi-discrete elementary Davey–Stewartson flows, we also obtain an integrable semi-discretization of the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]) (see a relevant semi-discrete system in [@Yu15]). Section 4 is devoted to concluding remarks.
Integrable semi-discretizations of the two elementary Davey–Stewartson flows
============================================================================
Semi-discrete linear problem
----------------------------
The continuous Davey–Stewartson system (\[continuousDS\]) is obtained as the compatibility conditions of the overdetermined linear systems for $\psi$ and $\phi$ [@Hab75; @Nizh82]:
$$\begin{aligned}
& \psi_{y} = q \phi,
\\
& \phi_{x} = -q^{\ast} \psi,\end{aligned}$$
and
$$\begin{aligned}
& \mathrm{i} \psi_{t} = -a \psi_{xx} - 2a F \psi + b q \phi_y - b q_y \phi,
\\
& \mathrm{i} \phi_{t} = a q^{\ast} \psi_x - a q_x^{\ast} \psi + b \phi_{yy} + 2 b G \phi.\end{aligned}$$
This Lax-pair representation for the Davey–Stewartson system can be straightforwardly generalized to the case of vector- or matrix-valued dependent variables [@Kono92; @Calogero91; @Fordy87; @March]. As a semi-discrete analog of the spatial part of the Lax-pair representation for (the vector generalization of) the Davey–Stewartson system, we consider the following linear problem in two spatial dimensions:
$$\begin{aligned}
& \psi_{n,y} = q_n \left( \gamma \phi_{n} + \delta \phi_{n+1} \right),
\\
& \phi_{n+1}-\phi_{n} = r_n \psi_n.\end{aligned}$$
Here, $n$ is a discrete spatial variable and $y$ is a continuous spatial variable; the subscript $y$ denotes the differentiation with respect to $y$. The constants $\gamma$ and $\delta$ should satisfy the condition $\gamma + \delta \neq 0$; $\gamma + \delta$ can be fixed at any nonzero value by rescaling the dependent variable $q_n$, so only the ratio $\gamma/\delta$ is essential. The dependent variables $q_n$ and $r_n$ are $M$-component row and column vectors, respectively; a scalar component $\psi_n$ and an $M$-component column vector $\phi_n$ comprise the linear wavefunction. We do not distinguish between the left scalar multiplication and the right scalar multiplication, so, for example, the scalar coefficients $\gamma$ and $\delta$ may be placed on either side of $\phi_{n}$ and $\phi_{n+1}$ in (\[sdlinear1\]). Note that using the second equation (\[sdlinear2\]), the first equation (\[sdlinear1\]) can be rewritten as $$\nonumber
\psi_{n,y}= \left( \gamma + \delta \right) q_n \phi_n + \delta q_n r_n \psi_n,$$ where $\psi_n$, $q_n \phi_n$ and $q_n r_n$ are scalar functions.
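The rewriting just stated is a one-line substitution; as a trivial symbolic check (scalar case, illustrative only):

```python
import sympy as sp

# Substitute phi_{n+1} = phi_n + r_n*psi_n (the second linear equation)
# into psi_{n,y} = q_n*(gamma*phi_n + delta*phi_{n+1}) (the first one).
g, d, q, r, psi, phi = sp.symbols('gamma delta q_n r_n psi_n phi_n')
lhs = q * (g * phi + d * (phi + r * psi))
rhs = (g + d) * q * phi + d * q * r * psi
print(sp.expand(lhs - rhs) == 0)   # True
```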
Semi-discretization of the elementary Davey–Stewartson flow (\[element1\])
--------------------------------------------------------------------------
One possible choice of time evolution of the linear wavefunction is
$$\begin{aligned}
& \mathrm{i} \psi_{n,t_1} = \alpha u_n^{\gamma} \psi_{n-1} + \beta u_{n+1}^{\delta} \psi_{n+1} + w_n \psi_{n},
\\
& \mathrm{i} \phi_{n,t_1} = -\alpha u_n^{\gamma} r_n \psi_{n-1} + \beta u_{n}^{\delta} r_{n-1} \psi_{n},\end{aligned}$$
where $\alpha$ and $\beta$ are constants, and $u_n$ and $w_n$ are scalar functions.
\[prop2.1\] The compatibility conditions of the overdetermined linear systems (\[sdlinear\]) and (\[sd\_time1\]) for $\psi_n$ and $\phi_n$ are equivalent to the system of differential-difference equations: $$\label{first_flow}
\left\{
\begin{split}
& \mathrm{i} q_{n,t_1} - \alpha u_n^\gamma q_{n-1} - \beta u_{n+1}^\delta q_{n+1} - w_n q_n =0,
\\[2pt]
& \mathrm{i} r_{n,t_1} + \beta u_{n}^\delta r_{n-1} + \alpha u_{n+1}^\gamma r_{n+1} +w_n r_n =0,
\\[2pt]
& u_{n,y} = u_n \left( q_{n-1} r_{n-1} - q_n r_n \right),
\\[2pt]
& w_{n,y} = \alpha \delta \left( u_n^{\gamma} q_{n-1} r_{n}
- u_{n+1}^{\gamma} q_n r_{n+1} \right)
+ \beta \gamma \left( u_n^{\delta} q_{n} r_{n-1}
- u_{n+1}^{\delta} q_{n+1} r_{n} \right).
\end{split}
\right.$$
This proposition can be proved by a direct calculation. Indeed, using (\[sdlinear\]) and (\[sd\_time1\]), we have $$\begin{aligned}
0 & = \mathrm{i} \psi_{n,y t_1} - \mathrm{i} \psi_{n,t_1 y}
\nonumber \\
& = \left( \mathrm{i} q_{n,t_1} - \alpha u_n^\gamma q_{n-1} - \beta u_{n+1}^\delta q_{n+1} - w_n q_n \right)
\left( \gamma \phi_n + \delta \phi_{n+1} \right)
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \alpha \gamma u_n^{\gamma-1} \left( - u_{n,y} + u_n q_{n-1} r_{n-1} - u_n q_n r_n \right) \psi_{n-1}
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \left( - w_{n,y} + \alpha \delta u_n^{\gamma} q_{n-1} r_{n} - \alpha \delta u_{n+1}^{\gamma} q_n r_{n+1}
+ \beta \gamma u_n^{\delta} q_{n} r_{n-1} - \beta \gamma u_{n+1}^{\delta} q_{n+1} r_{n} \right) \psi_n
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \beta \delta u_{n+1}^{\delta -1} \left( - u_{n+1,y} + u_{n+1} q_{n} r_{n} - u_{n+1} q_{n+1} r_{n+1} \right) \psi_{n+1},
\nonumber \end{aligned}$$ and $$\begin{aligned}
0 &= \mathrm{i} \left( r_{n} \psi_{n} + \phi_{n} - \phi_{n+1} \right)_{t_1}
\nonumber \\
& = \left( \mathrm{i} r_{n,t_1} + \beta u_{n}^\delta r_{n-1} + \alpha u_{n+1}^\gamma r_{n+1} +w_n r_n \right) \psi_{n},
\nonumber \end{aligned}$$ which imply (\[first\_flow\]) for generic $\psi_n$ and $\phi_n$.
Under the parametric conditions $$\beta=\alpha^\ast, \hspace{5mm} \gamma=\delta \in \mathbb{R},
\nonumber$$ the system (\[first\_flow\]) admits the Hermitian conjugation reduction: $$r_n = - \varDelta q_n^\dagger, \hspace{5mm} u_n^\ast=u_n, \hspace{5mm} w_n^\ast = w_n,
\nonumber$$ where $\varDelta$ is an arbitrary real constant that will be interpreted as a lattice parameter and the dagger denotes the Hermitian conjugation. In particular, if $\alpha=\beta=1$ and $\gamma=\delta=1$, this reduction simplifies (\[first\_flow\]) to $$\label{reduced_first_flow2}
\left\{
\begin{split}
& \mathrm{i} q_{n,t_1}
= u_{n+1} q_{n+1} + w_n q_n + u_n q_{n-1}
,
\\[2pt]
& u_{n,y} = \varDelta u_n \left( {\langle q_n, q_n^\ast \rangle}
- {\langle q_{n-1}, q_{n-1}^\ast \rangle} \right)
,
\\[2pt]
& w_{n,y} = \varDelta u_{n+1} \left( {\langle q_n, q_{n+1}^\ast \rangle} + {\langle q_{n+1}, q_n^\ast \rangle} \right)
- \varDelta u_n \left( {\langle q_{n-1}, q_n^\ast \rangle} + {\langle q_n, q_{n-1}^\ast \rangle} \right),
\end{split}
\right.$$ where the asterisk denotes the complex conjugate and ${\langle \cdot\,, \cdot \rangle}$ stands for the standard scalar product. In the case where $M = 1$, i.e., $q_n$ is a scalar, (\[reduced\_first\_flow2\]) provides an integrable semi-discretization of the elementary Davey–Stewartson flow (\[element1\]). Indeed, by setting $$\nonumber
q_n = q(n \Delta, y, t_1), \hspace{5mm} u_n = \frac{1}{\Delta^2} + \frac{1}{2} u(n \Delta, y, t_1),
\hspace{5mm} w_n = -\frac{2}{\Delta^2} + w(n \Delta, y, t_1),
$$ and taking the continuous limit $\Delta \rightarrow 0$, (\[reduced\_first\_flow2\]) with scalar $q_n$ reduces to $$\label{reduced_first_flow4}
\left\{
\begin{split}
& \mathrm{i} q_{t_1}
= q_{xx} + \left( u+w \right) q,
\\[2pt]
& u_{y} = w_{y} = 2 \left( \left| q \right|^2 \right)_x,
\end{split}
\right.$$ where $x := n \Delta$. Thus, (\[reduced\_first\_flow4\]) for the pair of dependent variables $\left( q, u + w \right)$ can be identified with the elementary Davey–Stewartson flow (\[element1\]), up to a rescaling of variables.
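As an illustrative numerical check of this continuum limit (not part of the original derivation), one can substitute smooth test profiles into the semi-discrete right-hand side $u_{n+1} q_{n+1} + w_n q_n + u_n q_{n-1}$ and compare it with $q_{xx} + (u+w)q$; the profiles $q$, $u$, $w$ and the evaluation point below are arbitrary choices.

```python
import numpy as np

def q(s):  return np.exp(1j * s) * (2.0 + np.sin(s))   # smooth test profile
def u(s):  return np.cos(s)                            # test coefficient
def w(s):  return np.sin(2.0 * s)                      # test coefficient

def semi_discrete_rhs(x, dx):
    # u_{n+1} q_{n+1} + w_n q_n + u_n q_{n-1} with the substitutions
    # u_n = 1/dx^2 + u(n dx)/2 and w_n = -2/dx^2 + w(n dx)
    u_n   = 1.0 / dx**2 + 0.5 * u(x)
    u_np1 = 1.0 / dx**2 + 0.5 * u(x + dx)
    w_n   = -2.0 / dx**2 + w(x)
    return u_np1 * q(x + dx) + w_n * q(x) + u_n * q(x - dx)

def continuum_rhs(x):
    # target q_xx + (u + w) q, with q_xx computed analytically for this profile
    qxx = np.exp(1j * x) * (-(2.0 + np.sin(x)) + 2j * np.cos(x) - np.sin(x))
    return qxx + (u(x) + w(x)) * q(x)

x0 = 0.7
errs = [abs(semi_discrete_rhs(x0, dx) - continuum_rhs(x0)) for dx in (1e-2, 1e-3)]
print(errs)
```

The deviation shrinks roughly linearly in $\Delta$, consistent with the first-order accuracy of the substitution.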
Actually, the differentiation operator $\partial_{t_1}$ in (\[sd\_time1\]) can be represented as a linear combination of two elementary differentiation operators as $$\nonumber
\mathrm{i} \partial_{t_1} = \alpha \partial_{t_\alpha} + \beta \partial_{t_\beta}.$$ Here, the two operators $\partial_{t_\alpha}$ and $\partial_{t_\beta}$ correspond to the case $\beta = 0$ and the case $\alpha = 0$, respectively, i.e.,
$$\begin{aligned}
& \psi_{n,t_\alpha} = u_n^\gamma \psi_{n-1} + w^{(\alpha)}_n \psi_{n}, \label{sd_time_a_1}
\\
& \phi_{n,t_\alpha} = -u_n^\gamma r_n \psi_{n-1}, \label{sd_time_a_2}\end{aligned}$$
and
$$\begin{aligned}
& \psi_{n,t_\beta} = u_{n+1}^\delta \psi_{n+1} + w^{(\beta)}_n \psi_{n}, \label{sd_time_b_1}
\\
& \phi_{n,t_\beta} = u_{n}^\delta r_{n-1} \psi_{n}, \label{sd_time_b_2}\end{aligned}$$
where $w_n = \alpha w^{(\alpha)}_n + \beta w^{(\beta)}_n$.
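This decomposition can be checked directly on the right-hand sides: with $w_n = \alpha w^{(\alpha)}_n + \beta w^{(\beta)}_n$, the combination $\alpha$ times the elementary $t_\alpha$ flow plus $\beta$ times the elementary $t_\beta$ flow must reproduce the $q$- and $r$-equations of the $t_1$ flow. A quick scalar check with randomly chosen, purely illustrative sample values:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.3 + 0.4j, 0.3 - 0.4j               # hypothetical sample parameters
gamma, delta = 2.0, 3.0
q = rng.normal(size=3) + 1j * rng.normal(size=3)   # q_{n-1}, q_n, q_{n+1}
r = rng.normal(size=3) + 1j * rng.normal(size=3)   # r_{n-1}, r_n, r_{n+1}
u_n, u_np1 = 1.3, 0.7
wa, wb = 0.9, -1.1                                 # w_n^{(alpha)}, w_n^{(beta)}
w_n = alpha * wa + beta * wb                       # w_n = alpha w^{(a)} + beta w^{(b)}

# right-hand sides of the t_1 flow: i q_{n,t_1} and i r_{n,t_1}
iq_t1 = alpha * u_n**gamma * q[0] + beta * u_np1**delta * q[2] + w_n * q[1]
ir_t1 = -(beta * u_n**delta * r[0] + alpha * u_np1**gamma * r[2] + w_n * r[1])

# right-hand sides of the two elementary flows
q_ta = u_n**gamma * q[0] + wa * q[1]
q_tb = u_np1**delta * q[2] + wb * q[1]
r_ta = -u_np1**gamma * r[2] - wa * r[1]
r_tb = -u_n**delta * r[0] - wb * r[1]

err_q = abs(iq_t1 - (alpha * q_ta + beta * q_tb))
err_r = abs(ir_t1 - (alpha * r_ta + beta * r_tb))
print(err_q, err_r)
```

Both residuals vanish up to rounding, as the splitting is an exact algebraic identity.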
As special cases of Proposition \[prop2.1\], we obtain the following two propositions.
\[prop2.2\] The compatibility conditions of the overdetermined linear systems (\[sdlinear\]) and (\[sd\_time\_a\]) for $\psi_n$ and $\phi_n$ are equivalent to the system of differential-difference equations: $$\label{first_flow_a}
\left\{
\begin{split}
& q_{n,t_\alpha} = u_n^\gamma q_{n-1} + w_n^{(\alpha)} q_n,
\\[2pt]
& r_{n,t_\alpha} = -u_{n+1}^\gamma r_{n+1} - w_n^{(\alpha)} r_n,
\\[2pt]
& u_{n,y} = u_n \left( q_{n-1} r_{n-1} - q_n r_n \right),
\\[2pt]
& w^{(\alpha)}_{n,y} = \delta \left( u_n^{\gamma} q_{n-1} r_{n}
- u_{n+1}^{\gamma} q_n r_{n+1} \right).
\end{split}
\right.$$
\[prop2.3\] The compatibility conditions of the overdetermined linear systems (\[sdlinear\]) and (\[sd\_time\_b\]) for $\psi_n$ and $\phi_n$ are equivalent to the system of differential-difference equations: $$\label{first_flow_b}
\left\{
\begin{split}
& q_{n,t_\beta} = u_{n+1}^\delta q_{n+1} + w_n^{(\beta)} q_n,
\\[2pt]
& r_{n,t_\beta} = - u_{n}^\delta r_{n-1} - w_n^{(\beta)} r_n,
\\[2pt]
& u_{n,y} = u_n \left( q_{n-1} r_{n-1} - q_n r_n \right),
\\[2pt]
& w^{(\beta)}_{n,y} = \gamma \left( u_n^{\delta} q_{n} r_{n-1}
- u_{n+1}^{\delta} q_{n+1} r_{n} \right).
\end{split}
\right.$$
The system (\[first\_flow\_a\]) implies the relation: $$\nonumber
\left( \log u_n^\delta \right)_{y t_\alpha}= w^{(\alpha)}_{n-1,y} - w^{(\alpha)}_{n,y}.$$ Thus, by assuming that $\left( \log u_n^\delta \right)_{t_\alpha} = w^{(\alpha)}_{n-1} - w^{(\alpha)}_{n}$ holds at some value of $y$ (say, $-\infty$, $0$, or $+\infty$), we obtain $$\label{u_n_t_alpha}
\left( \log u_n^\delta \right)_{t_\alpha}= w^{(\alpha)}_{n-1} - w^{(\alpha)}_{n}.$$ The system (\[first\_flow\_b\]) implies the relation: $$\nonumber
\left( \log u_n^\gamma \right)_{y t_\beta}= w^{(\beta)}_{n,y} - w^{(\beta)}_{n-1,y}.$$ Thus, by assuming that $\left( \log u_n^\gamma \right)_{t_\beta} = w^{(\beta)}_{n} - w^{(\beta)}_{n-1}$ holds at some value of $y$ (say, $-\infty$, $0$, or $+\infty$), we obtain $$\label{t_n_t_beta}
\left( \log u_n^\gamma \right)_{t_\beta}= w^{(\beta)}_{n} - w^{(\beta)}_{n-1}.$$ From (\[first\_flow\_a\])–(\[t\_n\_t\_beta\]), we obtain $$\nonumber
\left( w_n^{(\alpha)} \right)_{y t_\beta}= \frac{\delta}{\gamma + \delta} \left( u_{n+1}^{\gamma + \delta}
- u_n^{\gamma + \delta} \right)_{y},$$ and $$\nonumber
\left( w_n^{(\beta)} \right)_{y t_\alpha}= \frac{\gamma}{\gamma + \delta} \left( u_{n}^{\gamma + \delta}
- u_{n+1}^{\gamma + \delta} \right)_{y},$$ which imply $$\label{w_n_t_beta}
\left( w_{n}^{(\alpha)} \right)_{t_\beta} = \frac{\delta}{\gamma + \delta} \left( u_{n+1}^{\gamma + \delta}
- u_n^{\gamma + \delta} \right) + J_n,$$ and $$\label{w_n_t_alpha}
\left( w_{n}^{(\beta)} \right)_{t_\alpha} = \frac{\gamma}{\gamma + \delta} \left( u_{n}^{\gamma + \delta}
- u_{n+1}^{\gamma + \delta} \right) + K_n,$$ respectively. Here, the “constants” of integration $J_n$ and $K_n$ are $y$-independent scalars. By a direct calculation, we arrive at the necessary and sufficient condition for the commutativity of the two operators $\partial_{t_\alpha}$ and $\partial_{t_\beta}$.
\[\] Equations (\[first\_flow\_a\])–(\[w\_n\_t\_alpha\]) imply that the two differentiation operators $\partial_{t_\alpha}$ and $\partial_{t_\beta}$ commute, i.e., $$\nonumber
q_{n, t_\alpha t_\beta} = q_{n, t_\beta t_\alpha} \;\, \mathrm{and} \;\, r_{n, t_\alpha t_\beta} = r_{n, t_\beta t_\alpha},
if and only if the “constants” of integration $J_n$ and $K_n$ satisfy the condition $J_n = K_n$.
Semi-discretization of the elementary Davey–Stewartson flow (\[element2\])
--------------------------------------------------------------------------
Another possible choice of time evolution of the linear wavefunction is given by
$$\begin{aligned}
& k \psi_{n,t_{2}} = \left( \gamma + \delta \right) q_n \phi_{n,y} - \left( \gamma + \delta \right) \left( q_{n,y} - \delta q_n r_n q_n \right) \phi_n
- \delta \left[ q_{n,y} r_n - q_n r_{n,y} + \left( \gamma - \delta \right) \left( q_n r_n \right)^2 \right] \psi_{n}, \label{sd_time2_1}
\\
& k \phi_{n,t_{2}} = \phi_{n,yy} + \left( \gamma + \delta \right) v_{n} \phi_{n}, \label{sd_time2_2}\end{aligned}$$
where $k$ is an arbitrary (but nonzero) constant and $v_n$ is an $M \times M$ square matrix.
\[\] The compatibility conditions of the overdetermined linear systems (\[sdlinear\]) and (\[sd\_time2\]) for $\psi_n$ and $\phi_n$ are equivalent to the system of differential-difference equations: $$\label{second_flow}
\left\{
\begin{split}
& k q_{n,t_{2}} + q_{n,yy} + q_n \left( \gamma v_n + \delta v_{n+1} \right) + \gamma \delta \left( q_n r_n \right)^2 q_n =0,
\\[2pt]
& k r_{n,t_{2}} - r_{n,yy} - \left( \delta v_n + \gamma v_{n+1} \right) r_n - \gamma \delta r_n \left( q_n r_n \right)^2 =0,
\\[2pt]
& v_{n+1}-v_{n} = - 2 \left( r_n q_n \right)_y.
\end{split}
\right.$$
This proposition can be proved by a direct calculation. Indeed, using (\[sdlinear\]) and (\[sd\_time2\]), we have $$\begin{aligned}
0 & = k \left( \psi_{n,y t_2} - \psi_{n,t_2 y} \right)
\nonumber \\
& = \left[ k q_{n,t_{2}} + q_{n,yy} + q_n \left( \gamma v_n + \delta v_{n+1} \right) + \gamma \delta \left( q_n r_n \right)^2 q_n \right]
\left( \gamma \phi_n + \delta \phi_{n+1} \right)
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \gamma \delta q_n \left[ v_{n+1}-v_{n} + 2 \left( r_n q_n \right)_y \right] r_n \psi_n,
\nonumber \end{aligned}$$ and $$\begin{aligned}
0 &= k \left( r_{n} \psi_{n} + \phi_{n} - \phi_{n+1} \right)_{t_2}
\nonumber \\
& = \left[ k r_{n,t_{2}} - r_{n,yy} - \left( \delta v_n + \gamma v_{n+1} \right) r_n - \gamma \delta r_n \left( q_n r_n \right)^2
\right] \psi_{n}
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \left[ v_{n} - v_{n+1} - 2 \left( r_n q_n \right)_y \right] \left( \gamma \phi_n + \delta \phi_{n+1} \right),
\nonumber \end{aligned}$$ which imply (\[second\_flow\]) for generic $\psi_n$ and $\phi_n$.
By setting $\gamma = \delta = 1$ and $k = \mathrm{i}$ and imposing the Hermitian conjugation reduction $r_n = - \varDelta q_n^\dagger$ and $v_n^\dagger = v_n$ on (\[second\_flow\]), where $\varDelta$ is a real-valued lattice parameter, we obtain $$\label{reduced_second_flow}
\left\{
\begin{split}
& \mathrm{i} q_{n,t_{2}} + q_{n,yy} + q_n \left( v_n + v_{n+1} \right) +
\varDelta^2 {\langle q_n, q_n^\ast \rangle}^2
q_n =0,
\\[2pt]
& v_{n+1}-v_{n} = 2 \varDelta \left( q_n^\dagger q_n \right)_y.
\end{split}
\right.$$ Clearly, in the case where $M = 1$, i.e., $q_n$ is a scalar, (\[reduced\_second\_flow\]) provides an integrable semi-discretization of the elementary Davey–Stewartson flow (\[element2\]), up to a rescaling of $q_n$.
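The constraint $v_{n+1} - v_n = 2 \varDelta \left( |q_n|^2 \right)_y$ determines $v_n$ by a discrete integration in $n$, which in the continuum limit becomes $v_x = 2 \left( |q|^2 \right)_y$. A small numerical sketch of this reconstruction, using a hypothetical smooth profile for $|q|^2$ not taken from the text:

```python
import numpy as np

def qsq_y(x, y):
    # y-derivative of the test profile |q|^2 = (1 + 0.5 sin x)(1 + 0.5 cos y)
    return (1.0 + 0.5 * np.sin(x)) * (-0.5 * np.sin(y))

def v_discrete(N, dx, y):
    # march v_{n+1} = v_n + 2*dx*(|q_n|^2)_y with v_0 = 0 and return v_N
    v = 0.0
    for n in range(N):
        v += 2.0 * dx * qsq_y(n * dx, y)
    return v

def v_continuum(x, y):
    # closed form of v = 2 * int_0^x (|q|^2)_y dx' with v(0, y) = 0
    return -np.sin(y) * (x + 0.5 * (1.0 - np.cos(x)))

err_coarse = abs(v_discrete(100, 0.01, 0.8) - v_continuum(1.0, 0.8))
err_fine = abs(v_discrete(1000, 0.001, 0.8) - v_continuum(1.0, 0.8))
print(err_coarse, err_fine)
```

The discrete sum converges to the continuum potential at first order in the lattice spacing.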
Integrable semi-discretizations of the Davey–Stewartson system and the $(2+1)$-dimensional Yajima–Oikawa system
========================================================================================================
We first establish the commutativity of the semi-discrete flow (\[first\_flow\]) and the semi-discrete flow (\[second\_flow\]). Using (\[first\_flow\]) and (\[second\_flow\]), we have the relations: $$\mathrm{i} v_{n+1, t_1} - 2 \left( \alpha u_{n+1}^\gamma r_{n+1} q_n - \beta u_{n+1}^\delta r_n q_{n+1} \right)_y
= \mathrm{i} v_{n, t_1} - 2 \left( \alpha u_{n}^\gamma r_{n} q_{n-1} - \beta u_{n}^\delta r_{n-1} q_{n} \right)_y,
\nonumber$$ and $$\begin{aligned}
k \left( \log u_n \right)_{y t_2} &= \left[ q_{n,y} r_n - q_n r_{n,y} + \left( \gamma - \delta \right) \left( q_n r_n \right)^2 \right]_y
\nonumber \\
& \hphantom{=} \; \, \mbox{ }- \left[ q_{n-1,y} r_{n-1} - q_{n-1} r_{n-1,y} + \left( \gamma - \delta \right) \left( q_{n-1} r_{n-1}
\right)^2 \right]_y,
\nonumber\end{aligned}$$ which imply $$\begin{aligned}
\mathrm{i} v_{n, t_1} &= 2 \left( \alpha u_{n}^\gamma r_{n} q_{n-1} - \beta u_{n}^\delta r_{n-1} q_{n} \right)_y + \cal{F}
\nonumber \\
&= 2 \alpha \gamma u_{n}^\gamma \left( q_{n-1} r_{n-1} - q_{n} r_{n} \right) r_{n} q_{n-1}
+ 2 \alpha u_{n}^\gamma \left( r_{n} q_{n-1} \right)_y
\nonumber \\
& \hphantom{=} \; \, \mbox{ } - 2 \beta \delta u_{n}^\delta \left( q_{n-1} r_{n-1} - q_{n} r_{n} \right) r_{n-1} q_{n}
- 2 \beta u_{n}^\delta \left( r_{n-1} q_{n} \right)_y + \cal{F},
\label{v_t_1}\end{aligned}$$ and $$\begin{aligned}
k \left( \log u_n \right)_{t_2} &= \left[ q_{n,y} r_n - q_n r_{n,y} + \left( \gamma - \delta \right) \left( q_n r_n \right)^2 \right]
\nonumber \\
& \hphantom{=} \; \, \mbox{ }- \left[ q_{n-1,y} r_{n-1} - q_{n-1} r_{n-1,y} + \left( \gamma - \delta \right) \left( q_{n-1} r_{n-1}
\right)^2 \right] + {\cal G}_n,
\label{log_u_n_t_2}\end{aligned}$$ respectively, where $\cal{F}$ is an $n$-independent matrix and ${\cal G}_n$ is a $y$-independent scalar. Using (\[first\_flow\]), (\[second\_flow\]) and (\[log\_u\_n\_t\_2\]), we also obtain $$\begin{aligned}
k w_{n,y t_2} &= \left( \left\{ \beta \gamma u_{n}^\delta
\left[ -q_{n,y} r_{n-1} + q_n r_{n-1,y} + \delta \left( q_{n-1} r_{n-1} + q_n r_n \right) q_n r_{n-1} \right] \right. \right.
\nonumber \\
& \hphantom{=} \; \, \left. \mbox{} + \alpha \delta u_{n}^\gamma
\left[ -q_{n-1,y} r_{n} + q_{n-1} r_{n,y} - \gamma \left( q_{n-1} r_{n-1} + q_n r_n \right) q_{n-1} r_{n} \right] \right\}_y
\nonumber \\
& \hphantom{=} \; \, \left. \mbox{} + \beta \gamma \delta u_{n}^\delta {\cal G}_n q_n r_{n-1}
+ \alpha \gamma \delta u_n^\gamma {\cal G}_n q_{n-1} r_n \right) - \left( n \to n+1 \right).
\label{w_n_y_t_2}\end{aligned}$$ Thus, we assume ${\cal G}_n = 0$ to obtain $$\begin{aligned}
k \left( \log u_n \right)_{t_2} &= \left[ q_{n,y} r_n - q_n r_{n,y} + \left( \gamma - \delta \right) \left( q_n r_n \right)^2 \right]
\nonumber \\
& \hphantom{=} \; \, \mbox{ }- \left[ q_{n-1,y} r_{n-1} - q_{n-1} r_{n-1,y} + \left( \gamma - \delta \right) \left( q_{n-1} r_{n-1}
\right)^2 \right],
\label{log_u_n_t_2'}\end{aligned}$$ and $$\begin{aligned}
k w_{n,t_2} &= \beta \gamma u_{n}^\delta
\left[ -q_{n,y} r_{n-1} + q_n r_{n-1,y} + \delta \left( q_{n-1} r_{n-1} + q_n r_n \right) q_n r_{n-1} \right]
\nonumber \\
& \hphantom{=} \; \, \mbox{} - \beta \gamma u_{n+1}^\delta
\left[ -q_{n+1,y} r_{n} + q_{n+1} r_{n,y} + \delta \left( q_{n} r_{n} + q_{n+1} r_{n+1} \right) q_{n+1} r_{n} \right]
\nonumber \\
& \hphantom{=} \; \, \mbox{}
+ \alpha \delta u_{n}^\gamma
\left[ -q_{n-1,y} r_{n} + q_{n-1} r_{n,y} - \gamma \left( q_{n-1} r_{n-1} + q_n r_n \right) q_{n-1} r_{n} \right]
\nonumber \\
& \hphantom{=} \; \, \mbox{} - \alpha \delta u_{n+1}^\gamma
\left[ -q_{n,y} r_{n+1} + q_{n} r_{n+1,y} - \gamma \left( q_{n} r_{n} + q_{n+1} r_{n+1} \right) q_{n} r_{n+1} \right] + {\cal H}_n,
\label{w_n_y_t_2'}\end{aligned}$$ where ${\cal H}_n$ is a $y$-independent scalar.
By a direct calculation, we arrive at the following proposition.
\[\] Equations (\[first\_flow\]), (\[second\_flow\]), (\[v\_t\_1\]) and (\[w\_n\_y\_t\_2'\]) imply that the two differentiation operators $\partial_{t_1}$ and $\partial_{t_2}$ commute, i.e., $$\nonumber
q_{n, t_1 t_2} = q_{n, t_2 t_1} \;\, \mathrm{and} \;\, r_{n, t_1 t_2} = r_{n, t_2 t_1},
if and only if the “constants” of integration ${\cal F}$ and ${\cal H}_n$ satisfy the condition ${\cal H}_n I_M + \left( \gamma + \delta \right) {\cal F} = 0$, where $I_M$ is the identity matrix. Therefore, ${\cal F}$ should be $y$-independent and ${\cal H}_n$ should be $n$-independent, namely, ${\cal H}_n = {\cal H}$ and ${\cal F} = - \frac{{\cal H}}{\gamma + \delta} I_M$; by the change of dependent variables $$\nonumber
q_n= \widetilde{q}_n \exp \left( -\frac{\mathrm{i}}{k} \int \int {\cal H} \, \mathrm{d} t_1 \mathrm{d} t_2 \right),
\hspace{5mm}
r_n= \widetilde{r}_n \exp \left( \frac{\mathrm{i}}{k} \int \int {\cal H} \, \mathrm{d} t_1 \mathrm{d} t_2 \right),$$ we can set ${\cal F} = 0$ and ${\cal H} = 0$ without loss of generality.
The commutativity of the semi-discrete flow (\[first\_flow\]) and the semi-discrete flow (\[second\_flow\]) motivates us to consider a linear combination of these two flows: $$\nonumber
\mathrm{i} \partial_t := \mathrm{i} \partial_{t_1} +
bk \partial_{t_2},$$ where $b$ is a nonzero constant. The corresponding time evolution of the linear wavefunction reads
$$\begin{aligned}
& \mathrm{i} \psi_{n,t} = \alpha u_n^\gamma \psi_{n-1} + \beta u_{n+1}^\delta \psi_{n+1} + w_n \psi_{n}
+ b \left\{ \left( \gamma + \delta \right) q_n \phi_{n,y} - \left( \gamma + \delta \right) \left( q_{n,y} - \delta q_n r_n q_n \right) \phi_n \right.
\nonumber \\
& \hphantom{\mathrm{i} \psi_{n,t} =} \left. \mbox{} - \delta \left[ q_{n,y} r_n - q_n r_{n,y} + \left( \gamma - \delta \right) \left( q_n r_n \right)^2 \right] \psi_{n} \right\}, \label{sd_time_g1}
\\
& \mathrm{i} \phi_{n,t} = -\alpha u_n^\gamma r_n \psi_{n-1} + \beta u_{n}^\delta r_{n-1} \psi_{n} + b \left[ \phi_{n,yy} + \left( \gamma + \delta \right) v_{n} \phi_{n} \right], \label{sd_time_g2}\end{aligned}$$
where $\alpha$, $\beta$, $\gamma$ and $\delta$ are constants satisfying the same conditions as before. By a direct calculation, we can prove the following proposition.
\[prop3.2\] The compatibility conditions of the overdetermined linear systems (\[sdlinear\]) and (\[sd\_time\_g\]) for $\psi_n$ and $\phi_n$ are equivalent to the system of differential-difference equations: $$\label{first+second_flow}
\left\{
\begin{split}
& \mathrm{i} q_{n,t} - \alpha u_n^\gamma q_{n-1} - \beta u_{n+1}^\delta q_{n+1} - w_n q_n
\\
& \hphantom{\mathrm{i} q_{n,t}}
+ b \left[ q_{n,yy} + q_n \left( \gamma v_n + \delta v_{n+1} \right) + \gamma \delta \left( q_n r_n \right)^2 q_n \right]
=0,
\\[3pt]
& \mathrm{i} r_{n,t} + \beta u_{n}^\delta r_{n-1} + \alpha u_{n+1}^\gamma r_{n+1} +w_n r_n
\\ & \hphantom{\mathrm{i} r_{n,t}}
- b \left[ r_{n,yy} + \left( \delta v_n + \gamma v_{n+1} \right) r_n + \gamma \delta r_n \left( q_n r_n \right)^2 \right]
=0,
\\[3pt]
& u_{n,y} = u_n \left( q_{n-1} r_{n-1} - q_n r_n \right),
\\[3pt]
& w_{n,y} = \alpha \delta \left( u_n^{\gamma} q_{n-1} r_{n}
- u_{n+1}^{\gamma} q_n r_{n+1} \right)
+ \beta \gamma \left( u_n^{\delta} q_{n} r_{n-1}
- u_{n+1}^{\delta} q_{n+1} r_{n} \right),
\\[3pt]
& v_{n+1} -v_{n} = - 2 \left( r_n q_n \right)_y.
\end{split}
\right.$$
Under the parametric conditions $$\beta=\alpha^\ast, \hspace{5mm} \gamma=\delta \in \mathbb{R},
\hspace{5mm} b \in \mathbb{R},
\nonumber$$ the system (\[first+second\_flow\]) admits the Hermitian conjugation reduction: $$r_n = - \varDelta q_n^\dagger, \hspace{5mm} u_n^\ast=u_n, \hspace{5mm} w_n^\ast = w_n, \hspace{5mm} v_n^\dagger = v_n,
\nonumber$$ where $\varDelta$ is a real-valued lattice parameter. In particular, if $\gamma = \delta = 1$ and $\alpha = \beta = -a$, this reduction with a rescaling of $w_n$ as $w_n = -a \mathcal{W}_n$ simplifies (\[first+second\_flow\]) to $$\label{semi-discrete_DS}
\left\{
\begin{split}
& \mathrm{i} q_{n,t}
+a \left[ u_{n+1} q_{n+1} + \mathcal{W}_n q_n + u_n q_{n-1} \right]
\\
& \hphantom{\mathrm{i} q_{n,t}} + b \left[ q_{n,yy} + q_n \left( v_n + v_{n+1} \right)
+ \varDelta^2 {\langle q_n, q_n^\ast \rangle}^2
q_n \right] =0,
\\[2pt]
& u_{n,y} = \varDelta u_n \left( {\langle q_n, q_n^\ast \rangle}
- {\langle q_{n-1}, q_{n-1}^\ast \rangle} \right)
,
\\[2pt]
& \mathcal{W}_{n,y} = \varDelta u_{n+1} \left( {\langle q_n, q_{n+1}^\ast \rangle} + {\langle q_{n+1}, q_n^\ast \rangle} \right)
- \varDelta u_n \left( {\langle q_{n-1}, q_n^\ast \rangle} + {\langle q_n, q_{n-1}^\ast \rangle} \right),
\\[2pt]
& v_{n+1}-v_{n} = 2 \varDelta \left( q_n^\dagger q_n \right)_y.
\end{split}
\right.$$ Here, $a$ and $b$ are nonzero real constants and $v_n$ is an $M \times M$ Hermitian matrix.
In the case where $M = 1$, i.e., $q_n$ is a scalar, (\[semi-discrete\_DS\]) provides an integrable semi-discretization of the Davey–Stewartson system (\[continuousDS\]), up to a rescaling of $q_n$. Indeed, by setting $$\begin{aligned}
\nonumber
& q_n = q(n \Delta, y, t), \hspace{5mm} u_n = \frac{1}{\Delta^2} + \frac{1}{2} u(n \Delta, y, t),
\\[2mm]
& \nonumber
\mathcal{W}_n = -\frac{2}{\Delta^2} + \mathcal{W} (n \Delta, y, t), \hspace{5mm}
v_n = v (n \Delta, y, t),
$$ and taking the continuous limit $\Delta \rightarrow 0$, (\[semi-discrete\_DS\]) with scalar $q_n$ reduces to $$\nonumber
\left\{
\begin{split}
& \mathrm{i} q_{t}
+a \left[ q_{xx} + \left( u + \mathcal{W} \right) q \right] + b \left( q_{yy} + 2 q v \right) =0,
\\[2pt]
& u_{y} = \mathcal{W}_{y} = 2 \left( \left| q \right|^2 \right)_x,
\\[2pt]
& v_{x} = 2 \left( \left| q \right|^2 \right)_y,
\end{split}
\right.$$ where $x := n \Delta$.
By applying the linear change of the independent variables $$\widetilde{t} = t_1 + y, \hspace{5mm} \widetilde{y}= c y,
\label{Galilean-like2}
$$ where $c$ is an arbitrary real constant, to the semi-discrete elementary Davey–Stewartson flow (\[reduced\_first\_flow2\]), we obtain $$\label{sd_YO1}
\left\{
\begin{split}
& \mathrm{i} q_{n,t}
= u_{n+1} q_{n+1} + w_n q_n + u_n q_{n-1}
,
\\[2pt]
& u_{n,t}+c u_{n,y} = \varDelta u_n \left( {\langle q_n, q_n^\ast \rangle}
- {\langle q_{n-1}, q_{n-1}^\ast \rangle} \right)
,
\\[2pt]
& w_{n,t}+c w_{n,y} = \varDelta u_{n+1} \left( {\langle q_n, q_{n+1}^\ast \rangle} + {\langle q_{n+1}, q_n^\ast \rangle} \right)
- \varDelta u_n \left( {\langle q_{n-1}, q_n^\ast \rangle} + {\langle q_n, q_{n-1}^\ast \rangle} \right),
\end{split}
\right.$$ where the tilde of the continuous independent variables is omitted for notational brevity. The system (\[sd\_YO1\]) with scalar $q_n$ provides an integrable semi-discretization of the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]), up to a rescaling of variables; in the absence of $y$-dependence, it reduces to the discrete Yajima–Oikawa system proposed in our previous paper [@Tsuchida18-2]. We remark that another integrable semi-discretization of the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]) was proposed recently in [@Yu15].
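The change of independent variables (\[Galilean-like2\]) acts on derivatives as $\partial_{t_1} = \partial_{\widetilde{t}}$ and $\partial_{y} = \partial_{\widetilde{t}} + c\, \partial_{\widetilde{y}}$, which is exactly how $u_{n,y}$ turns into $u_{n,t} + c\, u_{n,y}$ above. This chain rule can be verified symbolically on an arbitrary smooth test field (the field below is an illustrative choice):

```python
import sympy as sp

t1, y, c = sp.symbols('t1 y c')
a, b = sp.symbols('a b')                     # stand-ins for the new variables (t~, y~)
F = sp.exp(a) * sp.sin(b) + a * b**2         # arbitrary smooth test field F(t~, y~)
Fa, Fb = sp.diff(F, a), sp.diff(F, b)
sub = {a: t1 + y, b: c * y}                  # t~ = t1 + y,  y~ = c y

f = F.subs(sub)                              # the field expressed in (t1, y)
# d/dt1 acts as d/dt~, while d/dy acts as d/dt~ + c d/dy~
ok_t1 = sp.simplify(sp.diff(f, t1) - Fa.subs(sub)) == 0
ok_y = sp.simplify(sp.diff(f, y) - (Fa.subs(sub) + c * Fb.subs(sub))) == 0
print(ok_t1, ok_y)
```

Both identities hold for any differentiable field, so the linear change of variables maps solutions of (\[reduced\_first\_flow2\]) to solutions of (\[sd\_YO1\]).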
Concluding remarks
==================
In this paper, we discussed the problem of how to discretize one of the two spatial variables in the Davey–Stewartson system (\[continuousDS\]) and the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]). To preserve the integrability, we considered the linear problem (\[sdlinear\]); by assuming $\beta = \alpha^\ast$ and $\gamma = \delta \in \mathbb{R}$ and imposing the Hermitian conjugation reduction $r_n = - \varDelta q_n^\dagger$, where $\varDelta$ is a real constant, (\[sdlinear\]) can be understood as a semi-discrete analog of (the vector generalization of) the continuous linear problem (\[clinear\_s\]). By choosing the associated time evolution of the linear wavefunction appropriately, we obtain (\[reduced\_first\_flow2\]) (resp. (\[reduced\_second\_flow\])) as (a vector generalization of) an integrable semi-discretization of the elementary Davey–Stewartson flow (\[element1\]) (resp. (\[element2\])). The two flows (\[reduced\_first\_flow2\]) and (\[reduced\_second\_flow\]) commute under a suitable choice of the “constants” of integration, so we can naturally consider a linear combination of them to obtain (\[semi-discrete\_DS\]) as (a vector generalization of) an integrable semi-discretization of the Davey–Stewartson system (\[continuousDS\]). By applying the linear change of the independent variables (\[Galilean-like2\]) to the flow (\[reduced\_first\_flow2\]) and omitting the tilde of the continuous independent variables, we obtain the system (\[sd\_YO1\]); this system provides (a vector generalization of) an integrable semi-discretization of the $(2+1)$-dimensional Yajima–Oikawa system (\[2DYO\]), which is a $(2+1)$-dimensional generalization of the discrete Yajima–Oikawa system proposed in our previous paper [@Tsuchida18-2].
[99]{}
A. Davey and K. Stewartson: [*On three-dimensional packets of surface waves*]{}, Proc. R. Soc. Lond. A [**338**]{} (1974) 101–110.
D. J. Benney and G. J. Roskes: [*Wave instabilities*]{}, Stud. Appl. Math. [**48**]{} (1969) 377–385.
L. P. Nizhnik and M. D. Pochinaiko: [*Integration of the nonlinear two-dimensional spatial Schrödinger equation by the inverse-problem method*]{}, Funct. Anal. Appl. [**16**]{} (1982) 66–69.
M. J. Ablowitz and R. Haberman: [*Nonlinear evolution equations—two and three dimensions*]{}, Phys. Rev. Lett. [**35**]{} (1975) 1185–1188.
H. C. Morris: [*Prolongation structures and nonlinear evolution equations in two spatial dimensions. II. A generalized nonlinear Schrödinger equation*]{}, J. Math. Phys. [**18**]{} (1977) 285–288.
M. J. Ablowitz: [*Lectures on the inverse scattering transform*]{}, Stud. Appl. Math. [**58**]{} (1978) 17–94.
D. Anker and N. C. Freeman: [*On the soliton solutions of the Davey–Stewartson equation for long waves*]{}, Proc. R. Soc. Lond. A [**360**]{} (1978) 529–540.
H. Cornille: [*Solutions of the generalized nonlinear Schrödinger equation in two spatial dimensions*]{}, J. Math. Phys. [**20**]{} (1979) 199–209.
P. D. Lax: [*Integrals of nonlinear equations of evolution and solitary waves*]{}, Commun. Pure Appl. Math. [**21**]{} (1968) 467–490.
S. V. Manakov: [*The method of the inverse scattering problem, and two-dimensional evolution equations*]{}, Uspekhi Mat. Nauk [**31**]{}:5(191) (1976) 245–246.
A. S. Fokas: [*On the simplest integrable equation in $2+1$*]{}, Inverse Probl. [**10**]{} (1994) L19–L22.
F. Calogero and A. Degasperis: [*Nonlinear evolution equations solvable by the inverse spectral transform. I*]{}, Nuovo Cimento B [**32**]{} (1976) 201–242.
V. E. Zakharov: [*The inverse scattering method*]{}, “Solitons” edited by R. K. Bullough and P. J. Caudrey (Topics in Current Physics 17, Springer, Berlin, 1980) pp. 243–285.
N. Yajima and M. Oikawa: [*Formation and interaction of sonic-Langmuir solitons —Inverse scattering method—*]{}, Prog. Theor. Phys. [**56**]{} (1976) 1719–1739.
V. K. Mel’nikov: [*On equations for wave interactions*]{}, Lett. Math. Phys. [**7**]{} (1983) 129–136.
A. Maccari: [*The Kadomtsev–Petviashvili equation as a source of integrable model equations*]{}, J. Math. Phys. [**37**]{} (1996) 6207–6212.
K. Kajiwara, J. Matsukidaira and J. Satsuma: [*Conserved quantities of two-component KP hierarchy*]{}, Phys. Lett. A [**146**]{} (1990) 115–118.
T. Tsuchida and A. Dimakis: [*On a $(2+1)$-dimensional generalization of the Ablowitz–Ladik lattice and a discrete Davey–Stewartson system*]{}, J. Phys. A: Math. Theor. [**44**]{} (2011) 325206.
Gegenhasi, X.-B. Hu and D. Levi: [*On a discrete Davey–Stewartson system*]{}, Inverse Probl. [**22**]{} (2006) 1677–1688.
Gegenhasi, X.-B. Hu, D. Levi and S. Tsujimoto: [*A difference analogue of the Davey–Stewartson system: discrete Gram-type determinant solution and Lax pair*]{}, J. Phys. A: Math. Theor. [**40**]{} (2007) 12741–12751.
G.-F. Yu and Z.-W. Xu: [*Dynamics of a differential-difference integrable $(2+1)$-dimensional system*]{}, Phys. Rev. E [**91**]{} (2015) 062902.
C. Athorne and A. Fordy: [*Integrable equations in $2+1$ dimensions associated with symmetric and homogeneous spaces*]{}, J. Math. Phys. [**28**]{} (1987) 2018–2024.
V. A. Marchenko: [*Nonlinear Equations and Operator Algebras*]{} (D. Reidel, Dordrecht, 1988).
F. Calogero: [*Why are certain nonlinear PDEs both widely applicable and integrable?*]{}, “What is integrability?” edited by V. E. Zakharov (Springer Series in Nonlinear Dynamics, Springer, Berlin, 1991) pp. 1–62.
B. G. Konopelchenko: [*Introduction to Multidimensional Integrable Equations: The Inverse Spectral Transform in $2+1$ Dimensions*]{} (Plenum, New York, 1992).
T. Tsuchida: [*On a new integrable generalization of the Toda lattice and a discrete Yajima–Oikawa system*]{}, arXiv:1808.03261 \[nlin.SI\] (2018).
[^1]: The author is indebted to Dr. Masato Hisakado and Professor Aristophanes Dimakis for the proof of commutativity.
---
abstract: 'The purpose of this note is to describe some algebraic conditions on a Banach algebra which force it to be finite dimensional. One of the main results is Theorem 2, which states that for a locally compact group $G$, $G$ is compact if there exists a measure $\mu$ in $\hbox{Soc}(L^{1}(G))$ such that $\mu(G) \neq 0$. We also prove that $G$ is finite if $\hbox{Soc}(M(G))$ is closed and every nonzero left ideal in $M(G)$ contains a minimal left ideal.'
address: |
Department of Mathematics, Semnan University, Semnan, Iran\
E-mail: [email protected]
author:
- ALI GHAFFARI and ALI REZA MEDGHALCHI
title: The Socle and finite dimensionality of some Banach algebras
---
Introduction
============
Let $A$ be a Banach algebra. The first Arens multiplication on $A^{**}$ is defined in three steps as follows.
For $a, b$ in $A$, $f$ in $A^{*}$ and $F, G$ in $A^{**}$, the elements $fa$, $Ff$ of $A^{*}$ and $GF$ of $A^{**}$ are defined by $$\langle fa, b \rangle = \langle f, ab \rangle, \quad \langle Ff, a
\rangle = \langle F, fa \rangle, \quad \langle GF, f \rangle =
\langle G, Ff \rangle.$$ We know that $A^{**}$ is a Banach algebra with Arens multiplication. If $A$ has minimal left ideals, the smallest left ideal containing all of them is called the left Socle of $A$ and is denoted by $\hbox{Soc}(A)$. If $A$ does not have minimal left ideals, we define $\hbox{Soc}(A) = (0)$.
Let $G$ be a locally compact group, $L^{1}(G)$ be its group algebra, and $M(G)$ be its usual measure algebra. Let $\hbox{LUC}(G)$ denote the closed subspace of bounded left uniformly continuous functions on $G$, i.e. all $f \in C_{b}(G)$ such that the map $x \rightarrow L_{x}f$ from $G$ into $C_{b}(G)$ is continuous. We know that $L^{1}(G)$ is semisimple and minimal ideals in $L^{1}(G)$ are generated by minimal idempotents [@4]. Filali [@6; @7] has studied all the finite dimensional left ideals of $L^{1}(G), L^{1}(G)^{**}$ and $\hbox{LUC}(G)^{*}$. He has shown that such ideals exist in $L^{1}(G)$ and $M(G)$ if and only if $G$ is compact. Baker and Filali [@2] proved that minimal left ideals can be of infinite dimension, and that compactness of $G$ is not necessary for these ideals to exist in $L^{1}(G)$ and $M(G)$. For a locally compact abelian group $G$, Filali [@8] has shown that $G$ is compact if and only if $M(G)$ has minimal ideals. In this paper we will show, among other things, that if there exists a measure $\mu$ in $\hbox{Soc}(L^{1}(G))$ such that $\mu(G)\neq 0$, then $G$ is compact ($G$ is an arbitrary locally compact group). Also some conditions which are equivalent to finite dimensionality for a Banach algebra $A$ are given.
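A minimal finite-dimensional analogue of "minimal ideals generated by minimal idempotents" (illustrative only; the group-algebra setting is infinite dimensional): in $A = M_2(\mathbb{R})$ the idempotent $e = E_{11}$ generates the left ideal $Ae$ of first-column matrices, which is a minimal left ideal of dimension $2$.

```python
import numpy as np

e = np.array([[1.0, 0.0], [0.0, 0.0]])        # the idempotent E_11, e @ e = e
is_idem = bool(np.allclose(e @ e, e))

# a basis of A = M_2(R), as the four matrix units
basis = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2)); E[i, j] = 1.0
        basis.append(E)

# the left ideal A e = span{X e : X in A}, vectorized into R^4
span = np.array([(X @ e).ravel() for X in basis])
dim_Ae = int(np.linalg.matrix_rank(span))
print(is_idem, dim_Ae)   # True 2
```

Every minimal left ideal of $M_2(\mathbb{R})$ arises this way, so $\hbox{Soc}(M_2(\mathbb{R}))$ is the whole algebra.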
Main results
============
In this section we will study a Banach algebra $A$ when we set some conditions on $\hbox{Soc}(A)$ and $\hbox{Soc}(A^{**})$. We know that $\hbox{Soc}(A)$ has been studied in [@3; @4; @5; @12].
Let $B$ be an ideal in $A$ such that for any $a \in A$[,]{} $Ba = (0)$ implies $a = 0$. Then ${\rm Soc}(A) = {\rm Soc}(B)$.
Let $a \in A$ and $Aa$ be a minimal left ideal in $A$. Since $B$ is a left ideal in $A$, $Ba$ is a left ideal in $A$. By assumption, $Ba = Aa$. It is easy to see that $Ba$ is a minimal left ideal of $B$. It follows that $\hbox{Soc}(A)
\subseteq \hbox{Soc}(B)$.
To prove the converse, let $b\in B$ and $Bb$ be a minimal left ideal in $B$. Since $Bb \neq 0$, there exists $b_{1}\in B$ such that $b_{1}b\neq 0$. We have $Bb_{1}b\subseteq Ab_{1}b\subseteq
Bb$. By assumption $Bb_{1} b \neq 0$, and so $$Bb_{1} b = Ab_{1}b = Bb.$$ If $a \in A$ and $ab_{1}b \neq 0$, then $Bab_{1}b \subseteq
Aab_{1}b\subseteq Ab_{1}b \subseteq Bb$ and $Bab_{1}b\neq 0$. But $Bb$ is a minimal left ideal in $B$, hence $$Aab_{1}b = Ab_{1}b = Bb.$$ Therefore $Ab_{1}b = Bb$ is a minimal left ideal in $A$, which proves $\hbox{Soc}(B) \subseteq \hbox{Soc}(A)$. Consequently $\hbox{Soc}(A) = \hbox{Soc}(B)$.
Let $G$ be a locally compact group. Then ${\rm Soc}
(L^{1}(G)) = {\rm Soc}(M(G))$.
It is known that $L^{1}(G)$ has a bounded approximate identity bounded by 1. Let $(e_{\alpha})$ be a bounded approximate identity in $L^{1}(G)$. Let $\mu \in M(G)$ and $L^{1}(G)\mu = 0$. Since $C_{0}(G) \subseteq \hbox{LUC}(G) \subseteq L^{\infty}(G)L^{1}(G)$ (see [@9; @10; @11]), for $\psi \in C_{0}(G)$, there exist $f\in
L^{\infty}(G)$ and $\nu \in L^{1}(G)$ such that $\psi = f\nu$. We have $$\begin{aligned}
\langle \mu, \psi\rangle &= \langle \mu, f\nu \rangle = \lim
\langle \mu, f\nu * e_{\alpha} \rangle\\
&= \lim \langle e_{\alpha} * \mu, f\nu \rangle = \lim\langle e_{\alpha}
* \mu, \psi \rangle = 0.\end{aligned}$$ It follows that $\mu = 0$. By Proposition 1, we have $\hbox{Soc}(L^{1}(G)) = \hbox{Soc}(M(G))$.
In the following Theorem we will provide some conditions on $A$ and $A^{**}$ that are sufficient to guarantee finite dimensionality.
Let $A$ be a Banach algebra with a bounded approximate identity. If ${\rm Soc}(A^{**}) = A^{**}$[,]{} then $A$ is finite dimensional.
Let $(e_{\alpha})$ be a bounded approximate identity in $A$ and $E \in A^{**}$ be a weak$^{*}$ limit of a subnet $(e_{\beta})$ of $(e_{\alpha})$. Then $E$ is a right identity for $A^{**}$. For a nonzero ideal $\cal{J}$ of $A^{**}$, we take $$\Omega = \{\cal{K}; \cal{K} \ \hbox{is a left ideal in} \ A^{**} \
\hbox{and} \ \cal{J} \ \cap \cal{K} = (0)\}.$$ Let $\cal{M}$ be a maximal element in $\Omega$. Now, let $\cal{I}$ be a minimal left ideal of $A^{**}$. If $\cal{I} \cap (\cal{M} +
\cal{J}) = (0)$, then $(\cal{I} + \cal{M})\cap \cal{J} = (0)$ and so $\cal{I} + \cal{M} \in \Omega$, i.e, $\cal{M} + \cal{I} =
\cal{M}$. It follows that $\cal{I} \subseteq \cal{M} + \cal{J}$. If $\cal{I}\cap (\cal{M} + \cal{J}) \neq (0)$, then $\cal{I}\cap(\cal{M} + \cal{J}) = \cal{I}$ and so $\cal{I}
\subseteq \cal{M} + \cal{J}$. This shows that every minimal left ideal $\cal{I}$ must be contained in $\cal{M} \oplus \cal{J}$. Since $\hbox{Soc}(A^{**}) = A^{**}$, we have $\cal{M}\oplus
\cal{J} = A^{**}$. For some $J \in \cal{J}$ and $M \in \cal{M}$, we can write $E = M + J$. It follows that $\cal{J}^{2} = \cal{J}$, and in particular $\cal{J}^{2} \neq (0)$. This shows that $A^{**}$ is semiprime. By Theorem 5 of [@13], $A^{**}$ is finite dimensional and so $A$ is finite dimensional.
Let $G$ be a locally compact group. Then $G$ is finite if and only if ${\rm Soc}(L^{1}(G)^{**}) = L^{1}(G)^{**}$.
Since $L^{1}(G)$ has a bounded approximate identity, by Theorem 1, $L^{1}(G)^{**}$ is finite dimensional. Therefore $G$ is finite. The converse is clear.
Let $A$ be a Banach algebra and let $\hbox{Comp}(A)$ be the compactum of $A$, that is, the set of all $x$ in $A$ such that the mapping $a\rightarrow x ax$ is a compact operator of $A$ into itself. Al-Moajil [@1] gives some characterizations of finite dimensionality of a semisimple Banach algebra in terms of its compactum and Socle.
For a locally compact abelian group $G$, Filali [@8] has shown that $G$ is compact if and only if $M(G)$ has minimal ideals. In the following Theorem we set a condition on $\hbox{Soc}(L^{1}(G))$ and prove that $G$ is compact.
Let $G$ be a locally compact group. Then $G$ is compact if any of the following conditions hold[:]{}
1. There exists a measure $\mu$ in ${\rm Soc}(L^{1}(G))$ such that $\mu (G) \neq 0$[;]{}
2. There exists a measure $\mu$ in ${\rm Soc}(M(G))$ such that $\mu (G) \neq 0$[;]{}
3. $G$ is an abelian group and ${\rm Soc}(L^{1}(G))\neq 0$.
Let (1) hold. We assume to the contrary that $G$ is not compact. Let $\Omega$ denote the set of all compact subsets of $G$ and we make $\Omega$ directed by $K_{1} \leq K_{2}$ if and only if $K_{1}\subseteq K_{2}$ for every $K_{1}$ and $K_{2}$ in $\Omega$. For every $K \in \Omega$, we can choose $g_{K} \notin K$. Without loss of generality, we may assume that $\delta_{g_{K}} \rightarrow m$ $(m
\in \hbox{LUC}(G)^{*})$ in the $\sigma (\hbox{LUC}(G)^{*}, \
\hbox{LUC}(G))$-topology. It is easy to see that $\langle m, \psi
\rangle = 0$ for every $\psi \in C_{0}(G)$. Also, we have $\langle
\mu m \mu, \psi \rangle = 0$, for every $\psi \in C_{0}(G)$ and $\mu \in L^{1}(G)$. (Indeed, the formulas which define the first Arens product in $L^{1}(G)^{**}$ can be used to define a Banach algebra structure on $\hbox{LUC}(G)^{*}$.)
Choose $\mu \in\hbox{Soc}(L^{1}(G))$ with $\mu(G)\neq 0$. For a bounded approximate identity $(e_{\alpha})$ in $L^{1}(G)$ with norm 1 and $g\in G$, we have $$\mu * e_{\alpha} * \delta_{g} * \mu \in \{\mu * \nu * \mu ; \nu
\in L^{1}(G), \| \nu \| \leq 1\}.$$ Therefore $$\mu * \delta_{g} * \mu \in cl \{\mu * \nu * \mu ; \nu \in
L^{1}(G), \| \nu \| \leq 1\},$$ where the closure is taken in the norm topology. But by Proposition 3 of [@1], the set $\{\mu * \nu * \mu ; \nu \in L^{1}(G), \| \nu \| \leq 1\}$ is relatively compact. Hence, without loss of generality, we may assume that $\mu * \delta_{g_{K}} * \mu \rightarrow \eta$ in the norm topology. On the other hand, $\mu * \delta_{g_{K}} * \mu \rightarrow \mu m \mu$ in the $\sigma (M(G), C_{0}(G))$-topology, and $\langle \mu m \mu, \psi \rangle = 0$ for every $\psi \in C_{0}(G)$. It follows that $\eta = 0$. But norm convergence gives $\eta (G) = \mu(G)^{2}\neq 0$. This contradicts the fact that $\eta = 0$. Hence $G$ is compact.
Now, let (2) hold. By Corollary 1, $\hbox{Soc}(L^{1}(G)) =
\hbox{Soc}(M(G))$. By (1), $G$ is compact. Let (3) hold. By [@8], $G$ is compact.
For a locally compact group $G$, $\hbox{Comp}(L^{1}(G))\subseteq
\hbox{Comp} (M(G))$. Indeed, since $L^{1}(G)$ has a bounded approximate identity with norm 1, for any $\mu \in L^{1}(G)$ we have $$\{\mu * \nu * \mu; \nu \in M(G), \| \nu\| \leq 1\} \subseteq
cl\{\mu
* \eta
* \mu; \eta \in L^{1}(G), \| \eta \| \leq 1\}.$$ If $G$ is a compact group, then for any $\mu \in L^{1}(G)$ both mappings $\rho_{\mu}$ and $\lambda_{\mu}$ from $L^{1}(G)$ into $L^{1}(G)$ are compact, where $\rho_{\mu}(\nu) = \nu * \mu$ and $\lambda_{\mu}(\nu) = \mu * \nu$ for $\nu \in L^{1}(G)$. It follows that $L^{1}(G) = \hbox{Comp}(L^{1}(G))\subseteq \
\hbox{Comp}(M(G))$ and so $\hbox{Soc}(M(G)) \neq 0$ (Proposition 3 of [@1]). By Corollary 1, $\hbox{Soc}(L^{1}(G))\neq 0$.
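The displayed inclusion can be checked in one line using the bounded approximate identity: for $\nu \in M(G)$ with $\|\nu\| \leq 1$ one has $e_{\alpha} * \nu \in L^{1}(G)$ (as $L^{1}(G)$ is an ideal in $M(G)$) with $\|e_{\alpha} * \nu\| \leq 1$, and

```latex
\left\| \mu * e_{\alpha} * \nu * \mu - \mu * \nu * \mu \right\|
\;\leq\; \left\| \mu * e_{\alpha} - \mu \right\| \, \|\nu\| \, \|\mu\|
\;\longrightarrow\; 0,
\qquad\text{so}\qquad
\mu * \nu * \mu = \lim_{\alpha}\, \mu * (e_{\alpha} * \nu) * \mu .
```

Hence each $\mu * \nu * \mu$ lies in the norm closure of $\{\mu * \eta * \mu ;\ \eta \in L^{1}(G),\ \|\eta\| \leq 1\}$.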
Let $A$ be a semiprime Banach algebra with an identity. If ${\rm Soc}(A)$ is closed and every nonzero left ideal $I$ of $A$ contains a minimal left ideal, then $A$ is finite dimensional.
If $\hbox{Soc}(A) = A$, then $A$ is finite dimensional (Theorem 5 of [@13]). Otherwise we can find a sequence of pairwise orthogonal idempotents $(e_{n})$ such that $e_{n} \in \
\hbox{Soc}(A)$, since every nonzero left ideal $I$ of $A$ contains a minimal left ideal. By assumption $\hbox{Soc}(A)$ is closed, so $$a = \sum\limits_{n = 1}^{\infty} \frac{e_{n}}{2^{n}\| e_{n} \|}
\in \hbox{Soc} (A).$$ Therefore the sequence $(e_{n})$ is contained in $aAa$, and since it is an infinite, linearly independent set (by the orthogonality of its elements), $aAa$ is infinite dimensional. This contradicts the fact that $a\in \hbox{Soc}(A)$ (Lemma 2 of [@1]). Consequently $\hbox{Soc}(A) = A$, and so $A$ is finite dimensional.
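The containment $e_{n} \in aAa$ can be verified directly from the orthogonality $e_{m}e_{k} = \delta_{mk}\,e_{k}$ (the interchange of sum and multiplication is justified by norm convergence and the continuity of multiplication):

```latex
a\,e_{n}\,a \;=\; \sum_{m,k=1}^{\infty} \frac{e_{m}\,e_{n}\,e_{k}}{2^{m+k}\,\|e_{m}\|\,\|e_{k}\|}
\;=\; \frac{e_{n}}{4^{n}\,\|e_{n}\|^{2}},
\qquad\text{so}\qquad
e_{n} \;=\; 4^{n}\,\|e_{n}\|^{2}\; a\,e_{n}\,a \;\in\; aAa .
```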
Let $G$ be a locally compact group such that every nonzero ideal $I$ in $M(G)$ contains a minimal left ideal and let ${\rm Soc}
(M(G))$ be closed. Then $G$ is finite.
See Theorem 3.
[99]{} Al-Moajil A H, The compactum and finite dimensionality in Banach algebras, [*Int. J. Math. & Math. Sci.*]{} [**5**]{} (1982) 275–280
Baker J W and Filali M, On minimal ideals in some Banach algebras associated with a locally compact group, [*J. London Math. Soc.*]{} [**63**]{} (2001) 83–98
Bresar M and Semrl P, Finite rank elements in semisimple Banach algebras, [*Studia Math.*]{} [**128**]{} (1998) 287–298
Dales H G, Banach algebras and automatic continuity (New York, Oxford: Oxford University Press Inc.) (2000)
Dalla L, Giotopoulos S and Katseli N, The Socle and finite dimensionality of a semiprime Banach algebra, [*Studia Math.*]{} [**92**]{} (1989) 201–204
Filali M, Finite dimensional left ideals in some Banach algebras associated with a locally compact group, [*Proc. Am. Math. Soc.*]{} [**127**]{} (1999) 2325–2333
Filali M, Finite dimension right ideals in some Banach algebras associated with a locally compact group, [*Proc. Am. Math. Soc.*]{} [**127**]{} (1999) 1729–1734
Filali M, The ideal structure of some Banach algebras, [*Math. Proc. Camb. Philos. Soc.*]{} [**111**]{} (1992) 567–576
Ghaffari A, Convolution operators on semigroup algebras, [*Southeast Asian Bull. Math.*]{} [**27**]{} (2004) 1025–1036
Hewitt E and Ross K A, Abstract harmonic analysis (Heidelberg and New York: Springer-Verlag, Berlin) (1963) vol. 1
Hewitt E and Ross K A, Abstract harmonic analysis (Heidelberg and New York: Springer-Verlag, Berlin) (1970) vol. II
Takahasi S E, Finite dimensionality in socle of Banach algebras, [*Int. J. Math. & Math. Sci.*]{} [**7**]{} (1984) 519–522
Tullo A W, Conditions on Banach algebras which imply finite dimensionality, [*Proc. Edinburgh Math. Soc.*]{} [**20**]{} (1976) 1–5
---
abstract: 'We report a highly sensitive graphene-based strain sensor, consisting of an armchair graphene nanoribbon (AGNR) between metallic contacts, based on the variation of its electronic transport under coupled tension and shear deformation. As the nominal strain in any direction increases from 2.5 to 10%, the conductance decreases, particularly when the system deviates from the electrically neutral region. At finite bias voltage, both the raw conductance and the relative proportion of the conductance depend smoothly on the gate voltage with negligible fluctuations, in contrast to pristine graphene. Specifically, when the nominal strain is 10% and the angle varies from $0^{\circ}$ to $90^{\circ}$, the relative proportion of the conductance changes from 60 to $\sim$90%.'
address:
- '$^1$Department of Mechanical Engineering, Boston University, Boston, MA 02215'
- '$^2$ Microsoft Corporation 15700 NE 39th St Redmond, WA 98052'
- '$^3$Department of Physics, Renmin University of China, Beijing 100872, China'
author:
- 'Zenan Qi$^{1}$,Jian Zhang$^{2}$,Guiping Zhang$^{3}$ and Harold S. Park$^{1}$'
title: 'Coupling Tension and Shear for Highly Sensitive Graphene-Based Strain Sensors'
---
Graphene has been proposed for many applications due to its unique physical properties [@graphene1; @graphene2; @graphene3; @gas-detector], in which the electronic transport through graphene nanoribbons can be affected by line defects [@R1]. Of specific interest to the present work, it has been proposed as a strain sensor due to changes in the conductivity of graphene-based materials under strain [@strain-e1; @strain-e2; @strain-e3; @strain-e4; @strain-e5; @strain-simulation1; @strain-simulation2]. The field of graphene-based strain sensing has rapidly developed since the experimental observation of the increase in resistance of CVD graphene samples when strain is applied in the direction of the electrical current [@strain-e1]. In order to accurately determine the direction and magnitude of strain, a triaxial graphene-based strain sensor composite was proposed; it was found that the resistance of graphene may be enhanced or reduced by strain in certain directions [@strain-e2]. The sensitivity of graphene to strain originates from the deformation of carbon-carbon bonds, which alters the hopping integrals, and thus the electronic transport in graphene.
Though the effect of strain on the band structure of graphene and narrow graphene nanoribbons (GNRs) has been widely discussed [@strain-TB; @strain-band-structure1; @strain-band-structure2; @strain-band-structure3; @R2], strain sensing based on electronic transport through graphene and GNRs has recently become of wide interest [@strain-transport1; @strain-transport2; @strain-transport3; @strain-transport4; @strain-transport-GNR1; @strain-transport-GNR2; @strain-transport5]. In the present work, motivated by recent experimental findings of the potential benefits of coupled tension/shear deformation to simulate the strain generated by the movement of human fingers [@strain-e2], we theoretically study the effect of coupled tension and shear deformation on the transport through armchair graphene nanoribbons between metallic contacts.
(a) Schematic illustration of AGNRs, connected to two semi-infinite quantum wires. There are $N$ and $M$ carbon atoms in the $x$ and $y$ directions, respectively. $\theta$ is the angle between the direction of applied strain $S$ and the $x$-axis. (b) The tensile and (c) the shear component of the strain at $30^{\circ}$ when the nominal strain is $\epsilon_{ns}=10\%$. (Images: AGNR.eps, 30d_10_epsxx.eps, 30d_10_epsxy.eps.)
Most previous theoretical studies of graphene strain sensors have adopted homogeneous junctions. For homogeneous junctions no lattice mismatch occurs at the interfaces, but the conductance in unstrained GNRs is either zero or one at the Fermi energy $E=0$ [@conductance-GNRs]. Unlike most experiments, in which the gate voltage is only applied to graphene samples, the Fermi energy $E$ must be varied to investigate electronic transport through GNRs. For heterogeneous junctions of GNRs between quantum wire contacts, transport through GNRs is mediated by the gate voltage $V_{g}$, as in previous experimental measurements of the electrical properties of graphene samples [@graphene1; @graphene2; @graphene3]. These heterogeneous junctions are motivated by the fact that in experiments the contacts are metallic rather than carbon [@graphene1; @graphene2; @graphene3], and by the fact that the conductance of quantum wire contacts is maximal at $E=0$, where all channels are available for electronic transport.
However, lattice mismatch may exist at the interfaces of heterogeneous junctions. Here we adopt heterojunctions of armchair-edged GNRs (AGNRs) between quantum wire contacts, as discussed in Ref. [@tm2], to minimize the effect of lattice mismatch at the interfaces, and investigate the effect of uniaxial plus shear strain on electronic transport as illustrated in Fig. \[QW\_GRs\]. The strain is only applied to the AGNR and impacts the hopping integrals in the AGNR. The Hamiltonian of the AGNR and contacts is described in the tight-binding approximation as $$\label{eq:1}
\hat{H}=\sum_{\langle ij,i^{'}j^{'}\rangle}t_{ij,i^{'}j^{'}} \hat{c}^{\dag}_{ij}\hat{c}_{i^{'}j^{'}}
+V_{g}\sum_{ij}\hat{c}^{\dag}_{ij}\hat{c}_{ij},$$ where a pair of integers $ij$ indicates the lattice position $\vec{R}_{ij}$, and $\hat{c}_{ij}$ ($\hat{c}^{\dag}_{ij}$) is the electron annihilation (creation) operator. The summation is over the nearest neighbors indicated by $\langle \cdots\rangle$. $t_{ij,i^{'}j^{'}}$ is the hopping integral between nearest-neighboring sites indexed by $ij$ and $i^{'}j^{'}$. $V_{g}$ is the effective gate voltage applied to graphene, which is zero in contacts. When $V_{g}$ slightly varies at the interfaces, the conductance does not change much around $V_{g}=0$ [@note1].
The deformed configurations of the graphene nanoribbons were obtained by molecular mechanics simulations, where the strain was obtained via applied displacement loading [@strain-transport4; @strain-Qi02]. The rectangular AGNR consisted of 2832 atoms with a length of $L=10.224$ nm and width $W=7.018$ nm. Displacements were applied in increments of 0.01 Å, each followed by energy minimization and relaxation until the change in system energy was less than $10^{-7}$ relative to the previous step. The simulations were performed using the open source package LAMMPS [@lammps], and the AIREBO interatomic potential [@potential] with a cutoff of $0.68$ nm. This potential has been shown to accurately describe carbon-carbon interactions, resulting in accurate predictions of the mechanical properties of graphene [@MM-simulation]. We note that because molecular mechanics simulations were performed, which are intrinsically at 0 K, and because all applied displacements were in-plane, there was no out-of-plane buckling during the simulation.
As shown in Fig. \[QW\_GRs\] (b,c), coupled tension and shear were applied to the AGNRs and the corresponding strains were calculated as discussed in previous works [@strain-transport4]. Once the carbon atomic positions are obtained at each value of strain, the hopping along each bond (of length $l$), $V_{pp\pi}=t_{0}e^{-3.37(l/a-1)}$ [@strain-TB] ($t_{0}=2.7$ eV and $a=0.142$ nm), is used as the basis for the electronic structure and quantum transport calculations. Due to the 2D nature of our analysis, $\sigma$ bonds were not considered in our calculation.
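The strain-to-hopping mapping above is simple enough to sketch numerically. The following is a minimal illustration using the parameter values quoted in the text; the function name is ours:

```python
import numpy as np

# Strain-modified pi-orbital hopping along a carbon-carbon bond, in the
# exponential form quoted in the text: V_pppi = t0 * exp(-3.37*(l/a - 1)).
T0 = 2.7      # eV, unstrained hopping integral
A0 = 0.142    # nm, equilibrium bond length

def hopping(bond_length_nm):
    """Return the hopping integral (eV) for a bond of the given length."""
    return T0 * np.exp(-3.37 * (bond_length_nm / A0 - 1.0))

# An unstrained bond recovers t0; a 10% stretched bond gives a weaker hopping.
t_unstrained = hopping(A0)
t_stretched = hopping(1.10 * A0)
```

Applied bond by bond to the relaxed atomic positions, this yields the off-diagonal entries of the tight-binding Hamiltonian in Eq. (\ref{eq:1}).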
Strain was applied with a tilted angle $\theta$ (Fig. \[QW\_GRs\]) from $0^{\circ}$ to $90^{\circ}$ at five different angles, i.e. $0^{\circ}$, $30^{\circ}$, $45^{\circ}$, $60^{\circ}$ and $90^{\circ}$, where $0^{\circ}$ loading represents pure tension, $90^{\circ}$ represents pure shear, and the other three combine tension and shear. To simplify the notation, we also introduce a ‘nominal strain’ $\epsilon_{ns}$, defined as the applied displacement over the original nanoribbon length regardless of the loading angle. All cases with different loading angles are deformed at three stages, namely $\epsilon_{ns} = 2.5\%, 5\%, 10\%$, and we will refer exclusively to $\epsilon_{ns}$ in the following. The tension ($\epsilon_{xx}$) and shear ($\epsilon_{xy}$) strain components at the three stages for the different loading angles are as follows: $\epsilon_{xx} = 2.5\%, 5\%, 10\%$ and $\epsilon_{xy} = 0\%, 0\%, 0\%$ for $0^{\circ}$; $\epsilon_{xx} = 2.1\%, 4.1\%, 8.7\%$ and $\epsilon_{xy} = 0.5\%, 0.9\%, 1.8\%$ for $30^{\circ}$; $\epsilon_{xx} = 1.6\%, 3.3\%, 7.8\%$ and $\epsilon_{xy} = 0.7\%, 1.4\%, 2.6\%$ for $45^{\circ}$; $\epsilon_{xx} = 1.1\%, 2.4\%, 7\%$ and $\epsilon_{xy} = 0.9\%, 1.8\%, 3.1\%$ for $60^{\circ}$; $\epsilon_{xx} = 0\%, 0\%, 0\%$ and $\epsilon_{xy} = 1.1\%, 2.2\%, 3.7\%$ for $90^{\circ}$. The deformed atomic configuration and the resulting components of the tensile and shear strain at $30^{\circ}$ when the nominal strain is $\epsilon_{ns}=10\%$ are shown in Fig. \[QW\_GRs\](b) and (c) [@graphene-MM-BU]. The left contact is fixed while the right one is shifted with strain, similar to Ref. [@strain-e4], when the shear strain is present, and the hopping integrals in contacts and interfaces are not affected by the shear/tension.
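For reference, the measured strain components listed above can be transcribed into a lookup table, keyed by loading angle (degrees) and nominal strain (percent); this is purely a transcription of the values quoted in the text:

```python
# (eps_xx, eps_xy) in percent, from the molecular mechanics simulations,
# for each (loading angle in degrees, nominal strain in percent).
STRAIN_COMPONENTS = {
    (0, 2.5): (2.5, 0.0),  (0, 5): (5.0, 0.0),   (0, 10): (10.0, 0.0),
    (30, 2.5): (2.1, 0.5), (30, 5): (4.1, 0.9),  (30, 10): (8.7, 1.8),
    (45, 2.5): (1.6, 0.7), (45, 5): (3.3, 1.4),  (45, 10): (7.8, 2.6),
    (60, 2.5): (1.1, 0.9), (60, 5): (2.4, 1.8),  (60, 10): (7.0, 3.1),
    (90, 2.5): (0.0, 1.1), (90, 5): (0.0, 2.2),  (90, 10): (0.0, 3.7),
}

# The tensile component decreases monotonically with the loading angle
# at fixed nominal strain, while the shear component increases.
eps_xx_at_10 = [STRAIN_COMPONENTS[(theta, 10)][0] for theta in (0, 30, 45, 60, 90)]
```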
(a) The current through the AGNR as a function of the bias voltage $V_{b}$ at $V_{g}=0$ and (b) at $V_{g}=t_{0}$. (c) The conductance of the AGNR versus $V_{b}$ at $V_{g}=0$ (lower lines) and $t_{0}$ (upper symbols), and (d) versus the gate voltage in the AGNR at $V_{b}=0$ and $V_{b}=0.2t_{0}$. No strain is applied (black line) and the nominal strain $\epsilon_{ns}=10\%$ is applied at $0^{\circ}$ (red line), $30^{\circ}$ (green line), $60^{\circ}$ (blue line) and $90^{\circ}$ (magenta line), respectively. The size of the AGNRs is $M=29$ and $N=96$. $V_{b}$ and $V_{g}$ are in units of $t_{0}$.](GNR-IV.eps)
At a finite bias voltage $V_{b}$, the current transferred from the left contact to the right one is expressed as $I(S,V_{b},V_{g})=2e/h\int_{E_{f}-V_{b}/2}^{E_{f}+V_{b}/2}T(S,E,V_{g})dE$ [@Landauer], where $S$ refers to the strain and $S=0$ stands for no deformation in the GNR, $e$ is the electron charge and $h$ is Planck’s constant. $T(S,E,V_{g})$ is the transmission at the strain $S$, the energy $E$ and the gate voltage $V_{g}$. Since the effect of the gate voltage on the strain sensor has been explored in a dual-gate setup [@strain-e4], we include $V_{g}$ to estimate the stability and applicability of the graphene-based strain sensor. Based on the tight-binding Hamiltonian in Eq. (1) and the transfer matrix method, the transmission $T(S,E,V_{g})$ is obtained through the scattering matrix by solving the Schrödinger equations, as described in detail in Refs. [@tm3; @tm4].
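As a sketch, the bias-window integral above can be evaluated numerically for any given transmission function. Here `transmission` is a stand-in for the computed $T(S,E,V_{g})$, and the current is expressed in units of $2e/h$:

```python
import numpy as np

def landauer_current(transmission, e_fermi, v_bias, n_grid=2001):
    """Trapezoidal estimate of I = int_{Ef-Vb/2}^{Ef+Vb/2} T(E) dE,
    in units of 2e/h. `transmission` maps an energy array to T(E)."""
    energies = np.linspace(e_fermi - v_bias / 2, e_fermi + v_bias / 2, n_grid)
    t = transmission(energies)
    return float(np.sum(0.5 * (t[:-1] + t[1:]) * np.diff(energies)))

# Sanity check: for a constant transmission T(E) = T0 the current is T0 * Vb,
# so the conductance G = I / Vb reduces to T0 (times 2e^2/h).
i_const = landauer_current(lambda e: np.full_like(e, 3.0), e_fermi=0.0, v_bias=0.2)
```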
$I(S,V_{b},V_{g})$ at $V_{g}=0$ at first increases slowly and then sharply with $V_{b}$, as shown in Fig. \[G-Vb\] (a), since the transmission increases as the GNR deviates from being electrically neutral, where the density of states is lowest [@strain-transport5]. $I(S,V_{b},V_{g})$ at $V_{g}=t_0$ at first increases linearly and then sub-linearly with $V_{b}$, as shown in Fig. \[G-Vb\] (b), since the transmission decreases as the GNR deviates from the highest density of states [@strain-transport5]. The conductance is defined as $G(S,V_{b},V_{g})=I(S,V_{b},V_{g})/V_{b}$, and is directly related to the transmission at the Fermi energy, $T(S,E_{f},V_{g})$, in the limit $V_{b}\rightarrow 0$ [@Landauer]. The conductance $G(S,V_{b},V_{g})$ slightly increases and decreases at $V_{g}=0$ and $V_{g}=t_0$ respectively when $V_{b}$ increases, as shown in Fig. \[G-Vb\](c). When $V_g$ changes, a large oscillation of the conductance at $V_{b}=0$ is induced by quantum interference when electrons are reflected at the GNR-contact interfaces [@strain-transport5; @tm2; @tm1; @tm3; @tm4], and the electron-hole asymmetry in the conductance originates from odd-numbered rings at the interfaces, as shown in Fig. \[QW\_GRs\] [@tm2]. The curve of the conductance versus the gate voltage at $V_{b}=0.2t_{0}$ overlaps the one at $V_{b}=0$ and the fluctuations become invisible, as shown in Fig. \[G-Vb\](d), due to the summation of the transmission over the energy range $[E_{f}-V_{b}/2, E_{f}+V_{b}/2]$.
In strained AGNRs, the transport depends on both the direction and magnitude of the strain $S$. When the nominal strain is $10\%$, the current at both zero and finite $V_{g}$ is lower than in the undeformed AGNR, and gradually increases as the angle of the applied strain varies from $0^{\circ}$ to $90^{\circ}$, as shown in Fig. \[G-Vb\](a,b). Under pure tension at $0^{\circ}$, the change in the current reproduces the observation of Ref. [@strain-e1]. The slope of the $I$-$V_{b}$ curve (i.e., the conductance) for nearly electrically neutral graphene samples, which corresponds to our result at $V_{g}=0$ and which depends on the specific experimental setup, is $1.5\times 10^{-4}\Omega^{-1}$ [@strain-e1] and $2\times 10^{-5}\Omega^{-1}$ [@strain-e2]. Our results show that the ballistic conductance of the 7.018 nm wide AGNR is around $3\times 10^{-5}\Omega^{-1}$ and $1\times 10^{-3}\Omega^{-1}$ at $V_{g}=0$ and $V_{g}=t_{0}$ respectively. The conductance dependence on the strain shown in Fig. \[G-Vb\](c) is the same as the current dependence on the strain.
We compare the conductance dependence on pure tension and pure shear at $90^{\circ}$ to estimate the effect of tension and shear on transport through the AGNR. On one hand, in contrast to the conductance at $-t_{0}<V_{g}<0.5t_{0}$ under pure tension at $90^{\circ}$, which is higher than in undeformed AGNRs as a result of an increase in the hopping integrals along the horizontal direction [@strain-transport5], the conductance under pure shear at $90^{\circ}$ is always lower than that in undeformed AGNRs for $|V_{g}|\ge 0.5t_{0}$, except for some fluctuations, as shown in Fig. \[G-Vb\]. Our calculation is consistent with the observed dependence of the conductance on the shear strain [@strain-e4]. On the other hand, the conductance around the neutral point (i.e., at small $|V_{g}|$) increases with the angle of tension/shear, as shown in Fig. \[G-Vb\](c), compared with that under pure tension at $0^{\circ}$. The maximal conductance of the undeformed AGNR, $MG_{0}$ ($G_{0}=2e^{2}/h$), occurs at $E=\pm t_{0}$ and $V_{g}=0$ between graphite contacts, and at $E=0$ and $V_{g}=t_{0}$ between quantum wire contacts. The maximal conductance of the strained AGNR around $V_{g}=t_0$ decreases as a result of the deformation when tension and/or shear is applied. The data indicate that electronic transport through the AGNR can be easily mediated by the strain when the system deviates from the electrically neutral region.
 The ratio of the conductance in AGNRs before and after application of the strain with the nominal strain being 10% (a) and the nominal strain is 2.5%, 5% and 10% at an angle of $0^{\circ}$ (b), $45^{\circ}$ (c) and $90^{\circ}$ (d) between $x$-axis. The size of AGNR is $M=29$ and $N=96$. The bias voltage is $V_{b}=0.2t_{0}$ (main panels) and $V_{b}=0$ (insets). ](AGNR-strain-ratio-Vb-merged.eps)
In most experiments on graphene-based strain sensors [@strain-e1; @strain-e2], no gate voltage is applied and graphene may not be electrically neutral due to doping from the metallic contacts [@strain-e4; @dop-contact]. Recently the effect of the gate voltage has been explored [@strain-e4], and we therefore treat the gate voltage as a variable parameter to provide information such as the stability of the strain sensor under different gate voltages. The change of current is usually measured under strain, and the percentage change of the resistance is used to estimate the effect of the strain on transport through graphene samples [@strain-e1; @strain-e2]. Therefore, we use the ratio of the conductance to that in the undeformed AGNR, $G(S,V_{b},V_{g})/G(0,V_{b},V_{g})$, to measure the sensitivity of the graphene-based strain sensor, as shown in Fig. \[AGNR-strain-Vb0\].
Due to large oscillations in the conductance at $V_{b}=0$, a large oscillation is also seen in the ratio $G(S,V_{b},V_{g})/G(0,V_{b},V_{g})$ as shown in the insets of Fig. \[AGNR-strain-Vb0\]. However, the trends of $G(S,V_{b},V_{g})/G(0,V_{b},V_{g})$ are still clear. The conductance ratio at $V_{b}=0.2t_{0}$ shown in the main panels of Fig. \[AGNR-strain-Vb0\] is relatively smooth at negative gate voltage, shows a large dip or peak around zero gate voltage and slightly decreases as the gate voltage becomes more positive. When the nominal strain is 10% in Fig. \[AGNR-strain-Vb0\](a), the conductance slightly increases but is smaller than in the undeformed case, as the angle $\theta$ varies from 0$^{\circ}$ to 90$^{\circ}$. As $\theta$ is varied from 0$^{\circ}$, 45$^{\circ}$ and 90$^{\circ}$ in Figs. \[AGNR-strain-Vb0\](b-d), the conductance decreases as the nominal strain increases from 2.5% to 10%. It is found that the conductance shows little change under pure shear at 90$^{\circ}$ as shown in Fig. \[AGNR-strain-Vb0\](d). We demonstrate that this kind of strain sensor is robust since the relative proportion of the conductance is smooth within a wide gate voltage range [@graphene1; @graphene2].
In summary, we have studied a graphene-based strain sensor consisting of an armchair graphene nanoribbon (AGNR) between metallic contacts in response to combined tension/shear. The conductance and the relative proportion of the conductance decrease as the strain increases. This kind of strain sensor has relatively high sensitivity to the magnitude of the strain at finite bias voltage and over a wide range of gate voltage when the strain is parallel to the armchair edge.
Finally we comment on the performance of a strain sensor made from a zigzag graphene nanoribbon (ZGNR) between quantum wire contacts with a possible lattice mismatch at the interfaces. Compared with the case of the AGNR, the fluctuation of the conductance of the ZGNR is larger when the gate voltage changes. The ratio of the conductance, $G(S,V_{b},V_{g})/G(0,V_{b},V_{g})$, ranges between 0.8 and 1.4 for $|V_{g}|\le 2t_{0}$, and the dependence of the conductance ratio on the strain is different from that seen in Fig. \[AGNR-strain-Vb0\] when $|V_{g}|\le t_{0}$.
*Acknowledgements* HSP and ZQ acknowledge support from the Mechanical Engineering and Physics Departments at Boston University. G. P. Zhang acknowledges support from the NSF of China (Grant No. 11204372).
[99]{} Novoselov K S, Geim A K, Morozov S V, Jiang D, Zhang Y, Dubonos S V, Grigorieva I V and Firsov A A 2004 Science 306, 666
Novoselov K S, Geim A K, Morozov S V, Jiang D, Katsnelson M I, Grigorieva I V, Dubonos S V and Firsov A A 2005 Nature (London) 438, 197
Miao F, Wijeratne S, Zhang Y, Coskun U C, Bao W and Lau C N 2007 Science 317, 1530
Schedin F, Geim A K, Novoselov K S et al 2007 Nature Materials 6, 652-655
Dutta P, Maiti S K and Karmakar S N 2013 J. Appl. Phys. 114(3), 034306
Fu X W, Liao Z M, Zhou J X, Zhou Y B, Wu H C, Zhang R, Jing G Y, Xu J, Wu X S, Guo W L and Yu D P 2011 Appl. Phys. Lett. 99, 213107
Bae S H, Lee Y, Sharma B K, Lee H J, Kim J H and Ahn J H 2013 Carbon 51, 236
Chun S, Kim Y, Jin H, Choi E, Lee S B and Park W 2014 Carbon 78, 601-608
He X, Gao L, Tang N et al 2014 Appl. Phys. Lett. 105, 083108
Wang Y, Wang L, Yang T T et al 2014 Adv. Funct. Mat. 24, 4666-4670
Souma S, Ohmi Y and Ogawa M 2013 J. Comput. Electr. 12, 170-174
Moslemi Mohammad Reza, Sheikhi Mohammad Hossein, Saghafi Kamyar and Moravvej-Farshi Mohammad Kazem 2012 Microelectronics Reliability 52, 2579-2584
Pereira V M, Castro Neto A H and Peres N M R 2009 Phys. Rev. B 80, 045401
Li Y, Jiang X W, Liu Z F and Liu Z R 2010 Nano Res. 3, 545
Peng X, Tang F and Copple A 2012 J. Phys.: Condens. Matter 24, 075501
Lu Y and Guo J 2010 Nano Res. 3, 189
Sena S H R, Pereira Jr J M, Farias G A, Peeters F M and Costa Filho R N 2012 J. Phys.: Condens. Matter 24(37), 375301
Poetschke M, Rocha C G, Torres L E F Foa, Roche S and Cuniberti G 2010 Phys. Rev. B 81, 193404
Wang J Y, Liu Z F and Liu Z R 2012 AIP Advances 2, 012103
Rasuli R, Rafii-Tabar H and Zad A I 2010 Phys. Rev. B 81, 125409
Qi Zenan, Bahamon D A, Pereira Vitor M, Park Harold S and Campbell D K 2013 Nano Lett. 13, 2692-2697
Bahamon D A and Pereira V M 2013 Phys. Rev. B 88, 195416
Cosma Diana A, Mucha-Kruczyński Marcin, Schomerus Henning and Fal’ko Vladimir I 2014 Phys. Rev. B 90, 245409
Wang J, Zhang G P, Ye F and Wang X Q arXiv:1411.1529
Peres N M R, Castro Neto A H and Guinea F 2006 Phys. Rev. B 73, 195411
Zhang G P and Qin Z J 2011 Chem. Phys. Lett. 516, 225
Wang J, Zhang G P, Ye F and Wang X Q unpublished
Qi Z, Campbell D K and Park Harold S 2014 Phys. Rev. B 90, 245437
http://lammps.sandia.gov (2012); Plimpton S 1995 J. Comput. Phys. 117, 1
Stuart S J, Tutein A B and Harrison J A 2000 J. Chem. Phys. 112, 6472
Zhao H, Min K and Aluru N R 2009 Nano Lett. 9, 3012-3015; Wang M et al 2012 Comput. Mater. Sci. 54, 236-239
Qi Z, Kitt Alexander L, Park Harold S, Pereira Vitor M, Campbell David K and Castro Neto A H 2014 Phys. Rev. B 90, 125419
Los J H, Katsnelson M I, Yazyev O V, Zakharchenko K V and Fasolino A 2009 Phys. Rev. B 80, 121405
Büttiker M, Imry Y, Landauer R and Pinhas S 1985 Phys. Rev. B 31, 6207
Yin Y and Xiong S J 2003 Phys. Lett. A 317, 507
Hu S J, Du W, Zhang G P, Gao M, Lu Z Y and Wang X Q 2012 Chin. Phys. Lett. 29, 057201
Gao M, Zhang G P and Lu Z Y 2014 Comput. Phys. Commun. 185, 856
Giovannetti G, Khomyakov P A, Brocks G, Karpan V M, Brink J van den and Kelly P J 2008 Phys. Rev. Lett. 101, 026803
---
abstract: 'Two coupled semiconductor nanolasers exhibit a mode switching transition, theoretically characterized by limit cycle –or mode-beating– oscillations. Their decay rate is vanishingly small in the thermodynamic limit, i.e. when the spontaneous emission noise $\beta$-factor tends to zero. We provide experimental evidence of mesoscopic limit cycles –with $\sim 10^3$ intracavity photons– through photon statistics measurements. We first show that the order parameter quantifying the limit cycle amplitude can be reconstructed from the mode intensity statistics. As a main result we observe a maximum of the averaged amplitude at the mode switching, accounting for limit cycle oscillations. We finally relate this maximum to a dip of mode cross-correlations, reaching a minimum of $g_{ij}^{(2)}=2/3$, which we show to be a mesoscopic limit. Coupled nanolasers are thus an appealing testbed for the investigation of spontaneous breaking of time-translation symmetry in presence of strong quantum fluctuations.'
author:
- Mathias Marconi
- Fabrice Raineri
- Ariel Levenson
- 'Alejandro M. Yacomotti'
- Julien Javaloyes
- 'Si H. Pan'
- Abdelkrim El Amili
- Yeshaiahu Fainman
title: Mesoscopic limit cycles in coupled nanolasers
---
How do quantum fluctuations affect nonequilibrium periodic orbits? This question, intimately related to the spontaneous breaking of time translation symmetry, has strongly motivated a large community of physicists in the last few years. Although spontaneous time symmetry breaking is well known in classical nonlinear dynamical science [@Guckenheimer:1986aa], its realization in the quantum world has been a subject of debate. In a seminal paper [@PhysRevLett.109.160401], F. Wilczek pointed out the existence of quantum periodic motion in a time-invariant Hamiltonian, launching a new field of research known as time crystals: the time counterparts of spatial crystals, for which the continuous spatial translation symmetry is spontaneously broken. Since then many efforts have been devoted to understanding and implementing time crystals in different domains such as condensed matter and QED systems (see, e.g., Ref. [@Sacha_2017] for a review).

Recently, F. Iemini et al. have proposed a new class of dissipative time crystals called boundary time crystals (BTC’s) [@PhysRevLett.121.035301]. In contrast to Floquet time crystals –or $\pi$-spin glasses– [@PhysRevA.91.033617; @PhysRevLett.117.090402; @PhysRevB.96.115127] subjected to a periodic forcing, in BTC’s the Hamiltonian is time-independent, and it is the *continuous* time symmetry which is spontaneously broken in a small though macroscopic fraction of a many body quantum system. The prediction is that, in the thermodynamic limit, a periodic solution emerges whose decay rate tends to zero, i.e. the amplitude of the oscillations becomes constant in time. Such a divergent time scale is related to a closure of a Liouvillian gap in the thermodynamical limit, hence to a dissipative phase transition [@PhysRevA.98.042118]. In a BTC this decay rate is associated to the lowest eigenvalue with nonzero imaginary part [@PhysRevLett.121.035301]. Hence, the persistent oscillations are associated to the spontaneous symmetry breaking since they only take place in the thermodynamic limit. The model developed in Ref. [@PhysRevLett.121.035301] accounts for cooperative emission in cavities, which can be realized by cold atoms in a cavity subjected to Raman driving. In addition, a number of many body limit cycles could be classified as BTCs [@PhysRevLett.111.073603; @PhysRevLett.116.143603; @PhysRevLett.110.163605]. An important conclusion is that the existence of BTCs is to be experimentally tested, since limit cycles might not survive in the presence of fluctuations.
In this work we propose coupled nanolasers (examples of recent realizations can be found in Refs. [@Hamel:2015vn; @Deka:17; @PhysRevX.8.011013]) as testbeds for limit cycles subjected to strong quantum noise –in this case due to spontaneous emission– and provide experimental evidence on the existence of limit cycles with a thousand photons inside the cavities.
Lasers are fascinating workbenches to study non-equilibrium statistical mechanics. A paradigmatic example, realized since the early days of laser theory, is the second order phase transition at the oscillation threshold in the thermodynamic limit, i.e. for vanishingly small spontaneous emission $\beta$ factor [@PhysRevA.2.1170; @RevModPhys.47.67; @PhysRevA.50.4318]; the intracavity photon number scales as $\beta^{-1}$, which can be identified as the thermodynamic parameter [@PhysRevA.50.4318]. Here we explore limit cycle oscillations that emerge as mode beating when the two eigenmodes of the two coupled nanolasers operate simultaneously. Specifically, this occurs at a mode switching transition between the bonding and anti-bonding modes of a photonic crystal molecule [@Marconi:16].
We consider the photon statistics around a switching transition from the bonding ($B$) to the anti-bonding ($A$) modes of a nanolaser dimer formed by two evanescently coupled semiconductor nanocavities (coupling constant $K$, see Fig. \[Fig\_switching\]a) [@Marconi:16; @Hamel:2015vn] as the pump is increased. At the switching point, the high energy (blue-shifted, here $B$) mode switches off, and simultaneously the fundamental (red-shifted, here $A$) mode switches on [@Marconi:16].
Theoretically, lasers can be described by a quantum master equation using the density matrix approach [@scully_zubairy_1997; @takemura2019low]. Much simpler models have been used in the past: among them, the semiclassical laser theory –which neglects quantum fluctuations– has the status of a mean field model in statistical mechanics [@PhysRevA.50.4318], which is a rough approximation for semiconductor nanolasers. A more realistic description needs to incorporate the spontaneous emission fluctuations produced by the semiconductor emitters (such as quantum wells, QWs), which can be added to the semiclassical model in the form of Langevin noise terms. Two coupled nanolasers containing QWs can thus be modeled by the following nonlinear coupled stochastic differential equations [@Hamel:2015vn; @Marconi:16; @PhysRevX.8.011013]: $$\begin{aligned}
\dot{a}_{1,2} & = \left( \frac{1+i\alpha}{2} G_{1,2} -\kappa \right) a_{1,2}+ \left(\gamma+iK\right)a_{2,1} +F_{a_{1,2}}(t) \label{eq:aLR}\\
\dot{n}_{1,2} &= P -\gamma_{tot} n_{1,2}- G_{1,2}|a_{1,2}|^2 \label{eq:nLR}\end{aligned}$$ where $|a_{i,j}|^2=I_{i,j}$ and $n$ are normalized as the photon and carrier numbers in the cavities, respectively, $\kappa$ is the cavity loss rate, $\alpha$ the Henry factor, $P$ the pump rate and $\gamma_{tot}$ is the total carrier recombination rate. The complex inter-cavity coupling constant quantifies frequency ($K$) and loss ($\gamma$) splitting as a result of the evanescent coupling. $G_{1,2}$ = $\gamma_{\parallel}\beta (n_{1,2}-n_0)$ is the gain, $\gamma_{\parallel}$ is the two-level radiative recombination rate and $n_0$ the carrier number at transparency. $F_{a_i}(t)$ are Langevin noise terms accounting for spontaneous emission with rate $R_{sp}= \beta F_P B n_{1,2}^2/V_a$ where $B$ is the bimolecular radiative recombination rate, $F_P$ the Purcell factor and $V_a$ the volume of the active medium. We make the common assumption of uncorrelated (white) noise, i.e. $\langle F_\mu(t) F_\nu(t')\rangle=2D_{\mu \nu}\delta(t-t')$, where $D_{\mu \nu}$ are the following diffusion coefficients: $2D_{a_ia_i}=2D_{a_i^*a_i^*}=0$, $2D_{a_ia_i^*}=2D_{a_i^*a_i}=R_{sp}$, and zero otherwise. Importantly, the spontaneous emission factor $\beta$ is related to the intracavity saturation photon number as $I_{sat}=\gamma_{tot}/\gamma_{\parallel}\beta$; $\beta^{-1}$ can thus be identified as the thermodynamic parameter, since the characteristic photon number scales as $\beta^{-1}$.
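The stochastic model above can be integrated with a simple Euler-Maruyama scheme. The sketch below is only a minimal illustration: all parameter values are placeholders chosen for numerical convenience, not the fitted experimental ones, and the noise term is built directly from the diffusion coefficients quoted above.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of Eqs. [eq:aLR]-[eq:nLR].
# All numbers below are illustrative placeholders, not fitted values.
rng = np.random.default_rng(0)

kappa, alpha, K, gamma = 1.0, 3.0, 12.0, 0.0   # rates in units of 1/tau_photon
gamma_tot, gamma_par, beta_sp = 0.05, 1e-3, 1.7e-2
n0, P, R_sp = 1.0e3, 80.0, 0.5                 # transparency, pump, spont. emission
dt, n_steps = 1e-3, 20000

a = np.array([1.0 + 0.0j, 0.5 + 0.0j])        # intracavity fields a_1, a_2
n = np.array([1.2e3, 1.2e3])                  # carrier numbers n_1, n_2

for _ in range(n_steps):
    G = gamma_par * beta_sp * (n - n0)        # gain G_{1,2}
    # complex white noise F_{a_i}(t) with <F F*> = R_sp
    F = np.sqrt(R_sp * dt / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
    da = ((0.5 * (1 + 1j * alpha) * G - kappa) * a
          + (gamma + 1j * K) * a[::-1]) * dt + F
    dn = (P - gamma_tot * n - G * np.abs(a) ** 2) * dt
    a, n = a + da, n + dn

I1, I2 = np.abs(a) ** 2                       # photon numbers I_1, I_2
```

Averaging $I_{1}$ over many noise realizations can be used to build ensemble averages of the kind shown in Fig. \[Fig\_switching\]c-d.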
![Experimental time traces as the pump power is quasi-statically ramped up in time. Top panel: Intensity traces for bonding (blue) and antibonding (red) modes measured with 600 MHz-APD detectors. Pump ramp duration=6 ns. Thick lines: average corresponding to $10^4$ time traces. Middle panel: second order correlations (left axis) and the two lowest moments of the mode population imbalance $x$ (right axis). Blue: $g^{(2)}_{BB}$, red: $g^{(2)}_{AA}$ and green: $g^{(2)}_{BA}$. Black: mean value, and grey: variance of $x$. Solid lines show the results using the full statistics; dashed lines compute the moments from $g^{(2)}_{ij}$ (Eqs. S3-S5 of the Supplementary Material), thus neglecting correlations between $I$ and $x$. Bottom panel: two first moments of the order parameter $\mathcal{A}$. Blue: mean value, $\langle \mathcal{A}\rangle$; yellow: variance, $(\Delta \mathcal{A})^2$. []{data-label="Fig_g2"}](switch_ramp6ns_g2_A_separated-eps-converted-to.pdf)
The two linear eigenmodes of Eqs. \[eq:aLR\] are $a_B=(a_1+a_2)/\sqrt{2}$ and $a_{A}=(a_1-a_2)/\sqrt{2}$, corresponding to bonding and anti-bonding modes of the coupled cavities system, respectively (Fig. \[Fig\_switching\]a). As has been shown elsewhere [@PhysRevX.8.011013], the dynamics of this system can be separated into two subsets of variables: the total intensity and carrier number on one side, and the relative intensities and phases of the cavities on the other side, which can be recast on the Bloch sphere as $\theta=2\arctan\left(\sqrt{I_2/I_1} \right)\in\left[0,\pi\right]$ and $\Phi=\psi_{1}-\psi_{2}$, where $a_j=\sqrt{I_{j}}\exp\left(i\psi_{j}\right)$. Remarkably, the $x$-coordinate of the Bloch sphere is nothing but the mode population imbalance, $x=(I_B-I_{A})/(I_B+I_{A})$, where $I_B$ and $I_{A}$ are the intensities of the two eigenmodes.
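As a concrete illustration of this change of variables (the field values below are arbitrary, hypothetical numbers), the eigenmode intensities and Bloch coordinates follow directly from $a_{1,2}$:

```python
import numpy as np

# Hypothetical field values for cavities 1 and 2 (illustrative only).
a1, a2 = 3.0 * np.exp(0.4j), 2.0 * np.exp(-0.7j)

# Bonding / anti-bonding eigenmodes of the linear coupling.
a_B = (a1 + a2) / np.sqrt(2)
a_A = (a1 - a2) / np.sqrt(2)

I1, I2 = abs(a1) ** 2, abs(a2) ** 2
IB, IA = abs(a_B) ** 2, abs(a_A) ** 2

# Bloch-sphere coordinates of the relative cavity variables.
theta = 2 * np.arctan(np.sqrt(I2 / I1))   # polar angle in [0, pi]
Phi = np.angle(a1) - np.angle(a2)         # relative phase psi_1 - psi_2
x = (IB - IA) / (IB + IA)                 # mode population imbalance

# The unitary mode change conserves energy: I1 + I2 == IB + IA,
# and x coincides with the Bloch x-coordinate sin(theta) cos(Phi).
```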
Above laser threshold, the laser molecule operates in the mode with higher net gain, which in our case has been designed to be the $B$-mode. Experimentally, a switching transition is observed, where the $B$-mode switches off and the $A$-mode switches on as the pump power is ramped up [@Marconi:16]. Indeed, Eqs. \[eq:aLR\]-\[eq:nLR\] reproduce this mode switching dynamics. Interestingly, the mode switching transition is mediated by the emergence of a limit cycle in the thermodynamic limit, $\beta^{-1}\rightarrow \infty$ [@Marconi:16]: the $B$-mode loses stability, expelling a limit cycle at a first Hopf bifurcation ($x= 1$, Fig. \[Fig\_switching\]b); these oscillations account for mode beating. The limit cycle amplitude rapidly increases up to a perfect mode beating situation in which both modes have the same intensity (dual-frequency laser), and each cavity intensity experiences a $100\%$-contrast oscillation ($x= 0$, Fig. \[Fig\_switching\]b). Further increasing the pump parameter, the limit cycle shrinks and collapses at a second Hopf bifurcation, leading to a stable fixed point on the Bloch sphere corresponding to the $A$-mode ($x= -1$, Fig. \[Fig\_switching\]b). The limit cycle oscillations are long-lasting solutions of the mean field limit: the amplitude decay rate tends to zero. In the presence of noise, fluctuations increase the decay rate. We have quantified this effect through simulations of the Langevin equations with different $\beta$-factors accounting for different spontaneous emission rates. In Fig. \[Fig\_switching\]c-d we show the ensemble average of the cavity-1 intensity, $\langle I_1\rangle$, for $\beta=1.7\times10^{-5}$ (Fig. \[Fig\_switching\]c) and $\beta=1.7\times10^{-2}$ (Fig. \[Fig\_switching\]d). In these simulations the initial conditions correspond to a maximum of the photon number in cavity 1, i.e. $\Phi=0$ or $\pi$, and $\theta\leq \pi/2$.
Figure \[Fig\_switching\]c corresponds to a macroscopic laser cavity: the effect of noise on the limit cycle is small, and the amplitude does not decay in the whole time window used for the calculations. However, in Fig. \[Fig\_switching\]d we observe a drastic reduction of the amplitude decay time; still, the limit cycle undergoes thousands of oscillations during the damping time. It is important to point out that the period of oscillations is $T=\pi/K=0.26$ in units of the cavity photon lifetime, corresponding to a frequency of $f=545$ GHz in our example of strongly coupled cavities. Such a high frequency, combined with a low output photon number, rules out the possibility of the direct observation of the limit cycle. However, we will show below that the order parameter accounting for the limit cycle formation can be quantified through the mode intensity statistics.
By construction, the limit cycle amplitude on the Bloch sphere of Fig. \[Fig\_switching\]b is $$\mathcal{A} = \sqrt{1-x^2}
\label{eq:A}$$ which is the natural order parameter for the limit cycle. Equation \[eq:A\] simply states that the limit cycle vanishes for single mode operation, $x=\pm 1$, and reaches a maximum order of $\mathcal{A}=1$ for the "meridian" limit cycle ($\Phi=\pi/2$) in the thermodynamic limit. In Fig. \[Fig\_switching\]e-f we show the mean value and variance of the order parameter as a function of the pump. Clearly, $\langle \mathcal{A}\rangle $ reaches a maximum at $P/P_0\approx 6.02$ for $\beta=1.7\times10^{-5}$ (Fig. \[Fig\_switching\]e), with two fluctuation maxima at the bifurcation points. In the "nanolaser" case, $\beta=1.7\times 10^{-2}$, $\langle \mathcal{A}\rangle $ still presents a maximum at the mode switching point, but its value is smaller, $\langle \mathcal{A}\rangle \approx 0.8$, and the pumping range for nonzero $\langle \mathcal{A}\rangle $ is broadened with respect to the "macroscopic" case. This last point is important, since the nanolaser regime broadens the pumping interval over which the limit cycle exists. Note that the statistics of $\mathcal{A}$ can be obtained from the mode-intensity statistics. In the following we present our experimental results showing the increase of the limit cycle amplitude at the mode switching point, together with an increase of the amplitude fluctuations at both sides, which is the signature of a limit cycle bifurcation.
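The moments of $\mathcal{A}$ follow directly from samples of $x$. A minimal sketch, assuming for illustration a flat imbalance distribution (the ideal mesoscopic case at the switching point, discussed later in the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of mode-imbalance samples, drawn here from a flat
# distribution on [-1, 1]; this is an illustrative assumption, not data.
x = rng.uniform(-1.0, 1.0, size=200_000)

A = np.sqrt(1.0 - x ** 2)     # limit-cycle amplitude, Eq. [eq:A]
mean_A = A.mean()             # -> pi/4 ~ 0.785 for flat x
var_A = A.var()               # -> 2/3 - pi^2/16 ~ 0.050 for flat x
```

For the flat distribution the exact values are $\langle \mathcal{A}\rangle=\pi/4\approx 0.79$ and $(\Delta\mathcal{A})^2=2/3-\pi^2/16\approx 0.05$, which the Monte-Carlo estimate reproduces.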
![Joint histograms $P(x,I_{tot}; t)$ corresponding to the ramping-up intensity traces of Fig. \[Fig\_g2\]. Frames correspond to increasing pump powers at different times: from top-left to bottom-right corners, t=6712.7, 6713.1, 6713.5, 6713.9, 6714.3, 6714.7. It can be observed that nearly flat $x$-statistics occur at the switching point, i.e. t=6713.5-6713.9. $x$-distributions are exponentially decaying from $x=1$ to $x=-1$ for small pump powers (t=6712.7, top-left), and from $x=-1$ to $x=1$ for large pump powers (t=6714.7, bottom-right). \[Fig\_hist\]](switch_ramp6ns_histox_2-eps-converted-to.pdf)
Figure \[Fig\_g2\] (top panel) shows the experimental time traces of the two eigenmodes. Modes are detected in the far field, in such a way that their emission can be spatially separated. Mode intensities are then simultaneously measured using two fast (600 MHz-bandwidth) APD photodetectors as the pump power is ramped up (ramp duration = 6 ns). The time series have been used to reconstruct the statistics of the mode population imbalance (Fig. \[Fig\_g2\], middle panel, right axis). It can be observed that $\langle x \rangle$ has a step-like variation with a zero-crossing –that we refer to as the switching point, $P_s$– as the pump power is increased. The full statistics of $x$ can be used to compute the statistics of $\mathcal{A}$. In Fig. \[Fig\_g2\] (bottom panel) we show the mean value $\langle \mathcal{A}\rangle $ together with the variance $(\Delta \mathcal{A})^2$. We observe a maximum of $\langle \mathcal{A}\rangle \approx 0.83$ at the switching point, in good agreement with the predictions of the Langevin-semiclassical model (Fig. \[Fig\_switching\]f). In addition, there is a peak of $(\Delta \mathcal{A})^2$ at each side of the switching point, also in agreement with the model. Therefore, our measurements reveal the emergence of a limit cycle, even though the direct measurement of the time oscillations of one cavity intensity cannot be done due to both the extremely high oscillation frequency (of the order of hundreds of GHz) and the weak output signals (in the sub-$\mu$W range).
We point out that the intensity fluctuation dynamics becomes extremely slow at the switching point. As a result, the fluctuations could be accurately measured with our APD detectors. Indeed, the time-width of autocorrelation functions is typically $2-3$ ns at the switching point, while the timescales of the system are $\sim 10$ ps for the photons, and $200$ ps for the charge carriers in the QWs. Such a timescale stretching is seemingly due to the critical slowing down of the dynamics close to bifurcation points, which is also predicted by the model. It turns out that the proximity to the bifurcation point is translated into the real part of an eigenvalue approaching zero, which corresponds to a dramatic increase of the timescale. This can be theoretically confirmed in the limit of very strong coupling, where the dynamics can be reduced to a 1D Fokker-Planck equation. We have computed the spectrum of the Fokker-Planck operator, which shows a first excited eigenvalue that vanishes at the switching point (Sec. V, Supplementary Material). In this $K\gg 1$ limit the double bifurcation structure leading to the limit cycle (Fig. \[Fig\_switching\]b) degenerates to a single point $P=P_s$, and only the stochastic dynamics of $x$ is considered. We expect that the Fokker-Planck operator of the full Langevin-semiclassical model should possess, at each Hopf bifurcation, two eigenvalues with nonzero imaginary parts and real parts tending to zero as $\beta\rightarrow 0$. We relate these features to a gapless Liouvillian spectrum with nonzero imaginary part in a quantum master equation description, as has been predicted for a large class of limit cycles such as BTCs [@PhysRevLett.121.035301].
The strong fluctuations in the limit cycle amplitude come from the strong, non-gaussian fluctuations of the mode population imbalance $x$. In order to further investigate the nature of such fluctuations, we first point out that the semiclassical model predicts exponential equilibrium distributions for $x$ in the $K\gg 1$ limit [@PhysRevX.8.011013] (see Supplementary Material, Sec. V), namely $$\rho_{eq}(x;\Lambda)=\mathcal{N} e^{-\Lambda x},
\label{equilibrium}$$ where $\mathcal{N}=\Lambda/\left(2\sinh\Lambda\right)$; $\Lambda$ can be approximated as a linear function of the pump close enough to the switching point, $\Lambda \sim (P/P_s-1)I/\beta^2$ (see Sec. V, Supp. Material). The experimental statistical distributions of $x$ are shown in Fig. \[Fig\_hist\]. Flat distributions can be observed at the switching point, in agreement with the theoretical prediction in the mesoscopic regime (Eq. \[equilibrium\]), with $\Lambda<0$ for $P<P_s$, $\Lambda=0$ at $P=P_s$, and $\Lambda>0$ for $P>P_s$.
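The normalization and sign convention of Eq. \[equilibrium\] can be checked numerically; the sketch below uses a simple trapezoid rule (the grid resolution is arbitrary):

```python
import numpy as np

def rho_eq(x, Lam):
    """Equilibrium distribution of Eq. [equilibrium] on [-1, 1]; flat for Lam = 0."""
    if Lam == 0:
        return np.full_like(x, 0.5)
    return Lam / (2 * np.sinh(Lam)) * np.exp(-Lam * x)

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]

def integrate(f):
    # trapezoid rule on the uniform grid
    return float(np.sum((f[1:] + f[:-1]) / 2) * dx)

norm = integrate(rho_eq(x, 3.0))        # -> 1: the distribution is normalized
mean_x = integrate(x * rho_eq(x, 3.0))  # < 0: A-mode favoured, as for P > P_s
```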
Usually, the experimentally accessible quantity is the photon correlations rather than the order parameter. Nevertheless, both of them are related. First we note that the zero time-delay mode cross-correlations reach a limit of $g^{(2)}_{BA}(\tau=0)=2/3$ in the mesoscopic limit cycle regime. This $2/3$ limit can be easily deduced from the relation between the second order coherence and the two lowest order moments of $x$. Under the hypothesis of decorrelated total intensity $I=I_B+I_A$ and $x$-fluctuations, $\langle I x\rangle= \langle I \rangle \langle x\rangle$, it can be shown that $$\label{g+-}
g^{(2)}_{BA}=g^{(2)}_{II}\frac{1-\langle x^2\rangle}{1-\langle x\rangle^2},$$ where we have removed $\tau=0$ to simplify the notation. We will further assume that the total intensity fluctuations are Poissonian, hence $g^{(2)}_{II}=1$, in agreement with our measurements (Fig. \[Fig\_hist\]). The ideal case of a flat statistical distribution of $x$ leads to $\langle x \rangle=0$ and $\langle x^2 \rangle=1/3$; hence $g^{(2)}_{BA}=2/3$. This theoretical prediction is also in good agreement with our experimental results: in Fig. \[Fig\_g2\] (middle panel, left axis) we show $g^{(2)}_{AA}$, $g^{(2)}_{BB}$ and $g^{(2)}_{BA}$. Importantly, $g^{(2)}_{BA}\approx 0.7$ is less than unity at the switching point, revealing mode anti-correlations. The mode cross-correlation minimum, min$[g^{(2)}_{BA}]$, is strongly influenced by noise. In Fig. \[Fig\_vsbeta\] we display numerical simulations decreasing the system size from $\beta^{-1}=5.9 \times10^4$ to $5.9$, showing a clear crossover between the macroscopic and the mesoscopic regimes. Interestingly, the cross-correlation functions have a double dip structure in the macroscopic regime, whereas there is a single dip in the mesoscopic regime, in particular for $\beta=0.017$, corresponding to our experimental situation.
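The $2/3$ limit can be verified with a toy Monte-Carlo estimate, assuming (as in the text) a total intensity uncorrelated with a flat imbalance distribution; the narrow intensity distribution below is an illustrative stand-in for $g^{(2)}_{II}\approx 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500_000

# Illustrative assumptions: a nearly constant total intensity
# (so that g2_II ~ 1) drawn independently of a flat imbalance x.
I = rng.normal(1000.0, 5.0, size=N)   # total intensity I = I_B + I_A
x = rng.uniform(-1.0, 1.0, size=N)    # flat imbalance at P = P_s

IB = I * (1 + x) / 2
IA = I * (1 - x) / 2

# Normalized zero-delay cross-correlation of the two eigenmode intensities.
g2_BA = np.mean(IB * IA) / (np.mean(IB) * np.mean(IA))
# Eq. [g+-] with g2_II ~ 1, <x> = 0, <x^2> = 1/3 predicts g2_BA = 2/3.
```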
Our measurements using classical APD detectors can be compared to photon correlation measurements using single photon detectors, for which the time resolution is as short as $50$ ps for superconducting nanowire single-photon detectors (SNSPDs) in the telecommunication band. Cross-correlation measurements using SNSPDs under pulsed pumping also show a minimum of $g^{(2)}_{BA}\approx 0.7$ close to the switching point (Sec. II, Supplementary Material).
The cross-correlation is related to the limit cycle amplitude as $\langle \mathcal{A}^2\rangle\approx g^{(2)}_{BA}$ at the switching point (Eq. \[g+-\]). In addition, neglecting limit cycle amplitude fluctuations we easily get $$\langle \mathcal{A}\rangle \approx \sqrt{g^{(2)}_{BA}}
\label{eq:A_g2}$$ close to the switching point; both are shown in Fig. \[Fig\_vsbeta\] for comparison. We point out that $g^{(2)}_{BA}=1$ corresponds to the uncorrelated limit, hence the non-trivial statistical information is contained in the depth of the cross-correlation dip (green area in Fig. \[Fig\_vsbeta\]). Since $\min [g^{(2)}_{BA} ]$ approaches unity in the thermodynamic limit (see Fig. \[Fig\_vsbeta\], bottom-right panel), no significant statistical information can be extracted from these measurements in the macroscopic regime. In contrast, the cross-correlation function is a good statistical indicator in the mesoscopic regime, where $g_{AB}^{(2)}<1$. However, it is important to point out that a small modal cross-correlation ($g_{ij}^{(2)}<1$ with $i\neq j$) is not a sufficient condition for the presence of limit cycles. Indeed, mode anticorrelated fluctuations have already been reported in other photonic systems such as VCSELs or micropillar lasers, but no limit cycle dynamics has been reported in those examples. For instance, measurements of polarization switching dynamics in VCSELs have been performed in the past, showing strong mode anticorrelation [@PhysRevA.60.4105; @PhysRevA.68.033822; @PhysRevA.62.033810], with reported cross-correlation functions below $g_{ij}^{(2)}=1/2$ [@1397875]. More recently, $g_{ij}^{(2)}<1$ has been shown at the polarization switching of bimodal micro-pillar lasers [@PhysRevA.87.053819], which has been explained as a statistical mixture of a thermal and a coherent state [@PhysRevX.7.021045].
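For the ideal flat distribution of $x$, the quality of this approximation can be quantified directly: neglecting amplitude fluctuations overestimates $\langle \mathcal{A}\rangle$ by a few percent ($\sqrt{2/3}\approx 0.82$ versus the exact $\pi/4\approx 0.79$). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, size=400_000)  # flat imbalance at the switching point

A = np.sqrt(1.0 - x ** 2)                 # limit-cycle amplitude, Eq. [eq:A]
mean_A = A.mean()                         # exact value: pi/4 ~ 0.785

g2_BA = np.mean(1 - x ** 2) / (1 - np.mean(x) ** 2)  # Eq. [g+-] with g2_II = 1
approx = np.sqrt(g2_BA)                   # Eq. [eq:A_g2]: sqrt(2/3) ~ 0.816
# The few-percent gap approx - mean_A is the error from neglecting
# amplitude fluctuations in Eq. [eq:A_g2].
```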
In conclusion, we have shown that mesoscopic limit cycles emerge at the mode switching of a nanolaser dimer. Such limit cycles are mode beating oscillations occurring when the two eigenmodes operate simultaneously. This has been possible thanks to photon statistics measurements, which allowed us to compute the order parameter $\mathcal{A}$, the amplitude of the limit cycle oscillation. We have shown that a maximum of $\langle \mathcal{A}\rangle$ is observed at the mode switching point, together with two maxima of the order parameter fluctuations at each side of the mode transition, which are signatures of limit cycle bifurcations in the presence of noise, as predicted by a Langevin-semiclassical model. We conjecture that this scenario may support vanishing eigenvalues of the Liouvillian, with nonzero imaginary part, within a quantum master equation description, which has been recently shown to characterize a large family of many body limit cycles [@PhysRevLett.121.035301]. In addition, we have related the order parameter to photon correlation measurements, and shown that the mesoscopic limit cycle regime is associated with a $2/3$ limit of the mode cross-correlations. Therefore, a coupled nanolaser system proves useful as a testbed for the investigation of limit cycles subjected to strong quantum noise, and of the spontaneous breaking of time translation symmetry. We acknowledge enlightening discussions with A. Biella, Z. Denis and C. Ciuti. This work has been partially funded by the "Investissements d'Avenir" program (Labex NanoSaclay, Grant No. ANR-10-LABX-0035) and the ANR UNIQ DS078.
References (as recoverable from the original entries):

- M. O. Scully and M. S. Zubairy, *Quantum Optics* (Cambridge University Press, 1997), doi:10.1017/CBO9780511813993.013
- Phys. Rev. Lett. **109**, 160401, doi:10.1103/PhysRevLett.109.160401
- Rep. Prog. Phys., doi:10.1088/1361-6633/aa8b38
- Phys. Rev. Lett. **121**, 035301, doi:10.1103/PhysRevLett.121.035301
- Phys. Rev. A **91**, 033617, doi:10.1103/PhysRevA.91.033617
- Phys. Rev. Lett. **117**, 090402, doi:10.1103/PhysRevLett.117.090402
- Phys. Rev. B **96**, 115127, doi:10.1103/PhysRevB.96.115127
- Phys. Rev. A **98**, 042118, doi:10.1103/PhysRevA.98.042118
- Phys. Rev. Lett. **111**, 073603, doi:10.1103/PhysRevLett.111.073603
- Phys. Rev. Lett. **116**, 143603, doi:10.1103/PhysRevLett.116.143603
- Phys. Rev. Lett. **110**, 163605, doi:10.1103/PhysRevLett.110.163605
- Nat. Photonics, doi:10.1038/nphoton.2015.65
- Opt. Lett. **42**, 4760, doi:10.1364/OL.42.004760
- Phys. Rev. X **8**, 011013, doi:10.1103/PhysRevX.8.011013
- Phys. Rev. A **2**, 1170, doi:10.1103/PhysRevA.2.1170
- Rev. Mod. Phys. **47**, 67, doi:10.1103/RevModPhys.47.67
- Phys. Rev. A **50**, 4318, doi:10.1103/PhysRevA.50.4318
- Opt. Lett. **41**, 5628, doi:10.1364/OL.41.005628
- Phys. Rev. A **60**, 4105, doi:10.1103/PhysRevA.60.4105
- Phys. Rev. A **68**, 033822, doi:10.1103/PhysRevA.68.033822
- Phys. Rev. A **62**, 033810, doi:10.1103/PhysRevA.62.033810
- IEEE J. Quantum Electron., doi:10.1109/JQE.2004.842312
- Phys. Rev. A **87**, 053819, doi:10.1103/PhysRevA.87.053819
- Phys. Rev. X **7**, 021045, doi:10.1103/PhysRevX.7.021045
---
abstract: 'Recent progress in the quantitative analysis of Wolf-Rayet stars is reviewed, emphasising the role played by choice of spectral diagnostics, clumping and line blanketing on derived stellar properties. The ionizing properties of WR stars are discussed, based on clumped, line blanketed models for WN and WC stars. The role of metallicity and mass-loss is assessed, and the role of H[ii]{} regions as probes of predicted Lyman continuum distributions. Suggestions are made for differences in observed properties of WCE and WO subtypes.'
author:
- 'Paul A. Crowther'
title: Progress in Model Atmosphere Studies of WR stars
---
Introduction
============
It is only through understanding the physics of massive stars, their atmospheres, radiation, and evolution, that we will be able to make progress in many aspects of astrophysics. Particularly important is the quantitative study of young starbursts, which are dominated by the effects of O-type and Wolf-Rayet (WR) stars. WR stars comprise only 10% of the massive stellar content in the Galactic mini-starburst, NGC3603, yet they contribute 20% of the total ionizing flux and 60% of the total kinetic energy injected into the ISM (Crowther & Dessart 1998). In order that young starbursts can be properly studied, both nearby, and at high-redshift, it is crucial that the properties and evolution of O-type and WR stars spanning a range of initial metallicities are determined. In this review, I will consider recent theoretical progress in WR analyses that has been made towards this goal, focusing especially on spectroscopic and ionizing properties.
Quantitative spectroscopy of WR stars
=====================================
Quantitative analysis of W-R stars represents a formidable challenge, since their stellar winds are so dense that their photospheres are invisible, so that the usual assumptions of plane-parallel geometry and local thermodynamic equilibrium (LTE) are wholly inadequate. A minimum requirement is to consider non-LTE in an extended, expanding atmosphere for multi-level atoms. At present, three independent model atmosphere codes are capable of routinely analysing the spectra of WR stars, considering complex model atoms of hydrogen, helium, nitrogen, carbon and oxygen, developed by W.-R. Hamann (Potsdam), W. Schmutz (Zurich) and D.J. Hillier (Pittsburgh), the latter also implemented by P.A. Crowther (London) and F. Najarro (Madrid). Each code solves the radiative transfer problem in the co-moving frame, subject to statistical and radiative equilibrium, including the effects of electron scattering and clumping. Overall, consistency between results from these codes is very good. Schmutz and Hillier have also accounted for line blanketing by heavy elements.
Individual calculations are computationally demanding, so that a large parameter space cannot be readily explored. Consequently, computationally quick codes have been developed which solve the transfer problem in the Sobolev approximation and assume a wind temperature distribution (e.g. de Koter et al. 1997; Machado, these proc.). De Marco et al. (these proc.) compare the predictions of the code by de Koter with that of Hamann for WC stars.
Progress in the determination of stellar properties
===================================================
In the simplest case, the stellar properties of WR stars are derived from the following diagnostics: two spectral lines from adjacent ionization stages of helium (He[ii]{} $\lambda$5412 and He[i]{} $\lambda$5876 most commonly), plus the absolute magnitude in a standard filter (typically $M_{v}$) and the terminal wind velocity ($v_{\infty}$), often measured from UV resonance lines. The default number of model parameters available is therefore four: $R_{\ast}$, $T_{\ast}$, $\dot{M}$ and $v_{\infty}$. Stellar temperatures for extended atmospheres are related to the inner boundary of the model atmosphere (generally around Rosseland optical depth $\tau_{\rm Ross}$$\sim$20), which often deviates significantly from the ‘effective’ temperature, at $\tau_{\rm Ross}$=2/3 (Hamann 1994). Schmutz et al. (1989) identified the so-called transformed radius ($R_{t}$), a measure of wind density, which relates $R_{\ast}$, $\dot{M}$ and $v_{\infty}$ in such a way that almost identical spectra are produced for fixed $R_{t}$, reducing the number of free parameters. In this way, a large number of WR stars in the Galaxy and Magellanic Clouds have been analysed by Hamann and co-workers, comparing observed line equivalent widths to interpolations of large model grids (see Hamann, these proc.). However, actual WR stars are not pure helium stars, so the contribution of other elements must be taken into account.
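For reference, the transformed radius is usually written as $R_{t} = R_{\ast}\,[(v_{\infty}/2500\,{\rm km\,s^{-1}})/(\dot{M}/10^{-4}\,M_{\odot}\,{\rm yr}^{-1})]^{2/3}$; the sketch below evaluates it for illustrative (not fitted) stellar parameters:

```python
def transformed_radius(R_star, v_inf, mdot):
    """Transformed radius in the form of Schmutz et al. (1989).

    R_star in solar radii, v_inf in km/s, mdot in solar masses per year.
    Models sharing R_t (plus T_* and abundances) produce nearly
    identical normalized emission line spectra.
    """
    return R_star * ((v_inf / 2500.0) / (mdot / 1e-4)) ** (2.0 / 3.0)

# Illustrative WNE-like parameters, not a fit to any particular star:
Rt = transformed_radius(R_star=3.0, v_inf=1600.0, mdot=1e-5)
```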
The effect of including metals
------------------------------
It was soon established that the wind properties of WR stars were affected by the presence of metals, notably carbon and nitrogen (Hillier 1988; 1989). For WN stars, metals are trace elements ($\sim$1% in nitrogen by mass), so pure He analyses compare relatively well with those additionally including carbon and nitrogen, which control the outer wind properties. In late-type WN (WNL) stars hydrogen contributes significantly (up to $\sim$50% in extreme cases). Determination of atmospheric contents requires detailed analysis of individual stars through a comparison between theoretical line profiles (e.g. H$\alpha$ for hydrogen) and spectroscopic observations (e.g. Crowther et al. 1995a). In WC stars, it was soon realised that pure He studies were inadequate to obtain reliable stellar properties, since carbon mass fractions are $\sim$40–50% (Hillier 1989). WC analyses need to use He and C diagnostics simultaneously in order that the stellar and chemical properties may be determined. The degree of complexity in atomic data handled for carbon has a great influence on predicted line strengths (Hillier & Miller 1999).
This technique has a major disadvantage in that it is time consuming and computationally expensive. Nevertheless, derived stellar properties can be considered robust, while model deficiencies may be readily identified. Studies of individual stars have now been performed for Galactic, Magellanic Cloud and other Local Group WR stars using 4m class ground-based telescopes (e.g. Smith et al. 1995 in M33). Source confusion becomes problematic at large distances from ground-based observations (0.$''$8 corresponds to a scale of 3 parsec at the modest distance of M33).
Choice of spectral diagnostics
------------------------------
Although helium and carbon diagnostics are combined to derive the properties of WC stars, the majority of WN studies use solely helium. Early results for weak-lined, early-type WN (WNE) stars led to surprisingly low stellar temperatures (e.g. Hamann et al. 1995), which were comparable with WNL stars instead of strong-lined WNE stars. Crowther et al. (1995b) demonstrated that the stellar temperatures of weak-lined WNE stars are in line with strong-lined examples, if nitrogen diagnostics (e.g. N[iv]{} $\lambda$4058, N[v]{} $\lambda$4603–20) are used instead of helium. Results from helium are more straightforward, since the availability and quality of its atomic data is superior to nitrogen. However, He[i]{} lines are typically formed at large radii from the core, and are extremely weak in hot WR stars, with the exception of 10830Å that is observationally challenging. In contrast, N[iv-v]{} lines originate from the inner wind and are readily observed. Consequently, nitrogen ought to serve as a more sensitive diagnostic of the stellar temperature, and circumvent the problem identified by Moffat & Marchenko (1996). They noted that stellar radii derived from He analyses were [*greater*]{} than the orbital radii of some short period WR+O binaries. Discrepancies for WN stars are not restricted to He and nitrogen diagnostics (see e.g. Crowther & Dessart 1998).
Hamann & Koesterke (1998a) have recently re-analysed a large sample of Galactic WN stars and arrived at similar conclusions to Crowther et al. (1995b). In Fig. \[fig1\], the stellar temperatures and luminosities of WN stars obtained by alternative helium and nitrogen diagnostics are compared. Consistent results are obtained for WNL stars, while higher temperatures and luminosities are obtained from nitrogen diagnostics for WNE stars, especially weak-lined stars. Differences in derived temperatures may be large, increasing by a factor of up to two (from 36kK to 71kK for the WN5(h) star WR49). The change in luminosity is greater still – increasing by a factor of six in this star because of the sensitive dependence of bolometric correction (B.C.) with temperature. Parameters derived from helium or nitrogen lines should be fully consistent, so that discrepancies indicate that something is missing in current models. Perhaps clumps in the wind affect the ionization balance in the He[i]{} line forming region of WNE stars – this may also be relevant to the (poorly predicted) strength of He[i]{} P Cygni absorption components. [*Whatever the cause, care should be taken when comparing results obtained from different spectral diagnostics*]{}.
Detailed analyses of WC-type stars have also been carried out (e.g. Koesterke & Hamann 1995; Gräfener et al. 1998) using helium, carbon and occasionally oxygen diagnostics. However, the additional number of free parameters (C/He, O/He) has restricted the sample analysed to date, and oxygen diagnostics lie in the near-UV, requiring space based observations (Hillier & Miller 1999). Since the UV and optical spectra of WC stars are dominated by overlapping broad emission lines, it is difficult to assign suitable continuum windows. Analyses typically consider the continuum and line spectra in isolation, namely that interstellar extinctions are obtained by matching continua to de-reddened observations, while theoretical line profiles are compared to normalized spectra. A less error-prone approach is the comparison between de-reddened fluxed observations and synthetic spectra, accounting for line overlap. In this way, erroneously defined continuum windows (e.g. at He[ii]{} $\lambda$5412, Hillier & Miller 1999), and unusual UV extinction laws, may be identified.
IR analyses
-----------
Studies discussed above rely exclusively on optical (or occasionally UV) spectral diagnostics. The first infra-red (IR) spectroscopy of WR stars was obtained by Williams (1982), although recent advances mean that this wavelength region can now be used to observe a large sample of WR stars, particularly those obscured at shorter wavelengths. Crowther & Smith (1996) have assessed the reliability of IR analyses of WR stars by studying two WNE stars for which UV and optical data sets were also available. They found that results from exclusively near-IR observations were in good agreement with optical studies, and with later Infra-red Space Observatory (ISO) spectroscopy for WR136 (R. Ignace, priv. comm.). Bohannan & Crowther (1999) have recently compared optical and IR analyses of Of and WNL stars.
Quantitative IR studies of WN-like stars at our Galactic Centre have recently been presented (e.g. Najarro et al. 1997a). Unfortunately, the majority of these stars are relatively cool, so that the sole K-band He[ii]{} diagnostic at 2.189$\mu$m is unavailable. Without a second ionization stage, a unique temperature may not be obtained, so that mass-loss rates and abundances are uncertain. Dessart et al. (these proc.) attempt to solve this by using the stronger He[ii]{} 3.09$\mu$m line in the thermal IR as a temperature diagnostic. Another limitation with the K-band is that the prominent He[i]{} line at $\lambda$2.058$\mu$m is strongly affected by (metallicity dependent) line blanketing effects, as shown by Crowther et al. (1995a, 1998). Problems with the quantitative analysis of low temperature stars are neatly summarised by Hillier et al. (1999) for the Galactic early B-type supergiant HDE316285. They obtained a wide range of possible mass-loss rates and surface H/He abundances for this star, despite the availability of high quality optical and near-IR spectroscopy.
Relaxing the standard assumptions
=================================
Model calculations so far discussed use $R_{\ast}$, $T_{\ast}$, $\dot{M}$ and $v_{\infty}$, plus elemental abundances as free parameters. However, observational evidence suggests that routinely assumed quantities, such as the velocity law and wind homogeneity, may be inappropriate. In addition, it is well known that line blanketing by thousands of transitions in the ultraviolet (UV) and extreme ultraviolet (EUV) needs to be incorporated into calculations. Each relaxed assumption adds (at least) one new variable parameter to the existing set. Consequently, of the several hundred WR stars analysed quantitatively so far, only two have been studied with assorted elements, a variety of velocity laws, clumping and line blanketing all taken into account (Schmutz 1997; Hillier & Miller 1999).
Variations in velocity law
--------------------------
Generally, a radial velocity field of the form $v(r) = v_{\infty}(1-R/r)^{\beta}$ is assumed, with exponent $\beta$=1. Tailored analyses are required to test alternative laws. Unfortunately, different velocity forms are frequently able to reproduce the observed spectrum equally well (Hillier 1991a). In some cases, specific exponents produce optimum agreement, provided a suitably large range of spectroscopic observations is available. From a careful comparison of the optical and far-red appearance of the Luminous Blue Variable (LBV) P Cygni, Najarro et al. (1997b) found that a $\beta$=4.5 law provided the best match. Including mid-IR ISO observations led to a revision to $\beta$=2.5 (Najarro et al. 1998). Unfortunately, a long wavelength observational baseline is rarely available. Schmutz (1997) went a stage further by [*deriving*]{} the form of the velocity law in WR6 from hydrodynamics, at least in the outer visible part of the wind, obtaining $\beta$=3. As an indication of the reliability of this approach, the emission profile of He[i]{} $\lambda$10830 was reproduced better than in previous studies.
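The behaviour of the $\beta$-law just described can be made concrete with a minimal sketch; the terminal velocity of 1700 km/s below is an arbitrary illustrative value, not taken from any particular star:

```python
# Sketch of the standard beta-law wind velocity field,
#   v(r) = v_inf * (1 - R*/r)**beta.
# All numerical values are illustrative only.

def beta_law(r, r_star=1.0, v_inf=1700.0, beta=1.0):
    """Wind velocity (km/s) at radius r (in units of the stellar radius)."""
    return v_inf * (1.0 - r_star / r) ** beta

# A slowly accelerating beta=3 law (cf. Schmutz 1997 for WR6) reaches a
# given fraction of v_inf much farther out than the usual beta=1 law:
for beta in (1.0, 3.0):
    print(f"beta={beta}: v(2R*) = {beta_law(2.0, beta=beta):.1f} km/s")
```

At twice the stellar radius the $\beta$=1 wind has already reached half of $v_{\infty}$, while the $\beta$=3 wind is still at one eighth, which is why long-wavelength (large-radius) diagnostics are needed to discriminate between laws.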
Wind inhomogeneities and departures from spherical symmetry
-----------------------------------------------------------
WR winds are known to be inhomogeneous, from both observational and theoretical arguments (Willis, these proc.). However, homogeneity has been assumed by the majority of atmosphere studies to date. Consideration of electron scattering – causing a frequency redistribution of line photons – provides the key to spectral synthesis (Hillier 1984; 1991b). Homogeneous models often overestimate the strength of electron scattering wings relative to the overall emission line intensity. Since free-free emission and radiative recombination both scale as the square of the density, whereas the electron scattering opacity scales linearly with density, wind inhomogeneities may be estimated by varying volume filling factors and mass-loss rates until line profiles and electron scattering wings are simultaneously reproduced. In line transfer calculations performed to date, several simplifying assumptions are made, namely that models are composed of radial shells of clumped material, with no inter-clump medium. The variation of the clumping factor with radius is taken into consideration in some cases, since radiative instabilities are not expected to be important in the inner wind.
Schmutz (1997) and Hillier & Miller (1999) have estimated mass-loss rates of WR6 and WR111 which are a factor of 3–4 lower than those resulting from homogeneous models. Hamann & Koesterke (1998b) have applied an identical approach to a sample of four WR stars, with similar results. Substantially lower mass-loss rates for WR stars have importance for evolutionary model calculations and for reducing the momentum (alternatively, opacity) problem of driving WR winds (Gayley et al. 1995).
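The density-squared argument above can be turned into a back-of-envelope estimate. In the sketch below, the filling factor f = 0.1 is an illustrative assumption (not a fitted value); it shows why filling factors of order 0.1 translate into factor ~3 mass-loss reductions:

```python
import math

# Back-of-envelope clumping correction.  Line emissivity from
# recombination and free-free processes scales as <rho**2>, while
# electron scattering scales as <rho>.  For clumps filling a volume
# fraction f with no inter-clump medium, <rho**2> = rho_smooth**2 / f,
# so a clumped wind of mass-loss rate Mdot mimics a smooth wind of
# mass-loss rate Mdot / sqrt(f).

def clumped_mdot(mdot_smooth, f):
    """Mass-loss rate once a volume filling factor f is allowed for."""
    return mdot_smooth * math.sqrt(f)

# An (assumed) filling factor f ~ 0.1 lowers the derived Mdot by ~3:
reduction = 1.0 / math.sqrt(0.1)
print(f"f = 0.1 -> Mdot reduced by a factor of {reduction:.1f}")
```

This is only the scaling argument, not the full line-transfer analysis; in practice the filling factor is constrained by matching both line profiles and electron scattering wings simultaneously.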
To date, all spectroscopic studies have assumed spherical symmetry. Evidence from spectropolarimetry indicates that this is appropriate for $\sim$85% of cases (Harries et al. 1998). For the remaining stars, density ratios of 2–3 are implied from observations. Future calculations will need to consider departures from spherical symmetry. Indeed, the wind of the prototypical WNE star WR6 is grossly asymmetric.
Influence of line blanketing {#4.3}
----------------------------
Observations of the forest of iron lines in the UV spectra of WR stars demonstrate the large influence that line blanketing by Fe-group elements has on the emergent spectrum. The neglect of blanketing reveals itself through inconsistencies in model fits, and in results from comparison with ionized nebulae. The principal difficulty in accounting for line blanketing is handling the effect of tens of thousands of line transitions in the radiative transfer calculations. To date, only Schmutz and Hillier have made allowance for blanketing, albeit using different techniques, each with its own advantages and disadvantages. Monte Carlo sampling by Schmutz (1997) allows the opacity of a huge number of lines to be considered, although spectral synthesis of individual features in the UV is not possible; the reverse is true for Hillier & Miller (1998), who use a ‘super-level’ approach, treating the transfer problem correctly.
In Fig. \[fig2\] models for a WCE star are compared, in which an increasing number of elements is included: He, C, O and Fe. Carbon and oxygen have a considerable effect on the UV and optical energy distribution of the models, with Fe modifying the emergent UV flux distribution (Hillier & Miller 1998, 1999). What effect does allowing for clumping and line blanketing have on the resulting stellar properties? In Table \[table1\] the results of Schmutz (1997) and Hillier & Miller (1999) for WR6 (WN4b) and WR111 (WC5) are compared with earlier studies. Stellar temperatures and bolometric luminosities from the blanketed analyses are considerably greater than those from unblanketed models, with a significant EUV excess (and corresponding increase in B.C.), while mass-loss rates are significantly lower, as a result of considering clumped winds. For WNL stars, Crowther et al. (1998) and Herald et al. (these proc.) find that blanketing has a minor influence on stellar temperatures (though the EUV energy distribution is affected). This result is in apparent contradiction with the analysis of LMC WN9–11 stars by Pasquali et al. (1997) using [*grids*]{} of line blanketed models, which yielded considerably higher temperatures than earlier unblanketed tailored analyses (Crowther et al. 1995a; Crowther & Smith 1997). Subsequent [*tailored*]{} spectroscopic analyses including blanketing by Pasquali (priv. comm.) agree well with the parameters obtained by Crowther & Smith.
  $T_{\ast}$ (kK) | log $L/L_{\odot}$ | log $\dot{M}$ ($M_{\odot}$yr$^{-1}$) | B.C. (mag) | $\dot{M}v_{\infty}/(L/c)$ | Diagnostics | Ref.

  WR6 (WN4b):
  71 | 5.2 | $-$4.1      | $-$3.7 | 37 | He     | HKW95
  84 | 5.7 | $-$4.5 (cl) | $-$4.9 |  6 | He     | S97

  WR111 (WC5):
  35 | 4.6 | $-$4.6      | $-$3.2 | 90 | He     | SHW89
  62 | 5.0 | $-$4.3      | $-$4.0 | 50 | He+C   | KH95
  90 | 5.3 | $-$4.8 (cl) | $-$4.7 | 10 | He+C+O | HM99
Our ability to synthesise individual and groups of Fe lines in the spectrum of WR stars suggests that they can be used to derive Fe-group abundances. UV spectra of O stars (Haser et al. 1998) and optical spectra of A supergiants (McCarthy et al. 1995) have previously been used to determine Fe-contents in extra-galactic environments, though few detailed attempts have been made using WR stars (see Hillier & Miller 1999). As an indication of the potential for the future, Crowther et al. (1999) have recently used Hubble Space Telescope (HST) spectroscopy of the erupting LBV V1 in the giant H[ii]{} region NGC2363, within the Magellanic irregular galaxy NGC2366 (3.5Mpc) to determine its Fe-abundance.
What are the ionizing spectra of Wolf-Rayet stars?
==================================================
The ionizing flux distribution of WR stars has importance in the study of extra-galactic regions containing young massive stars (giant H[ii]{} regions, WR galaxies etc.). Recent results for O stars incorporating non-LTE and wind effects have resulted in improved agreement with observations of associated H[ii]{} regions (e.g., Stasińska & Schaerer 1997). It is equally important that suitable ionizing distributions for WR stars are used, since these affect determinations of IMFs and ages. Since the Lyman continuum distributions of WR stars are not directly observable (due to absorption by intervening hydrogen), indirect methods must be used to verify current models.
Nebulae as probes of the Lyman continuum flux distribution
----------------------------------------------------------
The principal work in this field was that of Esteban et al. (1993), who combined pure helium, unblanketed WR model fluxes (Schmutz et al. 1992) with observed properties of WR ring nebulae, to investigate the properties of the central stars. Esteban et al. varied stellar temperatures until agreement was reached between the observed and predicted nebular properties. Comparisons with (independent) stellar analyses of the central stars were found to be reasonable, except that lower temperatures were required by the photo-ionization models for WNL stars. Recently, Crowther et al. (1998) and Pasquali et al. (these proc.) have returned to this technique, newly considering the influence of line blanketing using the Hillier and Schmutz codes. They depart from Esteban et al. in that ionizing flux distributions [*obtained*]{} from a stellar analysis of the central star are used in the photo-ionization modelling.
Crowther et al. (1998) compared line blanketed and unblanketed flux distributions resulting from stellar analyses of the WN8 star WR124, with observations of its associated nebula, M1–67. They found that the blanketed model predicted the nebular temperature and ionization balance much better than the unblanketed case. Allowance for improved nebular properties of M1–67 from Grosdidier et al. (1998), particularly the radial density distribution, leads to even better agreement with observations. Pasquali et al. (these proc.) find good agreement between the predicted and observed nebular properties of NGC3199, using stellar flux distributions from analyses of its central WNE star WR18 with the Schmutz and Hillier codes. Unfortunately, the observed properties ($T_{e}$, $N_{e}$, $\Delta R$, abundances etc.) of most WR nebulae at present are insufficiently well determined to use as tests of stellar models.
The effect of blanketing on ionizing fluxes
-------------------------------------------
Overall, spectral synthesis and photo-ionization modelling results give us confidence in the validity of current line blanketed Wolf-Rayet codes. Since the only generally available WR models are the unblanketed, pure helium energy distributions of Schmutz et al. (1992), how do new results compare? The calculation of a large multi-parameter grid of line blanketed models is a formidable computational challenge. For the moment, I have obtained models for WR stars with the Hillier & Miller (1998) code, varying solely temperatures (30 to 150kK). WN models span WN4 to WN9 spectral types and include the effects of complex model atoms of H[i]{}, He[i-ii]{}, C[ii-iv]{}, N[ii-v]{}, Si[iii-iv]{} and Fe[iii-vii]{}. In Fig. \[fig3\], selected synthetic UV, optical and near-IR spectra are presented. Similar calculations for WC stars spanned WC4 to WC9 and included He[i-ii]{}, C[ii-iv]{}, O[ii-vi]{} and Fe[iii-vii]{} in detail. Their predicted Lyman continuum distributions are fairly soft in all cases, with negligible emission above the He$^{+}$ edge at 54eV.
In Fig. \[fig4\] the ionizing fluxes of these models in the H$^{0}$ and He$^{0}$ continua (in units of photons s$^{-1}$cm$^{-2}$) are compared with recent solar metallicity O-star models (Schaerer & de Koter 1997), plus the pure helium Schmutz et al. (1992) models. The line blanketed WN flux distributions support the pure helium Schmutz et al. (1992) predictions, although the additional blanketing from C and O in WC stars produces a softer ionizing spectrum at an identical temperature, with negligible flux emitted at $\lambda \le$300Å. WR stars also compare closely with comparable temperature O stars in their ionizing flux per unit area.
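To make the notion of an "ionizing flux per unit area" concrete, the sketch below integrates a simple blackbody photon rate above the H$^{0}$ and He$^{+}$ edges. This is only a crude stand-in for the non-LTE, line blanketed model atmospheres discussed in the text (real WR continua are far from blackbodies); only the temperature and the edge energies are taken from the discussion:

```python
import math

# Crude blackbody illustration (NOT the non-LTE models of the text) of
# the ionizing photon flux per unit area,
#   q(>nu0) = integral over nu > nu0 of pi * B_nu / (h * nu) dnu,
# evaluated by a midpoint sum from nu0 to 51*nu0 (cgs units).

H = 6.626e-27   # Planck constant, erg s
K = 1.381e-16   # Boltzmann constant, erg/K
C = 2.998e10    # speed of light, cm/s

def photon_flux(T, e_edge_ev, n=20000):
    """Photons s^-1 cm^-2 emitted above energy e_edge_ev by a blackbody T."""
    nu0 = e_edge_ev * 1.602e-12 / H       # edge frequency, Hz
    dnu = 50.0 * nu0 / n
    total = 0.0
    for i in range(n):
        nu = nu0 + (i + 0.5) * dnu
        b_nu = 2 * H * nu**3 / C**2 / (math.exp(H * nu / (K * T)) - 1.0)
        total += math.pi * b_nu / (H * nu) * dnu
    return total

q_h = photon_flux(90e3, 13.6)    # H0 continuum, 90 kK
q_he = photon_flux(90e3, 54.4)   # He+ continuum, 90 kK
print(f"q(H0)  ~ {q_h:.2e} photons/s/cm^2")
print(f"q(He+) ~ {q_he:.2e} photons/s/cm^2")
```

Even for this idealised case the He$^{+}$ continuum flux is a small fraction of the H$^{0}$ flux; wind blanketing in real WR models suppresses it much further, as described above.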
The effect of wind density
--------------------------
Schmutz et al. (1992) stress the importance of stellar wind density for the ionizing flux distributions of WR stars, such that emission at $\lambda \le$228Å relies on the WR wind being relatively transparent. Denser winds, such as those presented above for representative Galactic WR stars, destroy photons beyond this edge. To illustrate this, additional calculations have been performed for lower wind densities. Although a mass-loss versus metallicity ($Z$) scaling for WR stars has not been identified, let us assume that their winds are radiatively driven with a dependence of $\dot{M} \propto Z^{0.5}$ (as obtained for radiatively driven O-type stars by Kudritzki et al. 1989).
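The assumed scaling amounts to one line of arithmetic; the sketch below simply shows that a tenfold reduction in metal content corresponds, under the assumed $\dot{M} \propto Z^{0.5}$ law, to roughly a factor-of-three reduction in mass-loss rate:

```python
# Assumed O-star-like scaling Mdot ∝ Z**0.5 (Kudritzki et al. 1989),
# applied here to WR winds as a working hypothesis.

def mdot_scale(z_ratio, exponent=0.5):
    """Mass-loss scaling factor for a metallicity ratio Z/Z_sun."""
    return z_ratio ** exponent

factor = 1.0 / mdot_scale(0.1)   # metal content lowered tenfold
print(f"Z/10 -> Mdot reduced by a factor of {factor:.1f}")   # ~3.2
```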
I have taken the 150kK WC model, whose synthetic spectrum approximates a WC4-type star, and solely reduced its mass-loss rate (by a factor of three) and Fe-content (by a factor of ten). The optical and ionizing spectrum of the low wind density model are compared with the WC4 model in Fig. \[fig5\], revealing a harder flux distribution (increasing the B.C. by 1.2 mag), and a dramatic change in the emergent optical spectrum. O[vi]{} emission is very strong so the low wind density case resembles a WO-type star. Consequently, [*a modest change in mass-loss rate has a major influence on the ionizing energy distribution and observed spectral appearance.*]{} Strong O[vi]{} emission in a WR spectrum is connected principally with the wind density, rather than elemental abundance (Smith & Maeder 1991 identified WO stars as the chemically evolved descendants of WC stars). In WC4 stars, the high wind density and consequently very efficient wind cooling means that O$^{6+}$ recombines to O$^{5+}$ and subsequently O$^{4+}$ [*interior*]{} to the optical line formation region, producing observed O[iv-v]{} lines. The less efficient cooling of WO winds, through a lower wind density (because of lower mass-loss rates and higher wind velocities) permits O$^{6+}$ recombination in the optical line formation region, producing strong O[vi]{} emission. In support of this, recall that WO stars outnumber the WC population at low metallicities (SMC, IC1613).
Nebular He[ii]{} $\lambda$4686 and bolometric corrections
---------------------------------------------------------
For my second case, I have taken the earlier 130kK WN model, with a synthetic spectrum of a strong-lined WN4 star, and reduced its mass-loss rate by a factor of ten (scaling its metal content to 0.01$Z_{\odot}$). The resulting optical spectrum would be classified as a weak-lined WN2 star, as shown in Fig. \[fig6\]. Its ionizing flux distribution is extremely hard, with a very strong flux above 54eV ($\sim$40% of its entire luminosity!). [*If mass-loss rates of WR stars are driven by radiation pressure, their spectral appearance and ionizing properties will be very sensitive to metallicity.*]{} The low metallicity WR model presented here may have application in very metal-poor starbursts, such as I Zw 18 which is thought to contain WR stars (de Mello et al. 1998).
The above results suggest that only hot WR stars with weak winds produce a significant flux in the He$^{+}$ continuum, most likely at low metallicities. This is supported by the known sample of WRs whose nebulae show strong He[ii]{} $\lambda$4686 emission (Garnett et al. 1991), namely WR102 (WO, Galaxy), Brey 2 (WNE, LMC), Brey 40a (WNE+O, LMC), AB7 (WNE+O, SMC), DR1 (WO, IC1613). Young, low metallicity starbursts would be expected to exhibit strong nebular He[ii]{} $\lambda$4686 emission from WR stars, in contrast with high metallicity starbursts.
For the grid of high wind density models, representative of strong-lined Galactic WR stars, B.C’s in the range $-$2.6 to $-$4.4 mag (WN), and $-$3.1 to $-$4.6 mag (WC) are obtained. Since wind density affects the ionizing spectrum of WR models, bolometric corrections are also affected. B.C’s for the WO and WN2 models are much higher and very sensitive to wind density ($-$5.8 and $-$7.1 mag, respectively). Smith et al. (1994) used observations of clusters in the Galaxy to estimate WR masses and B.C’s, namely $-$4.5 mag for WC stars, and $-$4 to $-$6 mag for WN stars, in fair accord with predictions. Massey (these proc.) has repeated this for the LMC, and finds B.C’s of $-6$ to $-$8 mag for cluster WNE stars. From the calculations performed here, such stars would be expected to have low wind densities and emit strongly above 54eV. Detailed analyses of individual LMC WNE stars are required to verify these predictions.
Overall, I have discussed the techniques used to derive the stellar and chemical properties of WR stars, highlighting the importance of clumping and line blanketing for the derived parameters, and the role of wind density and metallicity in the emergent spectrum and ionizing properties of WR stars.
I would like to thank my collaborators, especially John Hillier. Bill Vacca brought the importance of reliable ionizing fluxes of WR stars to my attention. PAC is a Royal Society University Research Fellow.
Bohannan, B., Crowther, P.A. 1999, ApJ 511 (Jan 20th) in press
Crowther, P.A., Smith, L.J. 1996, A&A 305, 541
Crowther, P.A., Smith, L.J. 1997, A&A 320, 500
Crowther, P.A., Dessart, L. 1998, MNRAS 296, 622
Crowther, P.A., Hillier, D.J., Smith, L.J. 1995a, A&A 293, 403
Crowther, P.A., Smith, L.J., Hillier, D.J. 1995b, A&A 302, 457
Crowther, P.A., Bohannan, B., Pasquali, A. 1998, in: Proc. ‘Boulder-Munich II: Properties of Hot, Luminous Stars’, (Howarth I.D. ed.), ASP Conf. Series, 131, p.38
Crowther, P.A., Drissen, L., Smith, L.J. et al. 1999, in: Proc. Unsolved Problems in Stellar Evolution, CUP in press
Esteban, C., Smith, L.J., Vílchez, J.M., Clegg, R.E.S. 1993, A&A 272, 299
Garnett, D.R., Kennicutt, R.C., Chu, Y.-H., Skillman, E.D. 1991, PASP 103, 850
Gayley, K.G., Owocki, S.P., Cranmer, S.R. 1995, ApJ 442, 296
Gräfener, G., Hamann, W.-R., Hillier, D.J., Koesterke, L. 1998, A&A 329, 190
Grosdidier, Y., Moffat, A.F.J., Joncas, G., Acker, A. 1998, ApJ 506, 127
Hamann, W.-R. 1994, Space Sci. Rev. 66, 237
Hamann, W.-R., Koesterke, L., Wessolowski, U. 1995, A&A 299, 151
Hamann, W.-R., Koesterke, L. 1998a, A&A 333, 251
Hamann, W.-R., Koesterke, L. 1998b, A&A 335, 1003
Harries, T.J., Hillier, D.J., Howarth, I.D. 1998, MNRAS 296, 1072
Haser, S.M., Pauldrach, A.W.A., Lennon, D.J. et al. 1998, A&A 330, 285
Hillier, D.J. 1984, ApJ 280, 744
Hillier, D.J. 1988, ApJ 327, 822
Hillier, D.J. 1989, ApJ 347, 392
Hillier, D.J. 1991a, in: Proc. IAU Symp. 143, Wolf-Rayet Stars and Interrelations with other Massive Stars in Galaxies, (K.A. van der Hucht, B. Hidayat eds.), Kluwer, p.59
Hillier, D.J. 1991b, A&A 247, 455
Hillier, D.J., Miller, D.L. 1998, ApJ 496, 407
Hillier, D.J., Miller, D.L. 1999, ApJ in press
Hillier, D.J., Crowther, P.A., Najarro, F., Fullerton, A.W.A. 1999, A&A in press
Koesterke, L., Hamann, W.-R., Wessolowski, U. 1992, 261, 535
Koesterke, L., Hamann, W.-R. 1995, A&A 299, 503
de Koter, A., Heap, S.R., Hubeny, I. 1997, ApJ 477, 792
Kudritzki, R.-P., Pauldrach, A.W.A., Puls, J., Abbott, D.C. 1989, A&A 219, 205
McCarthy, J.K., Lennon, D.J., Venn, K.A. et al. 1995, ApJ 455, L135
de Mello, D.F., Schaerer, D., Heldman, J., Leitherer, C. 1998, ApJ in press
Moffat, A.F.J., Marchenko, S.V. 1996, A&A 305, L29
Najarro, F., Krabbe, A., Genzel, R., et al. 1997a, A&A 325, 700
Najarro, F., Hillier, D.J., Stahl, O. 1997b, A&A 326, 1117
Najarro, F., Kudritzki, R.-P., Hillier, D.J., et al. 1998, in: Proc. ‘Boulder-Munich II: Properties of Hot, Luminous Stars’, (Howarth I.D. ed.), ASP Conf. Series, 131, p.357
Pasquali, A., Langer, N., Schmutz, W. et al. 1997, ApJ 478, 340
Schaerer, D., de Koter, A. 1997, A&A 322, 615
Schmutz, W. 1997, A&A 321, 268
Schmutz, W., Hamann, W.-R., Wessolowski, U. 1989, A&A 210, 236
Schmutz, W., Leitherer, C., Gruenwald, R. 1992, PASP 104, 1164
Smith, L.F., Maeder, A. 1991, A&A 241, 77
Smith, L.F., Meynet, G., Mermilliod, J.-C. 1994, A&A 287, 835
Smith, L.J., Crowther, P.A., Willis, A.J. 1995, A&A 302, 830
Stasińska, G., Schaerer, D. 1997, A&A 322, 615
Williams, P.M. 1982, in: Proc. IAU Symp. 99, Wolf-Rayet Stars: Observations, Physics, Evolution, (de Loore, C.W.H., Willis, A.J. eds.), Reidel, Dordrecht, p.73
---
abstract: 'We present the results of an experimental study of vortex dynamics in a non-twinned $YBa_2Cu_3O_{6.87}$ crystal. It is found that the critical currents $J_c$ and $J_{c,dyn}$, which correspond to the pinning force in the thermal creep and flux flow modes, respectively, vary non-monotonically with the magnetic field. However, the minimum in the $J_{c,dyn}(H)$ dependence is observed at higher fields than the minimum position $H_{OD}$ in the $J_c(H)$ dependence. Considering that the field $H_{OD}$ corresponds to the static order-disorder transition, this difference is explained by partial dynamic ordering of the vortex solid. It is concluded that finite transverse barriers maintain a finite density of transverse displacements of vortex lines, $u_t\simeq c_La_0$, sufficient for the preservation of the disordered state of the moving vortex solid.'
author:
- 'A. V. Bondarenko'
- 'A. A. Zavgorodniy'
- 'D. A. Lotnik'
- 'M. A. Obolenskii'
- 'R. V. Vovk'
- 'Y. Biletskiy'
bibliography:
- '/bondarenko/tex.sample/paper.bib'
title: 'Quasi-static and dynamic order-disorder transition in presence of strong pinning'
---
The interaction of static and dynamic elastic media with a random pinning potential is one of the central topics of solid state physics, encompassing dislocations in solids, charge density waves, Wigner crystals, and vortex lattices (VL’s) in Type-II superconductors. VL’s are the most convenient objects for the experimental study of elastic media, because in superconductors it is easy to change the strength of the pinning potential, as well as the elasticity and the velocity of the VL’s. An important feature of VL’s is the non-monotonic field variation of the pinning force $F_p$, which is observed in low-T$_c$ (NbSe$_2$ [@Bhattacharya93; @Higgins96], V$_3$Si [@Gapud03]), intermediate-$T_c$ (MgB$_2$ [@Pissas02; @Kim04]), and high-T$_c$ (BiSrCaCuO [@Khaikovich96], YBaCuO [@Kupfer98; @Pissas00]) superconductors. The increase of the pinning force can be explained by softening of the elastic moduli of the VL in the vicinity of the upper critical field $H_{c2}(T)$ [@Higgins96] or the melting line $H_m(T)$ [@Kwok94], which allows better adaptation of the vortex lines to the pinning landscape. Some alternative models [@Ertas97; @Rosenstein07] suggest the formation of an ordered vortex solid (VS) in low fields, which transforms into a disordered one at some magnetic field $H_{OD}$, though the nature of the order-disorder (OD) transition and the mechanism of the increase of $F_p$ may differ. These models are supported by the correlation between the field $H_{OD}$ corresponding to the structural OD transition [@Cubbit93] and the onset of the $F_p$ increase [@Khaikovich96] in BiSrCaCuO crystals. An open problem for the VS phase is the nature of its ordering with increasing vortex velocity $v$.
The “shaking temperature” model [@Koshelev94] suggests that the transverse vortex displacements $u_t$ induced by the disorder decrease with increasing velocity, $u_t\propto 1/v$, and that an increase of the velocity above some critical value $v_c$ results in a dynamic transition from the disordered to the ordered state. It was later argued [@Giamarchi96] that the increase in $v$ suppresses the pinning only in the longitudinal (with respect to $\textbf{v}$) direction, while pinning barriers remain finite in the transverse direction. The effect of motion on the transverse barriers, the phase state and the pinning force of the vortex solid is still a controversial issue, and this subject requires, first of all, additional reliable experimental studies. The goal of this work is an experimental study of vortex dynamics in the presence of strong pinning.
The measurements were performed on a detwinned [YBa$_2$Cu$_3$O$_{7-\delta}$ ]{}crystal, annealed in an oxygen atmosphere at 500$^{\circ}$C for one week. Such annealing corresponds to an oxygen deficiency $\delta\simeq$ 0.13 [@Otterlo00] and $T_c\simeq$ 91.8 K. The crystal was then held at room temperature for 7 days to form clusters of oxygen vacancies, which reduce the field $H_{OD}$ [@Liang98]. The field variation of the pinning force was studied through measurements of the current-voltage characteristics, $E(J)$, using the standard four-probe method with dc current. The investigated sample had a rectangular shape with smooth surfaces; its dimensions were 3.5$\times$0.4$\times$0.02 mm with the smallest dimension along the $c$ axis; the current was applied along the largest dimension; and the distance between the current and potential contacts, as well as between the potential contacts, was about 0.5 mm. The measurements were performed at a temperature of 86.7 K in the field $\textbf{H}\parallel \textbf{c}$.
![\[fig:1\] $E(J)$ curves presented on linear (a) and semi-logarithmic (b) scales, and $\rho_d(J)$ curves presented on a semi-logarithmic scale (c). The inset in panel (b) shows the $E(J)$ dependencies measured upon increase (light symbols) and decrease (dark symbols) of the current.](Fig1){width="3.2in"}
Fig. \[fig:1\] shows the $v(J)=cE(J)/B$ dependencies and the current variation of the normalized dynamic resistance $\rho_d(J)\equiv[dE(J)/dJ]/\rho_{BS}$, where $\rho_{BS}=\rho_N B/B_{c2}$ [@Kupfer98]. At low currents, the electric field increases exponentially with increasing current and the resistance $\rho_d$ is much less than unity. This increase in $v$ and the low dynamic resistance indicate the presence of thermally activated vortex creep. At high currents, the $v(J)$ dependence is linear and the value of $\rho_d$ is close to 1, indicating the flux flow mode. The critical current in the thermal creep mode, $J_E$, can be characterized by the voltage criteria $E = 1~\mu$V/cm and $E = 100~\mu$V/cm, and the dynamic critical current $J_{c,dyn}$ can be determined by extrapolating the linear parts of the $v(J)$ dependence, corresponding to the flux flow mode, to zero voltage [@Kokubo07]. The field variations of the currents $J_E$ and $J_{c,dyn}$, normalized by their values in a field of 0.5 kOe, are shown in Fig. \[fig:2\]a. It is seen that the currents $J_E$ and $J_{c,dyn}$ start to increase in fields above 1.25 kOe and 2.5 kOe, respectively, which are substantially smaller than the fields $H_{c2}$ and $H_m$. Therefore this increase cannot be caused by better adaptation of the vortices to the pinning landscape induced by softening of the elastic moduli. The obtained field variation of the currents $J_E$ and $J_{c,dyn}$, and the peculiarities of the vortex dynamics, can be explained in the framework of the model proposed by Ertas and Nelson [@Ertas97]. It is assumed that the OD transition occurs when transverse displacements of vortex lines exceed the value $c_La_0$, where $a_0\simeq\sqrt{\Phi_0/B}$ is the intervortex distance, $\Phi_0$ is the flux quantum, and $c_L$ is the Lindemann number. The transition field is defined by the equality of energies $E_{el}(H_{OD}) = E_p(H_{OD})$, where $E_p$ is the pinning energy, $E_{el}\simeq c_L^2\varepsilon\varepsilon_0a_0$ is the increase of the elastic energy caused by displacements $u_{t}=c_{L}a_0$, $\varepsilon$ is the anisotropy parameter, $\varepsilon_0 = (\Phi_0/4\pi\lambda)^2$ is the line tension of a vortex line, and $\lambda$ is the penetration depth. As evident from Fig. \[fig:2\]a, the minimum position does not depend on the driving force within the creep regime, in agreement with magnetization measurements [@Kupfer98; @Pissas00]. This means that the value of the ratio $E_{el}/E_p$, and therefore the energy $E_p$, is not changed, indicating that the minimum in the $J_E(H)$ curve corresponds to the static OD transition, $H_{OD}\simeq$ 1.25 kOe.
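The zero-voltage extrapolation that defines $J_{c,dyn}$ can be sketched as follows, using synthetic, idealized $v(J)$ data (the numbers are purely illustrative, not the measured curves):

```python
# Sketch of extracting J_c,dyn: fit the linear flux-flow branch of
# v(J) by least squares and extrapolate to zero voltage.
# The data below are synthetic, in arbitrary units.

def fit_line(xs, ys):
    """Least-squares slope and intercept of y(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Idealized flux-flow branch: v = rho_ff * (J - J_c,dyn), J_c,dyn = 4.0.
J = [5.0, 6.0, 7.0, 8.0, 9.0]
v = [1.0 * (j - 4.0) for j in J]

slope, intercept = fit_line(J, v)
j_c_dyn = -intercept / slope      # zero-voltage intercept
print(f"J_c,dyn = {j_c_dyn:.2f}")  # -> 4.00
```

In practice only the high-current points with $\rho_d$ close to 1 would enter the fit, since the creep branch at low currents is strongly non-linear.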
![\[fig:2\] (a) Field variation of the currents $J_{c,dyn}$ and $J_E$ normalized by their values in a field of 0.5 kOe. (b) Field variation of the velocities $v_p$ and $v_{min}$ corresponding to the peak and minimum positions in the $\rho_d(J)$ dependencies, respectively. The inset in panel (b) shows a sketch of the transverse vortex displacements $u_{t,L}$ corresponding to the Lindemann criterion. Dashed and solid circles correspond to the lower and upper boundaries of the displacements $u_{t,L}$ (see the text), respectively, in the static VS in magnetic field $H > H_{OD}$. Dotted ellipses show the evolution of the maximal displacements $u_{t,L}$ upon increase of the velocity $v$. The dashed region corresponds to the displacements $u_{t,L}$ in the dynamic VS. ](fig_22){width="3.2in"}
The estimates presented below show that the static OD transition in our sample is caused by vortex interaction with the clusters of oxygen vacancies rather than with isolated oxygen vacancies. Indeed, for point disorder the pinning energy is [@Blatter94; @Ertas97] $E_p\simeq(\gamma\varepsilon^2\varepsilon_0\xi^4)^{1/3}(L_0/L_c)^{1/5}$, where $L_0\simeq 2\varepsilon a_0$ is the length of longitudinal fluctuations, $L_c\simeq\varepsilon\xi (J_0/J_d)^{1/2}$ is the correlation length, $J_0=4c\varepsilon_0/3\sqrt{3}\xi\Phi_0$ is the depairing current, $\xi$ is the coherence length, and $\gamma\simeq(J_c\Phi_0/c)^2L_c$ is the disorder parameter. Using parameters realistic for the [YBa$_2$Cu$_3$O$_{7-\delta}$ ]{}superconductor ($\lambda$ = 500 nm, $\xi$ = 4 nm, and $\varepsilon$ = 1/7) and the experimental value of the depinning current $J_{c,dyn}<$ 5 kA/cm$^2$, we obtain the energy $E_p <$ 2$\cdot$10$^{-16}$ erg, which is about 25 times smaller than the elastic energy $E_{el}\simeq c_L^2\varepsilon\varepsilon_0a_0\simeq$ 5$\cdot$10$^{-15}$ erg estimated for $c_L$ = 0.2 and $H_{OD}$ = 1.25 kOe. The pinning energy induced by vortex interaction with the clusters of oxygen vacancies equals the condensation energy $U_c\approx(H_c^2/8\pi)V_{cl}$, where $H_c=\Phi_0/2\sqrt{2}\pi\lambda\xi$ is the thermodynamic critical field and $V_{cl}$ is the volume of the clusters. For spherical clusters with radius $r\simeq\xi$ we obtain the energy $E_p\simeq U_c\approx$ 10$^{-14}$ erg, which is sufficient for the occurrence of the OD transition in the field of 1.25 kOe.
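The order-of-magnitude arithmetic above is easy to reproduce; the following cgs-unit sketch uses the same parameter values and formulas (the results agree with the quoted estimates only at the factor-of-two level, as expected for order-of-magnitude bounds):

```python
import math

# Order-of-magnitude estimates from the text, in cgs units:
# lambda = 500 nm, xi = 4 nm, eps = 1/7, c_L = 0.2, H_OD = 1.25 kOe.

PHI0 = 2.07e-7            # flux quantum, G cm^2
lam, xi = 500e-7, 4e-7    # penetration depth and coherence length, cm
eps, c_L = 1.0 / 7.0, 0.2
B = 1.25e3                # H_OD, G

eps0 = (PHI0 / (4 * math.pi * lam)) ** 2   # line tension eps_0, erg/cm
a0 = math.sqrt(PHI0 / B)                   # intervortex distance, cm

E_el = c_L**2 * eps * eps0 * a0            # elastic energy c_L^2 eps eps0 a0
H_c = PHI0 / (2 * math.sqrt(2) * math.pi * lam * xi)  # thermodynamic field
V_cl = 4.0 / 3.0 * math.pi * xi**3         # spherical cluster, r = xi
U_c = H_c**2 / (8 * math.pi) * V_cl        # condensation energy

print(f"E_el ~ {E_el:.1e} erg")   # a few x 1e-15, cf. 5e-15 in the text
print(f"U_c  ~ {U_c:.1e} erg")    # ~1e-14, matching the text
```

Since $U_c \gg E_p$ for point disorder but $U_c \sim E_{el}$ at 1.25 kOe, the cluster mechanism is indeed the one compatible with the observed transition field.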
As shown in Fig. \[fig:2\]b, the minimum in the $J_{c,dyn}(H)$ curve occurs in a field of 2.5 kOe, which exceeds the value of $H_{OD}$ by about a factor of two. Also, above the minimum position, the current $J_{c,dyn}(H)$ increases with the field more gradually than the current $J_E$ does above the OD transition. This difference can be explained by the suppression of the longitudinal and the conservation of the transverse pinning barriers, as theoretically predicted in [@Giamarchi96]. In the framework of the “shaking temperature” model [@Koshelev94], this means conservation of the transverse (with respect to the vector $\textbf{v}$) component $u_{\perp}$ and reduction of the parallel component $u_{\parallel}\propto 1/v$ of the displacements $u_t = \sqrt{(u_{\parallel})^2+(u_{\perp})^2}$ with increasing velocity $v$. In magnetic field $\textbf{H}\parallel \textbf{c}$ and in the presence of a chaotic pinning potential, the spatial distribution of the displacements is isotropic; and in a field $H > H_{OD}$, the displacements $u_{t,L}$, which correspond to the Lindemann criterion, fall in the interval $c_La_0(H_{OD}) > u_{t,L} > c_La_0(H)$, as shown schematically in the inset of Fig. 2b. The density of the displacements (the number of vortex displacements $u_{t,L}$ per unit length of vortex line), $n_{t,L}$, is proportional to the area of the ring confined by the upper (solid circle) and lower (dashed circle) boundaries of the displacements $u_{t,L}$. Reduction of the component $u_{\parallel}$ with increasing velocity $v$ leads to a reduction of the upper boundary (dotted lines for velocities $v_2 > v_1\neq 0$), and thus to a reduction of the density $n_{t,L}$. It is important that for any finite velocity $v$ the component $u_{\parallel}$ is finite, and thus the cross-hatched area in the diagram, which corresponds to the displacements $u_{t,L}$, and hence the density $n_{t,L}$, is also finite. An increase of the field reduces the lower boundary of the displacements $u_{t,L}$, and therefore the density $n_{t,L}$ increases.
Considering that the displacements $u_{t,L}$ produce dislocations in the VS phase, and that an increase of the density $n_{t,L}$ results in an increase of the current $J_{c,dyn}$ [@Bondarenko08a], the field variation of the currents $J_E$ and $J_{c,dyn}$ can be explained as follows. In low fields, an ordered VS phase forms, characterized by the absence of dislocations and by 1D pinning; the currents $J_E$ and $J_{c,dyn}$ decrease with increasing field because the enhanced vortex-vortex interaction makes it more difficult to fit the vortices into the pinning landscape. Above the OD transition, the VS phase contains dislocations, which gives rise to 3D pinning [@Ertas97], and the current $J_E$ therefore increases at the transition point $H_{OD}$ due to the dimensional crossover in the pinning [@Kes86; @Brandt86]. The further increase of the current $J_E$ with magnetic field is caused by the increase of the density $n_{t,L}$, as found in [@Bondarenko08a]. The density $n_{t,L}$ is smaller in the moving VS phase than in the static VS phase, but it remains finite and increases with field. Therefore, the $J_{c,dyn}(H)$ dependence is determined by a competition between the decrease of the pinning force caused by the enhanced vortex-vortex interaction and the increase of the pinning force associated with the growth of the density $n_{t,L}$. In our measurements, the former mechanism dominates in magnetic fields $H\leq$ 2 kOe, while the latter dominates in fields $H\geq$ 3 kOe.
The proposed interpretation agrees with numerical simulations of 2D [@Faleski96; @Moon96; @Olson00; @Kolton99] and 3D [@Otterlo00] VL’s in the presence of strong pinning. First, it was shown that in the flux-flow mode the disordered state of the VL is preserved [@Faleski96; @Kolton99; @Otterlo00; @Olson00], and the transverse barriers remain finite [@Moon96; @Olson00]. Second, the $v(J)$ curves cross one another near the OD transition [@Otterlo00]. Third, our interpretation implies that the cross-hatched area in the diagram collapses to a segment as $v\rightarrow\infty$, indicating that the moving VS can become ordered, in agreement with the conclusion of [@Otterlo00]. Finally, in the simulations the onset of ordering of the moving VS phase is manifested as a peak in the $\rho_d(J)$ curves, and the completion of ordering corresponds to the value $\rho_d(J)$=1 [@Faleski96; @Kolton99]; in our measurements the peak in the $\rho_d(J)$ curves appears in fields $H > H_{OD}$. Following the computer simulations, we determined the field variation of the velocities $v_p$ and $v_{min}$, which correspond to the peak and minimum positions in the $\rho_d(J)$ curves, respectively. As shown in Fig. 3b, the velocity $v_p$ and the difference $\Delta v = v_{min} - v_p$ increase with field, indicating that both the critical velocity of the ordering and the interval of velocities $\Delta v$ over which the ordering takes place increase with field. This behavior is plausible considering that the lower boundary of the displacements $u_{t,L}$ decreases with increasing field, which requires higher $v$’s to push the displacement amplitude below this boundary. Also, the difference between the upper and lower boundaries of the displacements $u_{t,L}$, $\Delta u = c_L[a_0(H_{OD})-a_0(H)]$, increases with field, which results in an increase of the difference $\Delta v$.
Our interpretation also explains the occurrence of hysteresis in the $v(J)$ curve measured with increasing and decreasing current in a field of 1.5 kOe, and the absence of hysteresis well below and well above the OD transition. Indeed, in close vicinity to the OD transition, $(H/H_{OD} - 1) \ll 1$, the density $n_{t,L}$ in the dynamic VS is much smaller than in the static VS, and a small increase of the velocity $v$ leads to a dynamic transition into the ordered state. In this case the “shaking temperature” model predicts hysteresis, which reflects the “overheated state” of the ordered dynamic VS. Well above the OD transition, the decrease in the density $n_{t,L}$ is not dramatic, and the transition from a strongly disordered static VS to a less disordered dynamic VS occurs over a wide interval of velocities $\Delta v$ without hysteresis. It is important to notice that the $E(J)$ curves measured after zero-field cooling coincide with those measured after non-zero-field cooling, which indicates the absence of metastable states in the VS. This agrees with experimental studies of YBaCuO crystals: metastable states exist in the vicinity of the vortex solid - vortex liquid transition, but they disappear below this transition [@Fendrich96].
The recent quantitative theory of the dynamic VS by Rosenstein and Zhuravlev [@Rosenstein07] predicts a jump-like increase of the pinning force at the OD transition. Evidently, this theory does not describe our results, because the increases of the currents $J_E$ and $J_{c,dyn}(H)$ occur at different fields, and the field variation of the currents does not show a jump-like increase.
The obtained field variation of the currents $J_E$ and $J_{c,dyn}$, the occurrence of hysteresis in close vicinity to the OD transition, and the absence of metastable states in the VS differ from the behavior of superconductors with weak bulk pinning. For example, in NbSe$_2$ [@Henderson96] and MgB$_2$ [@Kim04] crystals, the current $J_{c,dyn}$ increases in a jump-like manner at the OD transition [@Henderson96; @Henderson98], and hysteresis occurs over a rather wide interval of magnetic fields; it is caused by metastable states in the VS [@Henderson96; @Kim04], which are induced by the effect of surface barriers [@Paltiel00]. The surface barriers in NbSe$_2$ [@Banerjee00] and MgB$_2$ [@Pissas02] cause an asymmetry of the magnetization loops, and this asymmetry reflects the difference between the barriers for vortex entry into and exit from the samples [@Burlachkov93]. The magnetization loops of the YBaCuO crystals are symmetric, indicating a negligible effect of surface barriers. Therefore, the obtained field variation of the current $J_E$ corresponds to an equilibrium quasistatic VS.
In conclusion, we determined the field variation of the critical currents in the quasistatic and dynamic vortex solid. The currents vary non-monotonically with field, and the minimum position in the $J_{c,dyn}(H)$ dependence is shifted to higher fields in comparison with the minimum in the $J_E(H)$ dependence. This difference is interpreted as partial ordering of the vortex solid with increasing vortex velocity. The disordered state of the dynamic vortex solid is attributed to the preservation of finite transverse pinning barriers, which guarantees the presence of transverse vortex displacements suitable for the formation of dislocations. This interpretation explains the observed increase of the critical current associated with the dynamic ordering.
---
abstract: 'Binary mass transfer via Roche-lobe overflow (RLOF) is a key channel for producing stripped-envelope Wolf-Rayet (WR) stars and may be critical to account for Type Ib/c supernova progenitors. RY Scuti is an extremely rare example of a massive binary star caught in this brief but important phase. Its unusual toroidal nebula indicates equatorial mass loss during RLOF, while the mass-gaining star is apparently embedded in an opaque accretion disk. RY Scuti’s toroidal nebula has two components: an inner ionised double-ring system, and an outer dust torus that is roughly twice the size of the ionised rings. We present two epochs of $L$-band Keck natural guide star adaptive optics (NGS-AO) images of the dust torus, plus three epochs of [*Hubble Space Telescope*]{} ([*HST*]{}) images of the ionised gas rings. Proper motions show that the inner ionised rings and the outer dust torus, while having similar geometry, came from two separate ejection events roughly 130 and 250 yr ago. This suggests that WR star formation via RLOF in massive contact binaries can be accompanied by eruptive and episodic bursts of mass loss, reminiscent of luminous blue variables (LBVs). We speculate that the repeating outbursts may arise in the mass gainer from instabilities associated with a high accretion rate. In the case of RY Scuti, we know of no historical evidence that either of its mass-loss events were observed as luminous outbursts, but if discrete mass-loss episodes in other RLOF binaries are accompanied by luminous outbursts, they might contribute to the population of extragalactic optical transients. When RLOF ends for RY Scuti, the overluminous mass gainer, currently surrounded by an accretion disk, will probably become a B\[e\] supergiant and may outshine the hotter stripped-envelope mass-donor star that should die as a Type Ib/c supernova.'
author:
- |
Nathan Smith$^1$[^1], Robert D. Gehrz$^2$[^2], Randy Campbell$^3$, Marc Kassis$^3$, David Le Mignant$^4$, Kawailehua Kuluhiwa$^5$, & Alexei V. Filippenko$^6$\
$^1$Steward Observatory, 933 N. Cherry Ave., Tucson, AZ 85721, USA\
$^2$Astronomy Department, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA\
$^3$Keck Observatory, 65-1120 Mamalahoa Hwy, Kamuela, HI 96743, USA\
$^4$Laboratoire d’Astrophysique de Marseille, UMR 6110, Univ. Aix-Marseille Provence, 38 rue F. Joliot-Curie, F-13388 Marseille, France\
$^5$Department of Physics and Astronomy, University of Hawaii, 200 West Kawili St., Hilo, HI 96720, USA\
$^6$Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA
date: 'Accepted 0000, Received 0000, in original form 0000'
title: 'Episodic mass loss in binary evolution to the Wolf-Rayet phase: Keck and [*HST*]{} proper motions of RY Scuti’s nebula[^3]'
---
\[firstpage\]
binaries: eclipsing — binaries: general — circumstellar matter — stars: evolution — stars: mass loss — supernovae: general
INTRODUCTION
============
RY Scuti is a remarkable blue supergiant eclipsing binary system at a distance of 1.8 kpc (Smith et al. 2002). It has a well-determined period of only 11.1247 days (Smith et al. 2002), and the shape of the eclipse light curve suggests that it is in an advanced stage of Roche-lobe overflow (RLOF) (Antokhina & Cherepashchuk 1988; Giuricin & Mardirossian 1981; Antokhina & Kumsiashvili 1999; Djurasevic et al. 2001; Melikian et al. 2010). RY Scuti belongs to the class of W Serpentis massive binaries, where significant mass transfer has led to an opaque disk around the mass-gaining star (Plavec 1980), and it has the shortest known period among examples of the class.
Most recent studies of the system’s radial velocity variations converge on a binary system with an $\sim$8 M$_{\odot}$ primary and a $\sim$30 M$_{\odot}$ secondary (Antokhina & Cherepashchuk 1988; Skul’skii 1992; Sahade et al. 1992; Antokhina & Kumsiashvili 1999; Grundstrom et al. 2007). The 8 M$_{\odot}$ primary is an O9/B0 supergiant, and is thought to have initially been the more massive of the two, but has transferred much of its mass to the secondary. (The likely initial masses of the primary and secondary were then of order 20-25 and 15-20 M$_{\odot}$, respectively.) It has been suggested that the mass-gaining secondary, probably an O5 star, is enshrouded by an opaque accretion disc (King & Jameson 1979; Antokhina & Cherepashchuk 1988; Antokhina & Kumsiashvili 1999). For recent discussions of the detailed properties of the circumstellar nebula and the binary system, we refer the reader to Smith et al. (2002) and Grundstrom et al. (2007), respectively. These authors review the literature concerning spatially resolved structure in the nebula (Hjellming et al. 1973; Gehrz et al. 1995, 2001; Smith et al. 1999, 2001, 2002), the unusual high-excitation spectrum with multiple-peak line profiles (Merrill 1928; Swings & Struve 1940; de Martino et al. 1992; Skul’skii & West 1993; Smith et al. 2002), and the photometric and spectroscopic variability of the eclipsing binary (Cowley & Hutchings 1976; King & Jameson 1979; Antokhina & Cherepashchuk 1988; Skul’skii 1992; Kumsiashvili et al. 2007; Djurasevic et al. 2001, 2008; Grundstrom et al. 2007).
Detailed studies of RY Scuti and its nebula are of broader interest for the evolution of massive stars in two chief respects, as follows.
\(1) Based on the He-rich abundances of its nebula, the masses of the stellar components, the orbital configuration, and the evolutionary state, it has been proposed that the O9/B0 supergiant primary will soon evolve to a Wolf-Rayet (WR) star as a result of binary mass transfer, and that RY Scuti therefore represents an immediate precursor to a massive WR+OB binary system (Giuricin & Mardirossian 1981; Antokhina & Cherepashchuk 1988; Smith et al. 2002). As such, it provides a rare glimpse at the formation of WR-like stars in binary systems, and hence at one of the two chief channels for producing progenitors of Type Ib/c supernovae (SNe). SNe Ibc are core-collapse SNe arising from massive “stripped-envelope” progenitors that have shed their outer H layers, and in some cases their He layers as well; see Filippenko (1997) for a review. The first evolutionary channel for making WR stars is where massive stars with initial masses above 30–35 M$_{\odot}$ shed their H envelopes by virtue of their own mass loss in stellar winds or eruptions (Conti 1976; Smith & Owocki 2006). A second evolutionary channel, which is the only one available to less massive stars whose winds are too weak to reach the WR phase on their own, is to have their H envelope (and possibly also the He envelope) stripped via RLOF in a close binary system (e.g., Paczyński 1967; Podsiadlowski et al. 1992; Petrovic et al. 2005). Recent evidence from the observed statistics of SNe argues that RLOF may be the dominant channel for producing progenitors of SNe Ibc (Smith et al. 2011; see also Yoon et al. 2010; Dessart et al. 2011). The mass-transfer phase in massive binaries is thought to be brief, lasting only $\sim$10$^4$ yr (Petrovic et al. 2005).
RY Scuti is an extremely rare example of a massive binary star caught in this critical phase, and it may be the only known example with a bright spatially resolved circumstellar nebula.[^4] Its properties are therefore valuable for checking the conclusions drawn from studies of WR+OB systems already in the post-mass-transfer phase.
\(2) RY Scuti may provide important clues to the formation of toroidal and bipolar nebulae. Close binaries and mergers are often invoked to explain the formation of bipolar nebulae and rings like those around SN 1987A and other massive stars (Morris & Podsiadlowski 2006; Collins et al. 1999), although asymmetric mass loss from rapidly rotating stars has also been proposed for such nebulae and disks (Owocki 2003; Owocki et al. 1996; Dwarkadas & Owocki 2002; Smith 2007; Smith & Townsend 2007; Chiţǎ et al. 2008). One of the ambiguities for many nebulae is the lack of independent evidence that the central stars are (or were) binaries, and the role that binarity might have played in shaping the nebulae is therefore unknown. RY Scuti has the distinct advantage that it is an eclipsing system, so that its binary stellar parameters are known quite well. It is in a state of overcontact where RLOF is occurring, and significant mass loss and mass transfer have taken place. Its nebula is toroidal, not bipolar, and so the observed morphology of RY Scuti’s mass loss may provide important constraints on models for shaping nebulae with close binary influence.
The structure and dynamics of RY Scuti’s nebula can aid our understanding of the role binarity plays in producing SNe Ibc progenitors and in determining nebular morphology. The structure, morphology, and kinematics of the nebula provide clues to its formation, while the ages of its components provide a record of the system’s recent mass-loss history.
Previous observations by Gehrz et al. (1995, 2001) and Smith et al. (1999, 2001, 2002) have established that RY Scuti is $1.8 \pm 0.1$ kpc distant and that its toroidal nebula is separated into two components: an outer dust torus with a diameter of $\sim$2$\arcsec$, or 3600 AU, and an inner ionised torus with a diameter of $\sim$1$\arcsec$ (1800 AU). The mass of the dust torus is $\sim$10$^{-6}$ M$_{\odot}$ (dust only), and the ionised inner component has a gas mass of at least 0.003 M$_{\odot}$. The ionised component has an unusually high-excitation spectrum (Merrill 1928; Swings & Struve 1940; de Martino et al. 1992; Smith et al. 2002), and displays evidence for significant He and N enrichment (Smith et al. 2002). In high-resolution images taken with the [*Hubble Space Telescope*]{} ([*HST*]{}) by Smith et al. (1999, 2001), the inner ionised torus appears to break up into a pair of plane-parallel rings, analogous to the polar rings of SN 1987A, but confined much more closely to the equatorial plane (i.e., at latitudes of $\pm$14$\degr$ from the equator, rather than $\sim$45$\degr$ as in SN 1987A). The ionised rings are expanding with Doppler shifts of roughly $\pm$42 km s$^{-1}$, and an initial (although imprecise) measurement of their proper-motion expansion has been made in two epochs of [*HST*]{} images separated by $\sim$2 yr, combined with two epochs of radio continuum images obtained with the Very Large Array that were separated by 9 yr. These data implied an ejection episode sometime in the late 19th century (Smith et al. 2001). The kinematics of the outer dust torus were unknown before the present study.
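As a quick consistency check (not part of the original discussion), the quoted physical sizes follow directly from the angular diameters and the 1.8 kpc distance, since by the definition of the parsec an angle of 1 arcsec at 1 pc subtends 1 AU:

```python
D_PC = 1800.0  # adopted distance to RY Scuti: 1.8 kpc (Smith et al. 2002)

def diameter_au(theta_arcsec, distance_pc):
    # Small-angle physical size in AU: 1 arcsec at a distance of 1 pc
    # subtends 1 AU, so size [AU] = angle [arcsec] * distance [pc].
    return theta_arcsec * distance_pc

print(diameter_au(2.0, D_PC))  # outer dust torus:    3600.0 AU
print(diameter_au(1.0, D_PC))  # inner ionised torus: 1800.0 AU
```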
In this paper, we present a third epoch of [*HST*]{} images, extending the time baseline for proper motions made with the same instrument to more than a decade. We also present the first adaptive-optics (AO) images of RY Scuti obtained in the thermal infrared (IR), providing the sharpest picture yet of the structure in the outer dust torus. We obtained two epochs of AO images in the same filter with the same instrument, separated in time by 6 years, and we use these to measure for the first time the expansion rate of the dust torus separately from the ionised gas. We describe the observations in §2, the multi-wavelength morphology in §3, and the results from proper-motion measurements in §4. In §5 we discuss implications for the formation of non-spherical nebulae and for SN Ib/c progenitors, and speculate about optical transients associated with episodic RLOF events in massive binaries. We summarise our conclusions in §6.
Date Tel./Instr. Filter Exp.
------------- ----------------- -------- -----------------------------------
1997 Jun 01 [*HST*]{}/WFPC2 F656N 5s, 2$\times$20 s, 2$\times$120 s
2000 Feb 21 [*HST*]{}/WFPC2 F656N 2$\times$10 s, 2$\times$120 s
2009 Apr 19 [*HST*]{}/WFPC2 F656N 2$\times$10 s, 2$\times$260 s
2009 Apr 19 [*HST*]{}/WFPC2 F658N 2$\times$18 s, 2$\times$350 s
2003 Jun 11 Keck/NIRC2 AO $L_p$ 5$\times$10.6 s, 53 s total
2009 Aug 27 Keck/NIRC2 AO $L_p$ 10$\times$5.3 s, 53 s total
: Imaging Observations of RY Scuti
\[tab:letters\]
OBSERVATIONS
============
We obtained multi-epoch high-resolution observations of RY Scuti, using [*HST*]{} images to trace the inner ionised rings and Keck Observatory $L$-band images to trace the outer dust torus in the IR. The log of observations is listed in Table 1.
Multi-Epoch HST Imaging
-----------------------
Three epochs of images of RY Scuti were obtained with the [*HST*]{} Wide Field Planetary Camera 2 (WFPC2) using the F656N (H$\alpha$) filter, plus single epochs with the F658N (\[N [ii]{}\] $\lambda$6583; see below) and F953N (\[S [iii]{}\] $\lambda$9532) filters. The first two epochs of H$\alpha$ images were published and analyzed previously (Smith et al. 1999, 2001). For the third epoch of H$\alpha$ observations, we implemented the same observing strategy and followed the same data-reduction steps as for the first two. This involved combining a series of exposures with a range of exposure times to correct for CCD blooming from the bright central star, as well as a careful subtraction of a model point-spread function (PSF) generated by the TinyTim software, as described in the earlier papers. The PSF subtraction is necessary because the extended diffraction pattern in the PSF from the bright central star can interfere with and mask the nebular structures. After PSF subtraction, the newest epoch of images confirms the same structures seen in our earlier studies, but taken with a different roll angle and orientation of the PSF diffraction spikes.
Our goals in obtaining a third epoch were to confirm our previous detection of expansion of the rings made with a short temporal baseline (Smith et al. 2001) and to thereby improve the precision of the ejection age for comparison with that of the outer torus. For the proper-motion measurements, we use the F656N (H$\alpha$) filter as it is the only one available in all three epochs. The F658N image is useful to investigate any possible spatial gradients in ionization structure in the rings. The elapsed time between the second epoch and the first is 994.43 days, or $\Delta t$ = 2.723 yr. The last epoch extends this temporal baseline to 4340.13 days, or $\Delta t$ = 11.883 yr since epoch 1.
Keck NIRC2-AO Imaging
---------------------
Using the Keck II AO system (Wizinowich 1999) with the instrument NIRC2, we obtained two epochs of $L$-prime (hereafter $L_p$) images. They were acquired with the narrow camera (10 mas pixel$^{-1}$ scale) making use of a five-point box dither pattern with a 1$\farcs$0 step size. Due to the peak brightness of the RY Scuti central point source, a square subarray of 512 pixels (out of 1024) was employed so that the minimum exposure time could be set to 50 ms, thus avoiding saturation. The dominant source of background at 3.8 $\mu$m is the thermal radiation of the AO system and telescope optics. Dust on the 11 ambient-temperature optical surfaces in the path of NIRC2 is a significant source of background, and frequent dithers were used to reduce its time-varying effects. RY Scuti ($V = 9.14$ mag) provides plenty of visible-light flux to the AO wavefront sensor, allowing the frame rate to be set to a relatively high frequency; it was 700 Hz in 2003 and 1500 Hz in 2009. Higher frequencies became possible thanks to an upgrade of the wavefront controller (van Dam 2007). The AO performance was superb, in both cases resulting in diffraction-limited resolution of about 72 mas with a Strehl ratio greater than 70%. The PSF is well sampled in the NIRC2 narrow-camera plate scale.
The data reduction was performed in the customary method for IR imaging data. Before shifting and coadding the 3-point dithered data, we flat-fielded the images using normalised flats constructed from sky-only data acquired 1$\arcmin$ north of the nebula. The flattened images were then background subtracted using the sky images. The images were aligned using the centroid of RY Scuti’s central point source. The initial goal was to produce the best image yet of the IR dust torus with diffraction-limited imaging in the near-IR to complement the previous Keck mid-IR images obtained with the long-wavelength spectrometer (Gehrz et al. 2001). The subsequent goal of the later-epoch observation was to measure the expansion of the outer dust torus separately from the inner ionised gas nebula.
MULTI-WAVELENGTH STRUCTURE
==========================
We investigated the multi-wavelength structure of the nebula by comparing the IR and optical images. The morphology of the optical images in H$\alpha$ was already discussed in detail in previous papers (Smith et al. 2002, 2001, 1999). The NIRC2-AO images in the $L$ band provide the highest-resolution images of the dust torus so far. They are comparable in spatial resolution to the [*HST*]{} images, permitting the first meaningful comparison between the two. Figure \[fig:color\] shows a colour composite comparing the [*HST*]{} image to the near-IR Keck-AO $L_p$ image, while Figure \[fig:contour\] provides various comparisons of multi-wavelength data from the present study and from a previous study of thermal-IR Keck images by Gehrz et al. (2001).
The first-epoch 2003 image of RY Scuti obtained with NIRC2-AO is shown in Figure \[fig:contour\]a. The combination of AO and the better diffraction limit at $\sim$4 $\mu$m as compared to $\sim$10 $\mu$m means that this $L_p$-band image provides the sharpest view yet of the structure in the dust torus around RY Scuti, improving the spatial resolution from our previous thermal-IR Keck images without AO (Gehrz et al. 2001) by a factor of 2–3. The warm-dust emission in this filter traces the same dust responsible for the torus observed at longer wavelengths. This is evident from Figure \[fig:contour\]b, which shows the same NIRC2-AO image from Figure \[fig:contour\]a with the contours of 11.7 $\mu$m emission from Gehrz et al. (2001) superposed. Allowing for differences in spatial resolution (and for stronger photospheric emission from the central star at shorter wavelengths), the $L_p$-band image has the same spatial distribution as the 11.7 $\mu$m thermal-IR emission. This was expected, since the 2–20 $\mu$m spectral energy distribution (SED) of the dust torus can be fitted with a single dust temperature (Gehrz et al. 2001). In other words, the $L_p$ filter at 3.8 $\mu$m samples the Wien tail of the same 300–400 K dust whose emission peaks at $\sim$10 $\mu$m, rather than sampling hotter dust closer to the star. The new AO image indicates a very thin distribution of dust in the radial direction, consistent with our findings below that the dust torus originated in an episodic ejection from the star.
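The statement that the $L_p$ band samples the Wien tail of the 300–400 K dust can be verified with Wien's displacement law, $\lambda_{\rm peak} = b/T$ with $b \approx 2898$ $\mu$m K (a standard relation, not given in the text):

```python
WIEN_B = 2898.0  # Wien displacement constant in micron*kelvin

def peak_wavelength_um(T_kelvin):
    # Wavelength of maximum blackbody emission (Wien's displacement law).
    return WIEN_B / T_kelvin

# Dust at 300-400 K peaks near 7-10 microns, so the 3.8 micron L_p filter
# sits well shortward of the peak, on the Wien tail of the same grains:
print(round(peak_wavelength_um(300.0), 1))  # 9.7 microns
print(round(peak_wavelength_um(400.0), 1))  # 7.2 microns
```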
The NIRC2-AO images show unprecedented detail of the structure in the IR torus. The toroidal nebula obviously appears pinched at the waist, flaring above and below the equator. This appearance could be due to the overlap of two inclined rings, as in the optical images, or it could be that the dust torus is akin to an hourglass structure with the top and bottom chopped off. In either case, the dust torus appears to have a sharp outer boundary, and we see no evidence in either the [*HST*]{} or near-IR images for faint extensions of a larger hourglass structure. Overall, the structure of the IR dust torus resembles the morphology seen in the inner ionised rings, but with roughly twice the size.
The near-IR AO images reveal no clear evidence for an enhancement of dust emission from parts of the nebula inside the dust torus, consistent with the single-temperature dust SED as noted above. In particular, there is no enhancement of IR emission coincident with the ionised structures of the inner rings seen in H$\alpha$ images with [*HST*]{} (except for some spurious features associated with the Keck PSF). Since the inner rings are closer to the star and any dust therein would be hotter and more easily detected at short IR wavelengths, we conclude that the inner rings have a much lower dust mass than the outer rings. The second mass-ejection event that produced the inner rings evidently did not form dust as efficiently – at least not yet. Given that the temperature of the outer dust torus is 300–400 K (Gehrz et al. 2001), the equilibrium grain temperature in the inner rings (at roughly half the distance from the same star) should be about 420–560 K. Since the condensation temperature of dust grains is typically $\ga$1000 K, any dust that was destined to form in the inner rings should have done so already. An interesting possibility is that the second ejection, which formed the inner ionised nebula, was able to shield the outer torus from ionising radiation, thereby allowing it to form dust. One could speculate, then, that a hypothetical future ejection might be needed to shield the inner torus seen now in [*HST*]{} images in order to facilitate dust formation in that feature.
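The 420–560 K estimate above follows from the radiative-equilibrium scaling $T \propto r^{-1/2}$ (appropriate for grey grains; the sketch below assumes this scaling) applied at half the radius of the outer torus:

```python
def equilibrium_temp(T_ref, r_over_r_ref):
    # Grey-grain radiative equilibrium: T ∝ r^(-1/2), so grains at half the
    # reference radius are hotter by a factor sqrt(2) ≈ 1.41.
    return T_ref * r_over_r_ref ** -0.5

# Outer dust torus at 300-400 K (Gehrz et al. 2001); inner rings at ~r/2:
print(round(equilibrium_temp(300.0, 0.5)))  # 424 K
print(round(equilibrium_temp(400.0, 0.5)))  # 566 K
```

Both values sit comfortably below the $\ga$1000 K condensation temperature, supporting the argument that any dust destined to form in the inner rings should already have done so.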
The higher resolution of the new near-IR AO images reveals for the first time a clear gap between the inner ionised rings and the outer dust torus. Figure \[fig:model\] shows tracings across the middle of the Keck IR and [*HST*]{} images (see below), both as observed in 2009. For comparison, Figure \[fig:model\] also shows very simple, idealised models for the optically thin intensity from a cross section through the middle of the geometrically thin shell (see Smith et al. 2007 for more details). These simple models are not perfect matches to the data due to the non-azimuthally symmetric density structure of the ionised rings (Smith et al. 2002). While the NE side of the nebula matches the simple models rather well, the SW portion deviates — and interestingly, both the inner ionised rings and the outer dust torus deviate from the model on the SW side in the same manner. This requires that both the inner and outer tori [*deviate from azimuthal symmetry in a similar way*]{}. This is probably an important clue to the ejection mechanism that formed them.
The simple models in Figure \[fig:model\] allow us to provide rough estimates of the inner and outer radii for both the ionised and dusty tori. These values are listed in Figure \[fig:model\], and we note that the values derived from images of the inner ionised component agree with the thickness inferred from STIS spectroscopy (Smith et al. 2002). The radial thicknesses of the model shells are 23–28% of their radii. More importantly, there is a clear gap between the inner edge of the IR torus and the outer edge of the ionised rings (i.e., the structures do not touch or overlap). This is most evident on the NE side of the nebula (Figure \[fig:model\]), and is clear in the colour image in Figure \[fig:color\] as well, indicating a spatial separation of the dust and ionised gas. Physically, this is quite meaningful; it indicates that the outer boundary of the ionised rings is not caused by a simple ionization front, because in that case we would expect non-ionising UV photons to penetrate the ionization front and heat the dust immediately outside it. The large separation instead suggests that the IR peak is a separate density enhancement at a larger radius where the remaining UV radiation is absorbed. (The inner ring may nevertheless play an important role in shielding the outer dust torus, but our point here is that the outer boundary of the ionised torus is caused by a drop in density, not an ionization front.) As we show below, proper motions indicate that the outer IR dust torus is older and originated in a separate mass-ejection event. The spatial gap between the inner and outer rings indicates that the outer ring was not responsible for shaping the inner ring, because they are not interacting hydrodynamically.
In the ionised inner torus, the NE peak is much brighter than the SW peak. For the IR torus, the reverse is true. Perhaps this is because the NE side of the IR torus is shielded by a denser inner nebula, as compared to the SW side. Of course, these differences may be due purely to different density distributions as well.
PROPER MOTIONS OF THE TWO COMPONENTS
====================================
The Inner Ionised Rings with HST
--------------------------------
By aligning the three epochs of [*HST*]{} images, we were able to confirm that the expansion of the rings is primarily homologous (i.e., self-similar radial expansion). When an image taken at an early epoch is magnified by some scaling factor, it looks identical to one taken at a later epoch. We detect no evidence for non-radial motion, acceleration, or deceleration in the expanding nebula.
To quantify the fractional increase in size between epochs, we scaled later images so that the nebular structure matched that in the first epoch, and then performed a cross correlation of the scaled image to estimate the best scaling factor. This is similar to the method employed by Morse et al. (2001) for measuring proper motions in [*HST*]{} images of $\eta$ Carinae, except that here we adopted a multiplicative size-scaling factor for the whole nebula (with central regions near the star masked out) rather than measuring translational shifts independently for many individual small condensations. The former method is well suited to the compact size and simpler structure of RY Scuti’s nebula.
To illustrate the scaling between epochs, Figure \[fig:hstpm\] (middle panel) shows intensity tracings across the major axis of the nebula through the emission peaks on either side of the star at each epoch. One can see that the nebula is clearly expanding with time. The bottom panel in Figure \[fig:hstpm\] shows the same tracings, but with the pixel size scale of the 2009 and 2001 images reduced to match the structure in the 1997 image. Similar tracings for the Keck images are shown in Figure \[fig:keckpm\].
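As a toy illustration of this scale-and-correlate measurement, the sketch below builds synthetic one-dimensional double-peaked tracings (the profiles, radii, and scale grid here are hypothetical stand-ins, not our measured data) and recovers the factor that best shrinks the later tracing onto the earlier one:

```python
import numpy as np

def tracing(x, radius, width=2.0):
    """Synthetic double-peaked intensity tracing: emission peaks at +/- radius."""
    return np.exp(-((np.abs(x) - radius) / width) ** 2)

def best_scale(x, trace1, trace2, scales):
    """Find the factor eps that best shrinks trace2 onto trace1.

    Shrinking by eps means sampling trace2 at x/eps; we pick the eps that
    maximises the correlation with the earlier-epoch tracing.
    """
    best_eps, best_r = None, -np.inf
    for eps in scales:
        shrunk = np.interp(x / eps, x, trace2)
        r = np.corrcoef(trace1, shrunk)[0, 1]
        if r > best_r:
            best_eps, best_r = eps, r
    return best_eps

x = np.linspace(-60, 60, 1201)
epoch1 = tracing(x, radius=20.0)           # earlier, smaller nebula
epoch2 = tracing(x, radius=20.0 / 0.95)    # later epoch, 1/0.95 times larger
eps = best_scale(x, epoch1, epoch2, np.arange(0.90, 1.001, 0.005))
```

In the actual measurement the correlation is performed on the two-dimensional images rather than one-dimensional cuts, but the principle is the same.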
We define the scale factor $\epsilon$ by which the size of the nebula at epoch 2 ($R_2$) must be reduced to match the radius at epoch 1 ($R_1$) as $\epsilon$ = $R_1$/$R_2$. In terms of this scaling factor $\epsilon$, the age of the nebula $t$ (the time since ejection relative to epoch 2) is given by
$$t = \Delta t \ \times \ (1 - \epsilon)^{-1},$$
where $\Delta t$ is the time period that has elapsed between the images at epochs 1 and 2. We define epochs 1, 2, and 3 as the images taken in 1997, 2000, and 2009, respectively. From the dates of observations, $\Delta t_{1,2} = 2.7226$ yr and $\Delta t_{1,3} = 11.8826$ yr. By cross correlating the NE and SW peak structures at different epochs, we measure $\epsilon_{1,2} = 0.977 \pm 0.003$ and $\epsilon_{1,3} = 0.907 \pm 0.003$. Equation (1) then yields ages of $t_{1,2} = 118.4 \pm 16$ yr (relative to 2000) and $t_{1,3} = 127.8 \pm 4.2$ yr (relative to 2009). The uncertainties are Gaussian 3$\sigma$ error bars resulting from the spatial cross-correlation of each pair of epochs. Thus, the baselines between epochs 1 and 2 (1997–2000) and between epochs 1 and 3 (1997–2009) both agree on an ejection date around the year 1881 for the ionised torus. We adopt an uncertainty of $\pm$4.2 yr in the age and ejection date, since it corresponds to the longer temporal baseline with a more precise measurement. Note that this measurement is independent of the precision to which we can spatially align the central star in the images, since we are simply measuring the growth in size across the nebula, not the distance from the star.
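As a quick numerical check of Equation (1), the short sketch below uses only the scale factors and baselines quoted above, taking $\sigma_t = t\,\sigma_\epsilon/(1-\epsilon)$ for the uncertainty contributed by $\epsilon$:

```python
def nebula_age(dt, eps, sigma_eps):
    """Age t = dt / (1 - eps), with the uncertainty propagated from eps alone."""
    t = dt / (1.0 - eps)
    sigma_t = t * sigma_eps / (1.0 - eps)
    return t, sigma_t

# Measured values from the text: eps_{1,2} and eps_{1,3} with their 3-sigma errors
t12, s12 = nebula_age(2.7226, 0.977, 0.003)   # ~118.4 +/- 15.4 yr (relative to 2000)
t13, s13 = nebula_age(11.8826, 0.907, 0.003)  # ~127.8 +/- 4.1 yr (relative to 2009)
```

Both baselines then point to an ejection date near 1881, as quoted above.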
![Plot of the increase in the width of the major axis of RY Scuti’s nebula with time; the width is measured consistently at half the peak intensity level. Unfilled points correspond to the IR torus measured in Keck $L_p$ images obtained in 2003 and 2009, while the filled points correspond to the ionised rings measured in archival VLA data from 1983 and 1992 (from Smith et al. 2001) and [*HST*]{}/H$\alpha$ images obtained in 1997, 2000, and 2009. The dotted line extrapolates the expansion rate measured from the two Keck points. The dashed and solid lines show least-squares fits for the expansion of the ionised rings with the first two VLA data points included and excluded, respectively, along with the subsequent three [*HST*]{} images. []{data-label="fig:pmplot"}](fig6.eps){width="3.1in"}
Figure \[fig:pmplot\] shows the expansion of RY Scuti’s nebula in a different way. This is an updated version of Figure 3 from Smith et al. (2001), including the new Keck data and the last epoch of [*HST*]{} imaging. It shows the width of the major axis of the nebula, measured at half the peak intensity. For the ionised rings, this includes the images of radio free-free emission obtained with the VLA in 1983 and 1992, as well as the three epochs of H$\alpha$ imaging in 1997, 2000, and 2009. The dashed and solid lines show a linear least-squares fit (weight $\propto$ $\sigma^{-1/2}$) to the expansion rate of the ring with (dashed) and without (solid) the early VLA data included in the fit. These fits yield ejection dates of 1878$\pm$5 and 1883$\pm$6, respectively, for the ionised component of the nebula, consistent with the ejection date of 1881$\pm$4 derived from the first method discussed above. The dotted line extrapolates the expansion rate measured the same way in the IR Keck images; this is not a fit since there are only two points. The implied ejection date of 1794$\pm$30 for the dust torus corresponds to a younger age than that derived below, but is consistent within the uncertainty.
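The fit itself is a weighted straight line through width versus epoch, with the ejection date read off the zero-width intercept. A minimal sketch follows; the widths and error bars below are hypothetical stand-ins for the VLA/[*HST*]{} measurements, constructed from an exactly linear expansion so the recovered date is known in advance:

```python
import numpy as np

# Hypothetical major-axis widths (arcsec), growing linearly from an 1881.0 ejection
years  = np.array([1983.0, 1992.0, 1997.0, 2000.0, 2009.0])
widths = 0.05 * (years - 1881.0)
sigma  = np.array([0.05, 0.05, 0.02, 0.02, 0.02])  # assumed measurement errors

# Weight ~ sigma^(-1/2), mirroring the weighting quoted above;
# homologous expansion implies width = slope * (t - t_ej)
slope, intercept = np.polyfit(years, widths, 1, w=sigma ** -0.5)
t_ej = -intercept / slope  # epoch of zero width = ejection date
```

With real data the scatter and weights shift the intercept, which is why the fits with and without the VLA points bracket the cross-correlation result.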
The Outer Dust Torus with Keck AO
---------------------------------
We employed the same method of measuring the expansion of the dust torus in IR images as described above for the [*HST*]{} images, except that for the IR images we had only two epochs. Figure \[fig:keckpm\] shows intensity tracings across the long axis of the dust torus at two positions, indicated in the top panel. The middle panel displays the tracings for the two epochs at the two sampled positions, while the bottom panel illustrates the same tracings except that the 2009 epoch has been scaled to match the 2003 image (see discussion above concerning the [*HST*]{} images and Fig. \[fig:hstpm\]). The two different position samples are consistent with the same expansion rate.
The comparison of the 2003 and 2009 Keck images of the IR dust torus shows that the size of the torus in 2003 was 97.6% ($\pm$0.3%) of the size observed in 2009 (i.e., $\epsilon_{1,2} = 0.976 \pm 0.003$). Since the time elapsed between images was $\Delta t = 6.216$ yr, Equation (1) indicates that the dynamical age of the dust torus is $255 \pm 32$ yr, implying an ejection date around the year 1754. This is roughly twice the age of the inner ionised rings determined above from [*HST*]{} images, so the two components cannot be the result of the same mass ejection. The age derived from this method is consistent within the uncertainty with the age derived from the simpler method described above and shown in Figure \[fig:pmplot\], where we measured the changing width of the major axis of the nebula at half the peak intensity, following Smith et al. (2001). This simpler method is more susceptible to error caused by changes in spatial resolution, which is more of a concern in ground-based data than in [*HST*]{} data, which have a more consistent PSF. We therefore favour the ejection date around 1754.
DISCUSSION
==========
Formation of the Double Rings
-----------------------------
The mechanism for the formation of the rings around SN 1987A has presented an enduring mystery that has no obvious answer (e.g., Blondin & Lundqvist 1993; Burrows et al. 1995; Martin & Arnett 1995; Collins et al. 1999; Morris & Podsiadlowski 2005), but one expects that the formation mechanism of the rings might also be an important clue to understanding the peculiar nature of the blue progenitor star Sk $-$69 202. Smith et al. (2007) noted a population of stars with equatorial rings that are analogous to the equatorial ring around SN 1987A, including RY Scuti. Only one other object, the luminous blue variable (LBV) star HD 168625, is known to have triple rings similar to those of SN 1987A (Smith 2007). RY Scuti has a set of double ionised rings which are not identical to those of SN 1987A, but may be related.
One can test some models for formation of double rings by measuring the expansion dynamics of the rings – i.e., do they exhibit homologous expansion, non-radial expansion, or other peculiar motions or time-variable illumination? Models with aspherical ejection from the star system would predict homologous expansion at these large distances. On the other hand, models such as those of Chiţǎ et al. (2008) have the rings produced by a bipolar wind pushing through a previously ejected thin shell, with the two structures colliding and forming rings at their intersection. In this model, the apparent ring structure originates at the location where the rings are observed, rather than from a shaping mechanism acting close to the star. It would predict motion of the rings toward the equatorial plane with time as the inner bipolar wind sweeps through the thin spherical shell, with the intersection migrating from the pole to the equator (see Chiţǎ et al. 2008). We therefore conclude that models such as the one discussed by Chiţǎ et al. (2008) do not apply in the case of RY Scuti, since the nebula is expanding homologously.
A broad class of hydrodynamic models involves the formation of bipolar nebulae by the interaction of a wind with a previously ejected equatorial density enhancement; essentially, a pre-existing disk or torus pinches the waist of subsequently ejected material to produce an hourglass shape. A model by Morris & Podsiadlowski (2007) accounts for the formation of SN 1987A’s nebula with a merger of a binary system, where the merger ejects an equatorial torus or disk of material that might then divert a faster wind toward high latitudes, forming a pair of polar caps that could be seen as rings under proper circumstances of illumination. That merger model cannot apply here, since RY Scuti has not yet merged. More importantly, the gap between the two components indicates that the second mass ejection is not yet interacting hydrodynamically with the first one, even at this very young stage, so the formation of its rings must have a different origin that does not depend on hydrodynamic shaping by a previously ejected disk.
Instead, the proper motions in RY Scuti’s nebula suggest that on at least two separate occasions separated by only $\sim 100$–200 yr, the star system suffered some sort of outburst that ejected mass near the equator at relatively slow speeds (i.e., much slower than the escape speed or normal wind speed of either star). It also suggests that whatever mechanism shaped the first ejection was able to persist and have the same influence on the second, because both ejections have the same basic geometry and structure. The cause of such an outburst is unknown, but some ideas are discussed in §5.3. Here we focus on the shape of the ejecta, and we discuss two potential scenarios, the second of which we deem to be more likely.
\(1) During an episode of increased mass loss or mass transfer from the primary star, RY Scuti may have suffered an enhancement of mass loss through the outer L2 Lagrangian point (see Figure \[fig:sketch\]). From the phase variability of absorption features such as He [i]{} lines, Grundstrom et al. (2007) inferred that RY Scuti does in fact have a mass-loss stream exiting the system through the L2 point in its present-day state. An increase in this mass-loss rate at two separate times in the past could lead to the creation of discrete toroidal structures that would expand outward to form the currently observed circumstellar nebula. One potential inconsistency is that the observed outflow through L2 has an expansion speed of $\sim 200$ km s$^{-1}$ (Grundstrom et al. 2007), whereas the toroidal circumstellar nebula has a much slower radial expansion of only $\sim 40$ km s$^{-1}$ (Smith et al. 2002).
Another problem with this scenario is that even with an enhancement of mass transfer and mass loss through L2, one would normally expect mass loss through the outer Lagrangian point to be a relatively slow process compared to the orbital period of only $\sim$11 days, leading to azimuthally symmetric mass loss. Thus, while this scenario accounts for equatorially enhanced mass loss, it provides no compelling explanation for the nature of sudden outbursts or for the azimuthal asymmetry observed in RY Scuti’s ionised rings. Furthermore, the origin of the double ionised ring structure is unclear in this scenario, unless the apparent separation between the rings is due to a shadow cast by an opaque circumbinary disk at inner radii much smaller than the size of the rings, as suggested by Grundstrom et al. (2007).
{width="4.3in"}
\(2) A different scenario may be that the accreting secondary star experiences an outburst, and that the outflowing ejecta are immediately shaped by the accretion torus within a few stellar radii around the secondary (Figure \[fig:sketch\]). As we explain further in §5.3, invoking an outburst from the secondary is motivated by our speculation that the secondary may encounter a cyclical instability associated with the high mass accretion and angular momentum accretion rates currently imposed on it during RLOF.
Suppose that the envelope of the accreting secondary star becomes unstable and suffers a sudden outburst of mass loss. The ejecta that follow mid- to high-latitude trajectories toward the pole will expand unimpeded (represented by the dashed arrows in Figure \[fig:sketch\]), probably at very high speeds of $\sim 1000$ km s$^{-1}$, close to the escape speed of a main-sequence O-type star (recall that the secondary star in the RY Scuti system is thought to be a main-sequence O-type star that is hidden by its cooler, opaque accretion torus; Grundstrom et al. 2007). On the other hand, the ejecta expanding at low latitudes near the equator must contend with the presence of the massive, dense accretion torus. The stellar ejecta will be decelerated by this interaction, and perhaps some of the material on the surface of the accretion torus will be entrained by the outflow that is diverted above and below the torus. This will result in a much slower outflow from the system at latitudes immediately above and below the edges of the accretion torus (this is depicted by the solid short arrows in Figure \[fig:sketch\]). If the ejection is a sudden event, it will produce an enhancement of mass at specific latitudes above and below the equatorial plane, at a specific distance from the star (i.e., plane-parallel rings rather than a shell or conical structures).[^5] In this scenario, it is the thickness of the accretion torus that sets the latitude and separation of the rings that we observe in the circumstellar nebula. The presence of a thick accretion torus reaching at least $\pm$15$\arcdeg$ is required by the fact that the accretion torus obscures the secondary in a system with an orbital inclination of $i \approx 75\arcdeg$ (Smith et al. 2002). This roughly matches the latitudes of the nebular rings at $\pm$14$\arcdeg$.
If a sudden (i.e., dynamical) outburst of the secondary star occurs on a time scale short compared to the orbital period, then it may provide a compelling explanation for the azimuthal asymmetry in the system as well: the parts of the outflowing ejecta that expand toward the bloated supergiant primary star must interact with that star and its wind. This would lead to a gap in the ejecta over a range of azimuthal angles covering 15–30% of the orbit. This is commensurate with the size of the gap on the near side of the nebula (Smith et al. 2002). Without a sudden outburst, one would expect the outflow to be azimuthally symmetric, contradicting observations.
Because a sudden ejection by the secondary star can, in principle, account for both the latitudinal and azimuthal distribution of mass in the nebula around RY Scuti, we find this scenario to be more compelling than option (1) discussed above. Of course, our suggestion should be explored using detailed numerical hydrodynamic simulations. Some modelers have explored a wind interacting with a pre-existing disk or torus (e.g., Frank et al. 1995; Martin & Arnett 1995; Blondin & Lundqvist 1993). However, these simulations generally placed the constricting torus at a large distance from the ejection, rather than an accretion torus within a few stellar radii, and they adopted a continuous wind that rams into the torus rather than a sudden ejection (these simulations – usually in two dimensions – also do not account for the obstruction of a supergiant companion star, of course). The sudden ejection is key for both the ring structure and the azimuthal asymmetry. The potential for a sudden ejection by the secondary to account for the structure in RY Scuti’s circumstellar nebula motivates our speculation in §5.3 regarding possible physical causes of an outburst from the accreting secondary.[^6]
Significance in Pre-Supernova Evolution, and Post-RLOF Binaries as B\[e\] Supergiants
-------------------------------------------------------------------------------------
Despite the recent episodes of mass ejection, estimates of the nebular mass compared to the amount of mass exchanged between the stars suggest that RLOF has been mostly conservative so far for RY Scuti. The mass of the inner ionised gas torus is roughly 0.003 M$_{\odot}$ (Smith et al. 2002), while the mass of the outer dusty torus (from the measured dust mass multiplied by an assumed gas:dust mass ratio of 100 adopted by those authors) is at least $1.4 \times 10^{-4}$ M$_{\odot}$ (Gehrz et al. 2001). Thus, the total mass detected in the two-component toroidal nebula around RY Scuti is only of order 0.003 M$_{\odot}$. This is admittedly a lower limit to the total mass lost, since there may be dense clumps of neutral gas not measured in the tracers of ionised gas. There could also be more mass in the equatorial plane at larger distances that is shielded from the star’s radiation by the inner components of the nebula, but this mass has not been constrained by any previous study. In any case, the nebular mass ejected in the past $\sim$10$^4$ yr is probably far less than 1 M$_{\odot}$. With a current stellar-wind mass-loss rate of around (1–2) $\times 10^{-6}$ M$_{\odot}$ yr$^{-1}$ (typical for the wind of an O9 supergiant; Repolust et al. 2004), the mass lost in a fast line-driven stellar wind during this time should be comparable to that in the ionised nebula.
As noted in the introduction, however, modern observed parameters for RY Scuti favour present-day masses of $\sim$30 M$_{\odot}$ for the secondary, and $\sim$8 M$_{\odot}$ for the primary (originally the more massive star). Since the likely initial masses were of order 25 and 15 M$_{\odot}$, this suggests that about 15 M$_{\odot}$ has been shifted from the primary to the secondary during the brief ($\sim$10$^4$ yr) RLOF phase, whereas much less than 1 M$_{\odot}$ appears to have been lost from the system during that same time. From these rough observational estimates, we conjecture that the mass transfer in RY Scuti has been largely conservative — that is, unless a large amount of nebular material resides in the equatorial plane outside the dust torus where it may be shielded, and may therefore remain cold and largely neutral. Observations at far-IR and submm wavelengths may help constrain the amount of additional cold material in the system.
While conservative mass transfer rather than mass loss appears to dominate the stripping of the donor star’s H envelope, the mass ejected into an equatorial torus may play an important role in angular momentum loss. The resulting dusty toroid or ring may have observable consequences after the short phase of RLOF is complete, as noted earlier.
In the RY Scuti system, the primary has transferred much of its mass to the secondary, which, as a result, is seen as a disk-enshrouded O-type supergiant star that will now be overluminous for its initial mass, and will be rapidly rotating due to the additional angular momentum of the accreted mass (see below). Eventually, when the opaque disk dissipates and thins, the secondary will be much brighter at visual wavelengths than its hotter WR-like primary, which will have lost its envelope, substantially reduced its mass and luminosity, and will then radiate most of its luminosity in the far-UV. Thus, when RLOF is finished and the primary finally becomes a WR-like star depleted of H, the overluminous secondary might outshine the primary at visual wavelengths. Due to the presence of recently ejected circumstellar material in a surrounding nebula, the system will continue to have bright emission lines, radio emission, and IR excess from dust. In many observable respects, the system may therefore resemble a B\[e\] supergiant (e.g., Zickgraf et al. 1996).
If the primary in RY Scuti is, in fact, destined to die as a Type Ib/c SN, then it provides us with a real example of what binary progenitors of SNe Ib/c may look like. Additionally, SNe IIb are closely related to SNe Ib, except that they have a small residual H envelope; RY Scuti could therefore also die as an SN IIb if it fails to completely shed its outer H layers before core collapse. In that case, it provides us with a Galactic analog to the progenitor of the Type IIb explosion SN 1993J in M81 (Filippenko et al. 1994), which was inferred to be a close binary system of slightly lower initial mass that experienced an almost identical binary evolutionary path (Maund et al. 2004; see also Aldering et al. 1994; Van Dyk et al. 2002).
A distant extragalactic observer who witnesses the SN resulting from the explosion of the primary in RY Scuti might be able to infer, from appropriate pre-explosion archival data, that there was a blue star with $M_V \approx -6$ mag at the same position before the SN. This would not have been the SN progenitor itself, but the overluminous mass-gainer secondary in the close binary system. The alien observer could verify this conjecture with late-time observations showing that the blue star was not destroyed in the SN explosion. The observer might infer from single-star evolution models that this star had a ZAMS mass of $\sim 30$ M$_{\odot}$, when in fact the masses of the primary and secondary have been substantially altered by RLOF. In this case, however, it would be a mistake to disregard this surviving source as a chance coincidence, since the overluminous blue companion provides an important clue that the primary was stripped of its H envelope via RLOF in a close binary system. Indeed, Maund et al. (2004) identified a massive blue star that might have been the surviving mass-gainer companion to the star that exploded as SN 1993J.
With only $\sim 0.003$ M$_{\odot}$ of ejected gas in the immediate circumstellar environment, the shock interaction with the surrounding torus will not be strong enough to produce a Type IIn supernova (see Smith et al. 2009). Thus, the resulting explosion from the primary in RY Scuti will likely be a Type Ib, or perhaps a Type IIb event like SN 1993J, depending on whether all or nearly all of the H envelope is transferred from the primary to the secondary. However, an interesting consequence of the toroidal circumstellar nebula around RY Scuti is that the resulting SN might produce a strong IR echo, due to the circumstellar dust getting heated by the SN’s pulse of UV/optical luminosity.
Outbursts from the Accreting Secondary and Massive RLOF Binaries as Optical Transients
--------------------------------------------------------------------------------------
Our study provides empirical evidence that the mass-transfer phase in massive binaries can in some cases be accompanied by episodic bursts of mass ejection, rather than just a continuous and steady transfer of mass. Here we speculate about the underlying physical cause of the outbursts, and we speculate about possible observed consequences of these events.
Previous studies of RY Scuti suggest that the initially more massive star ($\sim$25 M$_{\odot}$) has shed much of its H envelope, leaving an 8 M$_{\odot}$ stripped-envelope star. The secondary, initially the less massive of the two, has already accreted 10–15 M$_{\odot}$ through an accretion disk, yielding a 30 M$_{\odot}$ star that is still largely obscured by its surrounding accretion torus. It is probable that the accreting secondary in such a system will be significantly spun up due to mass and angular momentum accretion, and may therefore be at or near critical rotation (e.g., Struve 1963; Packet 1981; Langer & Petrovich 2007; Vanbeveren et al. 1998; Langer et al. 2008). Indeed, estimates suggest that accreting even a few per cent to $\sim$10% of a star’s mass via an accretion disk in RLOF is enough to spin up a star to near the critical rotation limit (Packet 1981; Vanbeveren et al. 1998), and this is one of the leading ideas for the formation of Be stars in binary systems (e.g., Gies 2007; Dewi 2007). The secondary star in RY Scuti has accreted a large fraction of its current stellar mass (roughly half), suggesting that it must have already encountered an angular momentum catastrophe where it has reached critical rotation, perhaps on several occasions. Moreover, the addition of mass and heating of the envelope via accretion luminosity on a short RLOF timescale of less than 10$^4$ yr \[shorter than the (2–3) $\times 10^4$ yr KH time of the entire star\] may leave the envelope overluminous and out of thermal equilibrium with the core. Unfortunately, this is difficult to test directly, since the secondary in the RY Scuti system is hidden by an opaque accretion disk in its present state.
While in this rapidly rotating nonequilibrium configuration, the accreting secondary star may be subject to a quasi-cyclical instability whose recurrence is set by either the angular momentum diffusion timescale or the thermal timescale in the star’s outer envelope, coupled to the mass and angular momentum accretion rate. The situation is reminiscent of rotational and thermal instabilities discussed for the envelopes of LBVs (Appenzeller 1986; Stothers 2000; Guzik et al. 1999; Davidson 1999; Smith et al. 2003). One example of these is the so-called Omega limit (Langer 1997, 1998; see also Glatzel 1998), where critical rotation combined with high luminosity leads to violent mass ejections from a massive star. Shedding mass and angular momentum in a shell ejection could temporarily alleviate the state of critical rotation — but if the angular momentum of the secondary is continually replenished with an accretion disk in RLOF, that star could repeatedly be driven to critical rotation and may therefore encounter recurring shell ejections. We suspect that this scenario might lead to repeated mass ejections like those experienced by RY Scuti. With a very high mass-transfer rate of order 10$^{-3}$ M$_{\odot}$ yr$^{-1}$ (i.e., 10–15 M$_{\odot}$ during the RLOF phase of $\sim 10^4$ yr), the secondary of RY Scuti will accrete about 0.1 M$_{\odot}$ over the observed time interval of $\sim$120 yr between mass ejections. This mass gained with high specific angular momentum at the equator is more than the amount of mass lost during the same time interval, suggesting that it is enough to replenish the amount of angular momentum that was lost, and would therefore be sufficient to drive the star back to critical rotation. Further work on thermal and dynamical instabilities in the envelopes of rapidly rotating accreting secondaries in binaries would be of considerable interest. In particular, a detailed dynamical treatment of the rotating stellar envelope is needed to constrain the physics of the instability and whether it can lead to a sudden outburst. If so, it might provide a possible explanation for the origin of LBV eruptions that occur in binary systems.
Recall that our suggestion is motivated in part by the fact that invoking repeated sudden outbursts from the accreting secondary star provides a reasonable explanation for several observed properties of RY Scuti’s nebula, including its double-ring toroidal nebula, its azimuthal asymmetry, and the fact that its repeated mass ejections that occurred $\sim$250 and $\sim$130 yr ago both had similar geometry. The slower leaking of mass through the outer L2 point provides for equatorially enhanced mass loss, but does not seem to account for other observed properties of the system (see §5.1). In the case of RY Scuti, the amount of mass lost appears to be much smaller than the amount of mass transferred from the primary to the secondary (i.e., recent RLOF appears to be nearly conservative in the case of RY Scuti).
Sudden mass ejections are often accompanied by luminous outbursts akin to the giant eruptions observed in LBVs. There is a wide diversity of LBV-like eruptions, sometimes called “SN impostors,” which has been reviewed recently by Smith et al. (2011). The famous massive binaries $\eta$ Car and HD 5980 both suffered LBV giant eruption events. There are a number of other events for which we do not know whether the progenitors are in binary systems, but two recent examples, SN 2000ch and SN 2009ip, have exhibited [*repeating*]{} LBV-like outbursts (Pastorello et al. 2010; Smith et al. 2011). Although the underlying cause of eruptions is not known, $\eta$ Car exhibited brief luminous peaks in the light curve at times of periastron in the eccentric binary (Smith & Frew 2011; Smith 2011). These reached absolute magnitudes of roughly $-$14 mag and lasted for about 100 days, very similar to several other SN impostors. The relatively small (0.003 M$_{\odot}$) mass ejections of RY Scuti are not known to have coincided with a major brightening event, but brief outbursts in the 18th and 19th centuries might easily have been missed if the brightening lasted only a few days. Other more extreme mass ejections in binaries are possible and do coincide with major brightening events.
We therefore speculate that some events in the population of observed extragalactic SN impostors could be related to mass ejections that are caused by an instability associated with mass and angular momentum accretion, similar to the one we outlined above for RY Scuti. Whether this is true requires more detailed theoretical study of the dynamical and thermal stability of accreting stars in RLOF. This hypothesis has the advantage that accretion-induced critical rotation can be reached for the mass gainers over a wide range in initial masses, not limited to the most massive stars near the Eddington limit. Many of the SN impostors and related transients appear to arise from stellar systems with initial masses below 20 M$_{\odot}$ (see Smith et al. 2011 and references therein), and this has been difficult to understand in the context of LBV eruptions. RY Scuti presents a concrete example of an RLOF system that has experienced repeating sudden mass ejections where the mass gainer is known to be surrounded by an accretion torus.
CONCLUSIONS
===========
We briefly summarise the main observational conclusions and their implications from our study of the expansion of RY Scuti’s nebula.
\(1) The expansion age of the inner ionised rings measured using [ *HST*]{} images indicates an ejection date of roughly 1881 ($\pm$4 yr).
\(2) The expansion age of the outer dusty IR torus measured using Keck AO images yields a likely ejection year of 1754 ($\pm$36 yr).
\(3) We therefore conclude that the two components of RY Scuti’s nebula (the inner ionised rings and the outer dust torus) were the result of two separate ejection events recurring on a timescale of $\sim 120$–130 yr. One may wonder whether RY Scuti is due for another such ejection event in the near future.
\(4) Conclusion (3) is supported by a clear spatial gap between the two structural components, indicating that they are not interacting hydrodynamically. Therefore, one cannot invoke a distant pre-existing disk as the shaping mechanism for the double-ring nebula. This gap also indicates that few ionising photons can penetrate the inner ionised rings. We speculate that the second ejection event, which is now seen as the expanding ionised rings in [*HST*]{} images, may have shielded the outer torus from the stars’ ionising radiation, thereby allowing dust to form in the outer torus.
\(5) We suggest a formation mechanism for the toroidal circumstellar nebula that involves matter ejected suddenly (i.e., dynamically) by the accreting secondary star, and immediately being shaped by the opaque accretion torus that is thought to surround the secondary star, although we encourage numerical simulations of this scenario.
\(6) The primary star in RY Scuti, which is in the process of losing its H envelope via RLOF, is our best-studied candidate for the progenitor of a Type IIb or Type Ibc supernova where the envelope stripping from close binary evolution is currently underway. While other post-RLOF systems are good candidates for SNe Ibc as well, RY Scuti is a rare example of a system that is currently in the critical mass transfer phase, and which is surrounded by a nebula that allows us to measure the mass lost from the system.
\(7) RY Scuti suggests, therefore, that the formation of SN Ibc progenitors via RLOF may in some cases be punctuated by sudden, repeating episodes of mass loss. We speculate that this may be the result of an instability associated with mass and angular momentum accretion by the secondary star, although this idea deserves additional study. In some cases these mass-ejection events may be seen as luminous outbursts, even though we are aware of no such record of an observed outburst in the specific case of RY Scuti. It is worth considering the possibility that other close binary systems may contribute to the diverse population of non-supernova optical transients now being discovered.
\(8) When RLOF finishes for RY Scuti, we speculate that the accretion torus around the secondary will become optically thin, revealing the rapidly rotating and overluminous mass gainer. The system will also be surrounded by a dusty torus resembling those seen in B\[e\] stars. If this overluminous 30 M$_{\odot}$ secondary star dominates the optical luminosity of the system, it may help explain an observed association between OB emission-line stars and some SN Ibc progenitors, even though it is the companion to the overluminous OB star that actually explodes.
[**ACKNOWLEDGMENTS**]{}
Support was provided by the National Aeronautics and Space Administration (NASA) through grants GO-11977, GO-8209, and GO-6492 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. RDG was supported by NASA and the United States Air Force. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community; we are most fortunate to have the opportunity to conduct observations from this mountain.
Aldering, G., et al. 1994, AJ, 107, 662
Antokhina E.A., Cherepashchuk A.M. 1988, AZh Pis’ma, 14, 252
Antokhina E.A., Kumsiashvili M.I. 1999, Astron. Lett., 25, 662
Blondin J.M., Lundqvist P. 1993, ApJ, 405, 337
Burrows C.J., et al. 1995, ApJ, 452, 680
Chiţǎ S.M., Langer N., van Marle A.J., García-Segura G., Heger A. 2008, A&A, 488, L37
Collins T.J.B., Frank A., Bjorkman J., Livio M. 1999, ApJ, 512, 322
Conti P.S. 1976, Mem. Soc. R. Sci. Liège, 9, 193
Cowley A.P., Hutchings J.B. 1976, PASP, 88, 456
de Martino D., Vittone A.A., Rossi C., Giovanelli F. 1992, A&A, 254, 266
Dessart, L., Hillier, D.J., Livne, E., Yoon, S.C., Woosley, S.E., Waldman, R., & Langer, N. 2011, MNRAS, tmp
Dewi J.D.M. 2007, in Massive Stars in Interacting Binaries, ed. N. St-Louis & A.F.J. Moffat (San Francisco: ASP), 315
Djurašević G., Eshankulova M., Erkapić S. 2001, A&A, 374, 638
Djurašević G., Vince I., Atanacković O. 2008, AJ, 136, 767
Dougherty S.M., Clark J.S., Negueruela I., Johnson T., Chapman J.M. 2010, A&A, 511, A58
Filippenko A.V. 1997, ARAA, 35, 309
Filippenko, A. V., et al. 1994, AJ, 108, 2220
Frank A., Balick B., Davidson K. 1995, ApJ, 441, L77
Gehrz R.D., Smith N., Jones B., Puetter R., Yahil A. 2001, ApJ, 559, 395
Gehrz R.D., et al. 1995, ApJ, 439, 417
Gies D.R. 2007, in Massive Stars in Interacting Binaries, ed. N. St-Louis & A.F.J. Moffat (San Francisco: ASP), 325
Glatzel W. 1998, A&A, 339, L5
Giuricin G., Mardirossian F. 1981, A&A, 101, 138
Grundstrom E.D., et al. 2007, ApJ, 667, 505
Hjellming R.M., Blankenship L.C., Balick B. 1973, Nature, 242, 84
King A.R., Jameson R.F. 1979, A&A, 71, 326
Kumsiashvili M., Natsvlishvili R., Kochiashvili N. 2007, A&A Transactions, 26, 103
Langer N. 1997, in ASP Conf. Ser. 120, ed. A. Nota & H.J.G.L.M. Lamers (San Francisco: ASP), 83
Langer N. 1998, A&A, 329, 551
Langer N., Cantiello M., Yoon S.C., Hunter I., Brott I., Lennon D., de Mink S., Verheijdt M. 2008, in Massive Stars as Cosmic Engines, ed. F. Bresolin, P. Crowther, & J. Puls (Cambridge: Cambridge University Press), 167
Langer N., Petrovic J. 2007, in Massive Stars in Interacting Binaries, ed. N. St-Louis & A.F.J. Moffat (San Francisco: ASP), 359
Martin C.L., Arnett D. 1995, ApJ, 447, 378
Maund, J.R., Smartt, S.J., Kudritzki, R.P., Podsiadlowski, P., & Gilmore, G.F. 2004, Nature, 427, 129
Melikian N.D., et al. 2010, Astroph., 53, 202
Merrill P.W. 1928, ApJ, 67, 179
Morris T., Podsiadlowski P. 2006, MNRAS, 365, 2
Morse J.A., et al. 2001, ApJ, 548, L207
Owocki S.P. 2003, in A Massive Star Odyssey: From Main Sequence to Supernova, ed. K.A. van der Hucht, A.Herrero, & C. Esteban (San Francisco: ASP), 281
Owocki S.P., Cranmer S.R., Gayley K.G. 1996, ApJ, 472, L115
Packet W. 1981, A&A, 102, 17
Paczyński B. 1967, Acta Astron., 17, 355
Pastorello A., et al. 2010, MNRAS, 408, 181
Petrovic J., Langer N., van der Hucht K.A. 2005, A&A, 435, 1013
Plavec M. 1980, in Close Binary Stars: Observations and Interpretation, ed. M. Plavec, D.M. Hopper, & R.W. Ulrich (Dordrecht: Reidel), 251
Podsiadlowski P., Joss P.C., Hsu J.J.L. 1992, ApJ, 391, 246
Repolust T., Puls J., Herrero A. 2004, A&A, 415, 349
Sahade J., West R.M., Skul’skii M.Y. 2002, RevMexAA, 38, 259
Skul’skii M.Y. 1992, Soviet Astron., 36, 411
Skul’skii M.Y., West R.M. 1993, AZh, 70, 1177
Smith N. 2007, AJ, 133, 1034
Smith N., 2011, MNRAS, in press (arXiv:1010.3770)
Smith N., Bally J., Walawender J. 2007, AJ, 134, 846
Smith N., Frew D. 2011, MNRAS, in press (arXiv:1010.3719)
Smith N., Gehrz R.D., Goss W.M. 2001, AJ, 122, 2700
Smith N., Gehrz R.D., Humphreys R.M., Davidson K., Jones T.J., Krautter J. 1999, AJ, 118, 960
Smith N., Gehrz R.D., Stahl O., Balick B., Kaufer A. 2002, ApJ, 578, 464
Smith N., Hinkle K.H., Ryde N. 2009, AJ, 137, 3558
Smith N., Li W., Filippenko A.V., Chornock R. 2011, MNRAS, in press (arXiv:1010.3718)
Smith N., Owocki S.P. 2006, ApJ, 645, L45
Smith N., Townsend R.H.D. 2007, ApJ, 666, 967
Struve O. 1963, PASP, 75, 207
Swings P., Struve O. 1940, ApJ, 91, 546
Tokunaga A.T., Simons D.A., Vacca W.D. 2002, PASP, 114, 180
Vanbeveren D., Van Rensbergen W., de Loore C. 1998, The Brightest Binaries (Dordrecht: Kluwer)
van Dam M., et al. 2007, Performance of the Keck II AO system, Keck Adaptive Optics Note 489 ([www2.keck.hawaii.edu/optics/aodocs/KAON489.pdf]{})
Van Dyk, S. D., et al. 2002, PASP, 114, 1322
Yoon S.C., Woosley S.E., Langer N. 2010, ApJ, 725, 940
Wizinowich P., et al. 2006, PASP, 118, 297
Zickgraf F.J., Humphreys R.M., Lamers H.J.G.L.M., Smolinski J., Wolf B., Stahl O. 1996, A&A, 315, 510
[^1]: Email: [email protected]
[^2]: [email protected]
[^3]: Based in part on observations made with the NASA/ESA [*Hubble Space Telescope*]{}, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
[^4]: Another interesting example may be the radio-bright source W9 in the Galactic Centre region (Dougherty et al. 2010), but that source is much farther away, its nebula has not been spatially resolved, and it is not an eclipsing system.
[^5]: Note that if the mass ejection is related to critical rotation, as we speculate in §5.3, then it is possible that the mass ejection itself will be inherently non-spherical.
[^6]: Note that the apparent thicknesses of the rings and dust torus, which are 20–25% of their respective radii, may simply be due to the sound speed multiplied by their ages. The radial expansion speeds are 40–50 km s$^{-1}$, which is 4–5 times the sound speed when the ejected gas is ionised. In other words, they are as thin as they can be, even for an instantaneous ejection.
---
abstract: 'We assess the reliability of the one-crossing approximation (OCA) approach in the quantitative description of the Mott transition in the framework of the dynamical mean field theory (DMFT). The OCA approach has been applied in conjunction with DMFT to a number of heavy-fermion, actinide, and transition-metal compounds, as well as to nanoscale systems. However, several recent studies in the framework of impurity models pointed out serious deficiencies of OCA and raised questions regarding its reliability. Here we consider a single-band Hubbard model on the Bethe lattice at finite temperatures and compare the results of OCA to those of a numerically exact quantum Monte Carlo (QMC) method. The temperature vs. local repulsion $U$ phase diagram for the particle-hole symmetric case obtained by OCA is in good agreement with that of QMC, with the metal-insulator transition captured very well. We find, however, that the insulator-to-metal transition is shifted to higher values of $U$ and, simultaneously, that correlations in the metallic phase are significantly overestimated. This counter-intuitive behavior is due to simultaneous underestimations of the Kondo scale in the metallic phase and of the size of the insulating gap. We trace the underestimation of the insulating gap to that of the second moment of the high-frequency expansion of the impurity spectral density. Calculations for the system away from the particle-hole symmetric case are also presented and discussed.'
address:
- 'Depto de Física CAC-CNEA and Consejo Nacional de Investigaciones Científicas y Técnicas, CONICET, República Argentina'
- 'Centre de Physique Théorique, École Polytechnique, CNRS, 91128 Palaiseau, France'
- 'Instituto de Física Rosario, Consejo Nacional de Investigaciones Científicas y Técnicas and Universidad Nacional de Rosario, Bvd. 27 de Febrero 210 Bis, 2000 Rosario, República Argentina'
- 'Depto de Física CAC-CNEA and Consejo Nacional de Investigaciones Científicas y Técnicas, CONICET, República Argentina'
author:
- 'V. Vildosola'
- 'L. V. Pourovskii'
- 'L. O. Manuel'
- 'P. Roura-Bas'
title: 'Reliability of the one-crossing approximation in describing the Mott transition'
---
Introduction
============
In the past years, many efforts have been devoted to the implementation of calculation techniques to describe the electronic structure of strongly correlated complex materials. This is a complicated and challenging task in view of the many degrees of freedom involved. One of the most successful approaches in this direction was the implementation of the dynamical mean-field theory (DMFT) [@dmft-1; @dmft-2; @dmft-3]. Numerically, the most challenging part of DMFT is the solution of the Anderson impurity model [@anderson] within the DMFT self-consistent loop, which maps the lattice problem onto a single-impurity one.
There are two well-known numerically exact techniques to solve this impurity model, namely, quantum Monte Carlo (QMC) in its Hirsch-Fye (HF-QMC) or continuous-time (CT-QMC) versions [@hirsch; @ct-qmc], and the numerical renormalization group (NRG) [@nrg; @nrg-dmft]. Recently, substantial technical progress [@andreas-1] has been achieved in both approaches. On the one hand, the advent of continuous-time quantum Monte Carlo methods [@Gull_QMC_review] eliminated the time-discretization error inherent to HF-QMC and extended the range of applicability of QMC to much lower temperatures and realistic Coulomb repulsion vertices. On the other hand, very fast implementations of NRG applied to multi-band systems have been developed using Abelian and non-Abelian symmetries on a generic level [@andreas-2].
In spite of these recent technical improvements, the exact methods still encounter certain difficulties. QMC solvers suffer from the well-known ‘fermion sign problem’, which can be especially severe when the degeneracy of the correlated shell is large and significant off-diagonal terms are present in the hybridization function. Moreover, QMC calculations are carried out in the imaginary-time domain, and an analytic continuation is required to obtain real-energy spectral functions from QMC data. The NRG approach becomes computationally expensive in multiorbital cases with broken orbital symmetries (for instance, when interactions like pair hopping prohibit the use of symmetries that reduce the size of the matrix to be diagonalized [@pruschke-bulla], leading to an exponential increase of the Hilbert space). Because of these limitations, the need for faster yet reliable impurity solvers is evident.
Hence, several approximate schemes have been proposed for solving the DMFT impurity problem, like the local moment approximation (LMA)[@lma], iterative perturbation theory (IPT)[@ipt], exact diagonalization[@ed], rotationally invariant slave bosons [@lechermann], conserving diagrammatic approximations based on self-consistent hybridization expansion (SCH) [@conserving], among others.
Regarding the SCH, the non-crossing approximation (NCA) [@nca] represents the simplest member of this family of self-consistent treatments and provides an accurate calculation of the impurity Green function, as well as of many other properties, when the Coulomb repulsion is large enough compared with the other energy scales involved in the problem. However, when more than one charge fluctuation needs to be included ($N\rightarrow N-1$ and $N\rightarrow N+1$, where $N$ is the impurity valence), NCA fails to give the correct Kondo scale ($T_K$). The next leading order in the self-consistent expansion, which partially cures this pathology, is often known as the one-crossing approximation, OCA [@oca-1; @oca-2; @oca-3]. Within this extended formalism other classes of problems have been investigated [@haule-2; @haule-1; @schmitt-1]. Among them, its major application is as an impurity solver in the context of the dynamical mean-field theory [@dmft-3].
In particular, the OCA solver has the advantage of being formulated on the real-frequency axis, and it gives the correct order of magnitude for the Kondo scale of the impurity problem. It successfully captures the correct temperature dependence of transport properties of a single impurity level [@oca-3], and it has been employed as the DMFT impurity solver in a search for signatures of non-Fermi-liquid behavior in the Hubbard model with van Hove singularities [@schmitt-1]. Furthermore, it has been generalized to an arbitrary number of orbitals and interactions [@haule-1]. Multiorbital generalizations of OCA were employed in a study of the itinerant and local-moment magnetism in the three-band Hubbard model [@schmitt-2]. In combination with *ab-initio*$+$DMFT calculations, the OCA solver has been applied to real strongly correlated materials, for example, to heavy-fermion compounds [@dmft-3; @haule-1; @haule-1-2; @haule-2].
However, the OCA solver also has several limitations. It cannot be applied at arbitrarily low temperatures due to violations of the Fermi-liquid properties (in the impurity model, OCA works well for $T > 0.1 T_K$) [@oca-3; @grewe-1; @grewe-2; @grewe-3], and it also violates the sum rules for the coefficients of the high-frequency expansion of the self-energy [@millis]. While the former pathology can be controlled by restricting its application to high enough temperatures, the latter one is intrinsic and will always be present. As has been pointed out recently, the OCA method is more accurate in the strongly correlated limit [@millis], and it describes the insulating phase particularly well [@ruegg]. It has also been shown that OCA overestimates the correlations in the metallic phase, and it has been conjectured that this overestimation of correlation effects reflects the fact that OCA tends to favor the insulating state.
One important issue that has not been studied to date is the actual quantitative performance of the OCA solver within DMFT in describing the metal-insulator Mott transition [@mott]. Hence, we address this issue in the present work by calculating the critical $U_c$ values for the Mott transition within DMFT as a function of temperature using OCA as the impurity solver, and comparing them with the corresponding ones obtained with CT-QMC. We have also compared the DMFT local self-energies obtained within the two approaches, as well as the corresponding quasi-particle effective masses in the metallic phase. Our calculations have been carried out for the single-band Hubbard model with a semicircular non-interacting density of states.
Our main conclusion is that the OCA metal-to-insulator transition for the particle-hole symmetric case is in remarkably good agreement with that of CT-QMC. However, we find that the insulator-to-metal transition is shifted to higher values of $U$, despite the fact that the correlations of the metallic phase are overestimated. This counter-intuitive behavior is explained as a combination of two factors: the underestimation of the effective Kondo temperature in the metallic phase and the underestimation of the gap in the insulating one. The fact that OCA underestimates the gap in the insulating regime follows from an analysis of the high-frequency expansion sum rules of the Green function. Our results contradict the conjecture that OCA favors the insulating phase. We show that, although OCA overestimates the strength of correlations in the metallic phase, it does not favor the insulating one, because the critical values of the metal-to-insulator transition are very well captured.
We have also studied the same model in the non-symmetric case, obtaining a similar agreement between the two techniques. We verify that the OCA approximation does not violate the Friedel sum rule in the metallic phase for the range of temperatures of the obtained phase diagram, and that the interacting part of the OCA self-energy always remains causal.
The paper is organized as follows: we describe the theoretical formalism in section \[model\], we present the numerical results for the particle-hole symmetric case in section \[symm\], we discuss the results obtained for the system away from half-filling in section \[non-symm\] and finally we conclude in section \[conclusions\].
Model and Formalism {#model}
===================
\[modelo\]
We start with the single-band Hubbard Hamiltonian,
$$\begin{aligned}
\label{Hubbard}
H= -\frac{t}{\sqrt{z}}\sum_{<ij>\sigma}( c^{\dagger}_{i\sigma}c_{j\sigma} +
c^{\dagger}_{j\sigma}c_{i\sigma} ) +
U\sum_{i} n_{i\uparrow}n_{i\downarrow}, \end{aligned}$$
where the first term is the kinetic energy, $t$ is the hopping between nearest neighbors on a lattice, $z$ is the coordination number, and $U$ is the energy of the on-site Coulomb repulsion. The operator $c^{\dagger}_{i\sigma}$ creates an electron with spin $\sigma$ on the site $i$ and $n_{i\sigma}=c^{\dagger}_{i\sigma}c_{i\sigma}$. We use the semicircular non-interacting density of states $N(\omega)=\frac{1}{2\pi t^2}\sqrt{4t^2-\omega^2}$ for $\vert\omega\vert<2t$, corresponding to a Bethe lattice with coordination $z \rightarrow\infty$, for which the DMFT approximation becomes exact. In the following we use the half bandwidth as our unit of energy, $D=2t=1$.
We solve the Hamiltonian \[\[Hubbard\]\] by means of DMFT, which maps the lattice model onto a single-impurity Anderson one within a self-consistent cycle. The hopping between the impurity and the conduction band, $V_k$, defines the hybridization function of the single-impurity problem, $\Gamma(i\omega)=\sum_{k}V_k^2 / (i\omega-\epsilon_k)$, where $\epsilon_k$ is the conduction-band dispersion of the impurity model. Within DMFT, in the case of the Bethe lattice the hybridization function is given by the self-consistency condition $\Gamma(i\omega)=t^2G[\Gamma(i\omega)]$, where $G$ is the local Green function obtained from the impurity model.
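The structure of this self-consistency can be sketched in a few lines. The following Python fragment (our own illustration, not part of the paper's code) iterates the Bethe-lattice condition $\Gamma(i\omega)=t^2 G(i\omega)$ on the Matsubara axis; as a stand-in for the OCA or CT-QMC impurity solution it uses the crude atomic-limit (Hubbard-I) self-energy of the half-filled impurity, $\Sigma(i\omega_n)=U^2/(4 i\omega_n)$, so that only the skeleton of the loop should be taken literally:

```python
import numpy as np

# Positive Matsubara frequencies; units of the half bandwidth D = 2t = 1
beta, t, U = 80.0, 0.5, 2.0
wn = np.pi / beta * (2 * np.arange(512) + 1)
iwn = 1j * wn

def sigma_hubbard1(iwn, U):
    # Atomic-limit (Hubbard-I) self-energy of the half-filled impurity,
    # used here ONLY as a placeholder for the OCA / CT-QMC impurity step.
    return U**2 / (4.0 * iwn)

# Bethe-lattice self-consistency: Gamma(iw) = t^2 G(iw)
Gamma = np.zeros_like(iwn)
for _ in range(200):
    G = 1.0 / (iwn - Gamma - sigma_hubbard1(iwn, U))
    Gamma_new = t**2 * G
    if np.max(np.abs(Gamma_new - Gamma)) < 1e-10:
        break
    Gamma = 0.5 * (Gamma + Gamma_new)   # linear mixing for stability
```

In the actual calculation the placeholder line is replaced by a full solution of the effective Anderson model at each iteration; everything else in the loop is unchanged.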
Starting from the metallic non-interacting solution of the model, the system turns into an insulator for large enough values of the Coulomb repulsion $U$, due to the vanishing of the quasiparticle weight. The value $U=U_{c2}$ defines this transition. On the other hand, starting from an insulating solution, the system turns metallic due to the collapse of the gap between the Hubbard bands for $U \le U_{c1}$, with $U_{c1} < U_{c2}$ when $T$ is lower than the temperature $T_c$ of the second-order end point of the first-order Mott transition. The critical values $U_{c1}(T)$ and $U_{c2}(T)$, as functions of the temperature $T$, determine the phase diagram.
The phase diagram of the Mott transition for the present model has been previously obtained using the QMC [@rozenberg; @oudovenko; @blumer], IPT [@dmft-1], exact diagonalization [@caffarel; @rozenberg94; @tong], and NRG[@nrg-dmft] impurity solvers. The determination of the exact boundaries of the coexistence region has previously required a significant effort due to their sensitivity to calculational parameters, as well as due to the critical slowing down of the DMFT convergence close to those boundaries [@oudovenko]. Hence, we have employed up to 220 DMFT cycles for each point in the $\{U,T\}$ space and used a dense mesh along the $U$ axis, with the spacing between $U$ values down to 0.005 in the vicinity of the $U_{c1}$ line. We have used the CT-QMC implementation provided by the TRIQS package [@triqs; @triqs_paper]. The DMFT impurity problem has been solved by CT-QMC using $\sim 10^9$ CT-QMC moves, with a measurement after every 200 moves. The resulting CT-QMC phase diagram is in agreement with the extensive HF-QMC calculations of Blümer [@blumer]. Within the OCA solver we have used the procedure described by Hettler *et al.* for regularizing the spectral functions [@oca-reg] and the numerical convolution sketched in Ref.[@haule-2] when computing the self-energies and the Green function.
Numerical Results {#results}
=================
In this section, we present the numerical results obtained using the OCA solver for the DMFT loop and a detailed comparison with CT-QMC calculations.
Mott transition for the particle-hole symmetric case {#symm}
----------------------------------------------------
In order to obtain the critical values $U_{c1}(T)$ and $U_{c2}(T)$ for a given temperature $T$ within the OCA solver, we take advantage of its self-consistent nature by building an external loop over the $U$ values. Starting from a metallic solution, we slowly increase $U$ by $\delta U$, retaining the previous ionic self-energies and Green function as the initial guess for the following $U+\delta U$ DMFT cycle, until an insulating solution is reached; we then decrease $U$ in steps of $\delta U$ until we return to the initial $U$.
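As an illustration of this sweep protocol (not of the OCA equations themselves), the fragment below implements the seeded up- and down-sweeps for a toy self-consistency problem, $x=\tanh(3x+2-U)$, which is assumed here purely because it possesses two coexisting solutions in a window of the control parameter $U$; reusing the converged solution as the seed for the next $U$ then produces the same kind of hysteresis loop as described in the text:

```python
import numpy as np

def sweep(solve, U_values, seed):
    """Sweep the control parameter, seeding each point with the
    converged solution of the previous one (the protocol in the text)."""
    x, out = seed, []
    for U in U_values:
        x = solve(U, x)
        out.append(x)
    return np.array(out)

def toy_cycle(U, x0, n_iter=2000, mix=0.5):
    # Stand-in "DMFT cycle": damped iteration of a bistable scalar map.
    # In the actual calculation this is the full OCA (or CT-QMC) DMFT loop.
    x = x0
    for _ in range(n_iter):
        x = (1.0 - mix) * x + mix * np.tanh(3.0 * x + 2.0 - U)
    return x

U_grid = np.linspace(0.0, 4.0, 81)
up = sweep(toy_cycle, U_grid, seed=1.0)                  # increasing U
down = sweep(toy_cycle, U_grid[::-1], seed=-1.0)[::-1]   # decreasing U
# Inside the coexistence window the two branches differ (hysteresis);
# outside it they coincide.
```

The two arrays play the role of the increasing- and decreasing-$U$ branches of the spectral weight: where they differ, two self-consistent solutions coexist.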
In Fig.\[\[Fig1\]\] we show the spectral weight at zero frequency, $A(\omega=0)=-\mathcal{I}m[G(\omega=0)]/\pi$, as a function of $U$ for the inverse temperature $\beta=80$. We show both the increasing-$U$ results, from the metallic to the insulating solutions, and the decreasing-$U$ ones. A hysteresis loop is formed, giving rise to two different critical values, $U_{c1}(T)$ and $U_{c2}(T)$. We define these critical values following the criterion given in Ref.[@nrg-dmft], from the $U$-value for which $\vert A'(\omega=0)\vert$ reaches its maximum intensity.
![(Color online) a). Spectral weight $A(\omega=0)$ for the inverse temperature $\beta=80$ as a function of $U$ both for increasing (black lines, squares) and decreasing (red lines, circles) $U$ values. The CT-QMC (OCA) data are displayed with the solid (dashed) lines and empty (filled) symbols, respectively. b). The quasi-particle residue $Z$ as function of $U$ for the same temperature. The notation is the same as in panel a). []{data-label="Fig1"}](Fig1.eps){width="7cm"}
In panel b) of Fig.\[\[Fig1\]\] we show the variation of the quasiparticle weight, $Z=[1-\frac{\partial Re \Sigma(\omega)}{\partial\omega}\vert_{\omega=0}]^{-1}$, as a function of $U$ for the same temperature. In order to compare with CT-QMC, we first obtain the interacting part of the OCA self-energy $\Sigma(\omega)$ by removing the non-interacting offset given by the hybridization term. Second, from a Hilbert transform of $Im\Sigma(\omega)$, we compute the corresponding self-energy in the Matsubara domain,
$$\Sigma(i\omega_n) = -\frac{1}{\pi}\int d\omega~ \frac{Im\Sigma(\omega)}{i\omega_n-\omega}.$$
Finally, we approximate the derivative $\frac{\partial Re \Sigma(\omega)}{\partial\omega}\vert_{\omega=0}=
\frac{\partial Im \Sigma(i\omega_n)}{\partial \omega_n}\vert_{\omega_n\rightarrow0}$ by a cubic fit to the first four Matsubara frequencies of $Im \Sigma(i\omega_n)$.
Although the vanishing of $Z$ defines the critical value $U_c$ only at zero temperature [@nrg-dmft], it has been used as a common criterion even at finite temperatures (see for instance Ref.[@ansgar]). From Fig.\[\[Fig1\]\] it can be seen that both approaches (from $A(\omega=0)$ or from $Z$) define the same energy scales for $U_{c1}$ and $U_{c2}$. More importantly, the OCA critical $U$-values are in reasonable agreement with the CT-QMC ones. While the OCA value for $U_{c2}$ is obtained within an error of less than 0.5% with respect to the CT-QMC one, the calculated $U_{c1}$ is larger than the CT-QMC one by around 3%. We will discuss the origin of this discrepancy for $U_{c1}$ later in this section.
It is important to remark that the OCA values of $Z$ in the metallic region, i.e. $U<U_{c1},U_{c2}$, are smaller than the CT-QMC ones. The same behavior was found by Schmitt *et al.* [@schmitt-2] using OCA for a body-centered-cubic lattice in comparison with NRG calculations. While OCA gives the correct low-energy scale for the impurity model, this energy scale is still slightly underestimated [@oca-2], and therefore within OCA the system feels a larger effective Coulomb repulsion, giving rise to a reduced quasiparticle weight. However, the underestimation of $Z$ becomes less important close to the transition.
In Fig.(\[Fig2.eps\]) we show the imaginary part of the self-energy in the imaginary-frequency domain for the increasing-$U$ regime at $\beta=60$ and for two different values of $U$, one below and one above $U_{c_2}$, namely $U=2.3$ and $U=2.4$. As can be observed from this plot, in the metallic case OCA overestimates the absolute magnitude of the self-energy at low frequencies. Similarly to the underestimation of the quasiparticle weight at low temperatures described above, this behavior of $Im\Sigma(i\omega_n)$ can also be understood as arising from an effectively larger value of $U$. On the other hand, in the insulating region the agreement between OCA and CT-QMC is remarkable. We found that, for a correct comparison between the two techniques, it was very important to impose the same degree of precision in the convergence criterion of the DMFT loops, especially for points close to the Mott transition. For large frequencies, an additional test can be done using the sum rules that $\Sigma(i\omega_n)$ should satisfy.
![(Color online) Comparison of the imaginary part of the self-energy as a function of the Matsubara frequency between OCA and CT-QMC at $\beta=60$ for two different values of $U$, one below the $U_{c2}$ and the other one above. The inset shows the imaginary part of the OCA self-energy scaled by $\omega_n$. The dashed and solid lines indicate their expected theoretical values given by the high frequency expansion sum rule, $\Sigma_1 = -U^2/4$.[]{data-label="Fig2.eps"}](Fig2.eps){width="7cm"}
In the inset of Fig.(\[Fig2.eps\]) we plot the imaginary part of the OCA self-energy scaled by $\omega_n$ for $U=2.3 < U_{c2}$ and $U=2.4 > U_{c2}$, together with the exact coefficient $\Sigma_1$ for each $U$, which corresponds to the first moment of the self-energy high-frequency expansion, $\Sigma_1=\int\frac{d\omega}{\pi}~Im\Sigma(\omega)$, and determines the asymptotic $1/\omega_n$ behavior. In Ref.[@millis], Rüegg *et al.* have calculated the exact value expected for $\Sigma_1$, namely $\Sigma_1=-U^2/4$ for the symmetric case [@sigma-1]. For the parameters shown in Fig.(\[Fig2.eps\]), the OCA $\Sigma_1$ coefficient deviates from this value by about $5\%$ in the metallic phase, while in the insulating one the error is reduced to less than $2\%$.
In what follows we discuss the phase diagram of the Mott transition. In Fig.\[\[Fig3\]\] we show the $T$ vs. $U$ diagram with the calculated $U_{c_1}$ and $U_{c_2}$ obtained from the zero-frequency spectral function $A(\omega=0)$ (upper panel), as well as from the quasiparticle residue $Z$ (lower panel). The general trend of the critical $U_c(T)$ obtained by OCA is in reasonable agreement with the corresponding CT-QMC one. Although a very well defined coexistence region is captured by OCA, it is narrower than the CT-QMC one. While the agreement is remarkable for the $U_{c2}(T)$ transition, the $U_{c1}(T)$ values are slightly shifted to higher energies in OCA.
![(Color online) The $T$ vs. $U$ phase diagram of the Mott transition obtained from the zero-frequency spectral function $A(\omega=0)$ (upper panel) and the quasiparticle residue $Z$ (lower panel). The inset in the upper panel shows the phase diagram for the particle-hole symmetric case obtained using the finite-$U$ NCA as the impurity solver.[]{data-label="Fig3"}](Fig3.eps){width="7cm"}
Regarding the critical temperature ($T_c$) below which two different spinodal lines define the coexistence region of the insulating and metallic regimes of the Mott transition, OCA gives $T_c\sim 0.02$, in reasonable agreement with the CT-QMC value $T_c\sim0.025$. The slight underestimation of $T_c$ is a consequence of the corresponding underestimation of $T_K$ by OCA at the effective impurity level. For comparison, we also include in the inset of the upper panel of Fig.\[\[Fig3\]\] the finite-$U$ NCA phase diagram for the particle-hole symmetric case. We stress that this simple approximation severely underestimates all the energy scales involved, $T_c$ as well as both $U_{c1}(T)$ and $U_{c2}(T)$, as a consequence of the underestimated Kondo scale. On the other hand, we mention that the IPT results [@dmft-1; @nrg-dmft] are considerably shifted to higher energies, overestimating both $U_{c1}(T)$ and $U_{c2}(T)$, due to the strong overestimation of the Kondo scale at the impurity level.
Despite its approximate nature, OCA places the coexistence region in the correct energy range, and the critical temperature $T_c$ is in very good agreement with the CT-QMC results. We remark that for the whole range of temperatures studied in the presented phase diagram, the OCA self-energy remains causal, that is, $Im \Sigma(i\omega_n)$ is negative. For very low temperatures ($T\lesssim 1/500 \sim 0.1 \;T_c^{OCA}$), it can turn positive, signaling the breakdown of the approximation.
We turn now to the discussion of the slight overestimation of $U_{c1}$ that can be observed in Fig.(\[Fig3\]). While the value of $U_{c2}$ is given by the critical $U$ for which the quasiparticle weight at zero frequency vanishes, $U_{c1}$ is related to the corresponding $U$ for which the Hubbard bands collapse and the gap in the spectral function closes. We found that the size of the gap in the insulating regime given by OCA is somewhat underestimated, and therefore the gap closes at a larger value of $U$ than in CT-QMC. This statement follows from an analysis of the high-frequency expansion of the local Green function. As described in Ref.[@millis], the high-frequency expansion of $G(i\omega_n)$ in the imaginary domain is given by
$$G(i\omega_n) = \sum_{k=1}^{\infty} \frac{M_{k-1}}{(i\omega_n)^k},$$
where, in the spectral representation of the Green function, the coefficients are related to the moments of the spectral density as $M_k= \int_{-\infty}^{\infty}d\omega~\omega^kA(\omega)$[^1]. Exact relations for the coefficients can be found from thermodynamic expectation values [@millis]: $M_0 = 1$, $M_1=\epsilon_d+Un_d/2$ ($0$ at half filling), and $M_2=\epsilon_d^2+\Delta_0+U(2\epsilon_d+U)n_d/2$. Here, $\epsilon_d$ and $n_d$ are the energy level and total occupancy of the effective Anderson model. $M_0$ and $M_1$ are related to the normalization and parity of $A(\omega)$, so that they are exactly reproduced by OCA.
Regarding the coefficient $M_2$, the parameter $\Delta_0$ represents the zeroth moment of the high-frequency expansion of the hybridization, $\Delta_0=-\frac{1}{\pi}\int_{-\infty}^{\infty}d\omega~\mathcal{I}m\Gamma(\omega)
=\frac{1}{\pi}\int_{-\infty}^{\infty}d\omega~\Delta(\omega)$, where $\Delta(\omega)=\pi V^2 \rho_c(\omega)$ and $\rho_c$ is the conduction density of states. Using the self-consistency condition $\Gamma(i\omega)=t^2G[\Gamma(i\omega)]$ for the present case of the Bethe lattice, we arrive at the following relation: $\Delta(\omega)=\pi t^2 A(\omega)=\frac{\pi D^2}{4} A(\omega)$. Therefore, $\Delta_0=\frac{D^2}{4}\int_{-\infty}^{\infty}d\omega~A(\omega)=\frac{D^2}{4}$. Taking into account that in the symmetric situation $2\epsilon_d+U=0$ and $M_1=0$, the coefficient $M_2$ reads
$$\label{c3}
M_2 = \frac{U^2}{4} + \frac{D^2}{4}.$$
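Eq. (\[c3\]) can be checked against a toy insulating spectral density built from two semicircular Hubbard bands of half-width $D$ centered at $\pm U/2$ (a semicircle of half-width $D$ contributes $D^2/4$ to the second moment about its center). The construction below is a sketch for illustration, not the OCA solution:

```python
import numpy as np

def trapz(y, x):
    # explicit trapezoidal rule (avoids version-specific numpy names)
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

def semicircle(w, center, radius):
    # semicircular density of unit weight: second moment radius**2/4 about center
    out = np.zeros_like(w)
    m = np.abs(w - center) < radius
    out[m] = (2.0 / (np.pi * radius**2)) * np.sqrt(radius**2 - (w[m] - center)**2)
    return out

U, D = 3.0, 1.0
w = np.linspace(-5.0, 5.0, 200001)
A = 0.5 * (semicircle(w, -U / 2, D) + semicircle(w, U / 2, D))  # half weight per band
M0, M1, M2 = (trapz(w**k * A, w) for k in range(3))
# The bands touch when U = 2D, so the gap is of order U - 2D (here: 1.0).
```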
![(Color online) Spectral density in the insulating region when decreasing the Coulomb repulsion from $U=3$ to $U=2.6$. The inset shows the ratio between the second moment obtained within OCA and its exact value from Eq. (\[c3\]) (squares) as a function of $U$, together with its deviation from unity (solid line). []{data-label="Fig4"}](Fig4.eps){width="7cm"}
The second moment $M_2$ of the spectral function contains indirect information about the size of the Mott gap. In fact, it carries information about the center position and width of each Hubbard band. For instance, in the simplest case in which the Hubbard bands have a semicircular shape centered at $\pm \omega_0$ and width $D$, the second moment becomes $M_2 = \omega_0^2 + D^2/4$. By comparing with Eq. (\[c3\]), one can infer that $\omega_0=U/2$. In this simple picture, the gap opens when $U$ is larger than $2D$ and the size of the gap is of the order of $\delta=U-2D$. In Fig. (\[Fig4\]), we show the spectral density in the insulating region when decreasing the Coulomb repulsion from $U=3$ to $U=2.6$. It can be observed that the gap closes continuously as $U$ is lowered, until the critical value $U_{c1}$ is reached. In the inset of Fig. (\[Fig4\]), we show the values of $\frac{4}{U^2+D^2}\int_{-\infty}^{\infty}d\omega~\omega^{2}A(\omega)$ (squares), which represent the ratio between the second moment obtained within OCA and its exact value from Eq. (\[c3\]), as a function of $U$, together with its deviation from unity (solid line). It can be seen that OCA underestimates the second moment of the spectral function by $\sim15\%$.
Unfortunately, the center position and width of each Hubbard band enter $M_2$ only in combination, so we cannot tell from this coefficient alone whether OCA underestimates the center position, the width, or both. However, an underestimation of these quantities brings about a reduction of the gap, which gives rise to larger values of $U_{c1}$ as compared with the exact CT-QMC ones.
Non-symmetric case {#non-symm}
------------------
In this subsection, we compare calculations done with OCA and CT-QMC for the one-band Hubbard model on the Bethe lattice away from half-filling. We consider $2.5 < U < 5.0$ and the impurity level of the effective Anderson model at $\epsilon_d=-\frac{U}{2}+\Delta \mu$, with $\Delta\mu = -1.0$ and $\beta = 60$.
In Fig. \[Fig5\], the spectral densities calculated by OCA for different values of $U$ are shown. One can see that, for the smallest value of $U$, the system is metallic with a large quasiparticle resonance that overlaps with the upper Hubbard band, giving rise to large charge fluctuations typical of a mixed-valence regime. In the other extreme, for the largest value of $U$, the system is an insulator with the Hubbard bands located symmetrically with respect to $\Delta\mu$. The value of the gap in this case is of the order of $2D$. In order to describe solutions with large gaps accurately, we implemented a three-centered logarithmic mesh.
By integrating $A(\omega)$ weighted by the Fermi function at the corresponding temperature, we obtained local occupancies in very good agreement with the CT-QMC ones. It is not obvious that this quantity can be correctly evaluated within approximate analytical solvers. Hence, the fact that it is captured within OCA is important for the applicability of the method to non-symmetric cases.
In the inset of Fig. \[Fig5\] we show $A(\omega=0)$ as a function of $U$ in comparison with CT-QMC. One sees that both OCA and CT-QMC indicate that the system becomes insulating for $U \geq 4.5$. For this level of doping there is no coexistence region, and the OCA critical value $U_c$ agrees with the CT-QMC one to within 5%.
![(Color online) Spectral density $A(\omega)$ calculated by OCA for a non-symmetric case taking $2.5 < U < 5.0$ and an energy shift of -1.0 from the corresponding symmetric case for each value of $U$. The inverse temperature is $\beta$ = 60. In the inset we show $A(\omega=0)$ as a function of $U$. The CT-QMC (OCA) data are displayed with the solid (dashed) lines and empty (filled) symbols, respectively. []{data-label="Fig5"}](Fig5.eps){width="7cm"}
Overall, we show that OCA also gives a very reasonable description of the Mott metal-insulator transition for the Hubbard model away from half-filling.
Summary and conclusions {#conclusions}
=======================
The self-consistent hybridization expansions in their different forms (NCA, OCA, symmetric finite-U NCA, etc.) have been widely used not only in the context of the impurity problem, but also in the framework of DMFT applied both to different lattice models and to realistic cases, describing strongly correlated materials from first principles. However, to the best of our knowledge, a detailed and quantitative study of the Mott transition, one of the essential problems of strongly correlated systems, has not been carried out up to now with this kind of approximate technique.
In this work, we assess the reliability of the OCA impurity solver in the context of the DMFT method to describe the Mott metal-insulator transition of the one-band Hubbard model on the Bethe lattice at half-filling. We present the temperature versus local repulsion $U$ phase diagram in comparison with the numerically exact CT-QMC. We show that OCA can provide a very good quantitative description of the metal-insulator transition of the present model. We obtain the metal-to-insulator transition, $U_{c_2}$, within an error of less than 0.5%, while the insulator-to-metal $U_{c_1}$ values are shifted to higher $U$ (by about 3%) with respect to the CT-QMC ones. We explain the overestimation of $U_{c_1}$ from an analysis of the second moment of the spectral density, $M_2$. We find that OCA underestimates the theoretically expected value of $M_2$. Since $M_2$ encodes the positions and widths of the Hubbard bands, we infer that the size of the gap in the insulating phase is also underestimated, so that the Hubbard bands collapse at higher values of $U$ than for CT-QMC.
Aside from the Mott transition itself, we confirm previous results [@millis; @ruegg] regarding the better performance of OCA in the insulating phase than in the metallic one. The high-frequency sum rules for the imaginary part of $\Sigma(i\omega)$ are reproduced reasonably well in both phases, with the deviation in the insulating case being somewhat smaller than in the metallic one. On the other hand, in the small-frequency region the correlations are overestimated in the metallic case. This effect is also apparent in the value of the quasiparticle weight, which is underestimated by OCA, especially far away from the transition. This overestimation of the correlations in the metallic phase does not imply that OCA favors the insulating state, as has been previously stated in Ref. [@ruegg], since we show that the transition $U$ is well reproduced, especially the $U_{c_2}$ values. Furthermore, we show that the gap of the insulating phase is underestimated by OCA.
Finally, we study the performance of OCA for a non-symmetric case, obtaining an overall reasonable agreement with CT-QMC and a very similar critical value of $U$ for the Mott transition at the considered temperature. The study of non-symmetric cases is particularly relevant for applications to real materials.
Despite the above-mentioned deviations of OCA from exact results, we are not aware of any other approximate technique yielding a phase diagram with this level of agreement with numerically exact many-body methods.
Acknowledgments
===============
This work was partially supported by CONICET, PIP 00273 and 01060 and MINCYT-ANPCyT, program ECOS-MINCyT France-Argentina (project A13E04), PICT 1875 and R1776, Argentina.
Bibliography
============
[99]{}
Metzner W and Vollhardt D, 1989 *Phys. Rev. Lett.* **62**, 324.
Georges A, Kotliar G, Krauth W, and Rozenberg M J, 1996 *Rev. Mod. Phys.* **68** 13
Kotliar G, Savrasov S Y, Haule K, Oudovenko V S, Parcollet O, and Marianetti C A, 2006 *Rev. Mod. Phys.* **78**, 865
Anderson P W, 1961 *Phys. Rev.* **124**, 41.
Hirsch J E and Fye R M, 1986 *Phys. Rev. Lett.* **56** 2521.
Werner P, Comanac A, de Medici L, Troyer M, and Millis A J, 2006 *Phys. Rev. Lett.* **97**, 076405; Haule K, 2007 *Phys. Rev. B* **75**, 155113.
Wilson K G, 1975 *Rev. Mod. Phys.* **47**, 773; Bulla R, Costi T A, and Pruschke T, 2008 *Rev. Mod. Phys.* **80**, 395.
Bulla R, Costi T A, and Vollhardt D, 2001 *Phys. Rev. B* **64**, 045103.
Weichselbaum A, 2012 Annals of Physics **327**, 2972.
Gull E, Millis A J, Lichtenstein A I, Rubtsov A N, Troyer M, and Werner P, 2011 *Rev. Mod. Phys.* **83**, 349.
Stadler K M, Weichselbaum A, Yin Z P, von Delft J, and Kotliar G, arXiv:1503.06467.
Pruschke T and Bulla R, 2005 *Eur. Phys. J. B.* **44** 217.
Logan D E, Eastwood M P, and Tusch M A, 1998 *J. Phys.: Condens. Matter* **10**, 2673; Dickens N L and Logan D E, 2001 *J. Phys.: Condens. Matter* **13**, 4505; Smith V E, Logan D E, and Krishnamurthy H R, 2003 *Eur. Phys. J. B* **32**, 49; Vidhyadhiraja N S, Smith V E, and Logan D E, 2003 *J. Phys.: Condens. Matter* **15**, 4045.
Muller-Hartmann E, 1989 *Int. J. Mod. Phys.* **3**, 2169; Vollhardt D, 1991 *Physica B* **169**, 277.
Caffarel M and Krauth W, 1994 *Phys. Rev. Lett.* **72**, 1545.
Lechermann F et al. 2007 *Phys Rev. B* **76** 155102.
Kroha J and Wölfle P, 2005 *J. Phys. Soc. Jpn.* **74**, 16-26.
Bickers N E, 1987 *Rev. Mod. Phys.* **59**, 845; Coleman P, 1983 *Phys. Rev. B* **29**, 3035.
1996 *Phys. Rev. B,* **54**, 6494; *ibid.*, 1997 **55**, 12 594; Han J E *et al.*, 1997 *Phys. Rev. Lett.* **78** 939; Vildosola V L, Alouani M and Llois A M, 2005 *Phys. Rev. B* **71**, 184420; Roura-Bas P, Vildosola V and Llois A M, 2007 *Phys. Rev. B* **75**, 195129.
Hettler M H, Kroha J, and Hershfield S, 1994 P*hys. Rev. Lett.* **73** 1967; Roura-Bas P, 2010 *Phys. Rev. B* **81**, 155327.
Pruschke Th and Grewe N, 1989 *Z. Phys. B - Condensed Matter* **74**, 439.
Haule K, Kirchner S, Kroha J, and Wölfle P, 2001 *Phys. Rev. B* **64**, 155111. Tosi L, Roura-Bas P, Llois A M, and Manuel L O, 2011 *Phys. Rev. B* **83**, 073301.
Jacob D, Haule K and Kotliar G, 2009 *Phys. Rev. Lett.* [**103**]{}, 016803.
Haule K, Yee C -H, and Kim K, 2010 *Phys. Rev. B* **81**, 195107.
Schmitt S, 2010 *Phys. Rev. B* **82**, 155126.
Yin Q, Kutepov A, Haule K, and Kotliar G, 2011 *Phys. Rev. B* **84**, 195111; Choi H Ch, Min B I, Shim J H, Haule K, and Kotliar G, 2012 *Phys. Rev. Lett.* **108**, 016402.
Kotliar G, Savrasov S Y, Haule K, Oudovenko V S, Parcollet O, and Marianetti C A, 2006 *Rev. Mod. Phys.* **78**, 865.
Schmitt S, Grewe N, and Jabben T, 2012 *Phys. Rev. B* **85**, 024404.
Grewe N, Schmitt S, Jabben T, and Anders F B, 2008 *J. Phys.: Condens. Matter* **20**, 365217.
Grewe N, Jabben T, and Schmitt S, 2009 *Eur. Phys. J. B* **68**, 23.
Schmitt S, Jabben T, and Grewe N, 2009 *Phys. Rev. B* **80**, 235130.
Rüegg A, Gull E, Fiete G A, and Millis A J, 2013 *Phys. Rev. B* **87**, 075124.
Rüegg A, Hung H -H, Gull E, and Fiete G A, 2014 *Phys. Rev. B* **89**, 085122.
Mott N F, 1968 *Rev. Mod. Phys.* **40**, 677
Rozenberg M J, Chitra R, and Kotliar G, 1999 *Phys. Rev. Lett.* **83**, 3498.
Joo J and Oudovenko V, 2001 *Phys. Rev. B* **64** 193102.
Blümer N, [*Metal-Insulator Transition and Optical Conductivity in High Dimensions*]{}, Shaker Verlag, Aachen, 2003.
Caffarel M and Krauth W, 1994 *Phys. Rev. Lett.* **72** 1545.
Rozenberg M J, Moeller G, and Kotliar G, 1994 *Mod. Phys. Lett. B* **8**, 535.
Tong N, Shen S, and Pu F, 2001 *Phys. Rev. B* **64**, 235109.
Ferrero M and Parcollet O, “Triqs: a toolkit for research in interacting quantum systems”, http://ipht.cea.fr/triqs.
Parcollet O, Ferrero M, Ayral T, Hafermann H, Krivenko I, Messio L, and Seth P, arXiv:1504.01952.
Hettler M H, Kroha J, and Hershfield S, 1994 *Phys. Rev. Lett.* **73**, 1967.
Liebsch A, 2004 *Phys. Rev. B* **70**, 165103.
The rule $\Sigma_1=-U^2/4$ follows from the spinless model analyzed in Ref. [@millis] (correcting a missing minus sign).
[^1]: With our notation, the moments $M_k$ are equal to the coefficients $c_{k+1}$ defined in Ref. [@millis].
---
abstract: 'In this paper, an economic model is proposed for joint time resource allocation and energy trading between two service providers, i.e., an IoT service provider (ISP) and an energy service provider (ESP), in a heterogeneous IoT wireless-powered communication network. In particular, the IoT devices (with various communication types and energy constraints) are assumed to belong to the ISP, which collects sensing data from them for its services. Meanwhile, the ESP utilizes a power beacon to provide energy services to the ISP. A Stackelberg game model is formulated to jointly maximize the revenues of both the ISP and the ESP (i.e., network throughput and energy efficiency) by investigating the energy interaction between them. Specifically, the ISP leads the game by requesting from the ESP an optimal energy price and service time that maximize its revenue. Following the requested deal from the ISP, the ESP supplies an optimized transmission power that satisfies the energy demand of the ISP while maximizing its own utility. To obtain the Stackelberg equilibrium, we first derive a closed-form solution for the ESP. Then, two relaxed schemes (i.e., partial and joint adjustment of energy price and service time) based on the *block coordinate descent* (BCD) and *convex-concave procedure* (CCCP) techniques are proposed to solve the non-convex optimization problem for the ISP. Due to the selfish behavior of both players, we investigate the inefficiency of the proposed approach by introducing two baseline scenarios, i.e., the non-negotiated energy trading and social welfare scenarios, and the *Price of Anarchy* (PoA). Finally, numerical results reveal that our approach can achieve significant improvements in the revenues of both providers compared with conventional transmission methods, e.g., the bistatic backscatter and harvest-then-transmit communication methods.'
author:
- 'Ngoc-Tan Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Nam-Hoang Nguyen, Quoc-Tuan Nguyen, and Eryk Dutkiewicz, [^1] [^2] [^3]'
title: 'Energy Trading and Time Scheduling for Heterogeneous Wireless-Powered and Backscattering-based IoT Networks'
---
**Stackelberg game, bistatic backscatter, low-power communications, heterogeneous IoT networks.**
Introduction
============
The emerging Internet of Things (IoT) is a smart network that converges modern technologies to connect various smart devices to the Internet and enables information sharing and exchange among IoT devices [@Bandyopadhyay2011]. Over the last decade, with its rapid development, IoT has been applied almost everywhere, e.g., in smart cities, homes, agriculture, healthcare, and transportation, to facilitate our lives [@Bandyopadhyay2011]-[@Fuqaha2015]. To meet low-cost and lightweight requirements, IoT devices are usually powered by batteries with small capacities. However, frequently recharging/replacing the batteries of a massive number of IoT devices is ineffective because it is costly, inconvenient, and impractical in some cases (e.g., biomedical implants) [@Derrick2019].
A promising technology, called *harvest-then-transmit* (HTT) [@Ju2014HTT]-[@Salem2016], is a possible solution for self-supplied IoT networks. However, due to the low efficiency of harvesting energy from surrounding radio frequency (RF) signals and imperfect battery storage, the energy achieved in the harvesting phase is typically too low to sustain the RF communication phase on IoT devices [@Ku2016]. Recently, another capable solution, developed based on reflecting the incident RF signal, is backscatter communications. Three types of backscatter communications are listed in [@Huynh2018]: monostatic [@Bletsas2009], bistatic [@Kimionis2004], and ambient backscatter communications [@Liu2013]. Yet the backscatter efficiencies of these systems are not high enough to completely replace the HTT technology. Hence, the aforementioned technologies, i.e., HTT and backscatter communications, can be integrated to complement each other in a hybrid system, called a wireless-powered backscatter communication (WPBC) network [@Gong2018]-[@Wang2018Stackelberg].
In a WPBC network, a wireless-powered device (WPD), e.g., an IoT device, is designed to perform either backscatter communications (i.e., passive transmissions) or transmissions using its RF circuit (i.e., active transmissions) and the energy harvested from a power beacon (PB). To improve the performance of a WPBC system, a mechanism is needed to flexibly schedule the energy harvesting, passive transmission, and active transmission operations of IoT devices [@Wang2019], [@Hoang2017Stackelbergame], [@Chen2019]. Most existing works on WPBC optimize the time allocation for IoT devices’ operations under the TDMA framework with the assumption of homogeneous IoT devices [@Gong2018]-[@Wang2018Stackelberg]. The experimental results show that the network throughput can be significantly improved by this method due to absolute interference cancellation. In practice, however, various types of IoT devices with different hardware capabilities and configurations, e.g., performing backscattering or HTT or both, can coexist. In such a circumstance, the different energy and communication constraints of these devices must be taken into account.
Game theory-based time scheduling optimization in WPBC networks has been investigated in the literature [@Hoang2017Stackelbergame]-[@Wang2018Stackelberg]. In [@Hoang2017Stackelbergame], the authors propose a Stackelberg game that formulates the network throughput as the profit of the network, in which the gateway is the leader and the IoT devices are the followers. Simulation results reveal the impact of the competition on the profit of the players. The authors in [@Wang2018Stackelberg] model a single-leader-multiple-follower Stackelberg game that takes the impact of interference into account. The proposed scheme achieves a higher throughput compared with fixed transmission modes. However, a large number of IoT devices can belong to an IoT service provider (ISP) that is required to pay for the energy needed to operate its service (e.g., a contractor that provides data collecting/monitoring services for smart cities). In such a case, the energy cost/negotiation between the ISP and an energy service provider (ESP) should be taken into account while optimizing the scheduling of IoT devices.
In this paper, we address the above by studying the self-interested interaction between the ISP and the ESP (via the PB) and its implication for optimizing the energy trading and time scheduling of a heterogeneous WPBC (HWPBC) network. Specifically, we use a Stackelberg game to capture the strategic interaction between the two providers. Under such a game, the ISP, acting as the leader, can proactively select the best energy service from the ESP by sending its energy request with a price and charging time (i.e., energy service time). The ESP, modeled as the follower, then finds the optimal transmission power that maximizes its benefits while meeting the requirements of the ISP. A quadratic price model for energy trading [@Mohsenian2010] is developed to optimize the profit that the ESP, i.e., the follower, achieves by selling energy based on the requested price and operation time of the ISP, i.e., the leader. The optimal transmission power of the PB is thus derived in closed form. In addition, the profit function of the ISP is the difference between the revenue from providing services (i.e., collecting data) and the energy cost. It is non-convex and contains multiple variables (i.e., the requested price and the operation times of the PB and IoT devices). To address the maximization of the ISP’s profit, we propose two relaxed schemes, called partial adjustment (PA) and joint adjustment (JA) of energy price and service time, which perform iterative algorithms based on the *block coordinate descent* (BCD) technique [@Tseng2001]. In the PA scheme, the iterative algorithm solves three sub-problems with respect to the requested price, the service time of the PB, and the scheduling times of the IoT devices, respectively, in each iteration.
Meanwhile, the JA scheme splits the primary problem into two sub-problems, in which the former jointly optimizes the requested price and service time of the PB, and the latter optimally allocates the operation times of the IoT devices. We then adopt the *convex-concave procedure* (CCCP) technique [@Yuille2001] to address the joint sub-problem of the JA scheme. As a result, our proposed schemes are guaranteed to achieve the Stackelberg equilibrium in polynomial time. Furthermore, two baseline scenarios, i.e., non-negotiated energy trading and social welfare scenarios, and the *Price of Anarchy* (PoA) [@Roughgarden2015] ratio are introduced to evaluate the inefficiency of the proposed approach due to the selfish behaviors of both players. For performance comparison, we conduct simulations to compare the revenues of both providers achieved by the proposed approach and by conventional transmission methods (i.e., the bistatic backscatter communication mode (BBCM) [@Hilliard2015] and the HTT communication mode (HTTCM) [@Ju2014HTT]). Numerical results verify that the proposed approach outperforms the conventional transmission methods.
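The alternating sub-problem structure underlying both schemes can be illustrated with a toy BCD iteration on a strictly concave two-block function with closed-form block maximizers; the objective below is purely illustrative and is not the ISP’s actual revenue function:

```python
# Toy block coordinate descent (ascent) in the spirit of the PA scheme:
# alternately maximize over each block via its first-order condition.
def f(x, y):
    # strictly concave surrogate "profit" (illustrative only)
    return -(x - 1.0) ** 2 - (y - 2.0) ** 2 - 0.5 * x * y

x, y = 0.0, 0.0
for _ in range(50):
    x = 1.0 - 0.25 * y  # argmax over x with y fixed: df/dx = 0
    y = 2.0 - 0.25 * x  # argmax over y with x fixed: df/dy = 0
# The iterates converge to the unique stationary point (8/15, 28/15).
```

Because each block update is a contraction here, the iterates converge linearly; in the paper's setting, convergence of the BCD iterations follows from the structure of the sub-problems rather than from this toy argument.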
The major contributions of this paper are summarized as follows:
1. We propose a heterogeneous WPBC network comprising two service providers, i.e., the ISP and the ESP, in which various IoT devices with diverse hardware configurations and capabilities are considered to belong to the ISP.
2. We propose an energy trading model based on a Stackelberg game in which the ISP is the leader and the ESP is the follower. Two baseline scenarios, i.e., the non-negotiated energy trading and social welfare scenarios, are presented for comparison with the proposed energy trading model. In addition, the inefficiency of the proposed approach due to the selfish behaviors of both players, quantified by the PoA ratio, is investigated through numerical results.
3. Two schemes (i.e., the PA and JA schemes) performing iterative algorithms based on the BCD and CCCP techniques are proposed to maximize the revenue of the ISP, which is originally a multi-variable non-convex function. Simulation results compare the revenue of the ISP achieved by the proposed approach and by other conventional methods under both the PA and JA schemes.
The rest of the paper is organized as follows. Section II presents the system model. Section III formulates the Stackelberg game for joint energy trading and time scheduling, and two iterative algorithms are provided in Section IV to find the Stackelberg equilibrium. Section V conducts simulations to validate the theoretical derivations. Finally, Section VI concludes the paper.
System Model
============
Network Setting
---------------
As illustrated in Fig. \[fig:System\_model\](a), we consider an HWPBC network consisting of two service providers, i.e., the ISP and the ESP. At the ISP, we consider three types of low-cost IoT devices with dissimilar hardware configurations that can support two functions, i.e., the BBCM and/or the HTTCM. The first set of IoT devices, represented by ${\mathcal{A}\rm{ }} \!\buildrel \Delta \over =\! \{\text{AWPD}_a| \forall a \!=\! \{1, \dots, A\}\!\}$, consists of active wireless-powered IoT devices (AWPDs) that are equipped with energy harvesting and wireless transmission circuits. With this configuration, the AWPDs can operate in the HTTCM only. In addition, we denote by ${\mathcal{P}\rm{ }} \!\!\buildrel \Delta \over = \!\!\{\text{PWPD}_p| \forall p \!=\! \{1, \dots, P\}\!\}$ the set of passive wireless-powered IoT devices (PWPDs) that are designed with a backscattering circuit to perform the BBCM only. Finally, hybrid wireless-powered IoT devices (HWPDs), belonging to the set ${\mathcal{H}\rm{ }}\!\! \buildrel \Delta \over =\! \{\text{HWPD}_h| \forall h\!=\!\{1, \dots, H\}\!\}$, are equipped with all hardware components to support both aforementioned operation modes. On the other hand, the ESP utilizes a dedicated power beacon (PB) to supply energy to the IoT devices.
$\begin{array}{ccc}
\epsfxsize=2.4 in \epsffile{Figures/System_model/network_model} &
\epsfxsize= 3.9 in \epsffile{Figures/System_model/Time_frame} \\ [-0.2cm]
(a) & (b)
\end{array}$
The IoT service is operated over two consecutive working periods of the PB, i.e., the *emitting period* $\beta$ and the *sleeping period* $(1 \!-\! \beta)$, as shown in Fig. \[fig:System\_model\](b). For simplicity and efficiency of time resource allocation for multiple IoT devices, the TDMA mechanism is adopted here to avoid collisions among transmissions. We denote by $\bm{\theta} \buildrel \Delta \over = \left( \!{{\theta_1}, \ldots, {\theta_p}, \ldots,{\theta_{P}}} \!\right)^{\!\rm{T}}$ and $\bm{\tau} \buildrel \Delta \over = \left(\! {{\tau_1}, \ldots, {\tau_h}, \ldots,{\tau_{H}}}\! \right)^{\!\rm{T}}$ the backscattering time vectors of the PWPDs and HWPDs in the emitting period of the PB, respectively. Similarly, $\bm{\nu} \buildrel \Delta \over = \left( {{\nu_1}, \ldots ,{\nu_a}, \ldots, {\nu_{A}}} \right)^{\rm{T}}$ and $\bm{\mu} \buildrel \Delta \over = \left( {{\mu_1}, \ldots, {\mu_h}, \ldots,{\mu_{H}}} \right)^{\rm{T}}$ are the transmission time vectors of the AWPDs and HWPDs in the sleeping period of the PB, respectively. When the PB is in the emitting period, it transmits unmodulated RF signals, and thus the IoT devices with backscattering capability (i.e., PWPDs and HWPDs) can passively transmit their data by leveraging such signals. Meanwhile, the AWPDs and HWPDs equipped with energy harvesting circuits can harvest energy for their active transmissions in the sleeping period of the PB. Note that an $\text{AWPD}_a$ can harvest energy during the entire emitting period (i.e., $\beta$), while the harvesting time of an $\text{HWPD}_h$ is $(\beta \!-\! \tau_h)$ because it must backscatter in the time slot $\tau_h$. In the sleeping period of the PB, the AWPDs and HWPDs perform active transmissions to deliver their data to the gateway based on the TDMA protocol.
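A feasible time schedule must pack the backscattering slots into the emitting period and the active-transmission slots into the sleeping period. A minimal checker of this constraint set is sketched below; the constraints encode our reading of the frame structure in Fig. \[fig:System\_model\](b), not equations quoted from the paper:

```python
def feasible_schedule(beta, theta, tau, nu, mu, eps=1e-9):
    # theta, tau: backscattering slots (PWPDs, HWPDs) in the emitting period
    # nu, mu: active-transmission slots (AWPDs, HWPDs) in the sleeping period
    ok_emit = sum(theta) + sum(tau) <= beta + eps
    ok_sleep = sum(nu) + sum(mu) <= (1.0 - beta) + eps
    ok_signs = 0.0 <= beta <= 1.0 and all(t >= 0.0 for t in [*theta, *tau, *nu, *mu])
    return ok_emit and ok_sleep and ok_signs
```

For example, `feasible_schedule(0.5, [0.2], [0.2], [0.3], [0.1])` is feasible, while shrinking the emitting period to `beta=0.3` with the same backscattering slots is not.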
Network Throughput Analysis
---------------------------
The network throughput (denoted by $R_{sum}$) of communications between the IoT devices and gateway is defined as the total information bits decoded successfully at the gateway over the two periods of the PB.
### Emitting period of the PB
In this period, the IoT devices (i.e., PWPDs and HWPDs) backscatter the RF signal from the PB to deliver their information. We assume that the PWPDs and HWPDs implement backscatter frequency-shift keying (FSK), i.e., binary FSK, to gain 3 dB more than classic FSK [@Kimionis2004]-[@Hilliard2015]. The power beacon transmits a continuous sinusoid wave of frequency $F_c$ with the complex baseband equivalent: $$c\left( t \right) = \sqrt {2{P_S}} {e^{ - j\left( {2\pi \Delta Ft + \Delta \varpi } \right)}},$$ where $P_S$ is the transmission power of the PB, and $\Delta F$ and $\Delta \varpi$ are the frequency and phase offsets, respectively, between the PB and the IoT gateway.
We assume that the communication channels of all three types of links, i.e., (1) the links from the PB to the IoT devices, (2) the links from the IoT devices to the IoT gateway, and (3) the link from the PB to the IoT gateway, suffer frequency non-selective (flat) fading due to the low bit rate of backscatter communications. Owing to the limited communication range, we consider line-of-sight (LOS) environments in this paper; thus, the channel gains of the three types of links are given by: $${g_{BD}} \!=\! \frac{{{G_B}{G_D}{\lambda ^2}}}{{{{\left( {4\pi {d_{BD}}} \right)}^2}}},{g_{DG}} \!=\! \frac{{{G_D}{G_G}{\lambda ^2}}}{{{{\left( {4\pi {d_{DG}}} \right)}^2}}},{g_{BG}} \!=\! \frac{{{G_B}{G_G}{\lambda ^2}}}{{{{\left( {4\pi {d_{BG}}} \right)}^2}}},$$ where $G_B$, $G_D$, and $G_G$ denote the antenna gains of the PB, the IoT devices, and the IoT gateway, respectively, $\lambda$ is the wavelength of the RF signal, and $d_{BD}$, $d_{DG}$, and $d_{BG}$ are the communication distances of the three aforementioned links. The IoT devices are irradiated by the unmodulated RF signal $c(t)$. The baseband scatter waveform at an IoT device is then written as: $$x\left( t \right) = \eta {u_i}\left( t \right){\sqrt {{g_{BD}}}} \;{c\!\left( t \right)}, \quad i \in \left\{ {0,1} \right\},$$ where $\eta$ is the attenuation constant of the reflected waveform, which depends on the backscattering efficiency.
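The LOS gains above are standard free-space (Friis) link gains and can be evaluated directly; the carrier frequency, antenna gains, and distance below are illustrative assumptions rather than parameters taken from this paper:

```python
import math

def los_gain(G_tx, G_rx, wavelength, d):
    # free-space channel gain: G_tx * G_rx * (lambda / (4*pi*d))**2
    return G_tx * G_rx * (wavelength / (4.0 * math.pi * d)) ** 2

c = 3.0e8
lam = c / 915e6                                      # ~0.33 m at an assumed 915 MHz carrier
g_BD = los_gain(10 ** (6.0 / 10.0), 1.0, lam, 5.0)   # 6 dBi PB, 0 dBi device, 5 m
print(10 * math.log10(g_BD))                         # path gain in dB (about -39.6 dB)
```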
For the binary FSK modulation, we consider two distinct load values $\Gamma_i$ with different rates $F_i$ to represent bits $b_i \in \{0, 1\}$, thus the baseband backscatter FSK waveform $u_i(t)$ models the fundamental frequency component of a $50\%$ duty cycle square waveform of frequency $F_i$ and random initial phase ${\Phi _i} \in \left[ {0,2\pi } \right)$: $${u_i}\left( t \right) = {u_0} + \frac{{{\Gamma _0} - {\Gamma _1}}}{2}\frac{4}{\pi }\cos \left( {2\pi {F_i}t + {\Phi _i}} \right),\quad i \in \left\{ {0,1} \right\}$$ where $u_0 = \left(A_s - \frac{{{\Gamma _0} + {\Gamma _1}}}{2}\right)$, and $A_s$ is a complex-valued term related to the antenna structural mode [@Bletsas2010].
The IoT gateway receives both the RF unmodulated signal directly from the PB and the backscattered signals from the IoT devices. Thus, the received baseband signal at the IoT gateway for duration $T$ of a single bit $b_i \in \{0, 1\}$ is given by: $$\begin{aligned}
y\left(t\right) &\!=\! \sqrt {{g_{BG}}} \;c\!\left( t \right) + \sqrt {{g_{DG}}}\; x\!\left( t \right) + n\!\left( t \right) \\
&\!=\! \sqrt {\!2{P_S}} \Bigl\{{ \!\sqrt {\!{g_{BG}}}} \!+\! \eta \sqrt {\!{g_{BD}}} \sqrt {\!{g_{DG}}} {u_0}\\
&{ \quad + \eta \sqrt {\!{g_{BD}}}\sqrt {\!{g_{DG}}} \frac{2}{\pi }\!\left( {{\Gamma _0} \!-\! {\Gamma _1}}\! \right)\!\cos\! \left( \!{2\pi {F_i}t \!+\! {\Phi _i}}\! \right)} \!\Bigr\} \!\!+\! n(t),
\end{aligned}$$ where $n(t)$ is the channel noise. Carrier frequency offset (CFO) compensation and removal of the DC value from the received signal $y(t)$ are carried out before the maximum-likelihood estimation (MLE) is implemented at the IoT gateway. The received signal $y(t)$ is then rewritten as: $$y\!\left( t \right) \!=\! \eta \sqrt {2{P_S}} \sqrt {{g_{BD}}} \sqrt {{g_{DG}}} \frac{2}{\pi }\!\left( {{\Gamma _0} \!-\! {\Gamma _1}} \right)\!\cos \!\left( {2\pi {F_i}t + {\Phi _i}} \right).$$ Thus, the received power at the IoT gateway is calculated as follows: $${P_R^{bb}} = {\eta^2}{g_{BD}}{g_{DG}}\frac{4}{{{\pi ^2}}}{\left( {{\Gamma _0} - {\Gamma _1}} \right)^2}{P_S}.$$ The achievable rate of backscatter communications is given by: $$\label{eq: achievable_rate}
W = \Omega_B {\log _2}\left( {1 + \frac{{\zeta {P_R^{bb}}}}{{{N_0}}}} \right),$$ where $\Omega_B$ is the bandwidth of the unmodulated RF signal, $\zeta$ is the performance gap reflecting real modulation, and $N_0$ is the power spectral density (psd) of the channel noise. We denote by $W_p$ and $W_h$ the achievable rates of the $\text{PWPD}_p$ and $\text{HWPD}_h$, respectively, calculated as in Eq. (\[eq: achievable\_rate\]). Finally, the total throughput obtained by the PWPDs and HWPDs in the emitting period of the PB is determined as follows: $$\begin{aligned}
{R^{bb}} &\!=\! \sum\limits_{p = 1}^P {{W_p}} {\theta _p} + \sum\limits_{h = 1}^H {{W_h}} {\tau _h} \\
&\!=\! \sum\limits_{p = 1}^P \Omega_B {\theta _p}{\log _2}\!\!\left({1 \!+\! {\kappa_p}{P_S} }\! \right) \!+\!\! \sum\limits_{h = 1}^H \Omega_B {\tau _h}{\log _2}\!\!\left({1 \!+\! {\kappa_h}{P_S}} \!\right)
\end{aligned}$$ where ${\kappa _p} = {\zeta}{\eta_p^2}{g_{BD,p}}{g_{DG,p}}{\left( {{\Gamma _0} - {\Gamma _1}} \right)^2}\frac{4}{{\pi ^2}{N_p^0}}$ and ${\kappa _h} = {\zeta}{\eta_h^2}{g_{BD,h}}{g_{DG,h}}{\left( {{\Gamma _0} - {\Gamma _1}} \right)^2}\frac{4}{{\pi^2}{N_h^0}}$.
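Each per-device term of $R^{bb}$ can be computed directly from the link parameters; the numerical values below are illustrative assumptions and are not the simulation settings of this paper:

```python
import math

def backscatter_bits(slot, P_S, zeta, eta, g_BD, g_DG, dGamma, N0, Omega_B):
    # One term of R^bb: Omega_B * slot * log2(1 + kappa * P_S),
    # with kappa = zeta * eta^2 * g_BD * g_DG * (Gamma0 - Gamma1)^2 * 4 / (pi^2 * N0).
    kappa = zeta * eta**2 * g_BD * g_DG * dGamma**2 * 4.0 / (math.pi**2 * N0)
    return Omega_B * slot * math.log2(1.0 + kappa * P_S)

# Illustrative link budget (assumed values): 0.1 s slot, 1 W beacon power.
bits = backscatter_bits(slot=0.1, P_S=1.0, zeta=0.5, eta=0.8, g_BD=1e-4,
                        g_DG=1e-4, dGamma=1.0, N0=1e-12, Omega_B=1e5)
```

Note that the throughput is exactly linear in the allocated slot and only logarithmic in the beacon power, which is why the time allocation vectors $\bm{\theta}$ and $\bm{\tau}$ are the natural optimization variables here.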
### Sleeping period of the PB
As mentioned in the previous subsection, only AWPDs and HWPDs are able to communicate with the gateway in this period by using their RF transmission circuits. The amount of harvested energy of the $\text{AWPD}_a$ and $\text{HWPD}_h$ from the PB are calculated as follows: $$\left\{ {\begin{array}{*{20}{l}}
{{E_a} = \beta P_{R,a}^B},\\
{{E_h} = \left( {\beta - {\tau _h}} \right)P_{R,h}^B},
\end{array}} \right.$$ where $P_{R,a}^B = {\varphi_{a}}{g_{{BD},a}}{P_S}$ and $P_{R,h}^B = {\varphi_{h}}{g_{{BD},h}}{P_S}$ are the received powers at the $\text{AWPD}_a$ and $\text{HWPD}_h$ from the PB, respectively [@BalanisAntenna2012]. $P_S$ is the transmission power of the energy transmitter (i.e., the PB), and $\{\varphi_a, \varphi_h\}$ are the harvesting efficiency coefficients of the $\text{AWPD}_a$ and $\text{HWPD}_h$, respectively. We consider the energy consumed by active transmissions of the AWPDs and HWPDs as the dominant energy consumption and ignore the energy consumed by electronic circuits. Hence, all harvested energy of the AWPDs and HWPDs is utilized to transmit data in the sleeping period of the PB, and the transmission powers of the $\text{AWPD}_a$ and $\text{HWPD}_h$ are $P_a^t \!=\! E_a/{\nu_a}$ and $P_h^t \!=\! E_h/{\mu_h}$, respectively. Then the total throughput $R^{st}$ achieved by active transmissions of the AWPDs and HWPDs in the sleeping period of the PB is determined by: $$\begin{aligned}
\label{eq: R^tr}
{R^{st}} &\!\!\!=\!\!\! \sum\limits_{a = 1}^A \!\!{{\nu _a}\Omega_D {{\log }_2}}\!\!\left(\!{\!1 \!\!+\!\! \frac{{\zeta}{g_{{DG}\!,a}}{P_a^t}}{{N_a^0}}}\! \right) \!\!+\!\!\! \sum\limits_{h = 1}^H \!{{\mu _h}\Omega_D\! {{\log }_2}}\!\!\left(\!{\!1 \!\!+\!\! \frac{{\zeta}{g_{{DG}\!,h}}{P_h^t}}{{N_h^0}}} \!\right)\\
&\!=\!\!\!\sum\limits_{a = 1}^A \!{\nu _a}{\Omega_D}{\log _2}\!\!\left(\!{\!1 \!\!+\!\! {\delta_a}\!\frac{{\beta}{P_S}}{{{\nu _a}}}}\! \!\right) \!\!+\!\! \sum\limits_{h = 1}^H \!{\mu _h}{\Omega_D}{\log _2}\!\!\left[\!{\!1 \!\!+\!\! {\delta_h}\!\frac{{(\!\beta \!-\!\tau_h\!)\!P_S}}{{{\mu _h}}}}\!\!\right]\!\!,
\end{aligned}$$ where $\delta_{a} \!=\!\frac{{\zeta}{\varphi_a}{g_{{DG},a}}{g_{{BD},a}}}{N_0^a}$ and $\delta_{h} \!=\!\frac{{\zeta}{\varphi_h}{g_{{DG},h}}{g_{{BD},h}}}{N_0^h}$. $\Omega_D$ is the bandwidth for the HTT protocol, and $\{N_a^0, N_h^0\}$ are the noise power spectral densities of the communication channels from the $\text{AWPD}_a$ and the $\text{HWPD}_h$ to the gateway, respectively.
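As a small numerical illustration of the energy-harvesting model above, the following Python sketch evaluates $E_a$, $E_h$ and the resulting transmit powers $P_a^t = E_a/\nu_a$ and $P_h^t = E_h/\mu_h$ for one device of each type; all parameter values are hypothetical and chosen only for illustration.

```python
def harvested_energy_and_tx_power(P_S, beta, tau_h, nu_a, mu_h,
                                  phi_a, phi_h, g_BD_a, g_BD_h):
    """Harvested energy in the emitting period and the resulting
    transmit powers in the sleeping period, for one AWPD and one HWPD."""
    E_a = beta * phi_a * g_BD_a * P_S            # AWPD harvests during the whole beta
    E_h = (beta - tau_h) * phi_h * g_BD_h * P_S  # HWPD spends tau_h backscattering
    P_a_t = E_a / nu_a                           # P_a^t = E_a / nu_a
    P_h_t = E_h / mu_h                           # P_h^t = E_h / mu_h
    return E_a, E_h, P_a_t, P_h_t

# Illustrative numbers only (not from the paper).
E_a, E_h, P_a_t, P_h_t = harvested_energy_and_tx_power(
    P_S=2.0, beta=0.6, tau_h=0.1, nu_a=0.2, mu_h=0.1,
    phi_a=0.7, phi_h=0.7, g_BD_a=0.5, g_BD_h=0.4)
```

Note how the HWPD harvests only over $\beta - \tau_h$, since it backscatters during $\tau_h$.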
Finally, the network throughput ($R_{sum}$) of the IoT service can be determined as follows: $$\begin{aligned}
\label{eq: Rsum1}
&{R_{sum}}\!\left( \bm{\theta},\bm{\nu},\bm{\tau},\bm{\mu} \right) = R^{bb} \!+\! R^{st} \\
&\!\! = \!\!\sum_{p = 1}^P\! {{\Omega_B}{\theta_p}{\log _2}\!\left({1 \!+\! {\kappa_p}{P_S} }\! \right)} \!+\! \sum_{a = 1}^A \!{{\nu_a}{\Omega_D}{\log_2}\!\left(\!\!1 \!+\! {\delta_a}\frac{ {\beta}{P_S}}{\nu_a}\!\right)} \\
&\!\!+\!\! \sum_{h = 1}^H \!\!{\left\{\!{{\Omega_B}{\tau_h}{\log _2}\!\left({\!1 \!\!+\! {\kappa_h}{P_S} }\! \right)} \!+\! {\mu_h}{\Omega_D}{\log_2}\!\!\left[\!1 \!\!+\! {\delta_h}\!\!\frac{( \beta \!-\! {\tau_h})\!{P_S}}{\mu_h}\!\right]\!\!\right\}}\!.
\end{aligned}$$ This network throughput is treated as the profit achieved by the communication service, and it is used to jointly maximize the benefits of both service providers in the HWPBC network.
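The throughput model can be sketched numerically as well. The helper below evaluates $R_{sum}$ as in the expression above for a toy network with one device of each type; the values of $\kappa$, $\delta$, the bandwidths, and the time shares are hypothetical.

```python
import math

def network_throughput(P_S, beta, theta, nu, tau, mu,
                       kappa_p, delta_a, kappa_h, delta_h,
                       omega_B, omega_D):
    """Evaluate R_sum = R^bb + R^st (all parameter values are illustrative)."""
    # Backscatter throughput of the PWPDs during the emitting period.
    r = sum(omega_B * th * math.log2(1 + kp * P_S)
            for th, kp in zip(theta, kappa_p))
    # Active-transmission throughput of the AWPDs during the sleeping period.
    r += sum(omega_D * nv * math.log2(1 + da * beta * P_S / nv)
             for nv, da in zip(nu, delta_a))
    # HWPDs contribute both a backscatter and an active-transmission term.
    r += sum(omega_B * th * math.log2(1 + kh * P_S)
             + omega_D * mh * math.log2(1 + dh * (beta - th) * P_S / mh)
             for th, mh, kh, dh in zip(tau, mu, kappa_h, delta_h))
    return r

# Illustrative numbers only (not from the paper).
r = network_throughput(P_S=1.0, beta=0.6,
                       theta=[0.1], nu=[0.2], tau=[0.1], mu=[0.1],
                       kappa_p=[5.0], delta_a=[8.0], kappa_h=[4.0], delta_h=[6.0],
                       omega_B=1.0, omega_D=1.0)
```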
Joint Energy Trading and Time Allocation based on Stackelberg Game {#section:EnergyTrading}
==================================================================
Based on the system model given in Section II, we first introduce the Stackelberg game to model the energy interaction between the ISP and the ESP. We then present the strategic behaviors of these service providers in the following subsections.
Stackelberg Game-based Energy Trading
-------------------------------------
### Game Formulation
- **Leader payoff function**: The achievable benefit of the ISP is defined as follows: $$\label{eq:Leader_func}
\begin{aligned}
{\bm{U}_L} \!\left( {{p_l}} , {\beta}, \bm{\psi} \right) = {p_r}{R_{sum}} - {p_l}{ \beta}{P_S},
\end{aligned}$$ where $p_r$ is the benefit per bit transmitted by IoT devices, and $p_l$ is the energy price paid by the IoT service provider to the energy service provider. The leader maximizes its utility function $\bm{U}_L$ w.r.t. the energy price $p_l$, operation time $\beta$, and time scheduling $\bm{\psi} \buildrel \Delta \over = (\bm{\theta}, \bm{\nu}, \bm{\tau}, \bm{\mu})$.
- **Follower utility function**: In this game, the PB is the follower and it optimizes its transmission power based on the requested energy price and operation time from the IoT service provider. The utility function of the follower is determined based on its profit obtained from the IoT service provider and its cost incurred during the operation time: $$\label{eq: follower_payoff_func}
{{\bm{U}}_F}\left( {{P_S}} \right) = {\beta}\left[ {{p_l}{P_S} - F({{P_S}})} \right],$$ where $F(x) = {a_m}x^2 + b_{m}x$ is a quadratic function modeling the operation cost of the PB [@Mohsenian2010].
### Solution to the Stackelberg Game
The definition of the Stackelberg equilibrium (SE) is stated as follows:
The optimal solution $(P_S^{*}, p_l^*, \beta^*, \bm{\psi}^*)$ is the Stackelberg equilibrium if the following conditions are satisfied [[@Fudenberg1991]]{}: $$\left\{ \begin{array}{ll}
{{\bm{U}}_L}\!\left({{P_S^{*}},p_l^*, \beta^*, \bm{{\psi}}^*} \right) \ge {\bm{U}_L}\!\left({{P_S^{*}},{p_l},\beta, \bm{{\psi}}}\right),\\
{{\bm{U}}_F}\!\left({{P_S^{*}},p_l^*,\beta^*, \bm{{\psi}}^*} \right) \ge {{\bm{U}}_F}\!\left( {{P_S},p_l^*,\beta^*, \bm{{\psi}}^*} \right).
\end{array} \right.$$
To obtain the Stackelberg game solution, we adopt the backward induction technique, as formalized in the following two theorems.
\[theorem: SE\_of\_follower\] Given a strategy of the leader (i.e., the ISP), the follower (i.e., the PB) can obtain a unique optimal $P_S^*$ in closed form.
Intuitively, given a strategy $( p_l, \beta, \bm{\psi})$ offered by the leader, the utility function of the follower in (\[eq: follower\_payoff\_func\]) is a concave quadratic function w.r.t. $P_S$ (since $a_m > 0$). Thus, the unique optimal solution of $P_S$ can be obtained as follows: $$\label{eq:optimalPS}
{P_S^{*}} = \frac{{{p_l} - b_m}}{{2a_m}}.$$
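The closed-form best response above can be checked numerically against the follower's utility; the price $p_l$ and the cost coefficients $a_m$, $b_m$ below are illustrative.

```python
def follower_utility(P_S, p_l, beta, a_m, b_m):
    """U_F = beta * [p_l * P_S - (a_m * P_S**2 + b_m * P_S)]."""
    return beta * (p_l * P_S - (a_m * P_S**2 + b_m * P_S))

def best_response(p_l, a_m, b_m):
    """Closed-form maximizer P_S* = (p_l - b_m) / (2 * a_m)."""
    return (p_l - b_m) / (2 * a_m)

# Illustrative numbers only.
p_l, beta, a_m, b_m = 2.0, 0.5, 0.4, 0.3
ps_star = best_response(p_l, a_m, b_m)
```

Since the utility is concave in $P_S$, deviating from `ps_star` in either direction can only reduce the follower's payoff.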
Given the follower’s (i.e., the ESP) strategy $P_S^*$, there exists a sub-optimal solution for the leader (i.e., the ISP), and the best strategy of the ISP can be obtained by an iterative algorithm.
Given the optimal transmission power $P_S^{*}$ of the follower, the leader's payoff function can be rewritten as in .
$$\begin{aligned}
\label{eq: rewritten_leader_func}
{{\bm{U}}_L}\! \left( {{p_l}}, \beta, \bm{\psi} \right) &= {p_r}\Biggl\{ {\sum\limits_{p = 1}^P {{\Omega_B}{\theta_p}{\log _2}\!\left(\!\!{1 \!+\! {\kappa _p}\frac{{\left( {{p_l} - {b_m}} \right)}}{{2{a_m}}} } \!\right)} \!+\!\sum\limits_{a = 1}^A\! {\Omega_D}{{\nu _a}} {{\log }_2}\!\left[\! {1 \!+\! {\delta _a}\frac{{\beta\left( {{p_l} - {b_m}} \!\right)}}{{2{\nu _a}{a_m}}}} \!\right]}\\
& \quad \quad {+ \sum\limits_{h = 1}^H \! \left[ \!{{{\Omega_B}{\tau_h}{\log _2}\!\left(\!\!{1 \!+\! {\kappa _h}\frac{{\left( {{p_l} \!-\! {b_m}} \right)}}{{2{a_m}}} }\! \!\right)} \!+\! {\Omega_D}{\mu _h}{{\log }_2} \!\left(\!\! {1 \!+\! {\delta _h}\frac{{( \beta \!-\! {\tau _h})\!\left( {{p_l} \!-\! {b_m}} \right)}}{{2{\mu _h}{a_m}}}} \!\right)} \!\!\right]}\! \!\Biggr\} \!-\! \frac{{{p_l}\beta\!\left( {{p_l} \!-\! {b_m}} \right)}}{{2{a_m}}}. \quad \quad
\end{aligned}$$
Then, the maximum profit of the leader is expressed by the strategic vector $\bm{\chi^*} = ({p_l^*}, \beta^*, \bm{\psi}^*)$:
\[opt1:main\] $$\begin{aligned}
&\mathop {\max }\limits_{\left( {{p_l}, \beta, \bm{\psi} } \right)}{{\bm{U}}_L} \!\left( {{p_l}, \beta}, \bm{\psi} \right), \tag{\ref{opt1:main}}\\
\text{s.t.} \; & 0 \le P_S \le P_{S}^{max}, \label{opt1:a} \\
& P_i^{min} \le P_i^t \le P_i^{max}, i \in \left\{ {a,h} \right\}, \label{opt1:b}\\
& E_i^{min} \le E_i \le {E_i^{max}}, i \in \left\{ {a,h} \right\},\label{opt1:c}\\
& \gamma_i^{bb} \ge \gamma_i^{\min }, i \in \left\{ {p,h} \right\}, \label{opt1:d}\\
& 0 \le \sum\nolimits_{p = 1}^P {\theta _p} \!+\! \sum\nolimits_{h = 1}^H {\tau _h} \le \beta \le 1, \forall {\theta _p}, \forall {\tau _h} \!\ge\! 0 \label{opt1:e}\\
& 0 \!\le\!\! \sum\nolimits_{a = 1}^A\! {\nu _a} \!+\! \sum\nolimits_{h = 1}^H \!{\mu _h} \!\le\! 1 \!-\! \beta \!\le\! 1, \forall {\nu _a}, \forall {\mu _h} \!\ge\! 0, \label{opt1:f}\end{aligned}$$
It should be noted that the transmission power of the PB, i.e., ${P_S} = \frac{{\left( {{p_l} - {b_m}} \right)}}{{2{a_m}}}$, must satisfy the FCC Rules [@FCC_rules] for unlicensed wireless equipment operating in the ISM bands, as shown in the constraint . For the IoT devices, the transmission powers of the AWPDs and HWPDs, i.e., $P_a^t = \frac{{{\varphi _a}{g_{BD,a}}\beta \left( {{p_l} - {b_m}} \right)}}{{2a_m}{\nu_a}}$ and $P_h^t = \frac{{{\varphi _h}{g_{BD,h}}\left( {\beta - {\tau _h}} \right)\left( {{p_l} - {b_m}} \right)}}{{2{a_m}{\mu_h}}}$, respectively, must be sufficient for active communication with the IoT gateway while remaining below a threshold, as presented in . Next, the total energy harvested by the AWPDs and HWPDs in the emitting period of the PB, i.e., ${E_a} = \frac{{{\varphi _a}{g_{BD,a}}\beta \left( {{p_l} - {b_m}} \right)}}{{2{a_m}}}$ and ${E_h} = \frac{{{\varphi _h}{g_{BD,h}}\left( {\beta - {\tau _h}} \right)\left( {{p_l} - {b_m}} \right)}}{{2{a_m}}}$, must be sufficient for their operations and must not exceed the capacities of their batteries, as represented in the constraints . Furthermore, the SNR at the gateway received from the PWPDs and HWPDs through backscatter communications, i.e., $\gamma _p^{bb} = \frac{{{\kappa _p}\left( {{p_l} - {b_m}} \right)}}{{2{a_m}}}$ and $\gamma _h^{bb} = \frac{{{\kappa _h}\left( {{p_l} - {b_m}} \right)}}{{2{a_m}}}$, must satisfy the constraints to guarantee a *bit-error-rate* (BER) lower than or equal to $10^{-2}$ for IoT applications \[ref???\]. Finally, the constraints - are time constraints that ensure the IoT devices operate in the proper periods, i.e., the PWPDs and HWPDs must backscatter RF signals in the emitting period, and the AWPDs and HWPDs must perform active transmissions in the sleeping period of the PB.
To find the optimal solution $\bm{\chi^*} = (p_l^*, \beta^*, \bm{{\psi}^*})$, in the next section we introduce a low-complexity iterative algorithm using *block coordinate descent* (BCD) technique [@Tseng2001] to address the non-convex optimization problem in .
Non-Negotiated Energy Trading
-----------------------------
For comparison, we present a baseline scenario in which the ESP sends a fixed energy price to the ISP without negotiation. Then, the ISP optimizes its profit by adjusting the transmission power, the amount of purchased service time, and the data transmission strategies (i.e., backscattering or active transmission). Given energy price $p_l$ from the ESP, the achievable profit of the ISP is expressed as follows: $$\begin{aligned}
\label{eq: fix_price_profit}
{\bm{U}_P}\left( {{P_S},\beta ,\bm{\psi} } \right) &= {\bm{U}_T}\left( {{P_S},\beta ,\bm{\psi} } \right) - {p_l}\beta {P_S},
\end{aligned}$$ where ${\bm{U}_T}\left( {{P_S},\beta ,\bm{\psi} } \right)$, presented in , is the total profit achieved by the ISP from providing the data service.
$$\begin{aligned}
\label{eq: profit_func}
{\bm{U}_T}\!\left(\! {{P_S}\!,\beta \!,\bm{\psi}\! } \right) \!=\! {p_r}\!\left\{ \sum\limits_{p = 1}^P\! {{\Omega_B}{\theta_p}{\log _2}\!\left(\!{1 \!+\! {\kappa _p}{P_S}} \!\right)} \!+\!\!\sum\limits_{a = 1}^A\! {\Omega_D}{{\nu _a}} {{\log }_2}\!\!\left(\! {\!1 \!+\! {\delta _a}\!\frac{{\beta{P_S}}}{{{\nu _a}}}} \!\!\right) \!\!+\!\! \sum\limits_{h = 1}^H \!\! \left[ \!{{{\Omega_B}{\tau_h}{\log _2}\!\left(\!{1 \!+\! {\kappa _h}{P_S} } \!\right)} \!\!+\!\! {\Omega_D}{\mu _h}{{\log }_2} \!\!\left(\!\! {1 \!+\! {\delta _h}\!\frac{{( \beta \!-\! {\tau _h}){P_S}}}{{{\mu _h}}}} \!\!\right)} \!\!\right]\!\! \!\right\}\!.
\end{aligned}$$
Then, the profit maximization of the IoT service is obtained as follows:
\[optFix:main\] $$\begin{aligned}
&\mathop {\max }\limits_{\left( {{P_S}, \beta, \bm{\psi} } \right)}{{\bm{U}}_P} \!\left( {{P_S}, \beta}, \bm{\psi} \right), \tag{\ref{optFix:main}}\\
\text{s.t.} \; & \text{satisfy constraints~\eqref{opt1:a}}-\eqref{opt1:f}, \label{optFix:a}
\end{aligned}$$
It is worth noting that, in this case, the maximization of the ISP's profit with respect to the transmission power and operation time of the PB and the scheduling times of the IoT devices can also be solved by the schemes proposed in the following section.
Social welfare scenario
-----------------------
The energy trading based on the Stackelberg game formulated in Section \[section:EnergyTrading\]-A captures the strategic interaction between the ISP and the ESP. However, this trading strategy may lead to performance loss for both the ISP and the ESP due to possible selfish behaviors of the two players. Therefore, we propose a *social welfare* scenario, in which the ISP and the ESP cooperatively maximize the sum of their profits, to investigate the inefficiency of the proposed approach. Mathematically, the social welfare utility function can be formulated as follows: $$\begin{aligned}
\label{eq: social_welfare_func}
{{\bm{U}_{SW}}}\! \left(\! {{P_S}}, \beta, \bm{\psi} \right) = {\bm{U}_T}\!\left(\! {{P_S},\beta ,\bm{\psi} } \right) \!-\! {\beta}\!\left({a_m}{P_S^2} \!+\! {b_m}{P_S}\right)\!.
\end{aligned}$$ Thus, the social welfare maximization problem is given by:
\[optSW:main\] $$\begin{aligned}
&\mathop {\max }\limits_{\left( {{P_S}, \beta, \bm{\psi} } \right)}{{\bm{U}}_{SW}} \!\left( {{P_S}, \beta}, \bm{\psi} \right), \tag{\ref{optSW:main}}\\
\text{s.t.} \; & \text{satisfy constraints~\eqref{opt1:a}}-\eqref{opt1:f}, \label{optSW:a}
\end{aligned}$$
Similarly, the social welfare maximization problem can also be solved efficiently by the relaxed schemes proposed in the following section. To evaluate the inefficiency of the proposed approach, we use the *Price of Anarchy* (PoA) [@Roughgarden2015], which is defined as the ratio between the utility value (i.e., as defined in ) at a worst Nash equilibrium and its maximum value. Note that, in our game, the Stackelberg equilibrium is considered as the worst Nash equilibrium. The PoA ratio is then expressed as below: $$PoA = \frac{{{\bm{U}_{SW}}\left( {\bm{{\chi ^*}}} \right)}}{{\mathop {\max }\limits_{\left( {{P_S},\beta ,{\rm{ }}\bm{\psi} } \right)} {\bm{U}_{SW}}\left( {{P_S},\beta ,\bm{\psi} } \right)}}$$
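A one-dimensional toy model (a single backscatter device, fixed time allocation, hypothetical parameters and negotiated price) illustrates how the PoA is computed: evaluate the social welfare at the Stackelberg point, then divide by its maximum, found here by a coarse grid search over $P_S$.

```python
import math

def social_welfare(P_S, beta, a_m, b_m, p_r, kappa, theta, omega_B):
    """Toy U_SW with one backscatter device: data revenue minus energy cost."""
    rate = omega_B * theta * math.log2(1 + kappa * P_S)
    return p_r * rate - beta * (a_m * P_S**2 + b_m * P_S)

# Illustrative numbers only.
a_m, b_m = 0.4, 0.3
params = dict(beta=0.5, a_m=a_m, b_m=b_m, p_r=1.0, kappa=5.0,
              theta=0.2, omega_B=1.0)

# Stackelberg point: follower best response to an (assumed) negotiated price.
p_l = 1.2
ps_se = (p_l - b_m) / (2 * a_m)

# Socially optimal P_S from a coarse grid search on [0, 5].
grid = [i / 1000 for i in range(0, 5001)]
ps_opt = max(grid, key=lambda p: social_welfare(p, **params))

poa = social_welfare(ps_se, **params) / social_welfare(ps_opt, **params)
```

A PoA close to 1 would indicate that the selfish Stackelberg outcome loses little welfare relative to full cooperation.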
Iterative Algorithms to Find the Stackelberg Equilibrium {#section:IterativeAlgorithms}
========================================================
To address the non-convex multi-variable optimization problem (\[opt1:main\]), we propose two relaxed schemes, namely partial adjustment (PA) and joint adjustment (JA) of the energy price and service time of the PB, both of which exploit the BCD technique. These schemes significantly reduce the number of variables in the original optimization problem by splitting it into convex sub-problems that are solved in each iteration. The proposed schemes, which are expected to address the original problem efficiently, are presented in the following subsections.
PA Scheme
---------
This scheme divides the variable tuple $\bm{\chi}$ into three blocks of variables, i.e., the energy price $ {p_l}$, the emitting time ${\beta}$, and the scheduling times ${\bm{\psi}} \buildrel \Delta \over = (\bm{\theta}, \bm{\tau}, \bm{\nu}, \bm{\mu})$, and optimizes them iteratively. In particular, the algorithm starts from an initial solution $\{{p_l}^{(0)}, {\beta}^{(0)}, {\bm{\psi}}^{(0)}\}$, and the following three steps are repeated until no further improvement can be obtained: (i) optimize the energy price ${p_l}^{(n)}$ starting from the last optimal output $\{{p_l^{(n-1)}},{\beta}^{(n-1)}, {\bm{\psi}}^{(n-1)}\}$; (ii) obtain the emitting time ${\beta}^{(n)}$ of the PB while keeping $\{{p_l}^{(n)}, {\bm{\psi}}^{(n-1)}\}$ fixed; (iii) find the optimal scheduling times ${\bm{\psi}}^{(n)}$ of the IoT devices with ${p_l}^{(n)}$ and ${\beta}^{(n)}$ fixed. These steps are described in detail as follows:
### Optimal Energy Price Offered for the PB
In the first step of the algorithm loop, we obtain the optimal requested price $p_l$ based on the optimal solution from the previous step $\{{p_l^{(n-1)}}, {\beta^{(n-1)}}, {{\bm{\psi}}^{(n-1)}}\}$. It is worth noting that the time constraints in the problem are eliminated because the time variables are constant and set by the previous optimal vector $\bm{\psi}^{(n-1)}$. Then, the original optimization problem can be transformed into:
\[subopt1:main\] $$\begin{aligned}
{3}
&\mathop {\max } \limits_{{p_l}} { G} ({{p_l}} ) , \tag{\ref{subopt1:main}}\\
\text{s.t.} \quad & 0 \le {p_l} - {b_m} \le 2{a_m}{P_S^{\max }} , \label{subopt1:a} \\
& P_a^{min} \le {{{ r}_{a,1}}\left( {{p_l} - {b_m}} \right)} \le P_a^{max}, \label{subopt1:b}\\
& P_h^{min} \le {{{ r}_{h,2}}\left( {{p_l} - {b_m}} \right)} \le P_h^{max}, \label{subopt1:c}\\
& E_a^{min} \le {{ r}_{a,1}}{\nu_a^{(n-1)}}\left( {{p_l} - {b_m}} \right) \le E_a^{max},\label{subopt1:d}\\
& E_h^{min} \le {{ r}_{h,2}}{\mu_h^{(n-1)}}\left( {{p_l} - {b_m}} \right) \le E_h^{max}, \label{subopt1:e} \\
& { c}_{p,2}\left( {{p_l} - {b_m}} \right) \ge \gamma_p^{\min }, \label{subopt1:f}\\
& { c}_{h,6}\left( {{p_l} - {b_m}} \right) \ge \gamma _h^{\min }, \label{subopt1:g}
\end{aligned}$$
where
$$\begin{aligned}
\label{eq: G_1}
{ G} \left( {{p_l}}\right) &= {\sum\limits_{p = 1}^P {{{ c}_{p,1}}{\log _2} \left[ {1 + {c_{p,2}}{{\left( {{p_l} - {b_m}} \right)}}} \right]} + \sum\limits_{a = 1}^A {{ c}_{a,3}}{{\log }_2} \left[ {1 + {{ c}_{a,4}}{{\left( {{p_l} - {b_m}} \right)}}} \right]} \\
& \quad \quad+ \sum\limits_{h = 1}^H \left\{ {{{{ c}_{h,5}}{\log _2} \left[{1 + {c_{h,6}}{{{\left( {{p_l} - {b_m}} \right)}}} } \right]} + {{ c}_{h,7}}{{\log }_2} \left[ {1 + {{ c}_{h,8}}{{\left( {{p_l} - {b_m}} \right)}}} \right]} \right\} - \frac{{\beta^{(n-1)}}{{{p_l}\left( {{p_l} - {b_m}} \right)}}}{2{a_m}}, \quad\quad\quad\quad\quad
\end{aligned}$$
and ${c}_{p,1} \!=\! {p_r}\Omega_B \theta_p^{\left( \!{n - 1}\! \right)}$, ${c_{p,2}} \!=\! \frac{\kappa_p}{2{a_m}}$, ${c}_{a,3} \!=\! {p_r}{\Omega_D}\nu _a^{\left( \!{n - 1} \!\right)}$, ${c}_{a,4} \!=\! \frac{{{\delta _a}{\beta^{(\!n - 1\!)}}}}{{2\nu _a^{\left(\! {n - 1}\! \right)}{a_m}}}$, ${c}_{h,5} \!=\! {p_r}\Omega_B {\tau _h^{\left( {n - 1} \right)}}$, ${c_{h,6}} \!=\! \frac{\kappa_h}{2{a_m}}$, ${c}_{h,7} \!=\! {p_r}{\Omega_D}{\mu _h^{\left( {n - 1} \right)}}$, ${c}_{h,8} \!=\! \frac{{{\delta _h}\left( {{\beta ^{(n - 1)}} - \tau _h^{(n - 1)}} \right)}}{{2\mu _h^{\left( {n - 1} \right)}{a_m}}}$, ${r}_{a,1} = \frac{{{c}_{a,4}{N_0^a}}}{{\zeta {g_{DG,a}}}}$, ${r}_{h,2} = \frac{{{c}_{h,8}{N_0^h}}}{{\zeta {g_{DG,h}}}}$, $(\forall a \in \mathcal{A}, \forall h \in \mathcal{H})$.
\[lemma: convex\_proof\_subopt1\] The objective function $G$ is a concave function w.r.t. $p_l$ satisfying the linear constraints in -, and the optimal solution for the single variable sub-problem can be obtained by line search methods.
The function ${ G} {(p_l)}$ is a sum of logarithmic functions of $p_l$ of the form $\log_2({a_t}x+b_t)$ and a quadratic function ${ f} {(p_l)} = -{{ c}_9}{p_l}{\left(p_l - b_m\right)}$, where ${c}_9 = \frac{\beta^{(n-1)}}{2{a_m}}$. Intuitively, the logarithmic function $\log_2({a_t}x+b_t)$ is concave w.r.t. $x$, and the quadratic function ${ f} {(p_l)}$ is also concave. Thus, the objective function ${ G}$ is concave w.r.t. $p_l$. The sub-problem is therefore a single-variable optimization problem, which can be solved efficiently using line search methods such as golden-section search or parabolic interpolation.
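The line-search step can be sketched with a golden-section search; the concave stand-in for $G(p_l)$ and the search interval below are hypothetical.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-8):
    """Golden-section search for the maximizer of a unimodal
    (here concave) function on [lo, hi]."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, approx 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):      # maximizer lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                 # maximizer lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy concave stand-in for G(p_l): a log revenue term minus a quadratic cost.
G = lambda p: math.log2(1 + 2.0 * (p - 0.3)) - 0.5 * p * (p - 0.3)
p_star = golden_section_max(G, 0.3, 5.0)
```

Concavity of $G$ guarantees unimodality, which is exactly the property golden-section search requires.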
### Optimal Emitting Time of the PB
Similar to the sub-problem , the transmission power constraint of the PB, the time constraints of all IoT devices, and the SNR constraints of the backscatter devices are always satisfied for the fixed $\{p_l^{(n)}, {\bm{\psi}}^{(n-1)}\}$, and thus they can be omitted. The optimal emitting time $\beta$ of the PB in the *n*-th iteration can then be obtained in the second step by solving the following sub-problem:
\[subopt2:main\] $$\begin{aligned}
{3}
&\mathop {\max }\limits_{{\beta}} {{\hat G}} \left( \beta \right), \tag{\ref{subopt2:main}}\\
\text{s.t.} \quad & 0 \le {\beta} \le 1, \label{subopt2:a} \\
& P_a^{min} \le {{{\hat r}_{a,1}}{\beta}} \le P_a^{max}, \label{subopt2:b} \\
& P_h^{min} \le {{{\hat r}_{h,2}}{\left({\beta - {\hat c}_{h,5}}\right)}} \le P_h^{max}, \label{subopt2:c}\\
& E_a^{min} \le {{\hat r}_{a,1}}{\nu_a^{(n-1)}}{\beta} \le E_a^{max},\label{subopt2:d} \\
& E_h^{min} \le {{\hat r}_{h,2}}{\mu_h^{(n-1)}}\left({\beta - {\hat c}_{h,5}}\right) \le E_h^{max}, \label{subopt2:e}\end{aligned}$$
where $$\begin{aligned}
{\hat G} \! \left( \beta \right) &\!=\!\! {\sum\limits_{a = 1}^A \!{{\hat c}_{a,1}} {{\log }_2}\!\left[ \!{1 \!+\! {\hat c}_{a,2}{\beta} } \right]} \!+\!\!\! {\sum\limits_{h = 1}^H\! {{\hat c}_{h,3}} {{\log }_2}\!\left[ \!{1 \!+\! {\hat c}_{h,4}\!\left( {\beta \!-\! {\hat c}_{h,5}} \right)}\!\right]}\\
& \quad \!-\! {{\hat c}_6}{\beta} \!+\! {\hat C},
\end{aligned}$$ $$\begin{aligned}
{\hat C} \!&=\! {p_r} {\sum\limits_{p = 1}^P \!\Omega_B{\theta _p^{\left(\! {n - 1} \!\right)}{{\log }_2}\!\!\left[ \!{1 \!+\! {\kappa _p}\frac{\!{\left( \!{p_l^{\left( \!{n - 1} \!\right)} \!-\! {b_m}} \!\right)}}{{2{a_m}}}} \!\!\right]}}\\
& \quad + {p_r}{\sum\limits_{h = 1}^H \!\Omega_B{\tau _h^{\left( \!{n - 1} \!\right)}{{\log }_2}\!\!\left[\! {1 \!+\! {\kappa _h}\frac{{\left( \!{p_l^{\left( \!{n - 1} \!\right)} - {b_m}} \right)}}{{2{a_m}}}} \!\!\right]} }\!, \quad
\end{aligned}$$ ${\hat c}_{a,1} \!=\! {p_r}{\Omega_D}\nu _a^{\left( {n - 1} \right)}\!$, ${\hat c}_{a,2} \!=\! \frac{{{\delta _a}\left( {p_l^{(n)} - {b_m}} \right)}}{{2\nu _a^{\left( {n - 1} \right)}{a_m}}}$, ${\hat c}_{h,3} \!=\! {p_r}{\Omega_D}\mu _h^{\left( {n - 1} \right)}\!$, ${\hat c}_{h,4} \!=\! \frac{{{\delta _h}\left( {p_l^{(n)} - {b_m}} \right)}}{{2\mu _h^{\left( {n - 1} \right)}{a_m}}}$, ${\hat c}_{h,5} \!=\! {\tau_h^{(n-1)}}$, ${\hat c_6} \!=\! \frac{{p_l^{\left( n \right)}\!\left( \!{p_l^{\left( n \right)} - {b_m}} \!\right)}}{{2{a_m}}}$, ${\hat r}_{a,1} \!=\! \frac{{{\hat c}_{a,2}{N_0^a}}}{{\zeta {g_{DG,a}}}}$, ${\hat r}_{h,2} \!=\! \frac{{{\hat c}_{h,4}{N_0^h}}}{{\zeta {g_{DG,h}}}}$, $(\forall a \in \mathcal{A}, \forall h \in \mathcal{H})$.
\[lemma: convex\_proof\_subopt2\] The objective function $\hat{G}$ is a concave function w.r.t. $\beta$ satisfying the linear constraints in -, and the optimal solution for the single variable sub-problem can be obtained by line search methods.
Following the proof of Lemma \[lemma: convex\_proof\_subopt1\], the function ${\hat G} {(\beta)}$ is composed of logarithmic functions of the form $\log_2{(a_t{x} + b_t)}$ and a linear function ${\hat f}{(\beta)} = -{\hat c_6}{\beta}$. The logarithmic function $\log_2{(a_t{x} + b_t)}$ is concave w.r.t. $x$, and ${\hat C}$ is constant for the fixed $\bm{\psi}^{(n-1)}$. Thus, the objective function ${\hat G}$ is concave w.r.t. $\beta$, and the optimal solution of this single-variable sub-problem can also be found efficiently by line search methods.
### Optimal Time Resource Allocation
In the third step, we investigate the time scheduling $\bm{\psi}^{(n)}$ based on the given $\{p_l^{(n)}, \beta^{(n)}\}$. The original optimization problem is simplified into:
\[subopt3:main\] $$\begin{aligned}
{3}
&\mathop {\max }\limits_{\bm{\psi}} {\tilde G}\left( {\bm{\psi}} \right), \tag{\ref{subopt3:main}}\\
\text{s.t.} \;
& {P_a^{min}} \le \frac {{\tilde r}_{a,1}}{\nu_a} \le {P_a^{max}}, \label{subopt3:a}\\
& {P_h^{min}} \le \frac{{{\tilde r}_{h,2}}\left({{\tilde c}_{h,4}} - {{\tilde c}_{h,5}}{\tau_h}\right) }{\mu_h} \le {P_h^{max}}, \label{subopt3:b}\\
& {E_h^{min}} \le {{{\tilde r}_{h,2}}\left({{\tilde c}_{h,4}} - {{\tilde c}_{h,5}}{\tau_h}\right) } \le {E_h^{max}}, \label{subopt3:c}\\
& 0 \!\le\! \sum\nolimits_{p = 1}^P {\theta _p} \!+\! \sum\nolimits_{h = 1}^H {\tau _h} \!\le\! \beta^{(n)},\forall {\theta _p}, {\tau _h} \!\ge\! 0, \label{subopt3:d}\\
& 0 \le \sum\nolimits_{a = 1}^A {\nu _a} + \sum\nolimits_{h = 1}^H {\mu _h} \le 1 - \beta^{(n)}, \forall {\nu _a}, {\mu _h} \!\ge\! 0, \label{subopt3:e}
\end{aligned}$$
where $$\label{eq: funcG3}
\begin{aligned}
&{\tilde G}\!\left( {\bm{\psi}} \right) \!=\!\! \sum\limits_{p = 1}^P\! {{\tilde c}_{p,1}} {\theta _p} \!+\! \sum\limits_{a = 1}^A \!{p_r}{\Omega_D}{\nu _a}{\log _2}\!\!\left(\!\! {1\! + \!\frac{{{\tilde c}_{a,2}}}{{{\nu _a}}}}\! \!\right) \\
&\quad \quad \!\!+\!\! \sum\limits_{h = 1}^H\!\! {\left[\! {{\tilde c}_{h,3}{\tau _h} \!+ \!{p_r}{\Omega_D}{\mu _h}{{\log }_2}\!\!\left(\! \!{1\! + \!\frac{{{\tilde c}_{h,4} \!-\! {\tilde c}_{h,5}{\tau _h}}}{{{\mu _h}}}}\! \right)} \!\!\right]} \!\!+\! {\tilde C},
\end{aligned}$$ ${\tilde C} \!=\! -\frac{{p_l^{\left( n \right)}{\beta^{\left( n \right)}}\left( {p_l^{\left( n \right)} - {b_m}} \right)}}{{2{a_m}}}$, ${\tilde c}_{p,1} \!=\! {p_r}\Omega_B {\log _2}\left[ {1 + {\kappa _p}\frac{{\left( {p_l^{\left( n \right)} - {b_m}} \right)}}{{2{a_m}}}} \right]$, ${\tilde c}_{a,2} \!=\! \frac{{{\delta _a}{\beta ^{\left( {n} \right)}}\left( {p_l^{\left( {n} \right)} - {b_m}} \right)}}{{2{a_m}}}$, ${\tilde c}_{h,3} \!=\! {p_r}{\Omega_B} {\log _2}\left[ {1 + {\kappa _h}\frac{{\left( {p_l^{\left( n \right)} - {b_m}} \right)}}{{2{a_m}}}} \right]$, ${\tilde c}_{h,4} \!=\! \frac{{{\delta _h}{\beta ^{\left( {n} \right)}}\left( {p_l^{\left( {n} \right)} - {b_m}} \right)}}{{2{a_m}}}$, ${\tilde c}_{h,5} \!=\! \frac{{{\delta _h}\left( {p_l^{\left( {n} \right)} - {b_m}} \right)}}{{2{a_m}}}$, ${\tilde r}_{a,1} \!=\! \frac{{{\tilde c}_{a,2}{N_0^a}}}{{\zeta {g_{DG,a}}}}$, ${\tilde r}_{h,2} \!=\! \frac{{N_0^h}}{{\zeta {g_{DG,h}}}}$, $(\forall a \in \mathcal{A}, \forall h \in \mathcal{H})$. Note that the SNR constraints of the backscatter devices, i.e., the PWPDs and HWPDs, as well as the energy constraint of the AWPDs, are removed, as they are always satisfied with the fixed $\{p_l^{(n)}, \beta^{(n)}\}.$
To obtain the optimal solution for the sub-problem , we have the following Lemma \[lemma: convex\_proof\_subopt3\].
\[lemma: convex\_proof\_subopt3\] The objective function ${\tilde G}$ is a concave function w.r.t. $\bm{\psi}$ satisfying the linear constraints in -, and the optimal solution for the multi-variable sub-problem can be obtained by the interior-point method.
See Appendix \[App:lemma3\].
### The overall iterative algorithm for scheme I
Finally, the proposed iterative algorithm is summarized in **Algorithm \[algorithm1\]**. The convergence and computational complexity of the proposed iterative algorithm are established in the following theorem.
\[theorem: convergence\_complexity\_BCD\] For the PA scheme, the Algorithm \[algorithm1\] is guaranteed to converge to the SE point, and it converges in polynomial time.
See Appendix \[App: BCD\_proof\].
**Input:** The previous output $\{{p_l}^{(n-1)}, {\beta}^{(n-1)}, {\bm{\psi}}^{(n-1)}\}$. **Initialize:** $n = 1$, $\{{p_l}^{(0)}, {\beta}^{(0)}, {\bm{\psi}^{(0)}}\}$, tolerance $\xi_1 > 0$. **Compute:** the leader’s utility ${U_L}\left( {{{p_l}^{\left( 0 \right)}},{{\beta}^{\left( 0 \right)}}}, {\bm{\psi}}^{(0)} \right)$. **Repeat:** solve the sub-problems for ${p_l}^{(n)}$, ${\beta}^{(n)}$, and ${\bm{\psi}}^{(n)}$ in turn, recompute the leader’s utility, and set $n \leftarrow n+1$, until the utility improvement is smaller than $\xi_1$. **Output:** The optimal solution $\bm{\chi^*} = \{{p_l}^*, {\beta}^*, {\bm{\psi}}^*\}$.
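The loop structure of the PA scheme can be sketched in Python as follows; the toy concave objective and the grid-based block maximizer below stand in for the actual sub-problems and are purely illustrative.

```python
def bcd_maximize(objective, blocks, block_max, tol=1e-6, max_iter=100):
    """Block coordinate ascent: optimize one block at a time with the
    others held fixed, until the objective stops improving by more than tol."""
    x = dict(blocks)          # current solution, e.g. {'p_l': .., 'beta': .., 'psi': ..}
    best = objective(x)
    for _ in range(max_iter):
        for name in x:        # steps (i), (ii), (iii) of the PA scheme
            x[name] = block_max(objective, x, name)
        val = objective(x)
        if val - best <= tol: # no further improvement: stop
            break
        best = val
    return x, best

# Toy concave objective standing in for U_L(p_l, beta, psi).
def f(x):
    return -(x['p_l'] - 1.0) ** 2 - (x['beta'] - 0.5) ** 2 - (x['psi'] - 0.2) ** 2

def grid_block_max(obj, x, name, lo=0.0, hi=2.0, n=2001):
    """Maximize obj over one coordinate by brute-force grid search."""
    best_v, best_t = None, x[name]
    for i in range(n):
        t = lo + (hi - lo) * i / (n - 1)
        y = dict(x)
        y[name] = t
        v = obj(y)
        if best_v is None or v > best_v:
            best_v, best_t = v, t
    return best_t

sol, val = bcd_maximize(f, {'p_l': 0.0, 'beta': 0.0, 'psi': 0.0}, grid_block_max)
```

Each block update can only increase the objective, which is the monotonicity property underlying the convergence proof of the theorem above.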
JA Scheme
---------
Different from the PA scheme, we first perform a joint optimization of the energy price and the service time of the PB, owing to their trade-off relation. After that, the time scheduling for the IoT devices is obtained with the optimal values of $\{{p_l}, {\beta}\}$ in the current loop by solving the problem .
### Joint optimal energy price and service time
With given tuple $\{{p_l}^{(n-1)}, {\beta}^{(n-1)}, {\bm{\psi}}^{(n-1)}\}$ from the previous output, we find the joint optimal energy price and service time by solving the following sub-problem:
\[subopt4:main\] $$\begin{aligned}
{3}
&\mathop {\max }\limits_{{p_l},{\beta}} {Q} \left( {p_l}, {\beta} \right), \tag{\ref{subopt4:main}}\\
\text{s.t.} \quad & 0 \le {\beta} \le 1, \label{subopt4:a} \\
& 0 \le {p_l} - {b_m} \le 2{a_m}{P_S^{\max }}, \label{subopt4:b} \\
& {P_a^{min}} \le {{s_{a,1}}{\beta}{\left({p_l - b_m}\right)}} \le {P_a^{max}}, \label{subopt4:c} \\
& {P_h^{min}} \le {{s_{h,2}}{\left(\beta - \tau_h^{(n-1)}\right)}{\left({p_l - b_m}\right)}} \le {P_h^{max}},\label{subopt4:d} \\
& {E_a^{min}} \!\le {s_{a,1}}{\nu _a^{\left(\! {n - 1} \!\right)}}{\beta}{\left({p_l \!-\! b_m}\right)} \!\le {E_a^{max}},\label{subopt4:e} \\
& {E_h^{min}} \!\le {s_{h,2}}{\mu_h^{(\!n-1\!)}}\!{\left(\!\beta \!-\! \tau_h^{(n-1)} \!\right)}\!{\left({p_l \!-\! b_m}\right)} \!\le {E_h^{max}}, \label{subopt4:f} \\
& e_{p,2}\left( {{p_l} - {b_m}} \right) \ge {\gamma_p^{min}}, e_{h,6}\left( {{p_l} - {b_m}} \right) \ge {\gamma_h^{min}}, \label{subopt4:g}
\end{aligned}$$
where ${Q} \left( {p_l}, {\beta} \right)$ is expressed in , and
$$\begin{aligned}
\label{eq: func_G4}
{Q}\left( {{p_l},\beta } \right) &= \sum\limits_{p = 1}^P {e_{p,1}{{\log }_2} \left[ {1 + e_{p,2}\left( {{p_l} - {b_m}} \right)} \right]} + \sum\limits_{a = 1}^A {e_{a,3}{{\log }_2} \left[ {1 + e_{a,4}\beta \left( {{p_l} - {b_m}} \right)} \right]} \\
& \quad + \sum\limits_{h = 1}^H {\left\{ {e_{h,5}{{\log }_2} \left[ {1 + e_{h,6} \left( {{p_l} - {b_m}} \right)} \right] + e_{h,7}{{\log }_2} \left[ {1 + e_{h,8}(\beta - \tau_h^{(n-1)}) \left( {{p_l} - {b_m}} \right)} \right]} \right\}} - \frac{{{\beta {p_l}\left( {{p_l} - {b_m}} \right)}}}{2{a_m}}, \quad
\end{aligned}$$
$e_{p,1} \!=\! {p_r}\Omega_B \theta _p^{\left(\! {n - 1} \!\right)}$, $e_{p,2} \!=\! \frac{{{\kappa _p}}}{{2{a_m}}}$, $e_{a,3} \!=\! {p_r}{\Omega_D}\nu _a^{\left(\! {n - 1} \!\right)}$, $e_{a,4} \!=\! \frac{{{\delta _a}}}{{2\nu _a^{\left( \!{n - 1} \!\right)}{a_m}}}$, $e_{h,5} \!=\! {p_r}\Omega_B \tau _h^{\left(\! {n - 1}\! \right)}$, $e_{h,6} \!=\! \frac{{{\kappa _h}}}{{2{a_m}}}$, $e_{h,7} \!=\! {p_r}{\Omega_D}\mu _h^{\left( \!{n - 1}\! \right)}$, $e_{h,8} \!=\! \frac{{{\delta _h}}}{{2{a_m}\mu _h^{\left(\! {n - 1} \!\right)}}}$, $s_{a,1} \!=\! \frac{{{e_{a,4}}{N_0^a}}}{{\zeta {g_{DG,a}}}}$, $s_{h,2} \!=\! \frac{{{e_{h,8}}{N_0^h}}}{{\zeta {g_{DG,h}}}}$, $(\forall a \in \mathcal{A}, \forall h \in \mathcal{H})$.
However, the sub-problem is non-convex due to the product ${\beta}{p_l}\left({p_l - b_m}\right)$. To address this problem, we eliminate this product through the change of variables $q_1 = \frac{1}{2}{\left({p_l - b_m}\right)}{\left(1 + \beta\right)}$ and $q_2 = \frac{1}{2}\left({p_l - b_m}\right)\left(1 - \beta\right)$; then the problem becomes:
\[subopt5:main\] $$\begin{aligned}
{3}
&\mathop {\max }\limits_{{q_1, q_2}} {\hat Q} \left( q_1, q_2 \right), \tag{\ref{subopt5:main}}\\
\text{s.t.} & 0 \le q_2 \le q_1, \label{subopt5:a} \\
& 0 \le q_1 + q_2 \le 2{a_m}{P_S^{max}}, \label{subopt5:b} \\
& \frac{q_1 - q_2}{q_1 + q_2} \ge T_{bs}^{(n-1)}, \frac{2{q_2}}{q_1 + q_2} \ge T_{at}^{(n-1)}, \label{subopt5:c} \\
& {P_a^{min}} \!\le\! {s_{a,1}}\!{\left({q_1 \!-\! q_2}\right)} \!\le\! {P_a^{max}}\!, \label{subopt5:d} \\
& {P_h^{min}} \!\!\le\! {s_{h,2}}\!{\left[\!\!\left(\!1 \!-\! \tau_h^{(n-1)}\!\right)\!\!{q_1} \!-\! \left(\!1 \!+\! \tau_h^{(n-1)}\!\right)\!\!{q_2}\!\right]} \!\!\le\! {P_h^{max}}\!, \label{subopt5:e} \\
& {E_a^{min}} \!\le\! {s_{a,1}}{\nu_a^{(\!n-\!1)}}\!{\left(\!{q_1 \!-\! q_2}\!\right)} \!\le\! {E_a^{max}}\!, \label{subopt5:f} \\
& {E_h^{min}} \!\le\! {s_{h,2}}{\mu_h^{(\!n-1\!)}}\!{\left[\!\left(\!\!1 \!-\! \tau_h^{(n-1)}\!\right)\!\!{q_1} \!-\! \left(\!\!1 \!+\! \tau_h^{(n-1)}\!\right)\!\!{q_2}\right]} \!\le\! {E_h^{max}}\!, \label{subopt5:g} \\
& e_{p,2}\left( {q_1 + q_2} \right) \ge {\gamma_p^{min}}, e_{h,6}\left( {q_1 + q_2} \right) \ge {\gamma_h^{min}}, \label{subopt5:h}
\end{aligned}$$
where ${\hat Q}\!\left( {{q_1},{q_2}} \right)$ is expressed in , and
$$\begin{aligned}
\label{eq: func_Q}
&{\hat Q}\!\left( {{q_1},{q_2}} \right) \!=\!\! \sum\limits_{p = 1}^P {e_{p,1}{{\log }_2}\left[ {1 + e_{p,2}\left( {{q_1} + {q_2}} \right)} \right]} + \sum\limits_{a = 1}^A {e_{a,3}{{\log }_2}\left[ {1 + e_{a,4}\left( {{q_1} - {q_2}} \right)} \right]} \\
& \quad \quad \!+\! \! \sum\limits_{h = 1}^H \!\!{\Bigl\{\! {e_{h,5}{{\log }_2}\!\left[ \!{1 \!+\! e_{h,6}\!\left(\!{{q_1} \!+\! {q_2}} \!\right)} \right] \!+\! e_{h,7}{{\log }_2}\!\!\left[\! {1 \!+\! e_{h,8}(\!1 \!-\! \tau_h^{(\!n-1\!) }){q_1} \!-\! e_{h,8}(\!1 \!+\! \tau_h^{\left(\!n-1\!\right)}){q_2}} \! \right]} \!\Bigr\}} \!-\! \frac{\left(\!{q_1^2 \!+\! {b_m}{q_1}}\!\right)}{2{a_m}} \!+\! \frac{\left(\!{q_2^2 \!+\! {b_m}{q_2}}\!\right)}{2{a_m}},
\end{aligned}$$
$T_{bs}^{(n-1)} = \sum\nolimits_{p = 1}^P {\theta_p^{(n-1)}} + \sum\nolimits_{h = 1}^H {\tau_h^{(n-1)}}$, $T_{at}^{(n-1)} = \sum\nolimits_{a = 1}^A {\nu_a^{(n-1)}} + \sum\nolimits_{h = 1}^H {\mu_h^{(n-1)}}$. Intuitively, the last term of the objective function ${\hat Q}{(q_1, q_2)}$ is convex, while the other terms are concave. We define ${V} \buildrel \Delta \over = \{q_1, q_2\}$ and let $S$ be the set of $V$ satisfying -; then the objective function of the problem is rewritten as follows: $${\hat Q}\left( V \right) = {Q_{ccav}}\left( V \right) + {Q_{cvex}}\left( V \right),$$ where
$$\begin{aligned}
{Q_{ccav}}\!\left(\!V\!\right) &= \sum\limits_{p = 1}^P \!{e_{p,1}\!{{\log }_2}\!\left[\!{1 \!+\! e_{p,2}\!\left(\!{{q_1} \!+\! {q_2}} \right)}\!\right]} \!+\! \sum\limits_{a = 1}^A \!{e_{a,3}\!{{\log }_2}\!\left[\!{1 \!+\! e_{a,4}\!\left(\!{{q_1} \!-\! {q_2}}\!\right)}\! \right]} \\
&\quad \!+\!\! \sum\limits_{h = 1}^H \!\!{\Bigl\{\! {e_{h,5}{{\log }_2}\!\left[ \!{1 \!+\! e_{h,6}\!\!\left( {{q_1} \!+\! {q_2}} \right)}\! \right] \!+\! e_{h,7}{{\log }_2}\!\left[\! {1 \!+\! e_{h,8}\!(1 \!-\! \tau_h^{(n-1)}){q_1} \!-\! e_{h,8}\!(1 \!+\! \tau_h^{(n-1)}){q_2}} \right]} \!\Bigr\}} \!-\! \frac{\left( {q_1^2 \!+\! {b_m}{q_1}} \right)}{2{{a_m}}}, \quad
\end{aligned}$$
$${Q_{cvex}} = \frac{\left( {q_2^2 + {b_m}{q_2}} \right)}{2{a_m}}.$$
The problem is a *difference-of-convex-functions* (DC) programming problem, which can be solved efficiently by the *convex-concave procedure* (CCCP) [@Yuille2001]. The core idea of the CCCP is to linearize the last term (i.e., the convex function) by its first-order Taylor expansion at the current fixed point. We denote ${V^{(k-1)}} \buildrel \Delta \over = \left\{\!q_1^{(k-1)}, q_2^{(k-1)}\!\right\}$ as the fixed point used at the $k$-th iteration; then the problem can be solved by the following sequential convex programming with linear constraints -: $$\begin{aligned}
\label{eq: subopt6}
{V^{\left( k \right)}} &= \arg \mathop {\max }\limits_{V \in S} {\tilde Q}\left( V \right) \\
&= \arg \mathop {\max }\limits_{V \in S} \left\{ {{Q_{ccav}}\!\left( V \right) + {V^T}\nabla {Q_{cvex}}\!\left( \!{{V^{\left( {k - 1} \right)}}} \right)}\! \!\right\},
% \textit{subject to} \quad & \text{constraints} \eqref{subopt5:a}-\eqref{subopt5:e}
\end{aligned}$$ where $\nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) = \frac{(2q_2^{\left( {k - 1} \right)} + {b_m})}{2{a_m}}$ is the gradient of $Q_{cvex}\left(V\right)$ at $V^{(k-1)}$ (taken w.r.t. $q_2$, since $Q_{cvex}$ does not depend on $q_1$). Ultimately, ${\tilde Q} (V)$ is a concave function, and thus $V^{(k)}$ can be efficiently obtained by numerical methods, such as Newton's or interior-point methods.
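The CCCP iteration can be illustrated with a minimal, self-contained numerical sketch. The toy objective below only mimics the structure of ${\hat Q}$ — one concave log term, a concave quadratic in $q_1$, and the convex quadratic in $q_2$ — with illustrative values for $a_m$ and $b_m$ and a simplified feasible set standing in for the constraints; the concave surrogate is maximized by brute-force grid search instead of a Newton or interior-point solver.

```python
import math

# Toy instance mirroring the DC structure of Q-hat: a concave log term,
# a concave quadratic in q1, and a convex quadratic in q2.
# a_m, b_m, cap and the feasible set are illustrative stand-ins.
a_m, b_m, cap = 1.0, 0.5, 4.0

def Q_ccav(q1, q2):
    return math.log2(1.0 + q1 + q2) - (q1**2 + b_m * q1) / (2 * a_m)

def Q_cvex(q2):
    return (q2**2 + b_m * q2) / (2 * a_m)

def grad_Q_cvex(q2):
    return (2 * q2 + b_m) / (2 * a_m)

def Q(q1, q2):
    return Q_ccav(q1, q2) + Q_cvex(q2)

def feasible(q1, q2):
    return 0.0 <= q2 <= q1 and q1 + q2 <= cap

def cccp(q1, q2, tol=1e-6, grid=200, max_iter=50):
    """CCCP: maximize the concave surrogate Q_ccav + g * q2, g fixed at V^(k-1)."""
    trace = [Q(q1, q2)]
    for _ in range(max_iter):
        g = grad_Q_cvex(q2)                      # linearization slope at V^(k-1)
        best, arg = -float("inf"), (q1, q2)
        for i in range(grid + 1):                # brute-force surrogate maximization
            for j in range(grid + 1):
                u, v = cap * i / grid, cap * j / grid
                if feasible(u, v):
                    s = Q_ccav(u, v) + g * v
                    if s > best:
                        best, arg = s, (u, v)
        q1, q2 = arg
        trace.append(Q(q1, q2))
        if trace[-1] - trace[-2] < tol:          # objective has stabilized
            break
    return (q1, q2), trace

V, trace = cccp(0.5, 0.24)
# The CCCP produces a non-decreasing objective sequence.
assert all(b >= a - 1e-9 for a, b in zip(trace, trace[1:]))
```

On this toy problem the objective sequence is non-decreasing, matching the monotonicity property that the convergence analysis below relies on.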
In general, the CCCP can start from any point within the feasible region defined by the constraints -. However, we choose the initial value $V^{(0)} \!\!=\!\! \left\{\!q_1^{(0)}\!\!\left(\!p_l^{(\!n-1\!)}\!, \beta^{(\!n-1\!)}\!\right)\!\!, q_2^{(0)}\!\!\left(\!p_l^{(\!n-1\!)}\!, \beta^{(\!n-1\!)}\!\right)\!\!\right\}$ to guarantee the convergence of the proposed iterative method. The entire procedure of the CCCP algorithm is summarized in Algorithm \[algorithm2\].
**Input:** The previous result of the BCD algorithm $\{{p_l^{(n-1)}, \beta^{(n-1)}, \bm{\psi}^{(n-1)}}\}$. **Initialize:** Set $k \!=\! 1$, a tolerance $\xi_2 \!>\! 0$, and a feasible solution $V^{(0)} \!\!=\!\! \{q_1^{(0)}(p_l^{(\!n-1\!)}\!, \beta^{(n-1)}), q_2^{(0)}(p_l^{(n-1)}\!, \beta^{(n-1)}\!)\!\}$. **Repeat:** , and continue. **Output:** The optimal solution $ V^* = \{{q_1}^*, {q_2}^*\}$.
To analyze the convergence of the CCCP algorithm, we state the following theorem, which is proved in Appendix \[App: CCCP\_proof\].
\[theorem: convergence\_optimal\_solution\_CCCP\] Algorithm \[algorithm2\], which utilizes the CCCP technique to solve the joint optimization problem , converges to a local optimum $V^{*}$ by generating a sequence $V^{(k)}$ satisfying ${\hat Q}\left( {{V^{\left( k \right)}}} \right) > {\hat Q}\left( {{V^{\left( {k - 1} \right)}}} \right), \forall k \ge 1$.
### The overall iterative algorithm for the JA scheme
After the joint energy price and service time estimation, we optimally perform the time allocation for the IoT devices by solving the problem . These steps are repeated until the stopping criterion of the algorithm is satisfied. The overall iterative algorithm for the JA scheme is summarized in Algorithm \[algorithm3\]. Its convergence is analyzed in the following theorem, which can be proved similarly to Theorem \[theorem: convergence\_complexity\_BCD\].
**Input:** The previous output $\{{p_l}^{(n-1)}, {\beta}^{(n-1)}, {\bm{\psi}}^{(n-1)}\}$. **Initialize:** $n = 1$, $\{{p_l}^{(0)}, {\beta}^{(0)}, {\bm{\psi}^{(0)}}\}$, tolerance $\xi_1 > 0$. **Compute:** the leader’s utility ${U_L}\!\left( {{{p_l}^{\left( 0 \right)}},{{\beta}^{\left( 0 \right)}}}, {\bm{\psi}}^{(0)} \right)$. **Repeat:** , and continue. **Output:** The optimal solution $\bm{\chi^*} = \{{p_l}^*, {\beta}^*, {\bm{\psi}}^*\}$.
\[theorem: convergence\_optimal\_solution\_BCD2\] For the JA scheme, the proposed iterative Algorithm \[algorithm3\] converges to the SE point, and it does so in polynomial time.
The proof is similar to that of Theorem \[theorem: convergence\_complexity\_BCD\].
Numerical Results
=================
In this section, we first investigate the inefficiency of the proposed approach, and then verify the revenues of both providers in comparison with conventional transmission modes. We consider a carrier frequency of the RF signals of $2.4$ GHz. The bandwidth of the RF signals and the antenna gain of the PB are $10$ MHz and $6$ dBi, respectively. The IoT devices (i.e., AWPDs and HWPDs) have antenna gains of $6$ dBi [@Kim2010]. Unless otherwise specified, the default backscatter rate of the backscatter devices is set to 10 kbps. In our setup, both the AWPDs and HWPDs have energy harvesting and data transmission efficiency coefficients of $\varphi = 0.6$ and $\phi = 0.5$, respectively.
Inefficiency of the Proposed Approach
-------------------------------------
Due to the selfish behavior of the players, the proposed approach may be inefficient; in this section, we evaluate this inefficiency by comparing it with two baseline scenarios, i.e., the non-negotiated energy trading and social welfare scenarios. It is worth noting that, in the non-negotiated energy trading scenario, we choose a fixed energy price at half of the maximum value. In Fig. \[fig:Baseline\], the utility functions of the ISP for the proposed approach and the baseline scenarios are plotted versus the distance between the PB and the IoT devices. It can be observed that the social welfare scenario outperforms the proposed approach and the non-negotiated energy trading scenario, due to the cooperation between the ISP and the ESP in the social welfare scenario. In addition, the profit of the non-negotiated energy trading scenario is also higher than that of the proposed approach. The reason is that the ISP in the non-negotiated energy trading scenario can find an optimal transmission power of the PB to maximize its revenue under the fixed energy price, whereas the ESP decides this quantity in the proposed approach.
Fig. \[fig:PoA\] shows the PoA ratio of the proposed approach using the PA and JA schemes. In both cases, the PoA ratios are small when the IoT devices are located near the PB. Moreover, the PoA ratios are equal to zero when the distance between the PB and the IoT devices is greater than 12 meters and 18 meters in the cases of the PA and JA schemes, respectively. This is because when the ISP and ESP are far apart, the achievable profit of the ISP is lower than the energy cost. Thus, there is no successful negotiation between these providers, as shown in Fig. \[fig:Baseline\].
Revenue Performance of the ISP and ESP
--------------------------------------
For performance comparison, we consider three conventional methods, i.e., the BBCM, the HTTCM, and the TDMA mechanism. It is worth noting that, in the TDMA mechanism, all IoT devices are allocated identical time resources. In this case, the total backscatter time of the IoT devices accounts for half the length of the normalized time frame, as illustrated in Fig. \[fig:System\_model\](b). Thus, the operation time of the PB, $\beta$, is fixed and equal to the total backscatter time of the IoT devices.
### Identical number of devices for each IoT devices’ set
We first evaluate the performance of the proposed approach under the setting that the numbers of devices of all IoT device types are equal (i.e., 10 devices per type). Fig. \[fig:Varying\_pr\] shows the variation of the leader’s (i.e., the ISP’s) payoff as the benefit per transmitted bit ($p_r$) increases in the range of $0.1$ to $1$. The common observation is that the utilities of the ISP obtained by all methods increase with the benefit per transmitted bit. This is an obvious result: with more benefit gained from selling each data bit, the ISP can purchase energy for a longer time, at a higher transmission power, or both, and thus its profit also increases. In particular, we first observe that the proposed approach, the BBCM, and the HTTCM always perform better when solved by scheme 2 than by scheme 1. The reason is that scheme 2 optimizes the profit of the IoT service with respect to both the offered price ($p_l$) and the active time of the PB ($\beta$), and thus it can choose a better local optimal point than scheme 1. Next, we also observe that the proposed method solved by scheme 2 achieves the highest profit over the considered range of $p_r$. In contrast, the proposed method solved by scheme 1 obtains a lower profit than the TDMA mechanism when the benefit per transmitted bit is smaller than 1. Note that scheme 1 prefers offering a high energy price over purchasing energy for a long period, and thus the optimal purchasing period in scheme 1 is smaller than in the TDMA mechanism. On the other hand, the offered price has more weight than the energy purchasing time in the energy cost. For this reason, scheme 1 may not perform as well as the TDMA mechanism when the benefit per transmitted bit is below 1. Furthermore, the BBCM solved by either scheme performs much worse than the other methods, with scheme 2 performing slightly better than scheme 1.
$\begin{array}{ccc}
\epsfxsize=2.8 in \epsffile{Figures/Simulation_Results/Follower_Utility_2D} &
\epsfxsize=3.1 in \epsffile{Figures/Simulation_Results/Follower_Utility_Varying_pr} \\ [-0.2cm]
(a) & (b)
\end{array}$
In Fig. \[fig:Varying\_dis\], we plot the profit of the leader versus the distance between the PB and the IoT devices. At the distance of 2 meters, the profits of the ISP obtained by the BBCM using both proposed schemes are much greater than those of the other approaches, because the transmission power of the PB has the major impact on the performance of this approach. However, its performance decreases drastically as the distance increases. By contrast, the profits of the ISP obtained by the proposed approach and the HTTCM decrease only slightly while the distance is smaller than 10 meters. There is no more profit for the proposed approach and the HTTCM using scheme 1 when the distance is greater than or equal to 14 and 16 meters, respectively, whilst the profits of these approaches solved by scheme 2 only reach zero at the distance of 20 meters.
Next, we investigate the profit of the follower (i.e., the ESP) in Fig. \[fig: FollowerUtility\]. First, recall that this profit is a quadratic function of the transmission power of the PB, as shown in . Fig. \[fig: FollowerUtility\](a) shows the offered prices corresponding to the optimal points of the profit curve for both proposed schemes. Furthermore, as shown in the figure, scheme 1 prefers adjusting the offered price over the energy purchasing time, while scheme 2 balances both factors. Thus, the benefit the ESP gains by selling energy is much greater under scheme 1 than under scheme 2. After applying the optimal transmission power of the PB as in , the profit of the ESP depends on the price and the energy purchasing time (i.e., the operation time of the PB) requested by the IoT service, as shown in Fig. \[fig: FollowerUtility\](b). It can be seen that this profit increases linearly with the energy purchasing time and non-linearly with the offered energy price.
![Runtime of Proposed methods.[]{data-label="fig: Runtime1000"}](Figures/Simulation_Results/Running_Time_1000.pdf)
Fig. \[fig: Runtime1000\] shows the complexity of the proposed method solved by both schemes. Because both schemes take only a few iterations to converge, to evaluate their computational efficiency more precisely we measure the runtime of these schemes over 1000 runs with different numbers of IoT devices (i.e., $N = 5$, $N = 10$, $N = 15$, with equal numbers of devices for each type). In general, we observe that the runtime of both schemes increases with the number of IoT devices, and the maximum average runtime, obtained by scheme 1 with 45 devices, is just over 5 seconds. For a small number of IoT devices (i.e., $N = 5$), the computational efficiency of scheme 1 is better than that of scheme 2, since scheme 2 has to run two iterative algorithms (i.e., both inner and outer iterative loops) compared to the single iterative algorithm implemented in scheme 1. The time scheduling of scheme 1 runs slower than that of scheme 2 in all three cases because their feasible regions are different. Therefore, when the number of IoT devices increases (i.e., $N = 10$, $N = 15$), scheme 2 runs faster than scheme 1, as time scheduling accounts for the majority of the runtime of both schemes.
### Different number of devices for each IoT devices’ set
$\begin{array}{ccc}
\epsfxsize=2.2 in \epsffile{Figures/Simulation_Results/Varying_number_of_AWPDs} &
\epsfxsize=2.2 in \epsffile{Figures/Simulation_Results/Varying_number_of_HWPDs} &
\epsfxsize=2.2 in \epsffile{Figures/Simulation_Results/Varying_number_of_PWPDs} \\ [-0cm]
(a) & (b) & (c)
\end{array}$
We now investigate the profit of the ISP by varying the number of devices of one type from $3$ to $30$, while keeping the numbers for the other types fixed at $10$. Fig. \[fig:vary\_numb\_Of\_Devices\](a) compares the profit of the ISP among the approaches when varying the number of AWPDs. In general, as long as the number of AWPDs is smaller than 24, the profit of the proposed method solved by scheme 2 increases and is the highest among all approaches. However, when this number is larger than or equal to 24, the profits of the proposed method solved by the two schemes are equal: no more profit is added to the proposed method because of the power constraint violation of the AWPDs. The profit of the TDMA mechanism is greater than that of the proposed method solved by scheme 1 when the number of AWPDs is smaller than 15. The reason is that as the number of AWPDs increases, the harvesting time reduces, and thus the throughput obtained by active transmission declines. The profits of the HTTCM solved by both schemes show the same trend but are smaller than those of the proposed method; they are also smaller than that of the TDMA mechanism. Similar trends in the profits of the proposed approach, the HTTCM, and the BBCM are observed in Fig. \[fig:vary\_numb\_Of\_Devices\](b); the reason is that HWPDs can perform both functions, i.e., backscattering and active transmission. By contrast, the profit of the TDMA mechanism is greater than that of all other approaches before it remains unchanged once the number of devices is greater than or equal to 9. In addition, as illustrated in Fig. \[fig:vary\_numb\_Of\_Devices\](c), increasing the number of PWPDs has no impact on the profit of the proposed scheme due to the low backscatter rate, whereas the profit of the TDMA mechanism reduces linearly because the time resources must be shared with the PWPDs.
Conclusion
==========
In this paper, we have studied a Stackelberg game to maximize the profits of both service providers in heterogeneous IoT wireless-powered communication networks. The *Stackelberg Equilibrium* (SE), which specifies the proper price for the energy service, the emitting time of the PB, and the optimal scheduling times for the communication service, has been obtained via the closed-form solution and the PA/JA schemes that exploit the BCD technique. Both theoretical and numerical analyses have shown the convergence and computational efficiency of the iterative algorithms. Simulation results have shown that the proposed scheme always outperforms the baseline methods in terms of the achieved profit of the ISP. They have also revealed that the JA scheme outperforms the PA scheme in all cases.
The proof of Lemma \[lemma: convex\_proof\_subopt3\] {#App:lemma3}
====================================================
First, we consider the function ${\tilde{G}} {(\bm{\psi})}$ in , which is composed of four parts: three terms $G_p(\bm{\theta}) = \!\sum_{p = 1}^P g_p(\theta_p)$, $G_a(\bm{\nu}) = \!\sum_{a=1}^A g_a(\nu_a)$, and $G_h(\bm{\tau},\bm{\mu}) = \!\sum_{h = 1}^H g_h(\tau_h,\mu_h)$, and a constant $\tilde{C}$, where $$\begin{aligned}
\left\{\! \begin{array}{ll}
\!\!{g_p}({\!\theta _p\!})\!&\!=\! {{\tilde{c}_{p,1}}{\theta _p}},\\
\!\!{g_a}({\!\nu _a\!})\!&\!=\! {{p_r}{\Omega_D}{\nu _a}{{\log }_2}\!\left(\! {1\! +\! \frac{{{\tilde{c}}_{a,2}}}{{{\nu _a}}}} \right)},\\
\!\!{g_h}({\!\tau _h},{\mu _h}\!)\! &\!=\! {\tilde{c}_{h,3}} {\tau_h} \!+\! {p_r}{\Omega_D}{\mu_h} \!\log_2 \!\left[\! {1 \!+\! \frac{{\tilde{c}_{h,4} - \tilde{c}_{h,5}{\tau _h}}}{{{\mu _h}}}} \!\right]\!.
\end{array}\!\! \right.
\end{aligned}$$ It is worth noting that the first term $G_p(\bm{\theta})$ is a linear function of $\theta_p, \forall p \in \{1,\dots,P\}$. The second term $G_a (\bm{\nu})$ and the third term $G_h (\bm{\tau}, \bm{\mu})$ are concave functions w.r.t. $\nu_a, \forall a \in \{1,\dots,A\}$ and $(\tau_h, \mu_h), \forall h \in \{1,\dots,H\}$, respectively, which is straightforward to prove by considering their Hessian matrices. Moreover, $\tilde{C}$ is a constant for fixed $\{p_l^{(n)}, \beta^{(n)}\}$. Finally, we can conclude that the function $\tilde{G}$ is a concave function w.r.t. $\bm{\psi} \buildrel \Delta \over = { ( \bm{\theta}, \bm{\nu}, \bm{\tau}, \bm{\mu} )}$, and the problem can be efficiently solved by the interior-point method [@Boyd2004].
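The concavity claim for the second term can be spot-checked numerically. The sketch below uses illustrative stand-in constants (not values from the system model) and verifies the midpoint (Jensen) inequality that every concave function must satisfy for $g_a(\nu_a) = p_r \Omega_D \nu_a \log_2(1 + \tilde{c}_{a,2}/\nu_a)$:

```python
import math

# Illustrative stand-ins for p_r * Omega_D and c~_{a,2}; any positive
# values work, since g_a is the perspective of the concave log function.
pr_Omega, c = 1.0, 2.0

def g_a(nu):
    # g_a(nu) = p_r * Omega_D * nu * log2(1 + c / nu), for nu > 0
    return pr_Omega * nu * math.log2(1.0 + c / nu)

# Midpoint (Jensen) check: a concave function satisfies
# g((x + y)/2) >= (g(x) + g(y)) / 2 for all x, y in its domain.
pts = [0.1 + 0.1 * k for k in range(40)]
for x in pts:
    for y in pts:
        assert g_a((x + y) / 2) >= (g_a(x) + g_a(y)) / 2 - 1e-12
```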
The proof of Theorem \[theorem: convergence\_complexity\_BCD\] {#App: BCD_proof}
==============================================================
At the $n$-th iteration, from Lemmas \[lemma: convex\_proof\_subopt1\], \[lemma: convex\_proof\_subopt2\], and \[lemma: convex\_proof\_subopt3\], we have: $$\begin{aligned}
\left\{\! \begin{array}{ll}
{U_L}\!{\left({p_l^{(\!n\!)}}\!, {\beta^{(\!n-1\!)}}\!, {\bm{\psi}^{(\!n-1\!)}}\!\right)} \!\ge\! {U_L}\!{\left({p_l^{(\!n-1\!)}}\!, {\beta^{(\!n-1\!)}}\!, {\bm{\psi}^{(\!n-1\!)}}\!\right)},\\
{U_L}{\left({p_l^{(n)}}\!, {\beta^{(n)}}\!, {\bm{\psi}^{(n-1)}}\!\right)} \!\ge\! {U_L}{\left({p_l^{(n)}}\!, {\beta^{(n-1)}}\!, {\bm{\psi}^{(n-1)}}\!\right)}\!,\\
{U_L}{\left({p_l^{(n)}}, {\beta^{(n)}}, {\bm{\psi}^{(n)}}\right)} \ge {U_L}{\left({p_l^{(n)}}, {\beta^{(n)}}, {\bm{\psi}^{(n-1)}}\right)}.
\end{array}\!\! \right.
\end{aligned}$$
Due to the transitive property, we obtain ${U_L}\!{\left(\!{p_l^{(\!n\!)}}\!, {\beta^{(\!n\!)}}\!, {\bm{\psi}^{(\!n\!)}}\!\!\right)} \!\ge\! {U_L}\!{\left(\!{p_l^{(\!n-1\!)}}\!, {\beta^{(\!n-1\!)}}\!, {\bm{\psi}^{(\!n-1\!)}}\!\!\right)}\!$. It is worth noting that the feasible region determined by the constraints - is a compact set and always contains the output $\bm{\chi}^{(n)}$, and thus **Algorithm \[algorithm1\]** converges to the optimal solution $\bm{\chi}^*$. In addition, the line search and interior-point methods always find the optimal solutions of the problems , , in polynomial time [@Bertsimas1997]. The theorem is thus proved.
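The chain of inequalities above is the standard monotone-ascent argument behind block coordinate descent/ascent. A minimal sketch on a toy concave objective with closed-form per-block maximizers (not the actual utility $U_L$, whose subproblems are solved numerically) illustrates the mechanism:

```python
# Toy concave objective; its Hessian [[-2, -0.5], [-0.5, -2]] is
# negative definite, so each block update has a closed-form maximizer.
def f(x, y):
    return -(x - 1)**2 - (y - 2)**2 - 0.5 * x * y

x, y = 0.0, 0.0
vals = [f(x, y)]
for _ in range(20):
    x = 1 - y / 4        # exact maximizer of f(., y): solve df/dx = 0
    y = 2 - x / 4        # exact maximizer of f(x, .): solve df/dy = 0
    vals.append(f(x, y))

# Each block update cannot decrease f, so the objective sequence is
# non-decreasing and, being bounded above, it converges.
assert all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))
```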
The proof of Theorem \[theorem: convergence\_optimal\_solution\_CCCP\] {#App: CCCP_proof}
======================================================================
We first prove the convergence of **Algorithm \[algorithm2\]**. For $k > 1$, we have: $$\begin{aligned}
\label{eq: converge_CCCP}
{\hat{Q}}\left( {{V^{\left( k \right)}}} \right) & \buildrel \Delta \over = {Q_{ccav}}\left( {{V^{\left( k \right)}}} \right) + {Q_{cvex}}\left( {{V^{\left( k \right)}}} \right)\\
&\ge {Q_{ccav}}\left( {{V^{\left( k \right)}}} \right) + {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {\left( {{V^{\left( k \right)}} - {V^{\left( {k - 1} \right)}}} \right)^T}\nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right)\\
&\ge {Q_{ccav}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {\left( {{V^{\left( {k - 1} \right)}}} \right)^T}\nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) - {\left( {{V^{\left( {k - 1} \right)}}} \right)^T}\nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right)\\
&= {Q_{ccav}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) \buildrel \Delta \over = {\hat{Q}}\left( {{V^{\left( {k - 1} \right)}}} \right),
\end{aligned}$$ where the first inequality in is derived from the first-order Taylor approximation of a convex function [@Boyd2004]: $${Q_{cvex}}\left( {{V^{\left( k \right)}}} \right) \ge {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {\left( {{V^{\left( k \right)}} - {V^{\left( {k - 1} \right)}}} \right)^T}\nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right).$$ The second inequality is obtained from . We define ${S_V}$ as the set of $V$ satisfying the constraints -. Since $S_V$ is a compact set and $V^{(k)}$ always lies within the feasible set $S_V$, the CCCP algorithm converges to $V^{*}$, i.e., $V^{(k)} = V^{(k-1)} = V^{*}$. Thus, **Algorithm \[algorithm2\]** is convergent.

Next, we prove that $V^{*}$ is a local optimum of the optimization problem . We define a constraint set ${C}\left(V\right) \buildrel \Delta \over = \left\{ {{C_1}\left( V \right),{C_2}\left( V \right), \ldots ,{C_L}\left( V \right)} \right\}$, where $L$ is the total number of constraints in the problem . Since $S_V$ is a compact set and ${\hat{Q}}\left(V\right)$ is a concave function of $V$, the KKT conditions for the optimization problem are as follows: $$\left\{ {\begin{array}{*{20}{c}}
{\nabla {Q_{ccav}}\left( {{V^{\left( k \right)}}} \right) + \nabla {Q_{cvex}}\left( {{V^{\left( {k - 1} \right)}}} \right) + {Y^T}\nabla C\left( {{V^{\left( k \right)}}} \right) = 0}\\
{Y = \left[ {{y_1},{y_2}, \ldots ,{y_L}} \right],{y_i} \ge 0,{y_i}{C_i}\left( {{V^{\left( k \right)}}} \right) = 0,\forall i}
\end{array}} \right.$$ where $Y$ is the optimal Lagrangian variable set for $V^{(k)}$. When $V^{(k)} = V^{(k-1)} = V^{*}$, the above equation set can be rewritten as follows: $$\left\{ {\begin{array}{*{20}{c}}
{\nabla {Q_{ccav}}\left( {{V^*}} \right) + \nabla {Q_{cvex}}\left( {{V^*}} \right) + {Z^T}\nabla C\left( {{V^*}} \right) = 0}\\
{Z = \left[ {{z_1},{z_2}, \ldots ,{z_L}} \right],{z_i} \ge 0,{z_i}{C_i}\left( {{V^*}} \right) = 0,\forall i}
\end{array}} \right.$$ where $Z$ is the optimal Lagrangian variable set for $V^{*}$. This means that $V^{*}$ satisfies the KKT conditions and is thus a local optimum of the problem .
[100]{}
D. Bandyopadhyay, and J. Sen, “Internet of Things: Applications and Challenges in Technology and Standardization," *Wireless Pers. Commun.,* vol. 58, no. 1, pp. 49–69, 2011.
A. A. Fuqaha et al., “Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications," *IEEE Commun. Surv. & Tut.*, vol. 17, no. 4, pp. 2347-2376, 2015.
D. W. K. Ng et al., “The Era of Wireless Information and Power Transfer," in *Wireless Information and Power Transfer: Theory and Practice,* Wiley, pp.1-16, 2019.
H. Ju and R. Zhang, “Throughput maximization in wireless powered communication networks,” *IEEE Trans. on Wireless Commun.*, vol. 13, no. 1, pp. 418-428, Jan., 2014.
S. Lohani, R. A. Loodaricheh, E. Hossain and V. K. Bhargava, “On Multiuser Resource Allocation in Relay-Based Wireless-Powered Uplink Cellular Networks," *IEEE Trans. on Wireless Commun.*, vol. 15, no. 3, pp. 1851-1865, Mar. 2016.
A. Salem and K. A. Hamdi, “Wireless Power Transfer in Multi-Pair Two-Way AF Relaying Networks," *IEEE Transactions on Communications*, vol. 64, no. 11, pp. 4578-4591, Nov. 2016.
M. -L. Ku, W. Li, Y. Chen, and K. J. R. Liu, “Advances in energy harvesting communications: Past, present, and future challenges,” *IEEE Commun. Surveys Tuts.,* vol. 18, no. 2, pp. 1384–1412, 2nd Quart., 2016.
N. Van Huynh et al., “Ambient Backscatter Communications: A Contemporary Survey," *IEEE Commun Surv. & Tut.,* vol. 20, no. 4, pp. 2889-2922, 2018.
A. Bletsas, S. Siachalou and J. N. Sahalos, “Anti-collision backscatter sensor networks," *IEEE Transactions on Wireless Communications*, vol. 8, no. 10, pp. 5018-5029, Oct. 2009.
J. Kimionis, A. Bletsas and J. N. Sahalos, “Increased Range Bistatic Scatter Radio,” in IEEE Transactions on Communications, vol. 62, no. 3, pp. 1091-1104, March 2014.
V. Liu et al., “Ambient backscatter: Wireless communication out of thin air,” in *Proc. ACM SIGGOMM*, Hong Kong, Aug. 2013, pp. 39–50.
S. Gong et al., “Backscatter Relay Communications Powered by Wireless Energy Beamforming," *IEEE Trans. on Commun.,* vol. 66, no. 7, pp. 3187-3200, Jul., 2018.
P. Wang et al.,“Optimal Resource Allocation for Secure Multi-User Wireless Powered Backscatter Communication with Artificial Noise," *IEEE INFOCOM*, pp. 460-468, France, 2019.
B. Lyu, Z. Yang, G. Gui, and Y. Feng, “Wireless powered communication networks assisted by backscatter communication,” *IEEE Access*, vol. 5, pp. 7254-7262, Mar., 2017.
D. T. Hoang et al.,“Overlay RF-powered backscatter cognitive radio networks: A game theoretic approach," *IEEE Inter. Conf. on Commun.,* pp. 1-6, Paris, 2017.
W. Wang et al., “Stackelberg Game for Distributed Time Scheduling in RF-Powered Backscatter Cognitive Radio Networks,” in IEEE Trans. on Wireless Commun., vol. 17, no. 8, pp. 5606-5622, Aug., 2018.
W. Chen, C. Li, S. Gong, L. Gao, and J. Xu, “Joint transmission scheduling and power allocation in wirelessly powered hybrid radio networks,” in *Proc. IEEE ICNC, Honolulu*, HI, USA, Feb. 2019, pp. 515–519.
A. Mohsenian-Rad et al., “Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid," *IEEE Trans. on Smart Grid*, vol. 1, no. 3, pp. 320-331, 2010.
P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” *Jour. Optim. Theory Appl.*, vol. 109, no. 3, pp. 475–494, 2001.
A. L. Yuille and A. Rangarajan, “The concave-convex procedure (CCCP)," *Proc. Adv. Neural Inf. Process. Syst.*, pp. 1033–1040, Apr. 2001.
T. Roughgarden, “Intrinsic robustness of the price of anarchy,” *J. ACM,* vol. 62, no. 5, Nov. 2015, Art. no. 32.
N. F. Hilliard, P. N. Alevizos and A. Bletsas,“Coherent Detection and Channel Coding for Bistatic Scatter Radio Sensor Networking," *IEEE Trans. on Commun.,* vol. 63, no. 5, pp. 1798-1810, May, 2015.
C. A. Balanis, *Antenna Theory: Analysis and Design*. NY, Wiley, 2012.
D. Fudenberg, J. Tirole, *Game Theory*, MIT Press, 1991.
P. N. Alevizos, K. Tountas and A. Bletsas, “Multistatic Scatter Radio Sensor Networks for Extended Coverage,” in IEEE Transactions on Wireless Communications, vol. 17, no. 7, pp. 4522-4535, July 2018.
A. Bletsas, A. G. Dimitriou, and J. N. Sahalos, “Improving backscatter radio tag efficiency,” IEEE Trans. Microwave Theory Tech., vol. 58, no. 6, pp. 1502–1509, Jun. 2010
FCC Rules for RF devices, part 15, Oct 2018. Available at: http://afar.net/tutorials/fcc-rules/.
D. Y. Kim and D. I. Kim, “Reverse-link interrogation range of a UHF MIMO-RFID system in Nakagami-m fading channels,” *IEEE Trans. on Indus. Electronics*, vol. 57, no. 4, pp. 1468-1477, Apr., 2010.
S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge, U.K.: Cambridge Univ. Press, 2004
D. Bertsimas and J. N. Tsitsiklis, *Introduction to Linear Optimization,* vol. 6. Belmont, MA, USA: Athena Scientific, 1997.
[^1]: Ngoc-Tan Nguyen is with the School of Electrical and Data Engineering, University of Technology Sydney, Sydney, NSW 2007, Australia and JTIRC, VNU University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam (e-mail: [email protected]).
[^2]: D. T. Hoang, N. N. Diep, and E. Dutkiewicz are with the School of Electrical and Data Engineering, University of Technology Sydney, Sydney, NSW 2007, Australia.
[^3]: N. N. Hoang, and N. Q. Tuan are with the JTIRC, VNU University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam.
---
abstract: |
When surfing the Internet, individuals leak personal and corporate information to third parties whose (legitimate or not) businesses revolve around the value of collected data. The implications are serious, from a person unwillingly exposing private information to an unknown third party, to a company unable to manage the flow of its information to the outside world. The point is that individuals and companies are more and more kept out of the loop when it comes to controlling private data.
With the goal of empowering informed choices in information leakage through the Internet, we propose [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}, a system for comprehensive and collaborative auditing of data that flows to Internet services. Similarly to open-source efforts, we enable users to contribute in building awareness and control over privacy and communication vulnerabilities. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} provides the core infrastructure and algorithms to let individuals and enterprises regain control over the information exposed on the web. We advocate [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} as a data processing layer positioned right below HTTP in the host protocol stack. This enables the inspection of clear-text data even when HTTPS is deployed, and the application of processing rules that are customizable to fit any need. Preliminary results obtained by executing a prototype implementation on ISP traffic traces demonstrate the feasibility of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}.
author:
- Hassan Metwalley
- Stefano Traverso
- Marco Mellia
- |
\
Stanislav Miskovic
- Mario Baldi
title: 'CrowdSurf: Empowering Informed Choices in the Web[^1]'
---
Introduction {#sec:intro}
============
Users increasingly rely on Internet services, looking for news and products, accessing social networks, organizing their life, etc. There are companies that base their business on the collection of personal information implicitly or explicitly embedded in the above users’ activities. This results in leakage of information that users and companies prefer to keep private, in people being exposed to dubious third-party services, as well as in web companies (sometimes illegitimately) tracking their users. This phenomenon is ubiquitous, with even the major players taking part in it [@facebooksued; @GoogleSafari; @kramer:pnas].
Hence, users’ concerns about privacy and information leakage have largely increased, motivated also by recently exposed government surveillance programs. However, no means exist to control which data is handed to the web.
In this context, a common misconception is that encryption solves the problem. Accordingly, HTTPS usage has increased by 100% each year, reaching about 42% of web flows in June 2014 [@finamore:conext14]. In reality, the effects are quite different and rather exacerbate the problem. Firstly, encryption increases the value of data. Specifically, web services that deploy encryption establish a monopoly on information by precluding any other party from accessing it, thus gaining a huge advantage in today’s Internet, where many businesses revolve around user information. Secondly, when HTTPS is deployed, users have no chance to rely on third parties to check and possibly choose which (personal) information they are sharing.
In this scenario, we advocate the need for a communication model where users are explicitly offered the freedom to i) understand which services get their data, and ii) govern which information they are asked to exchange. We envision a holistic and flexible solution to verify and control the information exchanged on the Internet, whether using a web browser or running a smartphone app, and whether connected to the corporate network or to a public WiFi hotspot.
[[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}, presented in this paper, provides a framework for such a solution. It is designed as an open service to which anyone can easily contribute. A part of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} resides on client devices to both provide visibility on traffic and possibly act upon it (e.g., by modifying or blocking information). We conceived [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} as a new layer that sits right between the application and the protocol stack, where information has not yet been encrypted. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} targets web surfing, and thus HTTP, the new “narrow waist of the Internet” [@Popa:hotnets2010]. This enables both the protection of users’ data and optional contributions to the system by users themselves. Anyone can contribute according to their level of expertise or convenience, from teams of security researchers who can collaboratively measure intricate signs of behind-the-scenes communications among service providers, to novice users who can simply offer anonymized samples of their traffic or vote on the legitimacy of data leaving their devices.
Another part of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} resides on open cloud servers performing intensive processing tasks over massive datasets obtained through the contribution of volunteers. Specifically, the cloud component runs algorithms that clean and rank users’ votes, index voluntarily submitted traffic, and attempt to discover unknown types of information leakage. All of the data gathered and processed enables the cloud [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} component to compute pieces of [*advice*]{} about the trustworthiness of web services. This advice is shared with all the resident components, which can leverage it to support users in making informed decisions. Users can create [*rules*]{} based on the received advice to enable fine-grained control on the information flow. For instance, users can choose to block undesired services, filter private information, or explicitly embrace third-party services.
The technology offered by [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} is essential not only for individual users, but also for companies that need to control the information entering and leaving the corporate network. Currently, companies are forced to trust the devices connected to the network and have a hard time verifying the information they exchange. The so-called “BYOD” (Bring Your Own Device) phenomenon and the reduced efficacy of traditional approaches based on firewalls and IDS’s (undermined by encryption [@finamore:conext14]) further exacerbate the problem. In the corporate scenario, the open cloud service is replaced by a private component that, through the resident components, can impose filters on any device connected to the corporate network. Finally, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} allows third parties to offer novel services, possibly complementing current client-server-based ones. For instance, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} could be used to enable a user to voluntarily use an accelerating proxy offered by an ISP only for specific types of traffic (e.g., when watching videos, but not when accessing her bank account). [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} would dynamically forward traffic to the proxy or directly to the final destination, depending on a set of rules provided by the user or by a third party and relying on advice obtained from the system. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} could even be instrumental in enabling users to monetize their personal information, should they decide to, as proposed in [@RiedererHotnets2011].
[[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}Description {#sec:system}
============================================================================================================================
We envision a crowd-sourced system in which users can voluntarily opt to collaborate by providing explicit (e.g., their opinion) and implicit (e.g., traffic samples) information on the web services they use. In return, they obtain information about web services. A *[[collector]{}]{}* – running in the cloud – gathers the information provided by the users and feeds an automatic [*[[data analyzer]{}]{}*]{}, which runs data mining algorithms to produce *advices*. An advice contains indications about the trustworthiness of web services. For instance, the [[data analyzer]{}]{} can flag services collecting/leaking users’ personal information, services that children should not access, or services known to host malicious software. A federated group of experts, the *[[advising community]{}]{}*, inspects the results provided by the [[data analyzer]{}]{} and interacts with it to generate the advices. Following a collaborative approach similar to Wikipedia and the Electronic Frontier Foundation[^2] (EFF), users are invited to increase the system’s “wisdom”. They can be active in controlling the personal information they expose to services, in forming the advices, and in voluntarily donating portions of their browsing activity, i.e., anonymized HTTP-level traces. The advising community is supported by data mining algorithms that automatically raise flags.
The Internet offers some tools that help users avoid disclosing personal information when browsing the web, e.g., popular browser plugins such as DoNotTrackMe[^3], EFF’s Privacy Badger[^4], WoT[^5] and Ghostery[^6]. For mobile terminals, some proposals offer similar ideas [@Agarwal:2013:PDM:2462456.2464460; @Enck:2010:TIT:1924943.1924971]. Each targets only specific aspects of privacy leakage. Some leverage the idea of a crowd-sourced approach to inform users about website trustworthiness. More holistic technologies such as Tor [@dingledine2004tor] protect users’ identity and guard against traffic-inspection attacks, but they do not curb the personal information exposed to servers at the application layer. Similarly, companies offer solutions to control web browsing [@safebrowsing]. Yet, what they offer is unknown, and mostly un-verifiable. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} is all of them, and none of them. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}’s challenge is in offering a unified system that overcomes the limitations of current systems, which often do not cooperate and are dominated by manual decisions. We propose a flexible system that, based on the knowledge of the crowd and supported by automated algorithms, empowers users and companies, offering them the chance to regain control over private data.
[[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}System Design {#sec:design}
------------------------------------------------------------------------------------------------------------------------------
For the design of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} we follow a short list of simple design requirements: 1) **Crowd-sourced**: we want the system to engage users to improve its effectiveness. 2) **Anonymity**: we want [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} to never violate users’ privacy; hence, any contribution must be purged of any piece of personal information. 3) **Automated**: the system has to automatically process users’ contributions to generate the advices. 4) **Client centric**: [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} must be available on any device, as the default tool to support users’ choices. 5) **Easy to use**: we want [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} to be as simple and automatic as possible, to allow anyone to use it.
Given these principles, we imagine [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}’s cornerstone as a new layer added to the Internet stack. We expect users’ terminals, mobiles and personal computers alike, to embed the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} layer in their operating system. Fig. \[fig:stack\] represents the high-level architecture of the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} layer. It sits between the HTTP and the transport layer, where it handles HTTP traffic before it is eventually encrypted. This choice is motivated by the fact that today HTTP is “the” application layer [@Popa:hotnets2010].
Users asynchronously obtain advices from the [[advising community]{}]{}, and they are free to decide to what extent to take them into account: users may accept or overrule the notification of a potential danger. The system implements this feature through the [Advices to Rule-Sets]{} block in Fig. \[fig:stack\]. It controls how advices are translated into a set of *rules*, or *rule-set*. A rule consists of a *regular expression* and one or more *actions*. For each HTTP request, the [Rule Processor]{} looks for matches and applies the corresponding action, for instance [Block]{}, [Redirect]{}, [Modify]{}, [Log&Report]{}, etc., with [Allow]{} being the default one. This simple pattern-matching/action process has proved very flexible and very efficient. It is at the root of successful technologies such as those used in firewalls, antiviruses, traffic classifiers, etc.
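The pattern/action core of the [Rule Processor]{} can be sketched in a few lines of Python; the rules and actions below are invented for illustration and are not part of any actual [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} rule-set:

```python
import re

# Hypothetical rule-set: each rule pairs a regular expression with an action.
RULES = [
    (re.compile(r"doubleclick\.net"), "Block"),
    (re.compile(r"www\.google\.com/search"), "Redirect"),
    (re.compile(r"(dropbox|twitter)\.com"), "Log&Report"),
]

def process_request(url):
    """Apply the first matching rule; 'Allow' is the default action."""
    for pattern, action in RULES:
        if pattern.search(url):
            return action
    return "Allow"
```

For each outgoing HTTP request, `process_request` returns the action the layer should take, mirroring the default-allow semantics described above.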
Given that [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}aims at supporting a crowd-sourced approach, the [Log&Report]{} block is vital. It enables the collection of data samples before traffic is possibly encrypted. The layer can perform measurements at user’s will and under user’s control. The layer temporarily stores the measurement data locally until a certain amount is reached, at which point the layer transmits the data to the [[collector]{}]{}. Since protecting user’s privacy is strategic for [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}, we adopt different approaches to avoid compromising it. The anonymization block in Fig. \[fig:stack\] is responsible for this. First, it implements sampling policies, e.g., by logging only a fraction of traffic at random. This also reduces the amount of data to transfer. Second, it filters out any piece of personal information. E.g., by default, all key values are replaced by random strings by using cryptographic hash functions. Then, a pattern/action mechanism is used. As before, the community can supply pre-defined lists of anonymization practices, which can always be customized by the user. For instance a generic policy “remove all possible password fields” can be augmented with “never collect data when browsing my online bank account”. Third, each user is assigned a unique random identifier, rotated periodically (e.g., every day). Fourth, data on the [[collector]{}]{}will be stored for only the time needed to process it (also to limit the storage at the [[collector]{}]{}). 
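The value-hashing step of the anonymization block can be sketched as follows; the sampling ratio, salt handling, and function names are illustrative assumptions rather than the actual [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} implementation:

```python
import hashlib
import random
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

SAMPLE_RATIO = 0.1                    # log only a fraction of requests (assumed)
SALT = b"rotating-per-user-salt"      # rotated together with the user identifier

def anonymize(url):
    """Replace every query value with a salted cryptographic hash."""
    parts = urlsplit(url)
    hashed = [(k, hashlib.sha256(SALT + v.encode()).hexdigest()[:16])
              for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(hashed)))

def maybe_log(url):
    """Apply the sampling policy before anonymizing and reporting."""
    if random.random() < SAMPLE_RATIO:
        return anonymize(url)
    return None  # request not sampled, nothing leaves the device
```

Hashing is deterministic per salt, so the [[data analyzer]{}]{} can still correlate repeated values without ever seeing the original data.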
At last, since even information available at the network layer (e.g., IP addresses) could be exploited to trace back the identity of the user, transmission to the [[collector]{}]{}takes place through an encrypted channel established over a randomly chosen CrowdProxy, i.e., by employing other devices running the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}layer as relays. The [[collector]{}]{}automatically provides the identity of other [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}devices among which to randomly choose the relay.
Fig. \[fig:scenarios\] represents possible deployment scenarios. Private user A accessing the Internet receives advices from the [[advising community]{}]{} (dashed black unidirectional arrows) and possibly uses them to regulate her access to web services (solid blue double-headed arrows). If A’s preferences allow it, traffic samples are sent to the [[collector]{}]{} via a CrowdProxy (dashed red arrows).
The [[advising community]{}]{} and public [[data analyzer]{}]{} may be supported by public bodies or non-profit organizations like the EFF. However, advices could also be generated by a [[third-party advisor]{}]{} run by an independent, third-party entity offering custom advices to users. This opens a “market of advices”. For instance, user B opts for a service offered by a third-party advisor. Finally, as shown in the bottom half of Fig. \[fig:scenarios\], [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} can also be deployed in a corporate scenario. In this case the [[corporate controller]{}]{} does not create advices, but directly imposes rule-sets (orange dotted arrows), which are installed on devices connected to the corporate network (employee C). Indeed, we expect the employee not to be allowed to modify the rule-sets imposed by the corporate authority. Notice also that devices may be asked to report employees’ browsing activity, on the administrator’s demand, directly to the [[corporate controller]{}]{}, without involving other devices. The presence of the [[corporate controller]{}]{} must be automatically identified by any device connected to the corporate network, including BYOD ones. This can be achieved, for instance, using DHCP extensions, or using standard DNS names that force the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} layer to connect to the [[corporate controller]{}]{}. Notice that the same rules can be imposed on any corporate-owned device even when connected from other networks.
[[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}Application Examples {#sec:examples}
-------------------------------------------------------------------------------------------------------------------------------------
In the following we describe examples of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}applications in both the public Internet, and the corporate network. We use the same examples to run the experiments presented in Sec. \[sec:overhead\].
A summary of the rules is available in Tab. \[tab:profiles\]. We define a “Paranoid Profile” that blocks all advertisement sites, does not run Javascript code, and uses the browser’s private navigation mode. This profile is the equivalent of running the AdBlockPlus and NoScript plugins. This user decides not to share any traffic samples with the community.
A second profile is called the “Kid Profile”: the user activates parental control by installing the advices provided by the [[advising community]{}]{}. In the experiment, we simply use the list of the Alexa top 50 “Adult Sites”, augmented with other manually verified adult sites. The user also contributes by manually signaling other offending websites/objects he runs into. Finally, he volunteers to enable logging and reporting of the three most popular online trackers (*doubleclick.net*, *scorecardresearch.com*, and *yieldmanager.com*).
A third profile represents the “Corporate Profile”: rules are imposed by the network administrator, and i) do not allow employees to access Facebook (also removing Facebook buttons from any website), ii) redirect all requests from Google search to Bing search, iii) block the usage of adult sites, Ebay, Amazon, and YouTube, and iv) report all HTTP(S) requests exchanged with Dropbox and Twitter to the corporate [[collector]{}]{}.
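As an illustration, the three profiles could be encoded as declarative rule-sets; the patterns and action names below are hypothetical, since the concrete rule syntax is an open design choice:

```python
# Hypothetical declarative encoding of the three profiles of Tab. 1.
# Regular expressions, action names, and the adult-site placeholder are
# illustrative only.
PROFILES = {
    "Paranoid": [
        (r"(doubleclick|adnxs|googlesyndication)\.", "Block"),  # advertisement
        (r"\.js($|\?)",                              "Block"),  # Javascript
    ],
    "Kid": [
        (r"<community-adult-site-list>",             "Block"),  # parental control
        (r"(doubleclick\.net|scorecardresearch\.com|yieldmanager\.com)",
                                                     "Log&Report"),
    ],
    "Corporate": [
        (r"facebook\.com",                           "Block"),
        (r"www\.google\.com/search",                 "Redirect:www.bing.com"),
        (r"(ebay|amazon|youtube)\.com",              "Block"),
        (r"(dropbox|twitter)\.com",                  "Log&Report"),
    ],
}
```

A rule-set like this is easy to distribute: the [[advising community]{}]{} or the corporate controller ships the data, and the resident layer compiles it into the pattern/action machinery described in Sec. \[sec:design\].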
Preliminary Prototype {#sec:exres}
=====================
We develop a preliminary [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}prototype in which we implement the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}layer as a Firefox plugin. It supports rules, and the `block`, `redirect`, `log&report` actions.[^7] The [[collector]{}]{}is a Java-based web service, which communicates with clients using SOAP. During the registration phase, the [[collector]{}]{}provides the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}instance with a randomly assigned ID. The [[collector]{}]{}component receives and stores reports generated by the various [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}plugins. The [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}plugin has been installed successfully on both the desktop and the Android version of Firefox. In the following we present simple experiments collected using this prototype.
![Page rendering time cost for different plugin setups. Absolute numbers in left-hand plot, relative values with respect to the Plugin-free setup in the right-hand plot.[]{data-label="fig:total_loading_time"}](plugin_loading-time_sideBySide){width="0.9\columnwidth"}
Processing Overhead {#sec:overhead}
-------------------
First, we evaluate the performance overhead a user would pay when running the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} plugin. Given the non-optimized implementation of the prototype, these benchmarks are meant to show the feasibility of the approach rather than to provide a thorough evaluation. We consider the three profiles described in Sec. \[sec:examples\]. As a baseline, we take a plugin-free configuration. We set up a testbed based on Selenium WebDriver[^8] to automate the browsing of a selected set of webpages. In particular, we consider i) the Alexa top 10 global websites, ii) 8 news portals, and iii) 6 portals which do not include any online tracker. We run the experiment from a standard PC and instrument the browser to visit each website 20 times. After discarding the best and the worst samples, we measure the average time needed to render the webpage. We purge the browser cache and cookies after each visit.
The left-hand plot in Fig. \[fig:total\_loading\_time\] reports the average rendering time for each website and for each profile. We observe that news portals are the slowest to render, with most of them taking more than 8 s to fetch and render all the content they embed. Other websites show a much simpler design, and their content (mostly HTML, CSS and Javascript files) is very fast to download. The right-hand plot of Fig. \[fig:total\_loading\_time\] reports the relative average rendering time of each profile with respect to the plugin-free configuration. For some webpages, such as *startpage.com* and *wikipedia.org*, the rendering time is very short (on the order of tens of ms). Thus, the relative difference among the three profiles is broadened, but it is very small in absolute numbers. For the case of *google.com*, the Corporate profile shows much better performance than the Paranoid one, since in the former requests are redirected to *bing.com*, which in our measurements is faster to render. In general, the Paranoid profile is favored, as it blocks advertisement and some Javascript content download, thus speeding up the rendering of the webpage in many cases. The horizontal lines show the average of the relative rendering time for the three profiles. The Paranoid profile is 1.07 times faster than the baseline. The Corporate and Kid configurations show slightly worse performance, being 1.08 and 1.17 times slower, respectively.
In summary, results are variable, with more complicated pages suffering some extra computational cost incurred by the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} plugin, which has to consider and check all links. Nonetheless, even though the current implementation is not optimized, results hint that clients today have enough power to easily handle the extra load generated by a possible [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} implementation.
Motivations for Having [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} {#sec:related}
========================================================================================================================================
![Shares of users adopting “popular” privacy-preserving extensions.[]{data-label="fig:extensions"}](ExtensionChart){width="0.85\columnwidth"}
To demonstrate the need for a system like [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}, we present some measurement facts. We analyze a 10-day long traffic trace collected during October 2014 from a large European ISP using Tstat [@tstat]. To analyze both HTTP and HTTPS communications, the dataset includes anonymized TCP logs from more than 19,000 households identified by their modem IP addresses, out of which 11,000 are active (i.e., those IP addresses from which we see at least one HTTPS request and 1,000 TCP flows in the trace). We leverage DN-Hunter, a technique that allows us to annotate TCP flows with the original server hostname [@dn-hunter].
![Percentage of users contacted from top third party tracking services.[]{data-label="fig:trackersDiffusion"}](trackersDiffusion){width="0.9\columnwidth"}
Pervasiveness of Tracking Services
----------------------------------
We first observe how many of those users run any plugin that could help customize their web-browsing privacy. To measure this, for each plugin we run active experiments to identify the hostname it contacts for updates. We then count the fraction of users that contacted such hostnames, and report the results in Fig. \[fig:extensions\]. The numbers are puzzling: only 3.1% of users have installed DoNotTrackMe, 5% use AdBlock, and 11.5% use AdBlockPlus. Moreover, more than 84.5% of users do not run any extension to limit advertisement or prevent connections to online trackers. On the other hand, we measure the pervasiveness of the most popular online trackers. We build a list of more than 440 tracking services using the Ghostery database. We look for users that contact them, i.e., that establish TCP connections toward tracker hostnames. The results illustrated in Fig. \[fig:trackersDiffusion\] are impressive: 98.8%, 98.7% and 97.4% of users regularly (and unintentionally) contact the top third-party tracking services, i.e., DoubleClick, Google Analytics, and Google Syndication. We count 120 third-party services that are contacted by more than 50% of the population. Similarly, 96.6% and 92.4% of users contact [Facebook]{} and [Twitter]{} (and the tracking services they include) on a daily basis, often involuntarily via the social network buttons embedded in other web pages.
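The per-tracker penetration numbers can be computed with a few lines of Python; the `(user_id, hostname)` log format and the suffix-based hostname matching are simplifying assumptions:

```python
from collections import defaultdict

# Sketch of the tracker-pervasiveness measurement, assuming the TCP log is
# available as (user_id, server_hostname) pairs and that `trackers` is a
# Ghostery-like list of tracking hostnames.
def tracker_penetration(tcp_log, trackers):
    users = {user for user, _ in tcp_log}
    contacted = defaultdict(set)   # tracker -> set of users that contacted it
    for user, hostname in tcp_log:
        for t in trackers:
            if hostname == t or hostname.endswith("." + t):
                contacted[t].add(user)
    # Fraction of the user population that contacted each tracker
    return {t: len(u) / len(users) for t, u in contacted.items()}
```

Run over the whole trace, this yields exactly the per-service shares plotted in Fig. \[fig:trackersDiffusion\].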
These facts clearly testify to how pervasive tracking services are in today’s Internet, and how unaware users are of their presence.
Checking HTTPS Information Handling
-----------------------------------
We run a second experiment to verify which data is sent over HTTPS when entering personal information, such as user credentials and credit card data, on legitimate websites. We collect a dataset by browsing a catalog of websites with a [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}-enabled browser which logs all HTTP and HTTPS requests it observes. We consider a list made of the Alexa top sites in the Global, Banking, Gambling and Shopping categories. We investigate a total of 160 top sites. For each website, we manually attempt to log in with the dummy credentials “MyName:MyPassword”. Then, from the collected logs, we check how the client sends those credentials to the servers.
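Since the dummy credentials are known in advance, the collected logs can be scanned automatically. A simplified sketch, where the `(url, body)` log format is an assumption:

```python
import hashlib

# Classify how a known dummy password appears in logged requests:
# verbatim ("plain text") or as a common digest ("hashed").
def credential_exposure(logged_requests, secret="MyPassword"):
    digests = {hashlib.sha256(secret.encode()).hexdigest(),
               hashlib.md5(secret.encode()).hexdigest()}
    findings = []
    for url, body in logged_requests:
        payload = url + body
        if secret in payload:
            findings.append((url, "plain text"))
        elif any(d in payload for d in digests):
            findings.append((url, "hashed"))
    return findings
```

The same scan works inside the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} layer before encryption, which is what makes the HTTPS cases below observable at all.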
We find that 10% of the most popular websites in the global rank still do not use HTTPS to exchange users’ credentials. Only 2 of these apply some custom encryption/obfuscation technique before transmitting them to the server. Even more surprisingly, among the websites embracing HTTPS in the Global category, we notice that users’ credentials are always sent in plain text over the encrypted channel. Assuming HTTPS offers a secure channel, no guarantees are given on how the server handles and stores credentials. Indeed, the server could store them in plain text, posing severe security risks if the server gets compromised. Unfortunately, this is not a rare event: the most recent incident involved a giant like eBay [@ebay]. Even in the Bank category, 75% of websites transmit credentials in plain text, totally trusting the HTTPS channel. Interestingly, some of those do implement two-step strong authentication methods based on pins or tokens, which are themselves sent in plain text through the HTTPS channel. Similarly, 90% of the websites in both the Gambling and Shopping categories do not hash the credentials. These findings strengthen the need for [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} to warn users about the weaknesses that are unfortunately present on the (popular) websites they are used to logging in to.
**Algorithm:** Automatic detection of possible third-party trackers.[]{data-label="alg:aut_algorithm"}

    Input:  HS, W                  # HTTP request log and target website
    Output: TS                     # list of possible third party trackers
                                   # and their user-tracking keys
    H_ipa <- init_hash_table()     # init hashtable of IP addresses
    H_kv  <- init_hash_table()     # init hashtable of key-value pairs

    for h in HS:                                 # read HTTP request logs
        h <- (ipa, hostname, path, referer)      # extract fields of interest
        if h.hostname != W and W in h.referer:   # check target is third party
            K, V <- extract_keys(h.path)         # extract keys and values from the path field
            for (k, v) in (K, V):                # iterate all key names and values
                hostname_key_ipa   <- create_hash(h.hostname, k, h.ipa)   # hash for H_ipa
                hostname_key_value <- create_hash(h.hostname, k, v)       # hash for H_kv
                H_ipa[hostname_key_ipa].insert(v)       # insert all key-value pairs in H_ipa
                H_kv[hostname_key_value].insert(h.ipa)  # insert the IP address in H_kv

    for hash in H_ipa:                           # iterate over H_ipa
        for value in H_ipa[hash]:                # iterate over values mapped to current hash
            if |H_ipa[hash]| == 1:               # current hash refers to one value only
                hostname, key, ipa <- decode_hash(hash)        # decode hash
                hash_aux <- create_hash(hostname, key, value)  # auxiliary hash
                if H_kv[hash_aux] == {ipa}:      # the auxiliary hash in H_kv contains only
                                                 # one IP address, matching the one in H_ipa
                    TS.add((hostname, key))      # add hostname and key to the output list
Automatic Detection of Tracker: a Simple Algorithm {#sec:algorithms}
==================================================
One of the design challenges of [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} is the need for automatic means to detect services that possibly offend users’ privacy. This section presents a simple, preliminary solution to automate advice generation. Specifically, we present an unsupervised methodology for an automatic [[data analyzer]{}]{} to identify possible third-party trackers that users unknowingly contact while browsing a given website.
We consider the set of HTTP requests that a user generates when visiting a [*target*]{} website. The [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}layer running in user’s device monitors the HTTP traffic and sends to the [[collector]{}]{}the anonymized user identifier, and a sample of HTTP request logs having i) the hostname different from [*target*]{}, and ii) [*target*]{} appearing in the referer field. In other words, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}reports to the [[collector]{}]{}all the third party URLs a user contacts when accessing the webpage [*target*]{}. Given this input, the [[data analyzer]{}]{}looks for parameters in the URLs that may suggest the third party service is using some identifier to track the users.
Our algorithm, illustrated in Alg. \[alg:aut\_algorithm\], extracts all HTTP parameters from the third party URLs. For example, from the third party URL [http://www.acme.com/query?key1=X&key2=Y]{}, it extracts [key1]{} and [key2]{}, with values [X]{} and [Y]{}, respectively; [www.acme.com]{} is the third party hostname. For each hostname and for each key, we investigate the one-to-one mapping between the [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{} user identifier and the observed values. Intuitively, we look for keys whose value is uniquely associated to a user. This hints at the key being a “user identifier”, and thus the algorithm labels the third party hostname as a “tracker”.
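The one-to-one mapping test can be sketched as an executable simplification of the algorithm; the `(user_id, url)` input format and the `min_users` threshold are assumptions for illustration:

```python
from collections import defaultdict
from urllib.parse import urlsplit, parse_qsl

def find_trackers(requests, min_users=25):
    """Flag (hostname, key) pairs whose values map one-to-one to users."""
    user_values = defaultdict(set)   # (hostname, key, user)  -> observed values
    value_users = defaultdict(set)   # (hostname, key, value) -> observing users
    key_users = defaultdict(set)     # (hostname, key)        -> distinct users
    for user, url in requests:
        parts = urlsplit(url)
        for k, v in parse_qsl(parts.query):
            user_values[(parts.hostname, k, user)].add(v)
            value_users[(parts.hostname, k, v)].add(user)
            key_users[(parts.hostname, k)].add(user)
    trackers = set()
    for (host, key, user), values in user_values.items():
        if len(key_users[(host, key)]) < min_users:
            continue  # too few users observed to draw conclusions
        if len(values) == 1:
            (value,) = values
            if value_users[(host, key, value)] == {user}:
                trackers.add((host, key))
    return trackers
```

A key shared by many users (e.g., a language parameter) fails the test, while a per-user identifier survives it, which matches the intuition described above.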
We validate our approach using our passive traffic trace, which contains enough data to pinpoint trackers. We target three popular web portals, News1, YouTube, and Facebook. We check all third party hostnames, and we extract those keys whose values show a one-to-one mapping with the IP addresses of the clients, which we consider as user identifiers in our traces.[^9] In this experiment, we consider those keys for which we observe at least 25 distinct IP addresses.
Thus, we run the algorithm to pinpoint the possible third party trackers and the keys they employ to store the users’ identifiers. Results are reported in Table \[tab:keys\]. As shown, we identify 3, 5 and 8 trackers for News1, YouTube and Facebook, respectively. The keys clearly suggest the exchange of possible user identifiers. The only candidate false positive is `install_timestamp`, which however we manually verify to be a unique user identifier. Looking at the hostnames, most of them are known tracking services that already appear in our list. The only exception is *www.skyscanner.com* (a flight booking website), which tracks users using the key `ksh_id`.
Results are promising and show that the availability of large amounts of data enables the automatic detection of personal information leakage. While our experiments are carried out over HTTP traffic, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}allows us to check for privacy leakage also on HTTPS connections, by checking other key-value pairs, e.g., in cookies.
A Feasibility Check {#sec:feasibility}
===================
![Popularity of hostnames found in a portion of our trace, and corresponding average time $T_c$ (in hours) to collect at least 100 samples with a sample ratio equal to $1/10$.[]{data-label="fig:rank"}](rank_collect-time){width="0.7\columnwidth"}
As described in Sec. \[sec:system\], the crowd feedback is vital for [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}. Therefore, we study how long the system takes to build a data collection large enough to yield reliable analysis results and generate the advices. We consider, for instance, the case in which we aim at collecting data involving $N$ different hostnames. We say the data for a hostname is reliable when we have collected at least $K$ entries, and we assume that an entry is reported to the [[collector]{}]{}when a user visits that hostname. We apply sampling so that only a fraction of the entries is eventually seen by [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}. Understanding how many visits we need to obtain $K$ samples for each of the $N$ hostnames is a problem that belongs to the Coupon Collector’s family. In particular, we refer to the Newman-Shepp generalization [@newman1960double]. In this case, the expectation $E[V]$ of the number of visits $V$ needed to collect a constant number of entries $K$ for a large number of hostnames $N$ is given by: $$E [V] = N \log N + (K - 1) N \log \log N + O(N).
\label{eq:coupon}$$ The model assumes visits are equally distributed among the $N$ hostnames. Consider the trace described in Sec. \[sec:related\], where $19,000$ households contact more than $290,000$ distinct hostnames every day. By combining this data with Eq. \[eq:coupon\], we obtain that the system would take only $8$ days to collect at least $K$=$100$ reports for each of the $N$=$10,000$ selected hostnames in the catalogue.
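Eq. \[eq:coupon\] can be evaluated directly. The helper below (function names are ours) also folds in the sampling ratio: since only a fraction $p$ of visits is reported, roughly $1/p$ times more raw visits are needed to accumulate the same number of entries:

```python
import math

def expected_visits(n_hostnames, k_samples):
    """Newman-Shepp estimate of the reported entries needed to see each
    of N hostnames at least K times, assuming visits are uniformly
    distributed among hostnames (the O(N) term is dropped)."""
    n, k = n_hostnames, k_samples
    return n * math.log(n) + (k - 1) * n * math.log(math.log(n))

def expected_raw_visits(n_hostnames, k_samples, sampling_ratio):
    """Scale up by the sampling ratio: only a fraction of raw visits
    is actually reported to the collector."""
    return expected_visits(n_hostnames, k_samples) / sampling_ratio
```

For $N$=$10,000$ and $K$=$100$ this gives roughly $2.3$ million reported entries; with a $1/10$ sampling ratio, about ten times as many raw visits are needed.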
In reality, the probability of visiting a hostname typically follows a heavy-tailed Zipf-like distribution. For instance, on the left y-axis, Fig. \[fig:rank\] reports the number of visits of the top $10,000$ hostnames. As expected, it follows the typical Zipf-like distribution. Thanks to this, the top $10,000$ hostnames account for 88.13% of total visits. Therefore, we run a trace-driven experiment using the actual trace to evaluate the average time $T_c$ needed to collect $100$ samples for each of the top $10,000$ hostnames in the trace. We assume [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}clients are configured with a sampling ratio equal to $1/10$. We focus on the top $10,000$ popular hostnames in the first day of our trace. For each hostname, we measure $T_c$, averaging over 12 independent runs, i.e., starting the collection at a random time. As soon as $K$=$100$ samples are collected, the hostname advice is said to be reliable.
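The rank dependence of the collection time can also be estimated analytically: under a Zipf-like popularity with exponent $s$ and sampling ratio $p$, a hostname of rank $r$ yields a reported entry with probability $p\,P(r)$ per visit, so on average $K/(p\,P(r))$ visits are needed before it reaches critical mass. A sketch (function names ours):

```python
def zipf_probs(n, s=1.0):
    """Zipf-like visit probabilities for hostnames ranked 1..n."""
    norm = sum(1.0 / r**s for r in range(1, n + 1))
    return [(1.0 / r**s) / norm for r in range(1, n + 1)]

def visits_until_k(rank, probs, k, sampling_ratio):
    """Expected total trace visits before a hostname of the given rank
    has accumulated k sampled entries (geometric-waiting argument)."""
    p_report = probs[rank - 1] * sampling_ratio
    return k / p_report
```

This mirrors the experiment: the most popular hostnames reach $K$ samples after a tiny fraction of the trace, while tail hostnames dominate the worst-case collection time.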
The right y-axis of Fig. \[fig:rank\] reports the $T_c$ (in hours) needed to reach the minimum critical mass of $K$=$100$ visits. As shown, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}would take a few seconds to collect 100 visits for the most popular hostnames (e.g., $87$ s for *www.google.com*). Less than $48$ h are needed in the worst case. Observe that some services show very bursty traffic patterns that considerably decrease $T_c$. Indeed, when clients access those services, we collect a large number of samples in a short time. The overall average value of $T_c$ is $12.57$ h, much less than the time predicted by Eq. \[eq:coupon\].
This simple experiment clearly shows that even with a population of only $19,000$ contributors reporting $1/10$ of their activity, we can easily collect enough data to compute advices. We can also envision smarter sampling policies to, e.g., avoid collecting further samples from the most popular sites while asking for sample contributions only for other services.
Discussion and Future Work {#conclusion}
==========================
This paper presented [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}, a novel crowd-sourced holistic approach to empower informed choices on the web. Motivated by the fact that today service owners have (almost total) control over the information they can collect, and by the fact that users and companies are increasingly kept out of the loop, we advocate the need for any user, any device, and any network to have the freedom to control the information exchanged on the Internet.
In this paper, we have shown that [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}is feasible. We presented real data to show how easy it would be to build crowd-sourced knowledge, supported by automatic algorithms. As a proof of concept, we implemented [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}as a Firefox plugin, showing that its benefits come at a marginal performance cost for the user.
[[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}design presents some practical challenges that must be faced, and ingenuity must be used to find appropriate solutions. The research community as a whole is called to design efficient algorithms, and propose scalable implementations. [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}offers this possibility, allowing anyone to contribute.
We are aware that our idea is ambitious: first, [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}must pass through a long and difficult standardization process to be accepted as a compelling technology by the industry, and, second, it must undergo a deep engineering analysis to convince users of its effectiveness and usability. However, as the community becomes more and more aware that data constitutes a vital asset in the modern web, we are confident that the unified solution offered by [[[C<span style="font-variant:small-caps;">rowd</span>S<span style="font-variant:small-caps;">urf</span>]{}]{}]{}represents a good starting point to protect (and possibly endorse) such an asset.
[10]{}
Facebook sued for 15\$ billion over alleged privacy infractions,\
<http://www.cnet.com/news/facebook-sued-for-15-billion-over-alleged-privacy-infractions/>
Google to pay record 22.5m fine to [FTC]{} over [S]{}afari tracking,\
<http://www.theguardian.com/technology/2012/jul/10/google-fine-iphone-ipad-privacy>
Google SafeBrowsing,\
<https://developers.google.com/safe-browsing/>
Hackers steal vast eBay user database, including passwords,\
<http://www.bdlive.co.za/world/americas/2014/05/23/hackers-steal-vast-ebay-user-database-including-passwords>
Agarwal, Y., Hall, M.: [P]{}rotect[M]{}y[P]{}rivacy: [D]{}etecting and [M]{}itigating [P]{}rivacy [L]{}eaks on i[OS]{} [D]{}evices [U]{}sing [C]{}rowdsourcing. In: [ACM]{} Mobi[S]{}ys. (2013)
Bermudez, I.N., Mellia, M., Munafo, M.M., Keralapura, R., Nucci, A.: [DNS]{} to the [R]{}escue: [D]{}iscerning [C]{}ontent and [S]{}ervices in a [T]{}angled [W]{}eb. In: [ACM]{} [IMC]{}. (2012)
Dingledine, R., Mathewson, N., Syverson, P.: Tor: [T]{}he second-generation onion router. In: USENIX Security Symposium. (2004)
Enck, W., Gilbert, P., Chun, B.G., Cox, L.P., Jung, J., McDaniel, P., Sheth, A.N.: Taint[D]{}roid: [A]{}n [I]{}nformation-flow [T]{}racking [S]{}ystem for [R]{}ealtime [P]{}rivacy [M]{}onitoring on [S]{}martphones. In: [USENIX]{} [OSDI]{}. (2010)
Finamore, A., Mellia, M., Meo, M., Munafò, M.M., Rossi, D.: Experiences of internet traffic monitoring with tstat. IEEE Network (2011)
Kramer, A.D.I., Guillory, J.E., Hancock, J.T.: Experimental evidence of massive-scale emotional contagion through social networks. PNAS (2014)
Naylor, D., Finamore, A., Leontiadis, I., Grunenberger, Y., Mellia, M., Papagiannaki, K., Steenkiste, P.: The [C]{}ost of the [S]{} in [HTTPS]{}. In: [ACM]{} [C]{}o[NEXT]{}. (2014)
Newman, D.J., Shepp, L.: The double dixie cup problem. American Mathematical Monthly (1960)
Popa, L., Ghodsi, A., Stoica, I.: [HTTP]{} as the [N]{}arrow [W]{}aist of the [F]{}uture [I]{}nternet. In: [ACM]{} Hot[N]{}ets. (2010)
Riederer, C., Erramilli, V., Chaintreau, A., Krishnamurthy, B., Rodriguez, P.: For [S]{}ale : [Y]{}our [D]{}ata: [B]{}y : [Y]{}ou. In: [ACM]{} Hot[N]{}ets. (2011)
[^1]: This work was conducted under the Narus Fellow Research Program.
[^2]: <https://www.eff.org/>
[^3]: <http://www.abine.com/donottrackme.html>
[^4]: <https://www.eff.org/privacybadger>
[^5]: <https://www.mywot.com/>
[^6]: <https://www.ghostery.com/en/>
[^7]: For instance, the configurations developed for the Corporate and Kid profiles are available at <https://db.tt/yIl0LyX1>.
[^8]: <http://www.seleniumhq.org>
[^9]: The ISP assigns a static IP address to each household modem.
---
abstract: 'Dissipative particle dynamics (DPD) belongs to a class of models and computational algorithms developed to address mesoscale problems in complex fluids and soft matter in general. It is based on the notion of particles that represent coarse-grained portions of the system under study and therefore allow one to reach time and length scales that would otherwise be unreachable from microscopic simulations. The method has been conceptually refined since its introduction almost twenty-five years ago. This perspective surveys the major conceptual improvements in the original DPD model, along with its microscopic foundation, and discusses outstanding challenges in the field. We summarize some recent advances and suggest avenues for future developments.'
author:
- Pep Español
- 'Patrick B. Warren'
date: 'December 13, 2016'
title: 'Perspective: Dissipative Particle Dynamics'
---
Introduction
============
The behaviour of complex fluids and soft matter in general is characterized by the presence of a large range of different time and space scales. Any attempt to resolve *simultaneously* several time scales in a *single* simulation scheme is confronted by the problem of taking a prohibitively large number of sufficiently small time steps. Typically one proceeds hierarchically [@Berendsen2007], by devising models and algorithms appropriate to the length and time scales one is interested in. Leaving aside quantum effects, which are negligible for soft matter, at the bottom of the hierarchy we have Hamilton’s equations, with accurate albeit approximate potential energy functions, which are solved numerically with molecular dynamics (MD). Nowadays some research teams can simulate billions of particles for hundreds of nanoseconds [@Heinecke2015]. This opens up the possibility of studying very detailed, highly realistic molecular models that capture essentially all the microscopic details of the system. This is, of course, not enough in many situations encountered in soft matter and life sciences [@Shillcock2008]. One can always think of a problem well beyond computational capabilities: from the folding of large proteins, to the replication of DNA, to the simulation of a eukaryotic cell, or even the simulation of a mammal, including its brain. While we are still very far from even posing some of these problems well, it is obvious that science is pushing strongly towards more and more complex systems.
Instead of using atoms moving with Hamilton’s equations to describe matter, one can take a continuum approach in which fields take the role of the basic variables. Navier-Stokes-Fourier hydrodynamics, or elasticity, or many of the different continuum theories for complex fluid systems are examples of this approach [@Ottinger2005]. These continuum theories are, in fact, coarse-grained versions of the atomic system that rely on two key related concepts: 1) the continuum limit—[[[*i.$\,$e.*]{}]{}]{} a “point” of space on which the field is defined is, in fact, a volume element containing a large number of atoms [@Batchelor1967], and 2) the local equilibrium assumption—[[[*i.$\,$e.*]{}]{}]{} these volumes are large enough to reproduce the thermodynamic behaviour of the whole system [@deGrootMazur1964]. The quantities are assumed to change little from one volume element to its neighbour, and this allows the powerful machinery of partial differential equations to describe the system mathematically at the largest scales, even permitting analytical solutions in many situations. Nevertheless, the continuum equations are usually non-linear and analytical solutions are not always possible. One then resorts to numerical methods to solve the equations. Computational fluid dynamics (CFD) has evolved into a sophisticated field in numerical analysis with a solid mathematical foundation.
The length scales that can be addressed by continuum theories range from microns to parsecs. Remarkably, the same equations (with the same thermodynamics and transport coefficients) can be used at very different scales. Many of the interesting phenomena that occur in complex fluids occur at the *mesoscale*. The mesoscale can be roughly defined as the spatio-temporal scales ranging from 10–$10^4$nm and 1–$10^6$ns. These scales require a number of atoms that make simulation with MD plainly infeasible. On the other hand, it was shown in the early days of computer simulations by Alder and Wainwright [@Alder1970] that hydrodynamics is valid at surprisingly small scales. Therefore, there is a chance to use continuum theory down to the mesoscale. However, at these short length scales the molecular discreteness of the fluid starts to manifest itself. For example, a colloidal particle of submicron size experiences Brownian motion which is negligible for macroscopic bodies like submarine ships. In order to address these small scales one needs to equip field theories like hydrodynamics with fluctuating terms, as pioneered by Landau and Lifshitz [@Landau1959]. The resulting equations of fluctuating hydrodynamics are also known as the Landau-Lifshitz-Navier-Stokes (LLNS) equations. There is much effort in the physics and mathematics communities to formulate numerical algorithms with the standards of usual CFD for the solution of stochastic partial differential equations modeling complex fluids at the mesoscale [@Naji2009; @Uma2011; @Shang2012; @Donev2010; @Oliver2013; @Donev2014; @Donev2014a; @Plunkett2014; @DeCorato2016].
While the use of fluctuating hydrodynamics may be appropriate at the mesoscale, there are many systems for which a continuum hydrodynamic description is not applicable (or it is simply unknown). Proteins, membranes, assembled objects, polymer systems [[[*etc*]{}]{}]{}. may require inaccessible computational resources to address with full microscopic detail, while a continuum theory may not exist. In these mesoscale situations, the strategy to retain some chemical specificity is to use *coarse-grained* descriptions in which groups of atoms are treated as a unit [@Voth2009]. While the details of how to do this are very system specific, and an area of intense active research (see reviews in [Refs. [@Noid2013; @Brini2013; @Lopez2014]]{}), it is good to know that there is a well defined and sound procedure for the construction of coarse-grained descriptions [@Green1952; @Zwanzig1961] that is known under the names of non-equilibrium statistical mechanics, Mori-Zwanzig theory, or the theory of coarse-graining [@Grabert1982; @Zubarev1996; @Espanol2004Chapter; @Ottinger2005]. Simulating everything, everywhere, with molecular detail can be not only very expensive but also unnecessary. In particular, water is very expensive to simulate and sometimes its effect is just to propagate hydrodynamics. Hence there is an impetus to develop at least coarse-grained *solvent* models, but retain enough *solute* molecular detail to render chemical specificity.
At the end of the 20th century the simulation of the mesoscale was attacked from a computational point of view with a physicist's intuitive, quick-and-dirty approach. Dissipative particle dynamics (DPD) was one of the products, among others [@Malevanets1999; @Succi2001; @KHbook09; @Dunweg2009; @Gompper2009; @Donev2009], of this approach. DPD is a point particle minimal model constructed to address the simulation of fluid and complex systems at the mesoscale, when hydrodynamics and thermal fluctuations play a role. The popularity of the model stems from its algorithmic simplicity and its enormous versatility. Just by varying at will the conservative forces between the dissipative particles one can readily model complex fluids like polymers, colloids, amphiphiles and surfactants, membranes, vesicles, phase separating fluids, [[[*etc*]{}]{}]{}. Due to its simple formulation in terms of symmetry principles (Galilean, translational, and rotational invariances) it is a very useful tool to explore generic or universal features (scaling laws, for example) of systems that do not depend on molecular specificity but only on these general principles. However, detailed information highly relevant for industrial and technological processes requires the inclusion of chemical detail in order to go beyond qualitative descriptions.
DPD, as originally formulated, does not include this chemical specificity. This is not a drawback of DPD per se, as the model is regarded as a coarse-grained version of the system. Any coarse-graining process eliminates details from the description and keeps only the relevant ones, associated with the length and time scales of the level of description under scrutiny. However, as will become apparent, the original DPD model can be regarded as too simplistic, and one can formulate models that capture more accurate information about the system with comparable computational efficiency.
Since its initial introduction, the question “What do the dissipative particles represent?” has lingered in the literature, with intuitively appealing but certainly vague answers like “groups of atoms moving coherently”. In the present Perspective we aim at answering this question by reviewing the efforts that have been made in this direction. We offer a necessarily brief overview of applications, and discuss some open questions and unsolved problems, both of fundamental and applied nature. Since the initial formulation of the DPD model a number of excellent reviews [@Warren1998; @Espanol2004Chapter; @Pivkin2010; @Moeendarbary2010a; @Guigas2011; @Lu2013; @Ghoufi2013; @Liu2014], and dedicated workshops [@CECAM2008; @Mousseau2014], have kept pace with the developments. We hope that the present perspective complements these reviews with a balanced view of the more recent advances in the field. We also provide a route map through the different DPD variant models and their underlying motivation. In so doing, we hope to highlight a unifying conceptual view for the DPD model and its connection with the microscopic and continuum levels of description.
This Perspective is organized as follows. In [Sec. \[Sec:DPD\]]{} we consider the original DPD model with its virtues and limitations. In [Sec. \[Sec:MDPD\]]{} we review models that have been formulated in order to avoid the limitations of the original DPD model. The SDPD model, which is the culmination of the previous models that links directly to the macroscopic level of description (Navier-Stokes) is considered in [Sec. \[Sec:topdown\]]{}. The microscopic foundation of the DPD model is presented in [Sec. \[Sec:bottomup\]]{}. Finally, we present some selected applications in [Sec. \[Sec:applications\]]{} and conclude in [Sec. \[Sec:conclusions\]]{}.
The original DPD model {#Sec:DPD}
======================
The original DPD model was introduced by Hoogerbrugge and Koelman [@Hoogerbrugge1992], and was formulated by the present authors as a proper statistical mechanics model shortly after [@Espanol1995epl]. The DPD model consists of a set of point particles that move off-lattice, interacting with each other through three types of forces: a conservative force deriving from a potential, a dissipative force that tries to reduce radial velocity differences between the particles, and a further stochastic force directed along the line joining the centers of the particles. The last two forces can be termed a “pair-wise Brownian dashpot” which, as opposed to ordinary Langevin or Brownian dynamics, is momentum conserving. The Brownian dashpot is a minimal model for representing viscous forces and thermal noise between the “groups of atoms” represented by the dissipative particles. Because of momentum conservation the behaviour of the system is hydrodynamic at sufficiently large scales [@Espanol1995pre; @Marsh1997; @BJM15].
![Dissipative particles interact pair-wise with a conservative linear repulsive force, and a Brownian dashpot made of a friction force that reduces the relative velocity between the particles and a stochastic force that gives kicks of equal size and opposite directions to the particles. These forces vanish beyond a cutoff radius $r_c$.[]{data-label="Fig.Dashpot"}](dashpot-eps-converted-to.pdf)
The stochastic differential equations of motion for the dissipative particles are [@Espanol1995epl] $$\begin{aligned}
\dot{\bf r}_i=& {\bf v}_i
\nonumber\\
m_i\dot{\bf v}_i=& -\frac{\partial V}{\partial {\bf r}_i}
-\sum_j \gamma \omega^D(r_{ij}) ({\bf v}_{ij} \cdot {\bf e}_{ij}){\bf e}_{ij}
\nonumber\\
&+\sum_j \sigma \omega^R(r_{ij}) \frac{dW_{ij}}{dt}{\bf e}_{ij}
\label{dpd}\end{aligned}$$ where $r_{ij}=|{\bf r}_i-{\bf r}_j|$ is the distance between particles $i,j$, ${\bf v}_{ij}={\bf v}_{i}-{\bf v}_{j}$ is the relative velocity and ${\bf e}_{ij}={\bf r}_{ij}/r_{ij}$ is the unit vector joining particles $i$ and $j$. $dW_{ij}$ is an independent increment of the Wiener process. In [Eq. ]{}, $\gamma$ is a friction coefficient and $\omega^D(r_{ij}),\omega^R(r_{ij})$ are bell-shaped functions with a finite support that render the dissipative interactions local. Validity of the fluctuation-dissipation theorem requires [@Espanol1995epl] $\sigma$ and $\gamma$ to be linked by the relation $\sigma^2=2 \gamma {{k_{\mathrm{B}}}T}$ and also $\omega^D(r_{ij})=[\omega^R(r_{ij})]^2$. Here ${k_{\mathrm{B}}}$ is Boltzmann’s constant and $T$ is the system temperature. As a result, the stationary probability distribution of the DPD model is given by the Gibbs canonical ensemble $$\begin{aligned}
\rho(\{{\bf r},{\bf v}\}) &
=\frac{1}{Z}\exp\left\{-\beta\sum_i^Nm_i\frac{{\bf v}_i^2}{2}
-\beta V(\{{\bf r}\})\right\}
\label{gibbs}\end{aligned}$$ The potential energy $V(\{{\bf r}\})$ is a suitable function of the positions of the dissipative particles that is translationally and rotationally invariant in order to ensure linear and angular momentum conservation. In the original formulation the potential function was taken to be of the simplest possible form $$\begin{aligned}
V(\{{\bf r}\})&=\frac{1}{2}\sum_{ij}a_{ij}(1-r_{ij}/r_c)^2
\label{V}\end{aligned}$$ where $a_{ij}$ is a particle interaction constant and $r_c$ is a cutoff radius. This potential produces a linear force with the form of a Mexican hat function of finite range. Without any other guidance, the weight function $\omega^R(r)$ in the dissipative and random forces is given by the same linear functional form. Complex fluids can be modeled through mesostructures constructed by adding additional interactions (springs and/or attractive or repulsive potentials between certain particles) to the particles. Groot and Warren [@Groot1997] offered a practical route to select the parameters in a DPD simulation by matching compressibility and solubility parameters of the model to real systems.
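A minimal sketch of the pairwise forces in [Eq. ]{} follows, with the linear weight $\omega^R(r)=1-r/r_c$, $\omega^D=[\omega^R]^2$, and $\sigma^2=2\gamma {{k_{\mathrm{B}}}T}$. The parameter values are the common Groot-Warren choices, used here only for illustration, and the Wiener increment $dW_{ij}$ is represented over a time step $dt$ by a Gaussian of variance $dt$:

```python
import numpy as np

def dpd_forces(r, v, a=25.0, gamma=4.5, rc=1.0, kT=1.0, dt=0.01, rng=None):
    """Pairwise DPD forces for positions r and velocities v (N x 3 arrays):
    conservative (linear repulsion), dissipative (central friction on the
    radial relative velocity), and random (fluctuation-dissipation
    consistent: sigma^2 = 2 gamma kT, w^D = (w^R)^2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = np.sqrt(2.0 * gamma * kT)
    n = len(r)
    f = np.zeros_like(r)
    for i in range(n):
        for j in range(i + 1, n):
            rij = r[i] - r[j]
            d = np.linalg.norm(rij)
            if d >= rc or d == 0.0:
                continue
            e = rij / d                      # unit vector e_ij
            wR = 1.0 - d / rc                # w^R(r), linear weight
            vij = v[i] - v[j]
            # conservative + dissipative + random; dW/dt ~ N(0,1)/sqrt(dt)
            fij = (a * wR
                   - gamma * wR**2 * np.dot(vij, e)
                   + sigma * wR * rng.standard_normal() / np.sqrt(dt)) * e
            f[i] += fij
            f[j] -= fij    # Newton's third law -> momentum conservation
    return f
```

Since every pair contribution is antisymmetric, the total force (and hence total momentum) is conserved exactly, which is what makes the large-scale behaviour hydrodynamic.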
The soft nature of the weight functions in DPD allows for large time steps, as compared with MD that needs to deal with steep repulsive potentials. However too large time steps lead to numerical errors that depend strongly on the numerical algorithm used. The area of numerical integrators for the stochastic differential equations of DPD has received attention during the years with increasingly sophisticated methods. Starting from the velocity Verlet implementation of [Ref. [@Groot1997]]{} and the self-consistent reversible scheme of Pagonabarraga and Hagen [@Pagonabarraga1998], the field has evolved towards splitting schemes [@Shardlow2003; @Nikunen2003; @Defabritiis2006; @Serrano2006; @Thalmann2007; @Chaudhri2010]. A Shardlow [@Shardlow2003] scheme has been recommended after comparison between different integrators [@Nikunen2003], but there are also other recent more efficient proposals [@Litvinov2010; @Leimkuhler2015; @Farago2016].
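For reference, the modified velocity-Verlet scheme of [Ref. [@Groot1997]]{} can be sketched as follows; `force(r, v)` is assumed to return the total per-particle force (conservative plus dissipative plus random), and $\lambda=1/2$ recovers the standard velocity-Verlet update:

```python
import numpy as np

def groot_warren_step(r, v, force, dt, m=1.0, lam=0.5):
    """One step of the modified velocity-Verlet scheme of Groot and
    Warren.  The velocity entering the second force evaluation is
    predicted with the adjustable parameter `lam`, then corrected."""
    f = force(r, v)
    r_new = r + dt * v + 0.5 * dt**2 * f / m
    v_pred = v + lam * dt * f / m              # predicted velocity
    f_new = force(r_new, v_pred)
    v_new = v + 0.5 * dt * (f + f_new) / m     # corrected velocity
    return r_new, v_new
```

Because the dissipative force depends on velocities, the prediction-correction structure matters: the choice of $\lambda$ affects the measured temperature at finite $dt$, which is why the more recent splitting schemes cited above are preferred for accurate work.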
Because of momentum conservation, the original DPD model in [Eqs. ]{}– can be regarded as a (toy) model for the simulation of fluctuating hydrodynamics of a simple fluid. As a model for a Newtonian fluid at mesoscales the DPD model has been used for the simulation of hydrodynamic flows in several situations [@JBK+99; @Keaveny2005; @Chen2006a; @VandeMeent2008; @Tiwari2008; @Steiner2009; @Filipovic2011]. It should be obvious, though, that the fact that DPD conserves momentum does not make it the preferred method for solving hydrodynamics. MD is also momentum conserving and can be used to solve hydrodynamics; for a recent review see Kadau [[[*et al.*]{}]{}]{} [@Kadau2010]. However, in terms of computational efficiency, hydrodynamic problems are best addressed with CFD methods with, perhaps, the inclusion of thermal fluctuations.
In addition, the original DPD model suffers from several limitations that downgrade its utility as a LLNS solver. The first one is the thermodynamic behaviour of the model. Taken as a particle method, the DPD model has an equation of state that is fixed by the conservative interactions. The linear conservative forces of the original DPD model produce an unrealistic equation of state that is quadratic in the density [@Groot1997]. The quadratic equation of state in DPD seems to be a general property of soft sphere fluids at high overlap density. A well-known exemplar is the Gaussian core model [@Louis2000]. These systems have been termed *mean-field* fluids and this includes the linear DPD potential in [Eq. ]{}. Many thermodynamic properties for the linear DPD potential can be obtained by using standard liquid state theory and it has been our experience that the HNC integral equation closure works exceptionally well in describing the behaviour of DPD in the density regime of interest [@WVA+13; @Warren2014; @FvM+16]. Note that while it is possible to fit the compressibility (related to second derivatives of the free energy) to that of water, for example, the pressure (related to first derivatives) turns out to be unrealistic. The conservative forces of the original model are not flexible enough to specify the thermodynamic behaviour as an input of the simulation code [@Trofimov2002].
A second limitation is due to the overly simplistic friction forces. The central friction force in [Eq. ]{} implies that when a dissipative particle passes a second, reference particle, it will not exert any force on the reference particle unless there is a radial component to the velocity [@Espanol1997epl; @Espanol1998pre]. Nevertheless, on simple physical grounds one would expect that the passing dissipative particle would drag the reference particle in some way due to shear forces. Of course, if many DPD particles are involved simultaneously in between the two particles, this will result in an effective drag. The same is true for a purely conservative molecular dynamics simulation. It would be nice, though, to have this effect captured directly in terms of modified friction forces, so that a smaller number of particles needs to be used to reproduce large scale hydrodynamics. Note that the viscosity of the DPD model cannot be specified beforehand, and only after recourse to the methods of kinetic theory can one estimate the friction coefficient to be imposed in order to obtain a given viscosity [@Espanol1995pre; @Marsh1997; @Masters1999; @Evans1999]. As we will see, inclusion of more sophisticated shear forces allows for a more direct connection with Navier-Stokes hydrodynamics.
A third limitation of DPD as a mesoscale hydrodynamic solver is the fact that the DPD model (in an identical manner as MD) is *hardwired to the scale*. What we mean by this is that, given a physical problem with a characteristic length scale, we may always put a given number of dissipative particles and parametrize the model in order to recover some macroscopic information (typically, compressibilities and viscosity). However, if one uses a different number of particles for exactly the same physical situation, one should start over and reparametrize the system again. This is certainly very different from what one would expect from a Navier-Stokes solver, which specifies the equation of state and viscosity irrespective of the scale; one simply worries about having a sufficiently large number of points to resolve the characteristic length scales of the flow. In other words, in DPD there is no notion of *resolution*, *grid refinement*, and *convergence* as in CFD. There have been attempts to restore a *scale free* property for DPD [@Espanol1998pre; @Fuchslin2009; @Arienti2011], even for bonded interactions [@Spaeth2010a]. To get this property, the parameters in the model need to depend on the level of coarse-graining, but this is not specified in the original model. Closely related to this lack of scaling is the fact that there is no mechanism in the model to switch off thermal fluctuations depending on the scale at which the model is operating. On general statistical mechanics grounds, thermal fluctuations should scale as $1/\sqrt{N}$ where $N$ is the number of degrees of freedom coarse-grained into one coarse-grained (CG) particle. As the dissipative particles represent larger and larger volume elements, they should display smaller and smaller fluctuations. But there is no explicit volume or size associated with a dissipative particle.
This problem is crucial, for instance, in the case of suspended colloidal particles or in microfluidics applications where flow conditions and the physical dimensions of the suspended objects or physical dimensions of the operating device determine whether and, more importantly, *to what extent* thermal fluctuations come into play.
Finally, another limitation of the DPD model is that it cannot sustain temperature gradients. Energy in the system is dissipated and not conserved, and the Brownian dashpot forces of DPD act as a thermostat.
MDPD, EDPD, FPM {#Sec:MDPD}
===============
Over the years, the DPD model has been *enriched* in several directions in order to deal with all the above limitations. In this Section we briefly review these enriched DPD models.
The many-body (or multi-body) dissipative particle dynamics (MDPD) method stands for a modification of the original DPD model in which the purely repulsive conservative forces of the classic DPD model are replaced by forces deriving from a many-body potential; thus the scheme is still covered by [Eqs. ]{}–, but a many-body $V(\{{{{\mathbf r}}}\})$ is substituted for [Eq. ]{}. The MDPD method was originally introduced by Pagonabarraga and Frenkel [@Pagonabarraga2001], Warren [@Warren2001], and independently by Groot, [@footnote3] and subsequently modified and improved by Trofimov [[[*et al.*]{}]{}]{} [@Trofimov2002], reaching a level of maturity [@Warren2003; @Merabia2007; @Ghoufi2010; @Arienti2011; @Ghoufi2013; @Warren2013; @AM16]. The key innovation of MDPD is the introduction of a density variable $d_i=\sum_{j\neq i}W(r_{ij})$, as well as a free energy $\psi(d_i)$ associated with each dissipative particle. Here $W(r)$ is a normalized bell-shaped weight function that ensures that the density $d_i$ is high if many particles are accumulated near the $i$-th particle. The potential of interaction of these particles is assumed to be of the form $V=\sum_i\psi(d_i)$ [@Warren2003b]. This is a many-body potential of a form similar to the embedded atom potential in MD simulations [@Daw1984; @Daw1993]. For multi-component mixtures the many-body potential may be generalized to depend on partial local densities.
Despite its many-body character, the resulting forces are still pair-wise, and implementation is straightforward. However, not all pair-wise force laws correspond to a many-body potential. Indeed the existence of such a potential severely constrains the nature of the force laws, and some errors have propagated into the literature (see discussion in [Ref. [@Warren2013]]{}). In Appendix \[app:mdpd\] we explore how the force law is constrained by the weight function $W(r)$. The message is: if in doubt, always work from $V(\{{{{\mathbf r}}}\})$.
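To make this concrete, the following minimal sketch (our own illustration; the quadratic free energy $\psi(d)=\frac{1}{2}Ad^2$ and the Lucy kernel are assumed forms, not those used in the cited works) computes the local densities $d_i$ and then the pairwise forces by differentiating $V=\sum_i\psi(d_i)$, so that a many-body potential exists by construction:

```python
import numpy as np

def lucy_kernel(r, rc):
    """Normalized bell-shaped weight function (Lucy kernel in 3D); zero beyond rc."""
    x = np.clip(r / rc, 0.0, 1.0)
    norm = 105.0 / (16.0 * np.pi * rc**3)
    return norm * (1.0 + 3.0 * x) * (1.0 - x)**3

def lucy_kernel_deriv(r, rc):
    """dW/dr for the Lucy kernel (negative: W decreases with r)."""
    x = np.clip(r / rc, 0.0, 1.0)
    norm = 105.0 / (16.0 * np.pi * rc**3)
    return norm * (-12.0 / rc) * x * (1.0 - x)**2

def mdpd_forces(pos, rc=1.0, A=25.0):
    """Pairwise forces from V = sum_i psi(d_i) with the assumed psi(d) = A d^2 / 2.

    Differentiating V gives the pair force
    F_ij = -[psi'(d_i) + psi'(d_j)] W'(r_ij) rhat_ij, antisymmetric in (i, j).
    """
    n = len(pos)
    d = np.zeros(n)                       # local densities d_i = sum_{j != i} W(r_ij)
    for i in range(n):
        for j in range(n):
            if i != j:
                d[i] += lucy_kernel(np.linalg.norm(pos[i] - pos[j]), rc)
    dpsi = A * d                          # psi'(d_i) for the quadratic free energy
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r < rc:
                f = -(dpsi[i] + dpsi[j]) * lucy_kernel_deriv(r, rc) * rij / r
                F[i] += f
                F[j] -= f                 # Newton's third law holds automatically
    return F
```

With $\psi'(d)>0$ and a decreasing kernel the force is repulsive, and momentum conservation is automatic because the forces derive from a potential of the particle positions.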
MDPD escapes the straitjacket of mean-field fluid behaviour by modulating the thermodynamic behaviour of the system directly at the level of the interactions between the particles. This allows for more general equations of state than in the original DPD model, which is a special case where the one-body terms are linear in the local densities. Indeed, one can easily engineer a van der Waals loop in the equation of state to accommodate vapour-liquid coexistence. But this in itself is not enough to stabilize a vapour-liquid interface, since one should additionally ensure that the cohesive contribution is longer-ranged than the repulsive contribution. This can be achieved for example by using different ranges for the attractive and repulsive forces [@Warren2001; @Warren2003], or by modelling the square gradient term in the free energy [@Tiwari2006].
The energy-conserving dissipative particle dynamics model (EDPD) was introduced simultaneously and independently by Bonet Avalós and Español [@Avalos1997; @Espanol1997DPDE] as a way to extend the DPD model to non-isothermal situations. In this case, the key ingredient is an additional internal energy variable associated with the particles. The behaviour of the model was subsequently studied [@Avalos1999; @Ripoll1998; @Ripoll2000; @Ripoll2005]. The method has been compared with standard flow simulations [@Abu-Nada2011; @Yamada2011], and recently a number of interesting applications have emerged [@Chaudhri2009; @Lisal2011], including heat transfer in nanocomposites [@Qiao2007], shock detonations [@Maillet2011], phase change materials for energy storage [@Rao2012], shock loading of a phospholipid bilayer [@Ganzenmuller2012], chemically reacting exothermic fluids [@Brennan2014; @Moore2016], thermoresponsive polymers [@Li2015], and water solidification [@Johansson2016].
The fluid particle model (FPM) was devised as a way to overcome the limitation of the simplistic friction forces in DPD [@Espanol1997epl; @Espanol1998pre]. The method introduced, in addition to radial friction forces, shear forces that depend not only on the approaching velocity but also on the velocity differences directly. Shear forces have been reconsidered recently [@Junghans2008]. The resulting forces are non-central and do not conserve angular momentum. In order to restore angular momentum conservation a spin variable is introduced. Heuristically, the spin variable is understood as the angular momentum relative to the center of mass of the fluid particle. The model has been used successfully by Pryamitsyn and Ganesan [@Pryamitsyn2005] in the simulation of colloidal suspensions, where each colloid is represented by just one larger dissipative particle, an approach also used by Pan [[[*et al.*]{}]{}]{} [@Pan2008].
DPD from top-down: The SDPD model {#Sec:topdown}
=================================
While MDPD is still isothermal and EDPD still uses conservative forces too limited to reproduce arbitrary thermodynamics, the two enrichments of a density variable and an internal energy variable introduced by these models suggest a view of the dissipative particles as truly thermodynamic subsystems of the whole system, consistently with the local equilibrium assumption in continuum hydrodynamics. There have been a number of works trying to formalize this view of “moving fluid particles” in terms of Voronoi cells of points moving with the flow field [@Espanol1997]. Flekk[ø]{}y [[[*et al.*]{}]{}]{} formulated a (semi) bottom-up approach for constructing a model of fluid particles with the Voronoi tessellation [@Flekkoy1999; @Flekkoy2000]. A thermodynamically consistent Lagrangian finite volume discretization of LLNS using the Voronoi tessellation was presented by Serrano and Español [@Serrano2001] and compared favourably [@Serrano2002] with the models in [Refs. [@Flekkoy1999]]{} and [@Flekkoy2000]. While this top-down modeling based on the Voronoi tessellation is grounded in a solid theoretical framework, it has not found much application due, perhaps, to the computational complexity of a Lagrangian update of the Voronoi tessellation [@footnote1].
In an attempt to simplify the Lagrangian finite Voronoi volume discretization model, the smoothed dissipative particle dynamics (SDPD) model was introduced shortly after [@Espanol2003SDPD], based on its precursor [@Espanol1999PRL]. SDPD is a thermodynamically consistent particle model based on a particular version of smoothed particle hydrodynamics (SPH) that includes thermal fluctuations. SPH is a mesh-free Lagrangian discretization of the Navier-Stokes equations (NSE) differing from finite volumes, elements, or differences in that a simple smooth kernel is used for the discretization of space derivatives. This leads to a model of moving interacting point particles whose simulation is very similar to MD. SPH was introduced in an astrophysical context for the simulation of cosmic matter at large scales [@Lucy1977; @Gingold78], but has been applied since then to viscous and thermal flows [@LiuLiu2003; @Liu2010], including multi-phasic flow [@Zhi-bin2016]. An excellent recent critical review on SPH is given by Violeau and Rogers [@Violeau2016].
In the particular SPH discretization of the viscous terms in the NSE adopted by SDPD, the resulting forces have the same structure as the shear friction forces in the FPM. By casting the model within the universal thermodynamically consistent [<span style="font-variant:small-caps;">generic</span>]{} framework [@Ottinger2005], thermal fluctuations are introduced consistently in SDPD by respecting an exact fluctuation-dissipation theorem at the discrete level. Therefore, SDPD (as opposed to SPH) can address the mesoscopic realm where thermal fluctuations are important.
The SDPD model consists of $N$ point particles characterized by their positions and velocities ${\bf r}_i,{\bf v}_i$ and, in addition, a thermal variable like the entropy $S_i$ (by a simple change of variables, one can alternatively use the internal energy $\epsilon_i$ or the temperature $T_i$). Each particle is understood as a thermodynamic system with a volume ${\cal V}_i$ given by the inverse of the density $d_i=\sum_{j\neq i}W(r_{ij})$, a fixed constant mass $m_i$, and an internal energy $\epsilon_i=E(S_i,m_i,{\cal V}_i)$ which is a function of the entropy of the particle, its mass ([[[*i.$\,$e.*]{}]{}]{} number of moles), and volume. The functional form of $E(S,M,{\cal V})$ is assumed, through the local equilibrium assumption, to be the same function that gives the global thermodynamic behaviour of the fluid system (but see below). The equations of motion of the independent variables are [@Espanol2003SDPD] $$\begin{aligned}
d{\bf r}_i =&{\bf v}_idt
\nonumber\\
m d{\bf v}_i=& \sum_{j}\left[\frac{P_i}{d_i^2}
+\frac{P_j}{d_j^2}\right] F_{ij}{\bf r}_{ij}dt
\nonumber\\
&- \frac{5\eta}{3}
\sum_j \frac{F_{ij}}{d_id_j}
\left({\bf v}_{ij}+{\bf e}_{ij}{\bf e}_{ij} \!\cdot\! {\bf v}_{ij}
\right)dt
+m d\tilde{\bf v}_i
\nonumber\\
T_id{S}_i =&
\frac{5\eta}{6}
\sum_j \frac{F_{ij}}{d_id_j}
\left({\bf v}_{ij}^2+({\bf e}_{ij} \!\cdot\! {\bf v}_{ij})^2
\right)dt
\nonumber\\
&- 2\kappa \sum_j \frac{F_{ij}}{d_id_j}T_{ij}dt
+ T_i d\tilde{S}_i
\label{sdefin}\end{aligned}$$ Here, $P_i,T_i$ are the pressure and temperature of the fluid particle $i$, which are functions of $d_i,S_i$ through the equilibrium equations of state, derived from $E(S,M,{\cal V})$ through partial differentiation. Because the volume of a particle depends on the positions of the neighbours, the internal energy function plays the role of the potential energy $V$ in the original DPD model. In addition, ${\bf v}_{ij} = {\bf v}_{i}-{\bf v}_{j}$, and $T_{ij}=T_i-T_j$. The function $F(r)$ is defined in terms of the weight function $W(r)$ as $\boldsymbol{\nabla} W(r) = - {\bf r} F(r)$. Finally, $d\tilde{\bf
v}_i,d\tilde{S}_i$ are linear combinations of independent Wiener processes whose amplitude is dictated by the exact fluctuation-dissipation theorem [@footnote2].
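A minimal sketch of one deterministic Euler step of the equations above may look as follows. Thermal noise is omitted, and the ideal-gas-like equation of state $P_i=d_iT_i$ with $T_i=e^{S_i}$, the Lucy kernel, and all unit parameters are our own illustrative assumptions, not choices made in [Ref. [@Espanol2003SDPD]]{}:

```python
import numpy as np

RC = 1.0
NORM = 105.0 / (16.0 * np.pi * RC**3)

def W(r):
    """Lucy kernel, normalized in 3D."""
    x = np.clip(r / RC, 0.0, 1.0)
    return NORM * (1.0 + 3.0 * x) * (1.0 - x)**3

def F(r):
    """Kernel function F(r) >= 0 defined by grad W(r) = -r F(r)."""
    x = np.clip(r / RC, 0.0, 1.0)
    return 12.0 * NORM * (1.0 - x)**2 / RC**2

def sdpd_step(pos, vel, S, dt, m=1.0, eta=1.0, kappa=1.0):
    """One deterministic Euler step of SDPD (noise terms omitted)."""
    n = len(pos)
    d = np.array([sum(W(np.linalg.norm(pos[i] - pos[j]))
                      for j in range(n) if j != i) for i in range(n)])
    T = np.exp(S)            # illustrative T(S); not the EOS of the cited paper
    P = d * T                # illustrative ideal-gas-like pressure
    dv = np.zeros_like(vel)
    dS = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            if r >= RC:
                continue
            e = rij / r
            vij = vel[i] - vel[j]
            Fij = F(r)
            # reversible pressure force
            dv[i] += (P[i] / d[i]**2 + P[j] / d[j]**2) * Fij * rij * dt / m
            # irreversible friction force (radial + shear parts)
            dv[i] -= (5 * eta / 3) * Fij / (d[i] * d[j]) * (vij + e * np.dot(e, vij)) * dt / m
            # viscous heating and heat conduction
            dS[i] += ((5 * eta / 6) * Fij / (d[i] * d[j]) * (vij @ vij + np.dot(e, vij)**2)
                      - 2 * kappa * Fij / (d[i] * d[j]) * (T[i] - T[j])) * dt / T[i]
    return pos + vel * dt, vel + dv, S + dS
```

One can check directly on this sketch the conservation properties discussed below: the pairwise forces are antisymmetric, so linear momentum is conserved exactly, and the friction terms produce a non-negative total entropy change.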
It is easily shown that the above model conserves mass, linear momentum and energy, and that the total entropy is a non-decreasing function of time, thus respecting the second law of thermodynamics. The equilibrium distribution function is given by the Einstein expression in the presence of dynamic invariants [@Espanol1992]. As the number of particles increases, the resulting flow converges towards the solution of the Navier-Stokes equations, by construction.
SDPD can be considered as the general version of the three models MDPD, EDPD, FPM, discussed in [Sec. \[Sec:MDPD\]]{}, incorporating all their benefits and none of their limitations. For example, the pressure and any other thermodynamic information is introduced as an input, as in the MDPD model. The conservative forces of the original model become physically sound pressure forces. Energy is conserved and we can study the transport of energy in the system as in EDPD. The transport coefficients are inputs of the model (though, see below). The range functions of DPD now have very specific forms, and one can use the large body of knowledge generated in the SPH community to select the most adequate shape for the weight function $W(r)$ [@Liu2010]. The particles have a physical size given by their physical volume and it is possible to specify the physical scale being simulated. One should understand the number density of particles as a way of controlling the *resolution* of the simulation, offering a systematic ‘grid’ refinement strategy. In the SDPD model, the amplitude of thermal fluctuations scales with the size of the fluid particles: *large* fluid particles display smaller thermal fluctuations, in accordance with the usual notions of equilibrium statistical mechanics. While the fluctuations scale with the size of the fluid particles, the resultant stochastic forces on *suspended* bodies are independent of the size of the fluid particles and only depend on the overall size of the object [@Vazquez-Quesada2009jcp], as they should.
The SDPD model does not conserve angular momentum because the friction forces are non-central. This may be remedied by including an extra spin variable, as in the FPM, as has been done by Müller [[[*et al.*]{}]{}]{} [@Muller2015]. This spin variable is expected to relax rapidly, more and more so as the size of the fluid particles decreases. For high enough resolution the spin variable is slaved to the vorticity. The authors of [Ref. [@Muller2015]]{} have shown, though, that the inclusion of the spin variable may be crucial in some problems where ensuring angular momentum conservation is important [@Greenspan1968].
In summary, SDPD can be understood as MDPD for non-isothermal situations, including more realistic friction forces. The SDPD model has a simplicity similar to that of the original DPD model and its enriched versions MDPD, EDPD, FPM. It has been remarked [@Lei2015] that SDPD does not suffer from some of the issues encountered in Eulerian methods for the solution of the LLNS equations. The SDPD model is applicable to the simulation of complex fluids for which a *Newtonian solvent* exists. The number of studies using SDPD is growing steadily, with applications ranging from microfluidics [@Fan2006] and nanofluidics [@Lei2015] to colloidal suspensions [@Bian2012; @Vazquez-Quesada2015], blood [@Moreno2013; @Muller2014], tethered DNA [@Litvinov2011], and dilute polymeric solutions [@Litvinov2008; @Litvinov2010; @Litvinov2016]. It has also been used for the simulation of fluid mixtures [@Thieulot2005; @Thieulot2005a; @Thieulot2005b; @Petsev2016] and viscoelastic flows [@Vazquez-Quesada2009pre].
Once SDPD is understood as a particle method for the numerical solution of the LLNS equations of fluctuating hydrodynamics, the issue of boundary conditions emerges. While there is an extensive literature on the formulation of boundary conditions in deterministic SPH [@LiuLiu2003], and in DPD [@Revenga1998; @Revenga1999; @Pivkin2005; @Haber2006; @Pivkin2006; @Altenhoff2007; @Henrich2007; @Xu2009; @Lei2011; @Groot2012; @Mehboudi2014], the consideration of boundary conditions in SDPD has been addressed only recently [@Kulkarni2013; @Gatsonis2014; @Petsev2016].
In SDPD, what you put in is *almost* what you get. The input information is the internal energy of the fluid particles as a function of density and entropy (or temperature), and the viscosity. However, convergence towards the continuum equations is only ensured in the high-resolution limit of a large number of particles. Therefore, for a finite number of particles there will always be differences between the input viscosity and the actual viscosity of the fluid and, possibly, between the input thermodynamic behaviour of the fluid particles and that of the bulk system. These differences can be attributed to numerical “artifacts” of the particle model, similar to the discretisation errors that arise in CFD. Often the worst effects of these artifacts can be eliminated by using renormalized transport coefficients obtained from calibration simulations. This is similar, for instance, to the way that discretisation errors in lattice Boltzmann are commandeered to represent physics, improving the numerical accuracy of the scheme [@Anc94]. In this context the availability of a systematic grid refinement strategy for SDPD is clearly highly beneficial.
Internal variables {#internal}
------------------
The SDPD model is obtained from the discretization of the continuum Navier-Stokes equations. Of course, any other continuum equations traditionally used for the description of complex fluids can also be discretized with the same methodology. In general, these continuum models for complex fluids typically involve *additional structural or internal variables*, usually representing mesostructures, that are coupled with the conventional hydrodynamic variables [@Kroger2004; @Ottinger2005]. The coupling of hydrodynamics with these additional variables renders the behaviour of the fluid non-Newtonian and complex. For example, polymer melts are characterized by additional conformation tensors, colloidal suspensions can be described by further concentration fields, mixtures are characterized by several density fields (one for each chemical species), emulsions are described with the amount and orientation of interface, [[[*etc*]{}]{}]{}.
All these continuum models rely on the hypothesis of local equilibrium and, therefore, the fluid particles are regarded as thermodynamic subsystems. Once the continuum equations are discretized in terms of fluid particles (Lagrangian nodes) with associated additional structural or order parameter variables, the resulting fluid particles are “large” portions of the fluid. The scale of these fluid particles is *supra-molecular*. This allows one to study larger length and time scales than the less coarse-grained models where the mesostructures are represented explicitly through additional interactions between particles ([[[*i.$\,$e.*]{}]{}]{} chains for representing polymers, spherical solid particles to represent colloids, different types of particles to represent mixtures). The price, of course, is the need for a deep understanding of the physics at this more coarse-grained level, which should be adequately captured by the continuum equations.
For example, in order to describe polymer solutions, we may take a level of coarse graining in which every fluid particle already contains many polymer molecules. This is a more coarse-grained model than describing viscoelasticity by joining dissipative particles with springs [@Somfai2006]. The state of the polymer molecules within a fluid particle may be described either with the average end-to-end vector of the molecules [@tenBosch1999; @Ellero2003], or with a conformation tensor [@Vazquez-Quesada2009pre]. In the latter case, the continuum limit of the model leads to the Oldroyd-B model of polymer rheology. Another example where the strategy of internal variables is successful is the simulation of mixtures. Instead of modeling a mixture with two types of dissipative particles, as is usually done in DPD, one may take a thermodynamically consistent view in which each fluid particle carries the concentration of one of the species, for example [@Thieulot2005; @Thieulot2005a; @Li2015a; @Petsev2016]. Chemical reactions can be implemented by including an extent-of-reaction variable as an internal degree of freedom [@Brennan2014].
DPD from bottom-up {#Sec:bottomup}
==================
The SDPD model [@Espanol2003SDPD], or the Voronoi fluid particle model [@Serrano2001], are top-down models which are, essentially, Lagrangian discretizations of fluctuating hydrodynamics. These models are the bona fide connection of the original DPD model with continuum hydrodynamics. However, the connection of the model with the microscopic level of description is less clear. Ideally, one would like to fulfill the program of coarse-graining, in which starting from Hamilton’s equations for the atoms in the system, one derives closed equations for a set of CG variables that represent the system in a fuzzy impressionistic way.
Coarse graining of a molecular system requires a clear definition of the mapping between the microscopic and CG degrees of freedom. This mapping is usually well defined when the atoms are bonded, as happens inside complex molecules like proteins and other polymer molecules, or in solid systems. In this case, one can choose groups of atoms and look at, for example, the center of mass of each group as a CG variable. For unbonded atoms, as those occurring in a fluid system, the main problem is how to group atoms when they may diffuse away from each other. We discuss separately the strategies that have been followed in order to tackle the coarse-graining of both unbonded and bonded atoms.
DPD for unbonded atoms
----------------------
The derivation of the equations of hydrodynamics from the underlying Hamiltonian dynamics of the atoms is a well studied problem that dates back to Boltzmann and the origins of kinetic theory [@Irving1950; @Grabert1982]. It is a problem that still deserves attention for *discrete* versions of hydrodynamics [@DelaTorre2011; @Espanol2009; @Espanol2009c; @EspanolDonev2015], which is what we need in order to simulate hydrodynamics in a computer. These latter works show how an *Eulerian* description of hydrodynamics can be derived from the Hamiltonian dynamics of the underlying atoms, by defining mass, momentum, and energy of cells which surround certain points fixed in space. However, *Lagrangian* descriptions, in which the cells “move following the flow”, are much trickier to deal with. Typically, two types of groupings of fluid molecules have been considered, based on the Voronoi tessellation or on spherical blobs.
An early attempt to construct a Voronoi fluid particle from the microscopic level was made by Español [[[*et al.*]{}]{}]{} [@Espanol1997]. The Voronoi centers were moved according to the forces felt by the molecules inside the cell in the underlying MD simulation. An effective excluded volume potential was obtained from the radial distribution function of the Voronoi centers. The method was revisited by Eriksson [[[*et al.*]{}]{}]{} [@Eriksson2009b], who observed “molecular unspecificity” of the Voronoi projection, in the sense that very different microscopic models give rise to essentially the same dynamics of the cells. In earlier work [@Eriksson2008], a force covariance method, essentially the Einstein-Helfand route to computing the Green-Kubo coefficients [@Kauzlaric2011], was introduced in order to compute the friction forces under the DPD ansatz. The results are disappointing, as these authors showed that the dynamics of the CG particles with the forces of the DPD model measured from MD for a Lennard-Jones system were not consistent with the MD results themselves.
More recently, Hadley and McCabe [@Hadley2010] proposed to group water molecules into beads through the $K$-means algorithm [@Macqueen1967]. The algorithm considers a number of beads with initially given positions and constructs their Voronoi tessellation. The water molecules inside each Voronoi cell have a center of mass that does not coincide with the bead position. The bead position is then translated on top of the center of mass and a retessellation is performed, with a possibly different set of water molecules constituting the new bead. The procedure is repeated until convergence. At the end, one has centroidal Voronoi cells in which the bead position and the center of mass of the water molecules inside the Voronoi cell coincide. The $K$-means algorithm gives for every microstate (coordinates of water molecules) the value of the macrostate (coordinates of the beads) and, therefore, provides a rule-based CG mapping. Unfortunately, there is no analytic function that captures this mapping and, therefore, it is not possible to use the theory of coarse-graining to rigorously derive the evolution of the beads. The strategy of Hadley and McCabe is to construct the radial distribution function and infer from it the pair potential. Recently, Izvekov and Rice [@Izvekov2015] have also considered this procedure in order to compute both the conservative force and the friction force between beads, by extracting this information from force and velocity correlations between Voronoi cells. They find that very few molecules per cell are sufficient to obtain Markovian behaviour.
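The $K$-means construction of centroidal Voronoi beads can be sketched as a standard Lloyd iteration (a generic implementation for illustration, not the code of the cited works; equal molecular masses are assumed):

```python
import numpy as np

def kmeans_beads(molecule_pos, bead_pos, iters=50):
    """Map molecule coordinates to bead coordinates via K-means (Lloyd iteration).

    Each bead owns the molecules in its Voronoi cell (nearest-bead rule);
    the bead is then moved to the center of mass of its molecules, and the
    tessellation is rebuilt, until the centroidal partition converges.
    """
    molecule_pos = np.asarray(molecule_pos, dtype=float)
    beads = np.array(bead_pos, dtype=float)
    labels = np.zeros(len(molecule_pos), dtype=int)
    for _ in range(iters):
        # nearest-bead assignment = Voronoi cell membership
        dists = np.linalg.norm(molecule_pos[:, None, :] - beads[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = beads.copy()
        for k in range(len(beads)):
            members = molecule_pos[labels == k]
            if len(members):
                new[k] = members.mean(axis=0)   # center of mass (equal masses)
        if np.allclose(new, beads):             # centroidal: beads sit on the COMs
            break
        beads = new
    return beads, labels
```

The returned rule-based mapping (microstate to bead positions) is exactly of the kind discussed above: well defined computationally, but not available as a closed analytic function.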
Instead of using Voronoi based fluid particles, Voth and co-workers consider a sphere (termed a [<span style="font-variant:small-caps;">blob</span>]{}) and move the sphere according to the forces experienced by the center of mass of the molecules inside it [@Ayton2004a]. The dynamics of the [<span style="font-variant:small-caps;">blob</span>]{} is then modeled in order to reproduce the time correlations of the [<span style="font-variant:small-caps;">blob</span>]{}. Subsequently a system of $N$ Brownian [<span style="font-variant:small-caps;">blob</span>]{}s is constructed in order to reproduce the above correlations.
Recently, another attempt to obtain DPD from the underlying MD has been undertaken by Lei [[[*et al.*]{}]{}]{} [@Lei2010] by using the rigorous approach of the theory of coarse-graining. However, in order to construct the “fluid particles” these authors constrain a collection of Lennard-Jones atoms to move bonded, by maintaining a specified radius of gyration. The fluid is then no longer a simple atomic fluid but rather a fluid made of complex “molecules” (the atomic clusters constrained to have a fixed radius of gyration) whose rheology is necessarily complex.
Our impression is that we still have not solved satisfactorily the problem of deriving from the microscopic dynamics the dynamics of CG particles that capture the behaviour of a simple fluid made of *unbonded* atoms. Work remains to be done in order to define the proper CG mapping for a fully satisfactory bottom-up model for Lagrangian fluid particles representing a set of few unbonded atoms or molecules “moving coherently”.
DPD for bonded atoms
---------------------
When the atoms are bonded and belong to definite groups where the atoms do not diffuse away from each other, the CG mapping is well defined, usually through the center of mass variables. In [Fig. \[Fig.star\]]{} we show a star polymer melt in which each molecule is coarse-grained by its center of mass, leading to a blob or bead description [@Hijon2010]. The important question is what the CG interactions between the blobs are. Two CG approaches, static and dynamic, have been pursued, depending on the questions one wishes to answer.
![Star polymer molecules (in different colors) in a melt are coarse-grained at the level of their centers of mass. The resulting model is a blob model of the DPD type [@Hijon2010].[]{data-label="Fig.star"}](a45-eps-converted-to.pdf)
*Static CG* is concerned with approximations to the exact potential of mean force that gives, formally, the equilibrium distribution function of all the CG degrees of freedom. Radial distributions, equations of state, [[[*etc*]{}]{}]{}. are the concern of static coarse graining. There is a vast literature in the construction of the potential of mean force for CG representations of complex fluids [@Reith2001; @Likos2001; @Voth2009], and complex molecules [@Milano2005a; @Noid2013; @Lopez2014]. Despite these efforts, there is still much room for improvement in the thermodynamic consistency for the modeling of the potentials of mean force [@Reinier2001]. If one uses the CG potential for the motion of the CG degrees of freedom, the resulting dynamics is unrealistically fast, although this may be in some cases convenient computationally.
*Dynamic CG*, on the other hand, focuses on obtaining, in addition to CG potentials, approximations to the friction forces between CG degrees of freedom. Within the theoretical framework of the Mori-Zwanzig approach, it is possible to obtain in general the dynamics of the CG degrees of freedom from the underlying Hamiltonian dynamics. The first attempt to derive the DPD model from the underlying microscopic dynamics was given by Español for the simple case of a one-dimensional harmonic lattice [@Espanol1996pre-harmonic]. The centers of mass of groups of atoms were taken as the CG variables and Mori’s projection method was used. Because this system is analytically soluble, a flaw in the original derivation could be detected, and an interesting discussion emerged on the issue of non-Markovian effects in solid systems [@Cubero2005; @Cubero2005a; @Hijon2006; @Cubero2008a; @Hijon2008].
By following Schweizer [@Schweizer1989], Kinjo and Hyodo [@Kinjo2007] obtained a formal equation for the centers of mass of groups of atoms. The momentum equation contains three forces: a conservative force deriving from the exact potential of mean force, a friction force, and a random force. By *modeling* the random forces, the authors of [Ref. [@Kinjo2007]]{} showed that this equation encompasses both the BD and DPD equations. However, to consider the procedure in [Ref. [@Kinjo2007]]{} a *derivation* of DPD, it is necessary to specify the conditions under which one obtains BD instead of DPD (or [[[*vice versa*]{}]{}]{}). This was not stated by Kinjo and Hyodo. The crucial insight is that BD appears when “solvent” is eliminated from the description, that is, when some (the majority) of the atoms are not grouped and are instead described as a passive thermal bath (or implicit solvent). The friction force in this case is proportional to the velocity of the particles, and the momentum of the CG blobs is not conserved. On the other hand, a DPD description appears when *all* the atoms are partitioned into disjoint groups. In this case, the conservation of momentum induced by Newton’s third law at the microscopic level leads to a structure of the friction forces depending on *relative* velocities of the particles. A derivation of the equations of DPD from first principles taking into account linear momentum conservation was presented by Hijon [[[*et al.*]{}]{}]{} [@Hijon2010]. The position-dependent friction coefficient was given in terms of a Green-Kubo expression that could be evaluated, under certain simplifying assumptions, directly from MD simulations, in the same spirit as an early derivation of Brownian Dynamics for a dimer representation (non-momentum conserving) of a polymer by Akkermans and Briels [@Akkermans2000]. The general approach was preliminarily tested for a system of star polymers (as those in [Fig. \[Fig.star\]]{}).
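Schematically, a Green-Kubo friction coefficient is obtained by integrating the time autocorrelation of the fluctuating force recorded in (constrained) MD. A minimal estimator is sketched below; the Ornstein-Uhlenbeck series in the usage example stands in for real MD data, and the lag cutoff is an assumption of the sketch, not a prescription from the cited works:

```python
import numpy as np

def green_kubo_friction(dF, dt, kBT=1.0, nlag=None):
    """Estimate gamma = (1/kBT) * int_0^infty <dF(t) dF(0)> dt from a
    discretely sampled, stationary fluctuating-force time series dF.

    The integral is truncated at a cutoff lag (nlag samples) because the
    autocorrelation estimate becomes pure noise at long times.
    """
    dF = np.asarray(dF, dtype=float)
    dF = dF - dF.mean()                  # fluctuating part only
    n = len(dF)
    nlag = nlag or n // 4
    # stationary autocorrelation function up to the cutoff lag
    acf = np.array([np.mean(dF[:n - k] * dF[k:]) for k in range(nlag)])
    return acf.sum() * dt / kBT          # rectangle-rule time integral
```

For an Ornstein-Uhlenbeck force with variance $\sigma^2$ and correlation time $\tau$, the estimator should return approximately $\sigma^2\tau/k_BT$, which provides a simple self-consistency check before applying it to MD data.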
A subsequent thorough study of this star polymer problem by Karniadakis and co-workers [@Li2014] has shown that the introduction of an intrinsic spin variable for each polymer molecule seems to be necessary at low concentrations in order to have an accurate representation of the MD results. The approach in [Ref. [@Hijon2010]]{} has been labeled by Li [[[*et al.*]{}]{}]{} [@Li2014] as the MZ-DPD approach, standing for Mori-Zwanzig dissipative particle dynamics. Other complex molecules (neopentane, tetrachloromethane, cyclohexane, and n-hexane) have also been considered [@Deichmann2014] within the MZ-DPD approach, with interesting discussion on the relevance of non-Markovian behaviour (more on this later). A slightly more general approach for the derivation of MZ-DPD equations has been given by Izvekov [@Izvekov2015]. Very recently, Español [[[*et al.*]{}]{}]{} [@Espanol2016] have formulated from first principles the dynamic equations for an *energy conserving* CG representation of complex molecules. This work gives the microscopic foundation of the EDPD model for complex molecules (involving bonded atoms only).
Non-Markov effects
------------------
The rigorous coarse-graining in which centers of mass of groups of atoms are used as CG variables relies on a basic and fundamental hypothesis, which is the separation of time scales between the evolution of the CG variables and “the rest” of the variables in the system. More accurately, the separation of time scales refers to the existence, *in the evolution of the CG variables themselves*, of two well-defined scales: a large-amplitude slow component, and a small high-frequency component that can be modeled in terms of white noise. The dynamics of the CG variables can then be approximately described by a non-linear diffusion equation in the space spanned by the CG variables [@Green1952; @Zwanzig1961]. This separation of time scales does not always exist, either because the groups of atoms are small and the centers of mass momenta evolve on the same time scales as the forces (due to collisions with atoms of other groups) [@Deichmann2014], or because of the existence of coupled slow processes not captured by the selected CG variables. When this happens, one strategy is to tweak the friction, simply fitting the friction coefficients to recover the correct time scales. Gao and Fang used this approach in order to coarse grain a water molecule to a one-site CG particle [@Gao2011]. Another strategy is to enlarge the set of CG variables with the hope that the new set will be Markovian. Briels [@Briels2009] addresses specifically the problem of CG in polymers and introduces transient forces to recover a Markovian description. Davtyan, Voth, and Anderson [@Davtyan2015] have considered the introduction of “fictitious particles” in order to recover the CG dynamics observed from MD. The fictitious particles are just a simple and elegant way to model the memory kernel in a particularly intuitive way.
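The fictitious-variable idea can be illustrated in its simplest form: for an exponential memory kernel $K(t)=(c/\tau)e^{-t/\tau}$, coupling the CG velocity to a single auxiliary (Markovian) variable and eliminating the latter reproduces the kernel exactly, so that simulating the extended pair is equivalent to simulating the non-Markovian equation. The sketch below is our own minimal illustration (noise omitted, unit mass), not the actual scheme of [Ref. [@Davtyan2015]]{}:

```python
import numpy as np

def extended_traj(v0, c, tau, dt, nsteps):
    """Markovian pair (v, z): dv/dt = z, dz/dt = -z/tau - (c/tau) v.

    Eliminating z (with z(0) = 0) gives exactly
    dv/dt = -int_0^t K(t - s) v(s) ds with K(t) = (c/tau) exp(-t/tau).
    """
    v, z = v0, 0.0
    out = [v]
    for _ in range(nsteps):
        v, z = v + z * dt, z + (-z / tau - (c / tau) * v) * dt
        out.append(v)
    return np.array(out)

def gle_traj(v0, c, tau, dt, nsteps):
    """Direct Euler integration of the non-Markovian equation
    dv/dt = -int_0^t K(t - s) v(s) ds, for comparison."""
    K = (c / tau) * np.exp(-np.arange(nsteps + 1) * dt / tau)
    v = np.empty(nsteps + 1)
    v[0] = v0
    for n in range(nsteps):
        mem = np.sum(K[n::-1] * v[:n + 1]) * dt   # discretized memory integral
        v[n + 1] = v[n] - mem * dt
    return v
```

The extended (Markovian) simulation costs $O(N)$ while the direct memory integral costs $O(N^2)$, which is precisely why embedding the memory in auxiliary variables is attractive.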
If the strategy of enlarging the CG state space does not work, it is still possible to formulate formal non-Markovian models from microscopic principles and to extract information about the memory kernel from MD [@Yoshimoto2013]. However, in the absence of a separation of time scales, the computational effort required to obtain the memory kernel from MD makes the whole strategy of bottom-up coarse graining inefficient. Note that the advantage of a bottom-up strategy for coarse graining is that one only needs to run *relatively short* MD simulations to get the information (Green-Kubo coefficients) that is used in the dynamic equations governing much larger time scales. If one needs to run long MD simulations of the microscopic system to get the CG information, then one might as well have solved the problem by brute force in the first place!
Electrostatic interactions
--------------------------
In many situations, one is interested in the consequential effects of charge separation. This is particularly so for aqueous systems, where the relatively high dielectric permittivity of water means that ion dissociation readily occurs. The relevance lies not only in the structural and thermodynamic properties of ionic surfactants, polyelectrolytes, [[[*etc*]{}]{}]{} [@Sindelka2014; @LLP16], but also in a burgeoning interest in electrokinetic phenomena [@Pagonabarraga2010; @Smiatek2012; @Maduar2015; @Sepehr2016].
An important point to make is that some relevant electrostatic effects simply cannot be captured in a short-range interaction (DPD-like or otherwise). For example, the bare electrostatic energy of a charged spherical micelle of aggregation number $N$ scales as $N^2/R \sim N^{5/3}$, where $R\sim N^{1/3}$ is the micelle radius. This electrostatic energy cannot be captured in either a volume ($\sim N$) or surface ($\sim N^{2/3}$) term. To be faithful to this physics, therefore, one has in some way to incorporate long-range Coulomb interactions explicitly into the DPD model. This area was pioneered by Groot, who used a field-based method [@Groot2003]. Since then, more standard Ewald methods have also been used [@GMV+06; @WVA+13], and in principle any fast electrostatics solver developed for MD could be taken over into the DPD domain. One important caveat is that with soft particles ([[[*i.$\,$e.*]{}]{}]{} no hard-core repulsion) the singularity of Coulomb interactions needs to be tamed through the use of smoothed charge models [@Groot2003; @GMV+06; @WVA+13; @Warren2014]. Adding electrostatics is certainly computationally expensive, often halving the speed of DPD codes. This drastic slow-down offsets some of the advantage of DPD methods compared to more traditional coarse-graining methods.
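The smoothed-charge idea can be illustrated with a Slater-type smearing, in which each point charge is replaced by an exponentially decaying cloud so that the pair interaction saturates at contact instead of diverging. The closed form below is one common choice in the DPD electrostatics literature; the unit conventions (charges and permittivity set to one, smearing parameter `beta`) are our illustrative assumptions.

```python
import numpy as np

def coulomb(r, q1=1.0, q2=1.0, eps=1.0):
    """Bare Coulomb interaction (units with 4*pi*epsilon_0 absorbed into eps)."""
    return q1 * q2 / (eps * r)

def smeared_coulomb(r, q1=1.0, q2=1.0, eps=1.0, beta=2.0):
    """Interaction of two exponentially smeared (Slater-type) charge clouds:
    finite at contact (-> q1*q2*beta/eps as r -> 0), bare Coulomb at large r."""
    return q1 * q2 / (eps * r) * (1.0 - (1.0 + beta * r) * np.exp(-2.0 * beta * r))

r = np.array([1e-6, 0.5, 1.0, 5.0])
u = smeared_coulomb(r)
# u decreases monotonically from ~beta at contact towards the bare 1/r tail
```

The key property for soft-particle simulations is the finite contact value: two smeared charges can overlap completely without the energy diverging, so the usual DPD integrators remain stable.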
Related to the problem of charge separation is the question of dielectric inhomogeneities (image charges [[[*etc*]{}]{}]{}), such as encountered in oil/water mixtures and at interfaces. In field-based methods this can be resolved by having a local-density-dependent dielectric permittivity [@Groot2003]. Alternatively one can introduce an explicitly polarisable solvent model [@PP14; @Peter2015a]. Note that an electric field in the presence of a dielectric inhomogeneity induces reaction forces on the *uncharged* particles [@Groot2003]. In a field-based method, this is quite complicated to incorporate in a rigorous way. For an explicitly polarisable solvent model, these induced reaction forces of course arise naturally and are automatically captured.
Systems studied with DPD {#Sec:applications}
========================
The number of systems and problems that have been addressed with DPD or its variants is enormous, and we do not pretend to review the extensive literature on the subject. Nevertheless, to illustrate the range and variety of different applications of DPD, we give a necessarily brief survey of the field. A general trend observed on the applications side is a shift away from the original “balls and springs” DPD model towards more specific atomistic detail, along the lines of MZ-DPD or semi-bottom-up DPD (with structure-based CG potentials and fitted friction).
*Colloids:* A recent review on the simulation of colloidal suspensions with particle methods, including DPD, can be found in [@Bolintineanu2014]. The first application of DPD to a complex fluid was the simulation of colloidal rheology by Koelman and Hoogerbrugge [@Koelman1993]. Since then, a large number of works have addressed the simulation of colloidal suspensions, with a variety of approaches to represent the solute. Typically, a colloidal particle is constructed out of dissipative particles that are moved rigidly [@Koelman1993; @Li2008], or connected with springs [@Laradji2004; @Phan-Thien2014]. Arbitrary shapes may be considered in this way [@Boek1997], as well as confinement due to walls [@Gibson1999; @Li2008]. As a way to bypass the need to update the relatively large number of solid particles, some approaches represent each colloidal particle with a single dissipative particle [@Pryamitsyn2005; @Pan2008; @Pan2010], leading to minimal spherical blob models for the colloids. These simplified models for the solute require the introduction of shear forces of the FPM type. Representing a colloidal particle with a point particle is a strategy also used in minimal blob models in Eulerian CFD methods for fluctuating hydrodynamics [@BalboaUsabiaga2014]. A core can be added in order to represent hard spheres with finite radii, supplemented with a dissipative surface to mimic boundary conditions [@Whittle2010], and still retain the one-particle-per-colloid scheme. Although general features show semi-quantitative agreement with experimental results [@Whittle2010], with other simulation techniques like Stokesian Dynamics, and with theoretical work, it is clear that capturing more detailed physics of colloid-colloid and colloid-solvent interactions (either through an MZ-DPD approach or by phenomenologically including boundary layers and top-down parametrization) may be beneficial to the field.
*Blood:* A colloidal system of obvious biological interest is blood. Blood has been simulated with DPD [@Fedosov2011], and more recently with SDPD [@Moreno2013; @Muller2014; @Katanov2015]. Two recent reviews [@Li2013; @Ye2015] discuss the modeling of blood with particle methods. Multi-scale modeling ([[[*i.$\,$e.*]{}]{}]{} MZ-DPD) seems to be crucial to capture platelet activation and thrombogenesis [@Zhang2014].
*Polymers:* An excellent recent review on coarse-graining of polymers is given by Padding and Briels [@Padding2011]. Below the entanglement threshold Rouse dynamics holds, and this is well satisfied in a DPD polymer melt [@Spenley2000]. Above the threshold, entanglements are a necessary ingredient in polymer melts. Because the structure-based CG potentials between the blobs are very soft, it is necessary to include a mechanism for entanglement explicitly. This is one example in which the usual simple schemes to treat the many-body potential (through pair-wise interactions) fail dramatically. There are several methods to include entanglements: Padding and Briels [@Padding2001; @Padding2002] introduced the elastic band method for coarse-grained simulations of polyethylene. Another alternative to represent entanglements is the method of Kumar and Larson [@Kumar2001; @Goujon2008; @Yamanoi2011], in which a repulsive potential between bonds linking consecutive blobs is introduced. Finally, entanglements can be enforced in a simpler way by hard excluded-volume LJ interactions [@Symeonidis2005], or through a suitable criterion on the stretching of two bonds and their degree of impenetrability [@Nikunen2007].
Beyond scaling properties, effort has been directed towards a chemically detailed MZ-DPD methodology, by using structure-based CG effective potentials and either fitting the friction coefficient [@Guerrault2004; @Lahmar2007; @Maurel2012; @Maurel2015], or obtaining the dissipative forces from Green-Kubo expressions [@Trement2014]. In general, one can take advantage of systematic static coarse-graining approaches, like those for heptane and toluene [@Dunn2015], which can be directly incorporated into DPD. Very recently, new Bayesian methods for obtaining the CG potential *and* friction are being considered [@Dequidt2015; @Solano2016] (on pentane). The ultimate goal of all these microscopically informed approaches is to predict rheological properties as a function of the chemical nature of the polymer system at a small computational cost. As mentioned earlier, any improvement in the construction of CG potentials will also be highly beneficial for the construction of dynamic CG models. In this respect, the work on the *analytical* integral equation approach of Guenza and co-workers [@McCarty2014] for obtaining CG potentials in polymer systems that ensure both structural properties *and* thermodynamic behaviour seems very promising.
We perceive a powerful trend towards more microscopically informed DPD able to express faithfully the chemistry of the system. This trend is important when considering hierarchical multi-scale methods in which MD information is transferred to a dynamic CG DPD model, the DPD model is evolved in order to reach topologies and equilibrium states much faster than MD, and a fine-graining back-mapping procedure then recovers microscopic states that can be evolved again with MD [@Chen2006; @Santangelo2007; @Gavrilov2015].
Other complex fluid systems involving polymers have been considered. An early work is the study of adsorption of colloidal particles onto a polymer-coated surface [@Gibson1999]. Polymer brushes are reviewed by Kreer [@Kreer2016]. Self-assembly of giant amphiphiles made of a nanoparticle with a tethered polymer tail has been considered recently [@Ma2015]. Polymer membranes for fuel cells have been considered by Dorenbos [@Dorenbos2015]. Polymer solutions simulated with DPD obey Zimm theory, which includes hydrodynamic interactions [@Jiang2007]. Polymer solutions have also been studied with SDPD, again observing Zimm dynamics [@Litvinov2008].
*Phase separating fluids:* In polymer mixtures, the $\chi$-parameter mapping introduced by Groot and Madden [@Groot1998] has been phenomenally popular because it links to long-established polymer physical chemistry (there are tables of $\chi$-parameters, for instance, and a large literature devoted to calculating $\chi$-parameters [[[*ab initio*]{}]{}]{}). This has helped incorporate chemical specificity in DPD from solubility parameters [@Maiti2004b; @Liyana-Arachchi2015]. It is also known that $\chi$-parameters can be composition dependent (PEO in water is the notorious example). This can be accommodated within the MDPD approach. Akkermans [@Akkermans2008] presents a first-principles coarse-graining method that allows one to calculate the excess free energy of mixing and the Flory-Huggins $\chi$-parameter. A related effort is given by Goel [[[*et al.*]{}]{}]{} [@Goel2014a].
DPD has been very successful in identifying mechanisms in phase separation: linear diblock copolymers spontaneously form mesoscopically ordered structures (lamellar, perforated lamellar, hexagonal rods, micelles) [@Groot1998]. DPD is capable of predicting the dynamical pathway towards equilibrium structures, and it is observed that hydrodynamic interactions play an important role in the evolution of the mesophases [@Groot1999]. Domain growth and phase separation of binary immiscible fluids of differing viscosity were studied in [@Novik2000]. New mechanisms, via inertial hydrodynamic bubble collapse, for late-stage coarsening in off-critical vapor-liquid phase separation have been identified [@Warren2001]. The effect of nanospheres on the mechanisms of domain growth in a phase-separating binary mixture has been considered by Laradji and Hore [@Laradji2004].
*Drop dynamics:* A particular case of phase separating fluids is given by liquid-vapour coexistence giving rise to droplets. Surface-confined drops in simple shear were studied in an early work [@Jones1999]. Pendant drops have been studied with MDPD [@Warren2003], while oscillating drops [@Liu2006] and drops on superhydrophobic substrates [@Wang2015] have also been considered.
*Amphiphilic systems:* An early review of computer modeling of surfactant systems is by Shelley and Shelley [@Shelley2000]. A more recent review on the modeling of pure membranes and lipid-water membranes with DPD is given by Guigas [[[*et al.*]{}]{}]{} [@Guigas2011]. Coarsening dynamics of the smectic mesophase of amphiphilic species for a minimal amphiphile model was studied by Jury [[[*et al.*]{}]{}]{} [@Jury1999], and mesophase formation in pure surfactant and solvent by Prinsen [[[*et al.*]{}]{}]{} [@Prinsen2002]. More microscopic detail has been included by Ayton and Voth [@Ayton2002] with a DPD model for CG lipid molecules that self-assemble, a problem also considered by Kranenburg and Venturoli [@Kranenburg2003]. Effort towards more realistic parametrization for lipid bilayers was made by Gao [[[*et al.*]{}]{}]{} [@Gao2007]. Prior to this, Li [[[*et al.*]{}]{}]{} [@Li2004] formulated a conservative force derived from a bond-angle-dependent potential that allowed them to consider different types of micellar structures. Microfluidic synthesis of nanovesicles was considered by Zhang [[[*et al.*]{}]{}]{} [@Zhang2015]. Simulations of micelle-forming systems have also been reported [@VLN13; @JSJ+16].
*Oil industry:* DPD simulations have also addressed problems in the oil industry, from oil-water-surfactant dynamics [@Rekvig2004], and water-benzene-caprolactam systems [@Shi2015], to aggregate behavior of asphaltenes in heavy crude oil [@Zhang2010], or the orientation of asphaltene molecules at the oil-water interface [@Ruiz-Morales2015].
*Biological membranes:* A review of mesoscopic modeling of biological membranes was given by Venturoli [[[*et al.*]{}]{}]{} [@Venturoli2006]. Groot and Rabone [@Groot2001] presented one of the first applications of DPD to the modeling of biological membranes and their disruption due to nonionic surfactants. Sevink and Fraaije [@Sevink2014] devised a coarse-graining of a membrane into a DPD model in which the solvent was treated implicitly. Amphiphilic polymer-coated nanoparticles for assisted drug delivery through cell membranes have been studied recently [@Zhang2015a; @Zhang2015b]. The diffusion of membrane proteins has been considered by Guigas and Weiss [@Guigas2015].
*Biomolecular modeling:* The CG modeling of complex biomolecules with a focus on static properties has been addressed in the excellent review by Noid [@Noid2013]. Pivkin [[[*et al.*]{}]{}]{} [@Peter2015a] have modeled proteins with DPD force fields, which compete with the Martini force field [@Marrink2013].
*Inorganic materials:* DPD has also been used for the CG modeling of solid inorganic materials. A coarse-grained representation of graphene turns out to be essential for the study of large-scale resonator technology [@Kauzlaric2011; @Kauzlaric2011jcp].
Conclusions {#Sec:conclusions}
===========
The DPD model is a tool for simulating the mesoscale. The model has evolved since its initial formulation towards enriched models that, while retaining the simplicity of the original, are now linked strongly to either the microscopic scale or the macroscopic continuum scale. In many respects, the original DPD model of [Fig. \[Fig.Dashpot\]]{} is a toy model, and one can do much better by using these refined models. In this Perspective, we wish to convey the message that DPD has a dual role in modeling the mesoscale. It has been used as a way to simulate, on one hand, coarse-grained (CG) versions of complex molecular *objects* and, on the other hand, *fluctuating fluids*. While the first type of application, involving atoms bonded by their interactions, rests on the solid ground of the theory of coarse-graining, there is no such *microscopic* basis for DPD as a fluid solver. The best we can do today is to descend from the continuum theory and formulate DPD as a Lagrangian discretization of fluctuating hydrodynamics, leading to the SDPD model.
Therefore, as DPD simulators, we are faced with three alternative strategies:
*\#1 Bottom-up MZ-DPD:* When dealing with molecular objects made of bonded atoms, we may formulate an appropriate CG mapping and construct the DPD equations of motion with momentum-conserving forces [@Hijon2010]. These equations contain the potential of mean force generating the conservative forces and a position-dependent friction coefficient, with explicit microscopic formulae: the potential of mean force is given by the configuration-dependent free energy function, and the position-dependent friction coefficient tensor is given by Green-Kubo expressions. Both quantities are given in terms of expectations *conditional* on the CG variables and are, therefore, many-body functions. These are not, in general, directly computable due to the curse of dimensionality. One needs to formulate simple and approximate models (usually pair-wise with, perhaps, bond-angle and torsion effects) in order to represent the complex functional dependence of these quantities. Together with the initial selection of the CG mapping, finding suitable functional forms is the most delicate part of the problem. Once these simple functional models are selected, constrained MD simulations [@Akkermans2000; @Hijon2010], or optimization methods [@Noid2013; @Brini2013; @Lopez2014; @Dequidt2015], may be used to obtain the CG potential.
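As one concrete example of the optimization route, iterative Boltzmann inversion (our choice of illustration, one of the structure-matching methods reviewed in [@Noid2013]) refines a tabulated pair potential until the CG model reproduces a target radial distribution function $g(r)$. A minimal array-level sketch of the update rule is given below; the surrounding CG simulation loop that would produce `g_model` between updates is omitted.

```python
import numpy as np

kBT = 1.0  # energy unit

def boltzmann_inversion(g_target):
    """Initial guess for the CG pair potential: U0(r) = -kBT ln g_target(r)."""
    return -kBT * np.log(np.clip(g_target, 1e-8, None))

def ibi_update(U, g_model, g_target):
    """One iterative Boltzmann inversion step:
    U_{k+1}(r) = U_k(r) + kBT ln( g_k(r) / g_target(r) )."""
    ratio = np.clip(g_model, 1e-8, None) / np.clip(g_target, 1e-8, None)
    return U + kBT * np.log(ratio)

# Toy tabulated target g(r); in a real workflow the CG simulation is re-run
# with U_k between updates to generate g_model.
r = np.linspace(0.1, 3.0, 30)
g_target = 1.0 + 0.3 * np.exp(-(r - 1.0) ** 2 / 0.1)
U0 = boltzmann_inversion(g_target)
U1 = ibi_update(U0, g_target, g_target)   # matched g(r): U is a fixed point
```

The fixed-point property (zero correction when the model already matches the target) is what makes the scheme a structure-matching iteration; note that it constrains only the pair structure, not the dynamics, which is why the friction must be obtained separately.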
The existence of a framework to derive dynamic CG models from the bottom up is a highly rewarding intellectual experience with high practical value, because 1) it provides the *structure* of the dynamic equations, and 2) it signals the crucial points where approximations are required. The MZ-DPD approach is, in our view, an important breakthrough in the field, as it connects the well-established world of *static coarse-graining* with the DPD world [@Noid2013]. In this way, it provides a framework for accurately addressing the CG *dynamics*. However, the usefulness of following the program by the book is not always obvious, due to the large effort required to obtain these objects from MD. In that case, one would turn to the next strategy.
*\#2 Parametrization of DPD:* We may insist on a particularly simple form of linear repulsive forces and simple friction coefficients (like the ones in the original/cartoon DPD model) and fit the parameters to whatever property of the system one wants to correctly describe (for example, the compressibility). Nowadays, we advise caution with this simple approach because, usually, many other properties of the system go wrong. The simple DPD linear forces are not flexible enough in many situations. However, from what we have already learned from microscopically informed MZ-DPD in the previous \#1 strategy, we may give ourselves more freedom in selecting the functional forms (as in MDPD) for conservative and friction forces and have more free parameters to play with. Once it is realized that the potential between beads or blobs in DPD is, in fact, the potential of mean force, one can use semi-bottom-up approaches in which the potential of mean force is obtained from first principles, while the DPD friction forces are fitted to obtain the correct time scales [@Lyubartsev2002b; @Guerrault2004; @Lahmar2007]. Although this strategy is less rigorous, it may be more practical in some cases.
The \#1 bottom-up MZ-DPD strategy above has not yet been successful when the interactions of atoms or molecules in the system are *unbonded*, allowing two molecules that are initially close together to diffuse away from each other. These are the kind of interactions present in a fluid system. The main difficulty seems to lie in the *Lagrangian* nature of a fluid particle, which makes the CG mapping far from obvious. Although some attempts have been made to derive DPD for fluid systems with unbonded interactions, we believe that the problem is not yet solved. However, for these systems one may regard the dissipative particles as truly fluid particles ([[[*i.$\,$e.*]{}]{}]{} small thermodynamic systems that move with the flow). We are led to the third strategy.
*\#3 Top-down DPD:* Assume that we know that a particular field theory describes the complex fluid of interest at a macroscopic scale (Navier-Stokes for a Newtonian fluid, for example). Then one may discretize the theory on moving Lagrangian points according to the SPH mesh-free methodology. The Lagrangian points may be interpreted as fluid particles. If we perform this discretization within a thermodynamically consistent framework like [<span style="font-variant:small-caps;">generic</span>]{} [@Ottinger2005], thermal fluctuations are automatically determined correctly [@Petsev2016], allowing one to address the mesoscale. This strategy leads to enriched DPD models (SDPD is an example, corresponding to Navier-Stokes hydrodynamics). The functional forms of conservative and friction forces in these DPD models are dictated by the mesh-free discretization, as well as by the input information of the field theory itself. We have the impression that SDPD and its isothermal counterpart MDPD are underappreciated and underused. Although these methods are appropriate for fluid systems, we foresee the use of MDPD many-body potentials of the embedded-atom form also as CG potentials for bonded-atom systems. While CG potentials depending on the *global* density are potentially a trap [@Louis2002; @DAdamo2013], the inclusion of many-body functional forms of the embedded-atom kind depending on the *local* density is a promising route to more transferable CG potentials [@Allen2008], valid for different thermodynamic points. This expectation, though, needs to be substantiated by further research. In particular, liquid state theory for MDPD may need to be further developed [@Merabia2007; @McCarty2014].
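The SPH discretization underlying SDPD starts from a kernel density estimate around each Lagrangian point. A minimal 1D sketch is given below; the Lucy kernel and all parameter values are our illustrative choices, not a prescription from any specific SDPD implementation.

```python
import numpy as np

def w_lucy(r, h):
    """Lucy smoothing kernel, normalised in 1D: integrates to 1 over [-h, h]."""
    q = np.abs(r) / h
    w = (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

# Kernel density estimate rho_i = sum_j m * w(x_i - x_j) on uniformly spaced
# fluid particles of mass m; in the bulk, rho approaches m / spacing = 1.0.
x = np.arange(0.0, 10.0, 0.1)
m, h = 0.1, 0.4
rho = m * w_lucy(x[:, None] - x[None, :], h).sum(axis=1)
```

Everything else in the SDPD construction (pressure forces from an equation of state, viscous forces, and the fluctuation amplitudes) is built from this density estimate and the kernel gradient, which is why the choice of kernel and smoothing length controls the accuracy of the solver; the density deficit visible at the edges of the array is the 1D analogue of the boundary-condition difficulties mentioned below.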
This Perspective on DPD also points to several open methodological questions.
We have already mentioned the open problem of deriving from microscopic principles the dynamics of Lagrangian fluid particles made of unbonded atoms. Once this problem is solved, we will need to face the next problem of deriving from first principles the coupling of CG descriptions of bonded and unbonded atoms (a protein in a membrane surrounded by a solvent, for example). A derivation from bottom-up of this kind of coupling in a discrete *Eulerian* setting has been given recently in [@EspanolDonev2015].
For the simulation of fluids, standard CFD methods equipped with thermal fluctuations are readily catching up with the mesoscale [@Naji2009; @Uma2011; @Shang2012; @Donev2010; @Oliver2013; @Donev2014; @Donev2014a; @Plunkett2014; @DeCorato2016]. Methods for coupling solvents and suspended structures are being devised [@BalboaUsabiaga2014; @EspanolDonev2015], and therefore one may well ask what the advantage is of a Lagrangian solver based on the relatively inaccurate SPH discretization over these high-quality CFD methods. Note that CFD methods allow for the rigorous treatment of limits (incompressibility, inertia-less, [[[*etc*]{}]{}]{}) that may imply large computational savings, and which are difficult to consider in SPH-based methods. We believe (see Meakin and Xu [@Meakin2009] for a defense of particle methods) that fluid particle models may still compete in situations where biomolecules and other complex molecular structures move in solvent environments, because one does not need to change paradigm: only particles are used, for both solvent and beads, with a corresponding simplicity in the codes. Nevertheless, a fair comparison between Eulerian and Lagrangian methodologies is still missing.
As SDPD is just SPH plus thermal fluctuations, it inherits the shortcomings of SPH itself. SPH still faces challenges in both foundations (boundary conditions) and computational efficiency [@Violeau2016; @Zhi-bin2016]. In this respect, a Voronoi fluid particle model [@Serrano2001], understood as a Lagrangian finite-volume solver, may be an interesting possibility in terms of both computational efficiency and simplicity of implementation of boundary conditions. Serrano compared SDPD and a particular 2D implementation of Voronoi fluid particles [@Serrano2006a]. In terms of computational efficiency, both methods are comparable, because the extra cost of computing the tessellation is compensated by the small number of neighbours required, six on average, whereas in SDPD one needs 20-30 neighbours.
Another interesting area of research is that of multi-scale modeling. In CFD, one way to reduce the computational burden is to increase the resolution of the mesh only in those places where strong flow variations occur, or interesting molecular physics requiring small scale resolution is taking place. An early attempt within DPD was given by Backer [[[*et al.*]{}]{}]{} [@Backer2005]. We envisage that methods for multi-resolution SDPD will be increasingly used in the future [@Kulkarni2013; @Lei2015; @Tang2015; @Petsev2016]. Multi-resolution is a problem of active research also in the SPH community [@Violeau2016]. Eventually, one would like to hand-shake the particle method of SDPD with MD as the resolution is decreased [@Petsev2015]. Note, however, that as the fluid particles become small (say “four atoms per particle”) it is expected that the Markovian property breaks down and one needs to account for viscoelasticity [@Zwanzig1975; @Voulgarakis2009b], either with additional internal variables [@tenBosch1999; @Ellero2003; @Vazquez-Quesada2009pre], or with “fictitious particles” [@Davtyan2015].
Finally, a very interesting research avenue is given by the thermodynamically consistent ([[[*i.$\,$e.*]{}]{}]{}, able to deal with non-isothermal situations) Mori-Zwanzig EDPD introduced theoretically by Español [[[*et al.*]{}]{}]{} [@Espanol2016]. Up to now, CG representations of complex molecules have only included the location and velocity of the CG beads or blobs (sometimes also their spin [@Li2014]), completely forgetting their internal energy content. Given the fundamental importance of the principle of energy conservation, it seems that in order to have thermodynamically consistent and more transferable potentials, we may need to start looking at these slightly more complex CG representations.
Acknowledgments
===============
We acknowledge A. Donev for useful comments on the manuscript. PE thanks the Ministerio de Economía y Competitividad for support under grant FIS2013-47350-C5-3-R.
MDPD consistency {#app:mdpd}
================
In MDPD the potential takes the form described in the main text where $V(\{{{{\mathbf r}}}\})=\sum_i\psi(d_i)$, and $d_i=\sum_{j\ne i}W(r_{ij})$. From this it is easy to show that the forces remain pairwise, with $${{{\mathbf F}}}_{ij} = -[\psi'(d_i)+\psi'(d_j)]\,W'(r_{ij})\,{{{\mathbf e}}}_{ij}\,.
\label{eq:app:fij}$$ Note that the weight function here is $W'(r)$. However, to our knowledge, there does not exist in the literature a proof of the *converse*, namely that this relationship between the weight functions is a *necessary* condition to ensure the existence of $V(\{{{{\mathbf r}}}\})$. We present here such a proof, following the line of argument in [Ref. [@Warren2013]]{}.
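Before turning to the converse, the forward statement can be illustrated numerically: for the illustrative choices $W(r)=(1-r)^2$ (support $r<1$) and $\psi(d)=d^2$ (ours, not a recommended MDPD parametrization), the pairwise forces above agree with $-\partial V/\partial x_k$ for a small 1D configuration.

```python
import numpy as np

def W(r):     # illustrative density weight with support r < 1
    return np.where(r < 1.0, (1.0 - r) ** 2, 0.0)

def Wp(r):    # W'(r)
    return np.where(r < 1.0, -2.0 * (1.0 - r), 0.0)

def psi(d):   # illustrative self-energy psi(d) = d^2
    return d * d

def psip(d):  # psi'(d)
    return 2.0 * d

def potential(x):
    """V = sum_i psi(d_i) with d_i = sum_{j != i} W(r_ij), 1D positions x."""
    r = np.abs(x[:, None] - x[None, :])
    Wm = W(r)
    np.fill_diagonal(Wm, 0.0)
    return psi(Wm.sum(axis=1)).sum()

def pairwise_forces(x):
    """Total force on each particle, F_i = sum_j F_ij, with
    F_ij = -[psi'(d_i) + psi'(d_j)] W'(r_ij) e_ij."""
    dx = x[:, None] - x[None, :]
    r = np.abs(dx)
    Wm = W(r)
    np.fill_diagonal(Wm, 0.0)
    d = Wm.sum(axis=1)
    Wpr = Wp(r)
    np.fill_diagonal(Wpr, 0.0)
    Fij = -(psip(d)[:, None] + psip(d)[None, :]) * Wpr * np.sign(dx)
    return Fij.sum(axis=1)

x = np.array([0.0, 0.4, 0.9, 1.6])
F = pairwise_forces(x)
h = 1e-6
Fnum = np.array([-(potential(x + h * np.eye(4)[k]) - potential(x - h * np.eye(4)[k])) / (2 * h)
                 for k in range(4)])
assert np.allclose(F, Fnum, atol=1e-5)   # pairwise force equals -grad V
```

Note that the check works because the weight function in the force is exactly $W'(r)$; the proof below shows that this matching is not merely sufficient but necessary.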
We start with a generalised MDPD pairwise force law, with an (as yet) arbitrary weight function ${\omega^c}(r)$, $${{{\mathbf F}}}_{ij} = A(d_i,d_j)\,{\omega^c}(r_{ij})\,{\hat{{{\mathbf r}}}}_{ij}\,.
\label{eq:app:gen}$$ We assume the amplitude function $A(d_i,d_j)$ is symmetric since otherwise ${{{\mathbf F}}}_{ij}\ne-{{{\mathbf F}}}_{ji}$. Let us denote partial derivatives with respect to the first and second density arguments by $A_{[1,0]}$ and $A_{[0,1]}$. The symmetry of $A(d_i,d_j)$ then implies $A_{[1,0]}(d_i,d_j)=A_{[0,1]}(d_j,d_i)$.
A generic radial force law can always be integrated, so we cannot deduce anything useful just by considering pairs of particles. Instead, following [Ref. [@Warren2013]]{}, let us consider three isolated, collinear particles, at positions $x_i$ ($i=1\dots3$) such that $x_1\le x_2\le x_3$. For this configuration the densities are $d_1=W(x_{12})+W(x_{13})$, $d_2=W(x_{12})+W(x_{23})$, and $d_3=W(x_{13})+W(x_{23})$. The pairwise forces are $F_{12}=A(d_1,d_2)\,{\omega^c}(x_{12})$, $F_{23}=A(d_2,d_3)\,{\omega^c}(x_{23})$, and $F_{13}=A(d_1,d_3)\,{\omega^c}(x_{13})$. Finally, the summed forces on the particles are $F_1=F_{12}+F_{13}$, $F_2=-F_{12}+F_{23}$, and $F_3=-F_{13}-F_{23}$.
The existence of a potential implies integrability constraints like $\partial F_1/\partial x_2-\partial F_2/\partial x_1=0$. Imposing these gives rise to an expression which can be simplified (by consideration of special cases) to a set of requirements for which the representative case is $$\begin{split}
&{\omega^c}(x_{12})\, W'(x_{23})\, A_{[1,0]}(d_1+d_3,d_1)\\
&\qquad -{\omega^c}(x_{23})\, W'(x_{12})\, A_{[1,0]}(d_1+d_3,d_3)=0.
\end{split}
\label{eq:app:mix}$$ The symmetry relation between $A_{[0,1]}$ and $A_{[1,0]}$ has been used. If we are allowed to cancel the $A_{[1,0]}$ functions we are home and dry, since this implies ${\omega^c}(x)\,W'(y) = {\omega^c}(y)\,W'(x)$ (for arbitrary arguments $x$ and $y$), and this can only be true if ${\omega^c}(x)\propto W'(x)$. However, the $A_{[1,0]}$ functions only cancel if $A_{[1,0]}(x+y,x)=A_{[1,0]}(x+y,y)$ (for arbitrary arguments $x$, $y$). A little thought shows that a sufficient condition for this to be true is that $A(d_i,d_j)=f(d_i)+f(d_j)$. This is precisely the form the force-law takes in [Eq. ]{}. The conclusion is that in this case ${\omega^c}(x)\propto W'(x)$ is a *necessary* condition for the existence of the many-body potential $V(\{{{{\mathbf r}}}\})$. It is also sufficient, since we can absorb the proportionality constant into the definitions of $d_i$ and $\psi(d)$, and then explicitly $V(\{{{{\mathbf r}}}\})=\sum_i\psi(d_i)$. This proves the claimed result above.
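The integrability test can also be carried out numerically on the three collinear particles. The sketch below (with the illustrative choices $W(r)=(1-r)^2$ and $\psi(d)=d^2$, ours) confirms that the cross-derivative $\partial F_1/\partial x_2-\partial F_2/\partial x_1$ vanishes when ${\omega^c}=W'$, but not when, say, ${\omega^c}=W$.

```python
import numpy as np

def W(r):    return (1.0 - r) ** 2 if r < 1.0 else 0.0    # density weight
def Wp(r):   return -2.0 * (1.0 - r) if r < 1.0 else 0.0  # W'(r)
def psip(d): return 2.0 * d                               # psi(d) = d^2

def forces(x, wc):
    """Summed forces on three collinear particles (x1 <= x2 <= x3) for the
    generalised pairwise law F_ij = [psi'(d_i) + psi'(d_j)] wc(x_ij)."""
    x1, x2, x3 = x
    x12, x23, x13 = x2 - x1, x3 - x2, x3 - x1
    d1, d2, d3 = W(x12) + W(x13), W(x12) + W(x23), W(x13) + W(x23)
    F12 = (psip(d1) + psip(d2)) * wc(x12)
    F23 = (psip(d2) + psip(d3)) * wc(x23)
    F13 = (psip(d1) + psip(d3)) * wc(x13)
    return np.array([F12 + F13, -F12 + F23, -F13 - F23])

def curl12(x, wc, h=1e-6):
    """Integrability test dF1/dx2 - dF2/dx1; it must vanish if V exists."""
    dF1 = (forces(x + [0, h, 0], wc)[0] - forces(x - [0, h, 0], wc)[0]) / (2 * h)
    dF2 = (forces(x + [h, 0, 0], wc)[1] - forces(x - [h, 0, 0], wc)[1]) / (2 * h)
    return dF1 - dF2

x = np.array([0.0, 0.4, 0.9])
consistent = curl12(x, Wp)    # wc = W': essentially zero
inconsistent = curl12(x, W)   # wc = W : clearly nonzero, no potential exists
```

A single configuration with a nonzero curl already rules out the existence of a potential for the mismatched weight, which is the numerical counterpart of the necessity argument above.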
For another example, we might be tempted to consider $A(d_i,d_j)=f(d_i+d_j)$, but retaining the weight function ${\omega^c}(x)\propto W'(x)$. For this choice $A_{[1,0]}(x,y)=f'(x+y)$ and [Eq. ]{} reduces to $f'(2x+y)=f'(x+2y)$. This is true for arbitrary $x$ and $y$ if and only if $f(x)$ is linear, and therefore the force law is [[[*de facto*]{}]{}]{} of the form shown in [Eq. ]{}. Thus, a non-linear function $f(x)$ would be a bad choice. For a further case study, see [Ref. [@Warren2013]]{}.
If we fail to satisfy [Eq. ]{} then the potential *does not exist*. If the potential does not exist, we lose the underpinning theory that the stationary probability distribution is given by [Eq. ]{}. Without this foundation we are in uncharted waters, and there is no link to established statistical mechanics and thermodynamics.
In our opinion, in MDPD the burden rests on the user to display the $V(\{{{{\mathbf r}}}\})$ which gives rise to the chosen force law. The absence of an explicitly displayed potential leads only to unwarranted complications.
, , , ****, ().
, , , ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, , , ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, , , ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, ****, ().
, , , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, ****, ().
, ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, , , , ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , , ****, ().
[[[*et al.*]{}]{}]{}, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
|
---
abstract: 'We study a variation of the classical multi-armed bandits problem. In this problem, the learner has to make a sequence of decisions, picking from a fixed set of choices. In each round, she receives as feedback only the loss incurred from the chosen action. Conventionally, this problem has been studied when the losses of the actions are drawn from an unknown distribution or when they are adversarial. In this paper, we study this problem when the losses of the actions also satisfy certain structural properties and, in particular, exhibit a trend structure. In this case, we show that using *trend detection*, we can achieve regret of order $\tilde{O} (N \sqrt{TK})$ with respect to a switching strategy for the version of the problem where a single action is chosen in each round and $\tilde{O} (Nm \sqrt{TK})$ when $m$ actions are chosen each round. This guarantee is a significant improvement over the conventional benchmark. Our approach can, as a framework, be applied in combination with various well-known bandit algorithms, like Exp3. For both versions of the problem, we give regret guarantees also for the *anytime* setting, i.e. when the length of the choice sequence is not known in advance. Finally, we pinpoint the advantages of our method by comparing it to some other well-known strategies.'
author:
-
-
bibliography:
- 'references.bib'
title: Trend Detection based Regret Minimization for Bandit Problems
---
Multi-armed bandits, Switching regret, Trend detection
Introduction
============
Consider the following problem: Suppose you own an apparel store and have purchased a fixed number of ad slots on some website, say Facebook. Every time someone visits the website, you can choose a set of ad impressions to display. Let’s assume that an ad here consists of an image of a clothing item and that each image is associated with a click-through-rate unknown to you. Your goal is to choose images to display such that the cumulative click-through-rate is maximized. How would you choose these images? This problem comes under the domain of reinforcement learning and, more specifically, multi-armed bandit learning. Contrary to supervised learning (and most current research in statistical pattern recognition and artificial neural networks), multi-armed bandit learning is characterized by the *interactive nature* of the learning process, in which an agent interacts with an uncertain environment. Such a learning algorithm makes its next move based on the history of its past decisions and their outcomes.
More specifically, a multi-armed bandit problem is a sequential learning problem where the learner chooses an action from a set of actions in every round. Associated with each action is a loss unknown to the learner[^1]. The goal of the learner is to minimize the loss incurred. Performance of the learning algorithm is measured by regret, compared to a certain benchmark strategy. Conventionally, in multi-armed bandit problems, the benchmark strategy is to always choose the single best action in hindsight, i.e. an action with minimum cumulative loss. This problem has been thoroughly studied in a variety of settings [@Auer2002; @Auer-UCB; @Bubeck2010; @Thompson]. A distinguishing feature of such problems is the inherent exploration-exploitation trade-off. When the losses are generated from a fixed but unknown distribution, there exist algorithms [@Auer-UCB; @Thompson; @Robbins] that can achieve a regret guarantee of $O(\log T)$. On the other hand, when the losses of the actions are generated under no statistical assumption, or alternately when the losses are generated by an adversary, the best possible regret guarantee that can be achieved is $O(\sqrt{T})$ [@Bubeck2010]. Recently, interest has been developing [@Seldin; @Hazan] in the question of achieving non-trivial regret guarantees when the loss model is semi-structured. Intuitively, more structure in the losses should enable more exploitation and hence allow for better regret guarantees. Along the lines of some of the recent work [@Seldin], we also define models exhibiting a certain degree of structure.
Real-world problems often do not exhibit adversarial behaviour, and in many cases the losses of different actions follow a trend structure, i.e. one action is consistently better than the others over a certain interval. For such more specialized models, the standard techniques prove insufficient, since they do not take advantage of these properties. In this paper, we address this deficiency using the paradigm of trend detection. Broadly, we propose a strategy that keeps track of the current trend and restarts the regret minimization algorithm whenever a trend change is detected. This allows us to give regret guarantees with respect to a strategy that chooses the best action in each trend. This is a significantly stronger benchmark than the one conventionally considered. The regret guarantee with respect to this benchmark is also called switching regret.
More importantly, our proposed strategy is not specific to a particular regret minimization algorithm unlike the approaches in some recent works. In this paper, we use Exp3 as the underlying regret minimizing algorithm for its simplicity and almost optimal regret guarantee [@Auer2002]. However, one can use any other algorithm and analyze it in a similar way. Because of this modular structure of the algorithm, we can extend the arguments and proofs for the conventional multi-armed bandits problem to a more general setting where instead of a single action, the learner chooses multiple actions in each round [@Uchiya]. This problem has been studied in stochastic [@Kveton] and adversarial [@Bubeck2012] setting, but to the best of our knowledge, there are no prior works giving a switching regret guarantee for it.
One of the primary motivations for studying these bandit problems comes from the domain of recommender systems. Many web tasks such as ad serving and recommendations in e-commerce systems can be modeled as bandit problems. In these problems, the system only gets feedback for the actions chosen, for example whether the user selects the recommended items or not. Notice that these systems may recommend one or more items in each round. Motivation for using the paradigm of trend detection comes from the general observation that in many cases, the performance of actions follow a trend structure. In the abovementioned case of an apparel store, for example, swimsuits might be the best choice during the hottest weeks of the year, or for certain time periods, it might be best to show an item a famous celebrity was recently seen wearing.
**Summary of Contribution:** For the standard $K$-armed bandit problem, we propose a new algorithm called Exp3.T. This algorithm guarantees switching regret of $\tilde{O} \left( \frac{ N \sqrt{ TK}}{\Delta_{sp}} \right) $, where $N$ is the number of trend changes, which is not known to the learner. $\Delta_{sp}$ indicates the degree of structure in the loss model. This guarantee also holds for the anytime setting, i.e. when the duration of the run, $T$, is not known in advance. We extend the analysis of this problem to the case when, instead of a single action, the learner chooses a basis of a uniform matroid in each round. The underlying regret minimization algorithm used in this case is OSMD [@Bubeck2012]. The resulting algorithm achieves switching regret of $\tilde{O} \left( \frac{ Nm \sqrt{ TK}}{\Delta_{sp}} \right) $. Finally, we provide empirical evidence for this algorithm’s performance in the standard multi-armed bandit setting.
In general, our algorithm is particularly effective, i.e. gives better regret guarantees, when little is known about the loss structure of the actions except that changes in the best action are not too frequent and actions are likely to be well-distinguishable. We argue that our loss models are more general and reasonable compared to the models conventionally studied: in most real-world cases, we would expect to see a mixture of purely stochastic and purely adversarial data. We show that even such a mixture of models allows us to give tight regret guarantees as long as the structural assumptions still hold.
Previous Work
=============
The problem of giving regret guarantees with respect to a switching strategy has been considered previously in several works (albeit in more restricted settings), all of which consider the case when the learner chooses exactly one action in each round. Auer et al. proposed Exp3.S [@Auer2002] along the same lines as Exp3 by choosing an appropriate regularization factor for the forecaster. This enables the algorithm to quickly shift focus onto better performing actions. For the abruptly changing stochastic model, Discounted-UCB [@DUCB] and SW-UCB [@Garivier] have been proposed along the lines of UCB. In the former algorithm, a switching regret bound is achieved by progressively giving less importance to old losses, while in SW-UCB the authors achieve the same by considering a fixed-size sliding window. Both these algorithms achieve a regret bound of $O(\sqrt{MT \log T})$, where $M$ is the number of times the distribution changes.
Our work is closest to the algorithm Exp3.R proposed by Feraud et al. [@Exp3R]. They also follow a paradigm very similar to trend detection, and the high-level ideas used in their paper are similar to ours. However, their algorithm is specific to Exp3 and only covers the version of the bandit problem where one chooses a single action in each round. Further, the algorithm assumes a certain gap in the performance of actions that depends on knowledge of the run time of the algorithm. This makes it inapplicable for a number of real-world scenarios.
The trend detection idea used in our algorithm is similar to the change detection problem studied in statistical analysis. Similar ideas have also been used for the detection of concept drift in online classification [@cd1; @cd2]. Common applications include fraud detection, weather prediction and advertising. In this context, the statistical properties of the target variable change over time, and the system tries to detect this change and learn the new parameters.
Problem Setting
===============
We consider a multi-armed bandit problem with losses for $K$ distinct actions. Let the set of these $K$ actions be denoted by $[K]$. The losses of these $K$ actions can be represented by a sequence of loss vectors $\{ \textbf{x}_t \}$, where $\textbf{x}_t = ( x_1, x_2, \cdots, x_K )_t$. The loss sequence is divided into $N$ *trends*. A trend is defined as a sequence of rounds where a set $S$ of $m$ actions is *significantly* better than the others for the duration of this trend. We say that the trend has changed when this set of actions changes. Within each trend the losses of actions in set $S$ are “separated" from all others by a certain gap. In particular, we consider a finer characterization of loss models than just stochastic or adversarial within a trend. Similar to the loss model introduced by Seldin et al. [@Seldin], we focus on models exhibiting a “gap" in losses. Although this model is weaker than the adversarial model, it still covers a large class of possible loss models. We express the gap in our loss models by an abstract term $\Delta_{sp}$, the separation parameter. Although the exact definition of this parameter changes depending on the actual model, in each case it conveys the same idea: a larger value of this parameter implies a larger gap between the losses of actions in set $S$ and every other action.
1. **Dynamic Stochastic Regime (DSR)**: For the stochastic loss model, the loss of each action $a$ at round $t$ is drawn from an unknown distribution with mean $\mu_t^a$. Let $a^*$ and $a$ be any actions in sets $S$ and $[K] - S$ respectively. Then for all rounds $t$ in trend $\tau$, $ \mu_t^{a^*} < \mu_t^{a} $ and the separation parameter is defined as: $$\Delta_{sp} (\tau) = \min\limits_{t \in \tau} \{ \mu_t^a - \mu_t^{a^*} \}.$$ The loss model is stochastic with separation parameter $\Delta_{sp}$, when $\Delta_{sp} = \min\limits_{\tau} \Delta_{sp}(\tau) > 0$. The identity of best action $a^*$ changes $N$ times.
2. **Adversarial Regime with Gap (ARG)**: We use a modified version of the loss model introduced in [@Seldin]. Within each trend $\tau$, there exists a set $S$ of $m$ actions which is the best set for any interval of (sufficiently large) constant size, $C$. More precisely, let $\lambda_z(a) = \sum\limits_{t\in z} \ell_{a,t}$ be the cumulative loss of an action $a$ in interval $z$ consisting of $C$ rounds. Then for any action $a^* \in S$ and $a \in [K] - S$ we define the separation parameter for trend $\tau$ as: $$\Delta_{sp}(\tau) = \min\limits_{z \in \tau} \left\lbrace \frac{\min\limits_{a' \neq a^*} \lambda_z(a') - \lambda_z(a^*)}{|z|} \right\rbrace$$
It is the smallest average gap between any sub-optimal action and any action in set $S$ for any interval $z$ of size $C$. As in the above model, we say that a model satisfies ARG property with separation parameter $\Delta_{sp}$ when $\Delta_{sp} = \min\limits_{\tau} \Delta_{sp}(\tau) > 0$.
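To make the two definitions concrete, the separation parameters can be computed directly from a matrix of (mean) losses. The sketch below is ours for illustration: the function names, the row-per-round list layout, and the use of disjoint intervals of size $C$ for the ARG case are assumptions, not part of the paper.

```python
def dsr_separation(mu, best):
    """Separation parameter for the Dynamic Stochastic Regime within one trend.

    mu   : list of per-round mean-loss vectors, mu[t][a]  (T rows, K entries).
    best : index of the best action a* in this trend.
    Returns min over rounds t and actions a != a* of mu[t][a] - mu[t][best].
    """
    return min(row[a] - row[best]
               for row in mu for a in range(len(row)) if a != best)

def arg_separation(losses, best, C):
    """Separation parameter for the Adversarial Regime with Gap.

    losses : list of per-round loss vectors (number of rounds a multiple of C);
    best   : index of the best action a*;  C : interval size.
    Returns the smallest average gap over disjoint intervals of size C.
    """
    K, gaps = len(losses[0]), []
    for start in range(0, len(losses), C):
        block = losses[start:start + C]
        lam = [sum(row[a] for row in block) for a in range(K)]  # cumulative loss
        gaps.append((min(lam[a] for a in range(K) if a != best) - lam[best]) / C)
    return min(gaps)
```

A model satisfies the respective property when the returned value, minimized over trends, is strictly positive.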
Notice that in the first trend, spanning from the first round up to some round $n$, each action satisfies the gap conditions defined above for all the constituent rounds (<span style="font-variant:small-caps;">DSR</span>) or intervals of size $C$ (ARG), for the respective setting. We define $n$ to be the last such round, i.e. these conditions are violated at round $n+1$, indicating the start of a new trend.
We study two variants of this problem. In the first variant, the algorithm chooses exactly one action every round while in the other, the algorithm can choose any set of $m$ actions. For both the variants, the algorithm observes losses only of the actions chosen (or the single action chosen for the former variant). We assume the presence of an oblivious adversary which decides on the exact loss sequences before the start of the game. The sequence is of course not known to the algorithm. We also make the standard assumption that losses are bounded in the $[0, 1]$ interval.
For the problem setting as described, our goal is to design an algorithm $\mathcal{A}$ to minimize the cumulative loss incurred in the $T$ rounds that the game is played. For the case when the algorithm chooses exactly one action every round, its performance is measured with respect to a strategy that chooses the best action in each trend. Specifically, let $I_t$ denote the action chosen by the algorithm in round $t$ and let $X_{I_t}^t$ denote the corresponding loss incurred by this action. Then the cumulative loss incurred by the algorithm is: $$L_{\mathcal{A}} = \sum\limits_{t=1}^T X_{I_t}^t .$$ Let $I^*_{[n]}$ be the best action in trend $n$, then the loss incurred by the switching strategy described above is: $$L^* = \sum\limits_{n=1}^N \sum\limits_{t=T_n}^{T_{n+1} -1} X_{I^*_{[n]}}^t ,$$ where trend $n$ occurs in the interval $[T_n, T_{n+1} -1]$. We define regret incurred by algorithm $\mathcal{A}$ as follows: $$R_T^* = L_{\mathcal{A}} - L^*.$$
Exactly analogous definitions apply to the case when the algorithm chooses multiple actions in each round.
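Given full (offline) knowledge of the loss sequence, the benchmark $L^*$ and the regret $R_T^*$ follow directly from these definitions. The helper below is an illustrative transcription; the function name and the convention that trends are given by their starting rounds are our assumptions.

```python
def switching_regret(losses, chosen, trend_starts):
    """Regret R_T^* against the best-action-per-trend switching strategy.

    losses       : list of per-round loss vectors (T rows, K entries);
    chosen       : the action I_t picked by the algorithm in each round t;
    trend_starts : sorted round indices T_1 = 0 < T_2 < ... of trend starts.
    """
    T, K = len(losses), len(losses[0])
    L_alg = sum(losses[t][chosen[t]] for t in range(T))   # algorithm's loss
    bounds = list(trend_starts) + [T]
    L_star = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):           # one term per trend
        L_star += min(sum(losses[t][a] for t in range(lo, hi))
                      for a in range(K))                  # best fixed action
    return L_alg - L_star
```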
For the algorithm considered in this paper, we assume that the loss model, either stochastic or adversarial regime with gap, has its separation parameter lower bounded by $4 \Delta$, where $\Delta$ is a constant known to us, i.e. $\Delta_{sp} \geq 4 \Delta$.
The Algorithm
=============
The algorithm Exp3.T is composed of two primary ideas: the Exp3 algorithm and a trend detection routine. Exp3 gives an almost optimal regret bound with respect to the single best action in hindsight when the loss model is adversarial. However, when the losses exhibit certain structure or when regret with respect to a stronger benchmark is desired, Exp3 proves to be insufficient. In this algorithm, we overcome this problem by identifying *trends* in losses and resetting the Exp3 algorithm whenever a change in trend is detected. One advantage of using Exp3 when losses exhibit trend structure is that Exp3 is robust to changes in the losses of actions as long as the best action remains the same. We exploit this property in our algorithm so that it is applicable to a large class of loss models. In the analysis we use the following regret bound given by [@BubeckBook]:
\[base2\] For any non-increasing sequence $ \{ \eta \}_{t \in \mathbb{N}}$, the regret of Exp3 algorithm with $K$ actions satisfies $$R_T \leq \frac{K}{2} \sum\limits_{t=1}^T \eta_t + \frac{\ln K}{\eta_T} .$$
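For reference, the underlying Exp3 forecaster can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the fixed learning rate $\eta = \sqrt{\ln K / (TK)}$, the `loss_fn` callback, and the max-shift before exponentiating (for numerical stability) are our choices.

```python
import math
import random

def exp3(K, T, loss_fn, rng=random.Random(0)):
    """Minimal Exp3: exponential weights over importance-weighted loss
    estimates. loss_fn(t, a) returns the loss in [0, 1] of action a at
    round t; only the chosen action's loss is observed, as in a bandit."""
    eta = math.sqrt(math.log(K) / (T * K))
    L_hat = [0.0] * K          # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(T):
        m = min(L_hat)         # shift before exponentiating, for stability
        w = [math.exp(-eta * (l - m)) for l in L_hat]
        s = sum(w)
        p = [wi / s for wi in w]
        a = rng.choices(range(K), weights=p)[0]    # sample an action
        x = loss_fn(t, a)                          # observe only this loss
        total += x
        L_hat[a] += x / p[a]   # unbiased estimate of the unobserved vector
    return total
```

Restarting this forecaster whenever the trend detection module fires is exactly the reset step of Exp3.T.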
Algorithm \[Exp3.T\] shows the skeleton of the procedure to achieve the desired switching regret bound. At a high level, the algorithm divides the total run into runs on smaller intervals. Within each interval the algorithm runs Exp3 (parameter $\eta$) with loss monitoring (LM) plays randomly interspersed among all rounds. The length of this interval is controlled by the parameter $\gamma$. These loss monitoring plays choose different actions for a fixed number of rounds without regard to regret. The loss values collected from this process are used to give an estimate of the mean loss of each action in a given interval. The number of such plays required to give a good estimate of the loss depends on the actual model under consideration and is captured by the parameter $t^*$. Based on this estimate, the trend detection module outputs with probability at least $1 - \delta$ whether the best action has changed or not, or alternatively whether the trend has changed or not.
The $Make\_Schedule(\cdot)$ procedure randomly assigns Exp3 plays and fixed-action plays to monitor loss (exactly $t^*$ many per action) to the rounds at the start of an interval and returns the randomly generated schedule. The random generation of the schedule protects the algorithm from making biased estimates of the actual losses.
Set interval length $ |I| = \frac{K t^*}{\gamma}$
Schedule $\leftarrow$ Make\_Schedule($I$)
For each round $t$ of the interval: call Exp3\_play() if Schedule($t$) is an Exp3 round, otherwise call LM\_play(Schedule($t$))
If the trend detection module reports a change: restart Exp3
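The schedule construction described above can be sketched as follows. This is an illustrative version of $Make\_Schedule(\cdot)$; the `'exp3'` sentinel value and the argument names are our assumptions.

```python
import random

def make_schedule(interval_len, K, t_star, rng=random.Random(0)):
    """Randomly place t_star loss-monitoring plays per action among the
    Exp3 plays of one interval. Returns a list whose t-th entry is either
    the string 'exp3' or the index of the action fixed for that LM round."""
    lm_plays = [a for a in range(K) for _ in range(t_star)]
    schedule = lm_plays + ['exp3'] * (interval_len - len(lm_plays))
    rng.shuffle(schedule)   # random positions prevent biased loss estimates
    return schedule
```

With $|I| = K t^* / \gamma$, the fraction of rounds spent on loss monitoring in each interval is exactly $\gamma$.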
### Trend Detection {#trend-detection .unnumbered}
In any interval, the loss monitoring component of Algorithm \[Exp3.T\] chooses each action a sufficient number of times and these choices are randomly distributed over the interval. The samples obtained from these plays are used to give a bound on the deviation of the empirical mean of losses from the true mean. Particularly, we use the following lemma by Hoeffding [@Hoeffding] for sampling without replacement from a finite population.
\[base\] Let $\mathcal{X} = (x_1, x_2, \cdots x_N)$ be a finite population of $N$ real points and let $X_1, X_2 \cdots X_n$ denote a random sample drawn without replacement from $\mathcal{X}$. Then, for all $\epsilon > 0$, $$\mathbb{P} \left( \frac{1}{n} \sum\limits_{i=1}^n X_i - \mu \geq \epsilon \right) \leq \exp (-2n \epsilon^2)$$ where $\mu = \frac{1}{N} \sum\limits_{i=1}^N x_i$ is the mean of $\mathcal{X}$.
For each interval we maintain information about the empirical mean of the losses of each action, i.e. the mean over loss values actually seen by the algorithm. By Lemma \[base\], all of these estimates are close to the actual mean with probability at least $1 - \delta$, where $\delta$ is a parameter of the algorithm. In case of a change in trend within an interval $I$, these guarantees are naturally void, as the losses do not maintain a uniform pattern. Therefore, a change in trend can be detected by comparing the empirical estimates obtained at the end of the next interval to those obtained prior to the trend change. This idea is represented in Algorithm \[trend\].
Let $p$ be the index of the current interval.
$I_p^* \leftarrow$ action with minimum empirical mean loss, $\hat{\mu}$, in interval $p$.
If $I_p^* = I_{p-1}^*$: return False.
Else if the empirical estimates of intervals $p-1$ and $p$ differ beyond the tolerance of Lemma \[base\]: return True.
Otherwise: return False.
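The detection test itself can be sketched as follows. Since the algorithm listing is only partially preserved here, the exact condition is a reconstruction from the surrounding text, not the authors' verbatim procedure: we flag a change whenever the current best action no longer beats every other action's previous-interval estimate by the $2\Delta$ tolerance that the concentration argument allows.

```python
def trend_changed(mu_prev, mu_curr, delta):
    """Return True when a change in trend is detected (reconstructed logic).

    mu_prev, mu_curr : per-action empirical mean losses estimated from the
    LM samples of the previous and the current interval;
    delta : the known constant with Delta_sp >= 4 * delta."""
    best = min(range(len(mu_curr)), key=mu_curr.__getitem__)
    for a, m in enumerate(mu_prev):
        if a != best and m < mu_prev[best] + 2 * delta:
            return True    # separation violated: flag a change of trend
    return False
```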
Regret Analysis {#analysis}
===============
For ease of notation in the analysis, we define the *detector complexity*, $t^*$, as the number of loss monitoring samples required for each action so that the trend detection procedure works with probability at least $1 - \delta$, provided there is no trend change in the actual interval. In what follows, we give detector complexity bounds for different models and in regret computation use $t^*$ as an abstract parameter.
\[lem1\] The detector complexity in dynamic stochastic regime satisfies $$t^*_{DSR} = \frac{1}{2 \Delta^2} \ln \left( \frac{4K}{\delta} \right).$$
Fix an action $a$ and an interval $I$. Let the expected loss of action $a$ on interval $I$ be given by the sequence $ \{ \mu_t^a \}_{t \in I} $ and the actual realization of losses be given by $ \{ X_t^a \}_{t \in I} $. First we observe that the expected loss of $a$ over the interval $I$ is given by $$\mu_{a, I} = \frac{\sum_{t \in I} \mu_t^a}{|I|}.$$
Let the set of loss monitoring samples collected by our algorithm for action $a$ be denoted by $\mathcal{Z}_a$. The algorithm uses these samples to calculate the empirical mean of the losses of action $a$. We denote it by $ \hat{\mu}_{\mathcal{Z}_a}$.
*Step 1:* First we show that the empirical mean of losses over the entire interval is close to the expected mean, $\mu_{a, I}$. Let $ \{ X_t^a \}_{t \in I} $ be the sequence of actual loss realizations for arm $a$ in interval $I$. Denote by $\bar{\mu}_{a, I}$ the mean of these actual realizations. Applying Hoeffding’s inequality, $$\begin{aligned}
P ( | \mu_{a, I} - \bar{\mu}_{a, I} | > \Delta ) & \leq 2 \exp(- 2|I| \cdot {\Delta}^2)\\
& \leq 2 \exp(- 2 t^*_{DSR} \cdot {\Delta}^2) = \frac{\delta}{2K}
\end{aligned}$$ i.e. the empirical mean of losses for action $a$ over the interval $I$ is close to the actual mean with probability at least $ 1 - \frac{\delta}{2K}$.
*Step 2:* Now we show that the empirical mean of loss-monitoring samples collected for action $a$ is close to the mean of the actual realizations, $\bar{\mu}_{a, I}$. This follows from Lemma \[base\]: $$P( |\bar{\mu}_{a, I} - \hat{\mu}_{\mathcal{Z}_a} | > \Delta ) \leq 2 \exp( -2 t^*_{DSR} \Delta^2 ) = \frac{\delta}{2K}$$
Therefore, with probability at least $1 - \frac{\delta}{K}$ the mean of loss monitoring samples for any action is within $2 \Delta$ of the actual mean. By applying a union bound over all actions, with probability at least $1 - \delta$ the same guarantee holds over all actions, which in turn implies that the trend detection module can detect whether the best action has changed with the same probability.
\[lem2\] The detector complexity in the adversarial regime with gap satisfies $$t^*_{ARG} \geq \frac{(b - a)^2}{8 \Delta^2} \ln \left( \frac{2K}{\delta} \right)$$ when the losses in the given trend are drawn from interval $[a, b]$.
The proof for this lemma goes along the same lines as for Lemma \[lem1\], except that in this case we do not need Step 1. Further, in this case, we can allow the empirical mean of the collected samples to be within $2 \Delta$ of the actual mean of all losses in the interval instead of just $\Delta$. For this particular loss model, if additional information about the range of losses within a trend is available, then using the generalized version of Hoeffding’s inequality we achieve a tighter detector complexity bound. We note that, unless defined otherwise, our losses are always drawn from the range $[0,1]$.
In the rest of the analysis, instead of $t^*_{DSR}$ or $t^*_{ARG}$ we use the model-oblivious-parameter $t^*$.
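The two detector-complexity bounds are easy to evaluate numerically. The sketch below is a direct transcription of Lemmas \[lem1\] and \[lem2\]; the function names and the rounding up to an integer sample count are ours.

```python
import math

def t_star_dsr(K, delta, gap):
    """Detector complexity in the Dynamic Stochastic Regime (Lemma [lem1]):
    LM samples per action so that detection succeeds w.p. at least 1 - delta.
    gap is the known constant Delta."""
    return math.ceil(math.log(4 * K / delta) / (2 * gap ** 2))

def t_star_arg(K, delta, gap, a=0.0, b=1.0):
    """Detector complexity in the Adversarial Regime with Gap (Lemma [lem2]);
    [a, b] is the range of the losses within the trend."""
    return math.ceil((b - a) ** 2 * math.log(2 * K / delta) / (8 * gap ** 2))
```

Both grow as $\ln(K/\delta)/\Delta^2$; a larger separation parameter makes detection cheaper, and a narrower loss range $[a, b]$ tightens the ARG bound further.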
\[main1\] The expected regret of Exp3.T is $$R_T = O \left( \frac{ N \sqrt{( TK \ln K) \ln \left( TK \ln K \right)} }{\Delta_{sp}} \right).$$
We divide the regret incurred by Exp3.T into three distinct components; the first is the regret incurred just by running and restarting Exp3. To bound this component of the total regret we use the regret bound of Lemma \[base2\]. Let $F(T)$ denote the number of *false trend detections*, i.e. the number of times when there was no change in trend but the detection algorithm still indicated a change. Then the regret incurred due to Exp3 is $$R_{Exp3} \leq \frac{K}{2} \sum\limits_{t=1}^T \eta_t + \frac{(N-1 + F(T)) \ln K}{\eta_T}.$$
As trend detection fails with probability at most $\delta$, the expected number of false detections is at most $$F(T) \leq \delta \left( \frac{T}{|I|} + 1 \right).$$
The second component of the total regret incurred is on account of intervals wasted due to the delay in detection of a trend change. Specifically, if the trend changes in a given interval $I$, the regret guarantee obtained as part of Exp3 is not with respect to the best action before and after the trend change. As we cannot give the required guarantee for this interval, we count this interval as *wasted* and account it towards regret. Secondly, since the trend detection algorithm detects the change with probability at least $ 1 - \delta$, the expected number of trend detection calls required (or alternatively the expected number of intervals) is at most $ \frac{1}{1 - \delta}$. Therefore, the total number of wasted rounds is at most $$R_{wasted} \leq N \left( 1 + \frac{1}{1 - \delta} \right) |I|$$
The third and final component of the regret incurred is due to the *loss monitoring plays* in each interval. No guarantee can be given about the regret incurred in these rounds and hence all such rounds are also accounted towards regret. Since in each interval there are exactly $K t^*$ such plays, the total number of such rounds is at most $$R_{loss\_monitor} \leq K t^* \left( \frac{T}{|I|} + 1 \right) = \gamma T + Kt^*$$
Putting it all together, the total regret is
$$\begin{multlined}
R_T \leq K \sum\limits_{t=1}^T \eta_t + \frac{(N-1 + \frac{\gamma \delta T}{K t^*}) \ln K}{\eta_T} + \\
\shoveleft[1cm] N \left( 1 + \frac{1}{1 - \delta} \right) \frac{K t^*}{\gamma} + \gamma T + Kt^*
\end{multlined}$$
Setting $\eta = \sqrt{\frac{\ln K}{TK}}$, $\gamma = \sqrt{\frac{K t^* \ln K}{T}}$ and $\delta = \sqrt{\frac{K}{T \ln K}}$, regret incurred by Exp3.T is
$$\begin{multlined}
\vspace{2mm}
R_T \leq \sqrt{TK \ln K} + N \sqrt{TK \ln K} + \\
\vspace{2mm}
\shoveleft[2cm] \sqrt{\frac{TK \ln K}{t^*}} + 2N \sqrt{\frac{TK t^*}{\ln K}} + \\
\shoveleft[2cm] 2N \frac{K \sqrt{t^*}}{\ln K} + \sqrt{t^* TK \ln K} + Kt^*
\end{multlined}$$
where $t^* = O \left( \frac{\ln \left( TK \ln K \right)}{\Delta_{sp}^2} \right)$.
Alternatively, $R_T = O \left( \frac{ N \sqrt{( TK \ln K) \ln \left( TK \ln K \right)} }{\Delta_{sp}} \right)$.
### Extension to Anytime Version {#extension-to-anytime-version .unnumbered}
The parameters derived to achieve the desired regret bound in Theorem \[main1\] depend on the knowledge of $T$, the length of the total run of the algorithm. This dependency can be circumvented by using a standard doubling trick. In particular, we can divide the total time into periods of increasing size and run the original algorithm on each period. Since the guarantee of this algorithm rests crucially on the probability of correct trend detection, in our case we need to modify the $\delta$ parameter as well.
For $i = 0, 1, 2, \ldots$:
Let $T_i = 2^i T'$
Set $\gamma_i = \sqrt{\frac{K t^*_i \ln K}{T_i}}$, $\delta_i = \frac{1}{T_i^{3/2}}\sqrt{\frac{K}{ \ln K}}$
Run Exp3.T with parameters $\gamma_i, \delta_i$ in period $T_i$
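The parameter schedule of the doubling trick can be sketched as a generator. This is illustrative only: $t^*_i$ is model-specific, so $\gamma_i$ is omitted here and only $T_i$, $\eta_i$ and $\delta_i$ are produced; the function name and the stopping condition are our assumptions.

```python
import math

def anytime_schedule(T_prime, K, rounds_available):
    """Doubling-trick periods T_i = 2^i * T' with the per-period parameters
    eta_i = sqrt(ln K / (T_i K)) and delta_i = sqrt(K / ln K) / T_i^(3/2),
    yielded until the available rounds are exhausted."""
    i, used = 0, 0
    while used < rounds_available:
        T_i = (2 ** i) * T_prime
        eta_i = math.sqrt(math.log(K) / (T_i * K))
        delta_i = math.sqrt(K / math.log(K)) / T_i ** 1.5
        yield T_i, eta_i, delta_i
        used += T_i
        i += 1
```

Note that both $\eta_i$ and $\delta_i$ shrink with each period, so later periods detect trends with higher confidence, matching the proof of Theorem \[minor1\].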
\[minor1\] The expected regret of Anytime Exp3.T with $\eta_{i} = \sqrt{\frac{\ln K}{T_i K}}$, $\gamma_i = \sqrt{\frac{K t^*_i \ln K}{T_i}}$ and $\delta_i = \frac{1}{T_i^{3/2}}\sqrt{\frac{K}{ \ln K}}$ is $O \left( \frac{ N \sqrt{( TK \ln K) \ln \left( TK \ln K \right)} }{\Delta_{sp}} \right)$.
We follow the same steps as in the proof of Theorem \[main1\]. We divide the regret incurred into three different components: regret due to the Exp3 algorithm, due to the wasted intervals during detection and due to the loss monitoring plays. Compared to the proof of Theorem \[main1\], the only difference is that here we have to sum the regret of Exp3.T over multiple runs. If $T$ is the actual length of play, then the number of times we run Exp3.T is at most $\log T$. Regret due to the Exp3 algorithm (running and restarting) is: $$R_{Exp3} \leq \sum\limits_{i = 0}^{\lceil \log T \rceil} \left( \frac{K}{2} T_i \eta_{i} + \frac{(N_i - 1 + F(T_i)) \ln K}{\eta_{i}} \right)$$ where $N_i$ and $F(T_i)$ are the number of changes in trend and the number of false detections in the $i$th run of Exp3.T respectively. As before, $$\begin{aligned}
F(T_i) \leq & \enspace \delta_i \left( \frac{T_i}{|I|_i} + 1 \right) \\
= & \enspace \frac{1}{T_i^{3/2}}\sqrt{\frac{K}{ \ln K}} \cdot \left( \frac{T_i}{K t^*_i} \sqrt{\frac{K t^*_i \ln K}{T_i}} + 1 \right) \leq \frac{2}{T_i}
\end{aligned}$$
Using this bound in above inequality $$\begin{aligned}
R_{Exp3} \leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} \left[ \frac{K T_i \eta_{i}}{2} + \frac{N \ln K}{\eta_{i}} + \frac{2 \ln K}{T_i \eta_{i}} \right] \\
\leq & \enspace \sqrt{K \ln K} \cdot \sum\limits_{i}^{\lceil \log T \rceil} \left( \frac{\sqrt{T_i}}{2} + N \sqrt{T_i} + \frac{2}{\sqrt{T_i}} \right)\\
\leq & \enspace C_1 \left( \sqrt{TK \ln K} + N \sqrt{ TK \ln K} \right)
\end{aligned}$$
The inequalities follow by using parameters $\eta_i$ and $\delta_i$ as defined in the algorithm. For ease of representation, we capture all constants with a single constant $C_1$. Regret incurred due to wasted intervals is
$$\begin{aligned}
R_{wasted} \leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} N_i \left( 1 + \frac{1}{1 - \delta_i} \right) |I_i|\\
\leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} 2N \left( 1 + \delta_i \right) \frac{K t^*_i}{\gamma_i} \\
\leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} \frac{4 N K t^*_i}{\gamma_i}\\
\leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} N \sqrt{\frac{t_i^* T_i K}{\ln K}} \\
\leq & \enspace C_2 \cdot \left( N \sqrt{\frac{TK t^*}{\ln K}} \right)
\end{aligned}$$
Here we use the fact that $t^*_i = O(t^*)$, the detector complexity had we known $T$ a priori. All the constants involved in the above inequality are captured by $C_2$. Similarly, the regret due to loss monitoring plays is: $$\begin{aligned}
R_{loss\_monitor} \leq & \enspace K \sum\limits_{i=0}^{\lceil \log T \rceil} t^*_i \frac{T_i}{|I_i|}\\
\leq & \enspace \sum\limits_{i=0}^{\lceil \log T \rceil} \gamma_i T_i\\
\leq & \enspace C_3 \cdot \left( \sqrt{K T t^* \ln K} \right)
\end{aligned}$$ where the constant $C_3$ captures the constants involved. Combining the above bounds, we obtain the desired claim. This bound is only a constant factor worse than the bound proved in Theorem \[main1\].
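The step from per-run terms to the final bound relies on the standard geometric-sum argument: with run lengths $T_i = 2^i$, the terms $\sqrt{T_i}$ form a geometric series dominated by the last term, so their sum is $O(\sqrt{T})$. A quick numerical sketch (the schedule $T_i = 2^i$ is the usual doubling convention, assumed here rather than quoted from the pseudocode):

```python
import math

def sum_sqrt_epochs(T):
    """Sum sqrt(T_i) over doubling epochs T_i = 2^i until the total
    play length reaches T; return the sum and sqrt(total length)."""
    total, i, s = 0, 0, 0.0
    while total < T:
        Ti = 2 ** i
        s += math.sqrt(Ti)
        total += Ti
        i += 1
    return s, math.sqrt(total)

# The ratio s / sqrt(total) stays below 1/(sqrt(2)-1) ~ 2.41, so summing
# the per-epoch sqrt terms costs only a constant factor over sqrt(T).
```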
It is easy to verify that the above analysis holds if $\delta_i$ is of the order of $\delta$, and this condition is met when $T'$ is of order at least $T^{\frac 13}$. If, however, $T'$ is not a good estimate of $T$ in the above sense, the output of the trend detection procedure in the initial runs will not be correct with sufficiently high probability, and hence the aforementioned guarantees do not hold. We account for the regret incurred in the first few runs (until $T_i \geq T^{\frac 13}$) by simply disregarding all of them and considering them as *wasted* rounds.
The principle of trend detection and restarting of a base algorithm (Exp3 in our context) according to changes in the trend can be extended to any multi-armed bandit algorithm for adversarial setting. The final regret guarantee obtained naturally depends on the performance of the base algorithm. We notice however that due to the necessary number of exploration rounds, no base algorithm can allow us to achieve regret $o(\sqrt{T})$. In particular, by choosing an appropriate base algorithm, our framework can be adjusted to a number of different loss structures and problem settings. In the following section, we use exactly this principle to design an algorithm to minimize regret with respect to the $m$ best actions.
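The wrapper logic described above is independent of the base algorithm. A minimal sketch of the principle follows; all names are illustrative (not from the paper's pseudocode), `RandomBase` is a trivial placeholder for Exp3 or any adversarial bandit algorithm, and the detection rule is passed in as a black box:

```python
import random

class RandomBase:
    """Placeholder base algorithm standing in for Exp3 (illustrative)."""
    def __init__(self, K):
        self.K = K

    def play(self):
        return random.randrange(self.K)

    def update(self, arm, loss):
        pass  # a real base algorithm would update its sampling weights

def run_with_restarts(base_factory, detect_change, env, T):
    # Run the base algorithm; on every detected change in trend, discard
    # its state and restart. The final regret guarantee on each segment
    # then inherits the base algorithm's own bound, as argued above.
    base, history, restarts = base_factory(), [], 0
    for t in range(T):
        arm = base.play()
        loss = env(t, arm)            # adversarial loss oracle (assumed)
        base.update(arm, loss)
        history.append((arm, loss))
        if detect_change(history):
            base, history = base_factory(), []
            restarts += 1
    return restarts
```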
Extension to Top-$m$ Actions
=============================
In this section, we show how to extend the ideas introduced above to a setting where in each round we choose $m > 1$ actions out of the $K$ available. For this variant of the problem, the Exp3 algorithm cannot be used, and hence we use a more general approach proposed by Audibert et al. [@Bubeck2012]. This approach, named Online Stochastic Mirror Descent (OSMD), is based on a powerful generalization of gradient descent for sequential decision problems. Similar to Exp3, the regret guarantee given by this technique is with respect to the best combination of actions in hindsight and holds even for adversarial losses. We refer the reader to [@BubeckBook] for a thorough treatment of the technique. In our proposed algorithm, OSMD.T, we use the technique as a black box and only need the final guarantee.
\[base3\] The regret of OSMD algorithm in the $m$-set setting with $F(x) = \sum\limits_{i=1}^{K} x_i \log x_i - \sum\limits_{i=1}^{K} x_i$ and learning rate $\eta$ satisfies $$R_T \leq \frac{\eta T K}{2} + \frac{m \log \frac{K}{m}}{\eta}$$
Here $F(x)$ is a Legendre function that serves as a parameter of the OSMD technique. The trend detection algorithm in this case uses the same idea as in Algorithm \[trend\], except that instead of a single action we now check whether the set of $m$ best actions has changed, with probability at least $1 - \delta$. As before, we denote by $t^*$ the number of samples needed for each action to ensure that trend detection works with the above mentioned probability. The bounds derived in Lemma \[lem1\] and Lemma \[lem2\] apply in this case too.
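For intuition, mirror descent with the negative-entropy Legendre function above reduces, on the probability simplex (the $m=1$ case), to the familiar exponentiated-gradient update; the $m$-set case additionally requires a projection onto the capped simplex $\{x : \sum_i x_i = m,\ 0 \le x_i \le 1\}$, which we omit. A sketch under this simplification:

```python
import numpy as np

def osmd_entropy_step(x, loss_est, eta):
    """One mirror-descent step with F(x) = sum_i x_i log x_i - sum_i x_i.
    On the simplex this is the exponentiated-gradient update; the m-set
    variant would instead re-project onto {sum x = m, 0 <= x <= 1}."""
    w = x * np.exp(-eta * loss_est)   # gradient step in the dual space
    return w / w.sum()                # Bregman projection onto the simplex
```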
There are only a few differences in Algorithm \[osmd\] as compared to Algorithm \[Exp3.T\]. Firstly, instead of using Exp3 for regret minimization we use the more sophisticated technique of OSMD. This algorithm gives tight regret guarantees and is polynomial time computable[^2]. Secondly, the trend detection algorithm changes slightly as mentioned above. Finally, since we choose $m$ actions in every round, we need a factor of $m$ fewer loss monitoring plays. Accordingly, the size of an interval $I$ is chosen to be $\frac{Kt^*}{m \gamma}$.
Set interval length $|I| = \frac{K t^*}{m \gamma}$
Schedule $\leftarrow$ Make\_Schedule($I$)
Call OSMD\_play()
Call LM\_play(Schedule($t$))
Restart OSMD
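As a concrete illustration of the bookkeeping, the interval length and a randomized loss-monitoring schedule could be built as below. This is a sketch: Make\_Schedule is not specified in detail here, so uniform sampling of the monitoring slots within an interval is our assumption.

```python
import math
import random

def make_schedule(K, t_star, m, gamma, seed=0):
    """Interval length |I| = K t* / (m gamma); within an interval,
    roughly K t* / m rounds are reserved for loss monitoring, i.e.
    a fraction gamma of the rounds."""
    rng = random.Random(seed)
    interval_len = math.ceil(K * t_star / (m * gamma))
    n_monitor = math.ceil(K * t_star / m)
    # sample distinct monitoring slots uniformly within the interval
    slots = sorted(rng.sample(range(interval_len), n_monitor))
    return interval_len, slots
```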
\[main2\] The expected regret of OSMD.T is $$R_T = O \left( \frac{ Nm \sqrt{ TK \ln \left( \frac{TK}{m} \right) } }{\Delta_{sp}} \right).$$
The main steps of analysis in this setting are exactly the same as Theorem \[main1\]. The component of regret due to OSMD algorithm is $$R_{osmd} \leq \frac{\eta T K}{2} + ( N - 1 + F(T) ) \frac{m \log \frac{K}{m}}{\eta},$$
where $F(T)$ is the number of false detections as before, given by $ F(T) \leq \delta \left( \frac{T}{|I|} + 1 \right) $. This inequality follows from Lemma \[base3\] and the fact that the algorithm is restarted at most $N-1 + F(T)$ times. Following the same arguments as in Theorem \[main1\], the regret incurred on account of wasted intervals is at most: $$R_{wasted} \leq N m \left( 1 + \frac{1}{1 - \delta} \right) |I|.$$ Unlike Theorem \[main1\], each wasted round incurs regret of $m$ instead of $1$, since we cannot guarantee regret for any of the chosen actions. Finally, since both the number of loss monitoring plays and the length of an interval are reduced by a factor of $m$, the regret incurred on account of loss monitoring plays is: $$R_{loss\_monitoring} \leq \left \lceil \frac{K t^*}{m} \right \rceil \cdot \left \lceil \frac{T}{|I|} \right \rceil = O \left( \gamma T \right).$$
Putting the above bounds together, $$\begin{multlined}
\vspace{2mm}
R_T = \enspace R_{osmd} + R_{wasted} + R_{loss\_monitoring} \\
\vspace{2mm}
\leq \enspace \frac{\eta T K}{2} + ( N - 1 + \frac{\delta \gamma mT}{K t^*} ) \frac{m \log \frac{K}{m}}{\eta} +\\
\vspace{2mm}
N m \left( 1 + \frac{1}{1 - \delta} \right) \frac{K t^*}{\gamma m} + \gamma T
\end{multlined}$$
By setting $\eta = m \sqrt{\frac{\ln \left( K/m \right)}{TK}}$, $\delta = \sqrt{\frac{mK}{T}}$ and $\gamma = \frac{1}{m} \sqrt{\frac{K t^*}{T}}$ we get
$$\begin{multlined}
\vspace{2mm}
R_T \leq m \sqrt{TK \ln \frac{K}{m}} + N m \sqrt{TK \ln \frac{K}{m}} \\
\vspace{2mm}
+ \sqrt{mTK \ln \frac{K}{m} t^*} + 2Nm \sqrt{TK t^*} \\
\vspace{2mm}
+ 2NK \sqrt{m t^*} + \frac{1}{m}\sqrt{t^* TK}
\end{multlined}$$
Equivalently, $R_T = O \left( \frac{ Nm \sqrt{ TK \ln \left( \frac{TK}{m} \right) } }{\Delta_{sp}} \right)$.
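One can sanity-check that the stated choices of $\eta$, $\delta$ and $\gamma$ indeed balance the three regret components. The snippet below plugs them into the dominant terms of the bound; the constants are approximate (e.g. $1 + \frac{1}{1-\delta} \le 2$ is used for the wasted term), so this is a numerical sketch rather than a reproduction of the exact bound:

```python
import math

def osmdt_regret_terms(T, K, m, N, t_star):
    """Dominant regret terms of OSMD.T under the parameter choices
    eta = m sqrt(ln(K/m)/(TK)), gamma = (1/m) sqrt(K t*/T)."""
    eta = m * math.sqrt(math.log(K / m) / (T * K))
    gamma = math.sqrt(K * t_star / T) / m
    osmd = eta * T * K / 2 + N * m * math.log(K / m) / eta
    wasted = 2 * N * K * t_star / gamma   # ~ 2 N m sqrt(T K t*)
    monitor = gamma * T                   # ~ sqrt(t* T K) / m
    return osmd, wasted, monitor
```

With sample values each term stays within a small constant of $Nm\sqrt{TK \ln(K/m)\, t^*}$, matching the combined bound above.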
Simulations
===========
Since our proposed algorithm comes under the domain of active learning, it is not possible to reliably use any fixed data set. Instead, to assess the performance of our algorithm we use artificially constructed loss generation models, a standard approach for problems of this nature.
For each of the two models introduced, we compare the performance of the Exp3.T algorithm with Exp3.R [@Exp3R], the algorithm closest in spirit to our work. To emphasize that we obtain a *switching regret* guarantee, a stronger benchmark than conventionally used, we also compare our algorithm with Exp3; that is, the performance, measured in terms of the cumulative loss, is evaluated with respect to a switching strategy that chooses the *best* action in each trend. Each experiment is run independently 10 times and the mean of the results is shown in the figures.
**Experiment 1: DSR model** Within each trend, we set the bias of the best action to $0.10$; the biases of the other actions are set to $0.5$ when $\Delta_{sp} = 0.4$ and to $0.65$ when $\Delta_{sp} = 0.55$. For each of the loss models, we run the experiment with $K = 2$ and $K = 10$ actions respectively. We have constructed the dynamic stochastic loss model in our experiments as a representative of a worst-case scenario, i.e., we do not assume any information about the loss structure except for the separation parameter $\Delta_{sp}$ (refer to Fig. \[DSR\]). The performance of Exp3.T is almost identical to that of Exp3.R, an algorithm specifically designed for the stochastic model. For a smaller gap, however, our algorithm still manages to do marginally better than Exp3.R. We note here that the parameters of the Exp3.R algorithm are set such that the assumptions required for that algorithm hold.
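The DSR losses used above can be generated as follows. This is an illustrative sketch: the rotation of the best action and the parameter names are ours; only the Bernoulli biases and the role of $\Delta_{sp}$ come from the experiment description.

```python
import numpy as np

def dsr_losses(T, K, trend_len, best_bias=0.10, delta_sp=0.40, seed=0):
    """Bernoulli losses for the dynamic stochastic (DSR) model: within
    each trend one action has bias best_bias and the others have bias
    best_bias + delta_sp; the best action rotates every trend_len rounds."""
    rng = np.random.default_rng(seed)
    biases = np.full((T, K), best_bias + delta_sp)
    for start in range(0, T, trend_len):
        best = (start // trend_len) % K
        biases[start:start + trend_len, best] = best_bias
    return rng.binomial(1, biases).astype(float)
```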
**Experiment 2: ARG model** We design the semi-structured property of the ARG model as follows. For the $\Delta_{sp} = 0.3$ case, within each trend the loss of the best action is a sequence of 100 consecutive 0s followed by 100 consecutive 1s; in the same rounds, the losses of the sub-optimal actions are 1 and 0.6 respectively. For the $\Delta_{sp} = 0.4$ case, the losses of the best action are the same as before, but the losses of the sub-optimal actions are kept constant at 0.9. These loss structures are chosen as representatives of the possible instances of the ARG model. The advantage of our algorithm is clearly highlighted in this more general model. The poorer performance of Exp3.R is expected, since it assumes more structure than the model provides; Exp3.T, in contrast, is able to exploit the little structure available and detects changes much faster.
There exists a subtle case in which the guarantees presented in this paper do not hold. This happens when the length of the interval is comparable to the total run time of the algorithm, i.e., $O(T)$. For example, if the length of the interval is $T / 2$, then Exp3.T does not provide any switching regret guarantee, since for the first two intervals Exp3.T behaves exactly like Exp3. Therefore, in the worst case the regret bounds presented here are void, but the bounds of Exp3 still apply.
Conclusion
==========
We have proposed a new paradigm for regret minimization and defined a broader class of loss models where our algorithm is applicable. We have used this paradigm for the regret minimization problem when one chooses either a single action or a basis of a uniform matroid in each round. For these problems we proposed algorithms and gave switching regret bounds of $\tilde{O}( N\sqrt{TK})$ and $\tilde{O}( Nm \sqrt{TK})$ respectively. Such a paradigm is particularly suitable for regret minimization algorithms where one cannot distinguish exploration and exploitation steps, for example OSMD. Extension of this paradigm to more general problems like online linear optimization is currently in progress.
[^1]: The case with rewards is symmetric.
[^2]: The OSMD technique can also be used when there are more generic combinatorial constraints on the set of actions chosen in each round. For these generic cases, the algorithm need not be poly time computable. However, for the uniform matroid case (under consideration here) it is in fact poly time computable.
---
abstract: 'We give an alternative proof of Kovács’ vanishing theorem. Our proof is based on the standard arguments of the minimal model theory. We do not need the notion of Du Bois pairs. We reduce Kovács’ vanishing theorem to the well-known relative Kawamata–Viehweg–Nadel vanishing theorem.'
address: 'Department of Mathematics, Faculty of Science, Kyoto University, Kyoto 606-8502, Japan'
author:
- Osamu Fujino
date: '2012/2/20, version 1.09'
title: 'A remark on Kovács’ vanishing theorem'
---
The following theorem is the main theorem of this paper, which we call Kovács’ vanishing theorem.
\[main\] Let $(X, \Delta)$ be a log canonical pair and let $f:Y\to X$ be a proper birational morphism from a smooth variety $Y$ such that ${{\operatorname{Exc}}}(f)\cup {{\operatorname{Supp}}}f_*^{-1}\Delta$ is a simple normal crossing divisor on $Y$. In this situation, we can write $$K_Y=f^*(K_X+\Delta)+\sum _i a_i E_i.$$ We put $E=\sum _{a_i=-1}E_i$. Then we have $$R^if_*\mathcal O_Y(-E)=0$$ for every $i>0$.
In this short paper, we reduce Kovács’ vanishing theorem to the well-known relative Kawamata–Viehweg–Nadel vanishing theorem by taking a dlt blow-up. Our proof makes Kovács’ vanishing theorem more accessible. From our viewpoint, Theorem \[main\] is a variant of the relative Kawamata–Viehweg–Nadel vanishing theorem.
Throughout this paper, we will work over an algebraically closed field $k$ of characteristic zero and freely use the standard notation of the minimal model theory.
In [@kovacs], Kovács proved a rather general vanishing theorem for Du Bois pairs (cf. [@kovacs Theorem 6.1]) and used it to derive Theorem \[main\]. For the details, see [@kovacs].
Before we give a proof of Theorem \[main\], we make a small remark.
In [@kovacs Theorem 1.2], $X$ is assumed to be $\mathbb Q$-factorial. Therefore, the statement of Theorem \[main\] is slightly better than the original one (cf. [@kovacs Theorem 1.2]). However, we can check that Theorem \[main\] follows from [@kovacs Theorem 1.2].
The following remark is important and seems to be well known to the experts.
\[rem2\] The sheaf $R^if_*\mathcal O_Y(-E)$ is independent of the choice of $f:Y\to X$ for every $i$. It can be checked easily by the standard arguments based on the weak factorization theorem (cf. [@kovacs Lemma 6.5.1]). For related topics, see [@fujino-lc Lemma 4.2].
Let us start the proof of Theorem \[main\]. It is essentially the same as the proof of [@book Theorem 4.14] (see also [@fujino-lc Proposition 2.4]).
By shrinking $X$, we may assume that $X$ is quasi-projective. We take a dlt blow-up $g:Z\to X$ (see, for example, [@ssmmp Section 4]). This means that $g$ is a projective birational morphism, $K_Z+\Delta_Z=g^*(K_X+\Delta)$, and $(Z, \Delta_Z)$ is a $\mathbb Q$-factorial dlt pair. By using Szabó’s resolution lemma, we take a resolution of singularities $h:Y\to Z$ with the following properties.
- ${{\operatorname{Exc}}}(h)\cup {{\operatorname{Supp}}}h_*^{-1}\Delta_Z$ is a simple normal crossing divisor on $Y$.
- $h$ is an isomorphism over the generic point of any lc center of $(Z, \Delta_Z)$.
We can write $$K_Y+h_*^{-1}\Delta_Z=h^*(K_Z+\Delta_Z)+F.$$ We put $f=g\circ h: Y\to X$. In this situation, $E=\llcorner h_*^{-1}\Delta_Z\lrcorner$. Note that $\ulcorner F\urcorner$ is effective and $h$-exceptional by the construction. We also note that ${{\operatorname{Exc}}}(f)\cup {{\operatorname{Supp}}}f_*^{-1}\Delta$ is not necessarily a simple normal crossing divisor on $Y$ in the above construction. We consider the following short exact sequence $$0\to \mathcal O_Y(-E+\ulcorner F\urcorner)\to
\mathcal O_Y(\ulcorner F\urcorner)\to \mathcal O_{E}(\ulcorner
F|_{E}\urcorner)\to 0.$$ Since $-E+F\sim _{\mathbb R, h}K_Y+\{h_*^{-1}\Delta_Z\}$ and $F\sim _{\mathbb R, h}K_Y+h_*^{-1}\Delta_Z$, we have $$-E+\ulcorner F\urcorner \sim _{\mathbb R, h}K_Y
+\{h_*^{-1}\Delta_Z\}+\{-F\}$$ and $$\ulcorner F\urcorner \sim _{\mathbb R, h} K_Y+h_*^{-1}\Delta_Z+\{-F\}.$$ By the relative Kawamata–Viehweg vanishing theorem and the vanishing theorem of Reid–Fukuda type (see, for example, [@book Lemma 4.10]), we have $$R^ih_*\mathcal O_Y(-E+\ulcorner F\urcorner)=R^ih_*\mathcal O_Y(\ulcorner
F\urcorner)=0$$ for every $i>0$. Therefore, we have a short exact sequence $$0\to
h_*\mathcal O_Y(-E+\ulcorner F\urcorner)\to
\mathcal O_Z\to h_*\mathcal O_{E}(\ulcorner
F|_{E}\urcorner)\to 0$$ and $R^ih_*\mathcal O_{E}(\ulcorner F|_{E}\urcorner)=0$ for every $i>0$. Note that $\ulcorner F\urcorner $ is effective and $h$-exceptional. Thus we obtain $$\mathcal O_{\llcorner \Delta_Z\lrcorner }\simeq
h_*\mathcal O_{E}\simeq h_*\mathcal O_{E}
(\ulcorner F|_{E}\urcorner).$$ By the above vanishing result, we obtain $Rh_*\mathcal O_{E}(\ulcorner
F|_{E}\urcorner)\simeq
\mathcal O_{\llcorner \Delta_Z\lrcorner}$ in the derived category of coherent sheaves on $\llcorner \Delta_Z\lrcorner$. Therefore, the composition $$\mathcal O_{\llcorner \Delta_Z\lrcorner}\overset{\alpha}\longrightarrow
R h_*\mathcal O_{E}\overset{\beta}\longrightarrow
Rh_*\mathcal O_{E}(\ulcorner F|_{E}\urcorner)\simeq
\mathcal O_{\llcorner \Delta_Z\lrcorner}$$ is a quasi-isomorphism. Apply $R\mathcal Hom_{\llcorner \Delta_Z\lrcorner}
(\underline{\ \ \ } ,\, \omega^{\bullet}_{\llcorner \Delta_Z\lrcorner})$ to $$\mathcal O_{\llcorner \Delta_Z\lrcorner}\overset {\alpha}\longrightarrow
Rh_*\mathcal O_{E}\overset{\beta}\longrightarrow \mathcal O_{\llcorner
\Delta_Z\lrcorner},$$ where $\omega_{\llcorner \Delta_Z\lrcorner}^{\bullet}$ is the dualizing complex of $\llcorner \Delta_Z\lrcorner$. Then we obtain that $$\omega^{\bullet}_{\llcorner \Delta_Z\lrcorner }\overset{a}\longrightarrow
R h_*\omega^{\bullet}_{E}
\overset{b}\longrightarrow
\omega^{\bullet}_{\llcorner \Delta_Z\lrcorner}$$ and that $b\circ a$ is a quasi-isomorphism by the Grothendieck duality, where $\omega_E^{\bullet}\simeq \omega_E[\dim E]$ is the dualizing complex of $E$. Hence, we have $$h^i(\omega^{\bullet}_{\llcorner \Delta_Z\lrcorner})\subseteq R^ih_*\omega^{\bullet}_{E}
\simeq R^{i+d}h_*\omega_{E},$$ where $d=\dim E=\dim \llcorner \Delta_Z\lrcorner =\dim X-1$. By the vanishing theorem (see, for example, [@book Lemma 2.33] and [@vanishing Lemma 3.2]), $R^ih_*\omega_{E}=0$ for every $i>0$. Therefore, $h^i(\omega^{\bullet}_{\llcorner \Delta_Z\lrcorner})=0$ for every $i>-d$. Thus, $\llcorner \Delta_Z\lrcorner$ is Cohen–Macaulay. This implies $\omega_{\llcorner \Delta_Z\lrcorner}^{\bullet}\simeq \omega_{\llcorner \Delta_Z\lrcorner}[d]$. Since $E$ is a simple normal crossing divisor on $Y$ and $\omega_{E}$ is an invertible sheaf on $E$, every associated prime of $\omega_{E}$ is the generic point of some irreducible component of $E$. By $h$, every irreducible component of $E$ is mapped birationally onto an irreducible component of $\llcorner \Delta_Z\lrcorner$. Therefore, $h_*\omega_{E}$ is a pure sheaf on $\llcorner \Delta_Z\lrcorner$. Since the composition $$\omega_{\llcorner \Delta_Z\lrcorner}
\to h_*\omega_{E}\to \omega_{\llcorner \Delta_Z\lrcorner}$$ is an isomorphism, which is induced by $a$ and $b$ above, we obtain $h_*\omega_{E}\simeq \omega_{\llcorner \Delta_Z\lrcorner}$. It is because $h_*\omega_{E}$ is generically isomorphic to $\omega_{\llcorner \Delta_Z\lrcorner}$. By the Grothendieck duality, $$\begin{aligned}
Rh_*\mathcal O_{E}&\simeq
R\mathcal Hom _{\llcorner \Delta_Z\lrcorner}(Rh_*\omega^{\bullet}_{E},
\, \omega^{\bullet}_{\llcorner \Delta_Z\lrcorner})\\
&\simeq
R\mathcal Hom _{\llcorner \Delta_Z\lrcorner}(\omega^{\bullet}_{\llcorner
\Delta_Z\lrcorner},\,
\omega^{\bullet}_{\llcorner \Delta_Z\lrcorner})\simeq \mathcal O_{\llcorner \Delta_Z\lrcorner} \end{aligned}$$ in the derived category of coherent sheaves on $\llcorner \Delta_Z\lrcorner$. In particular, $R^ih_*\mathcal O_{E}=0$ for every $i>0$. Since $Z$ has only rational singularities, we have $R^ih_*\mathcal O_Y=0$ for every $i>0$ and $h_*\mathcal O_Y\simeq \mathcal O_Z$. Thus, we can easily check that $R^ih_*\mathcal O_Y(-E)=0$ for every $i>0$ by using the exact sequence $$0\to \mathcal O_Y(-E)\to \mathcal O_Y\to \mathcal O_E\to 0.$$ Note that $h_*\mathcal O_E\simeq \mathcal O_{\llcorner \Delta_Z\lrcorner}$. We can also check that $h_*\mathcal O_Y(-E)=\mathcal J(Z, \Delta_Z)$, where $\mathcal J(Z, \Delta_Z)$ is the multiplier ideal sheaf associated to the pair $(Z, \Delta_Z)$. Note that $\mathcal J(Z, \Delta_Z)=\mathcal O_Z(-\llcorner \Delta_Z\lrcorner)$ in our situation. Therefore, $$R^if_*\mathcal O_Y(-E)\simeq R^ig_*\mathcal J(Z, \Delta_Z)$$ for every $i$ by Leray’s spectral sequence. By the relative Kawamata–Viehweg–Nadel vanishing theorem, $R^ig_*\mathcal J(Z, \Delta_Z)=0$ for every $i>0$. Thus we obtain $R^if_*\mathcal O_Y(-E)=0$ for every $i>0$. Note that ${{\operatorname{Exc}}}(f)\cup {{\operatorname{Supp}}}f_*^{-1}\Delta$ is not necessarily a simple normal crossing divisor on $Y$ in the above construction. Let $\mathcal I_{{{\operatorname{Exc}}}(f)}$ be the defining ideal sheaf of ${{\operatorname{Exc}}}(f)$ on $Y$. Apply the principalization of $\mathcal I_{{{\operatorname{Exc}}}(f)}$. Then we obtain a sequence of blow-ups whose centers have simple normal crossings with ${{\operatorname{Exc}}}(h)\cup {{\operatorname{Supp}}}h_*^{-1}\Delta_Z$ (see, for example, [@kollar Theorem 3.35]). In this process, $R^if_*\mathcal O_Y(-E)$ does not change for every $i$ as in Remark \[rem2\] (see also [@fujino-lc 4.6]). Therefore, we may assume that ${{\operatorname{Exc}}}(f)\cup {{\operatorname{Supp}}}f_*^{-1}\Delta$ is a simple normal crossing divisor on $Y$. 
Remark \[rem2\] completes the proof of Theorem \[main\].
The author was partially supported by the Grant-in-Aid for Young Scientists (A) $\sharp$20684001 from JSPS.
O. Fujino, Introduction to the log minimal model program for log canonical pairs, preprint (2009).
O. Fujino, Semi-stable minimal model program for varieties with trivial canonical divisor, Proc. Japan Acad. Ser. A Math. Sci. [**87**]{} (2011), no. 3, 25–30.
O. Fujino, On isolated log canonical singularities with index one, J. Math. Sci. Univ. Tokyo [**18**]{} (2011), 299–323.
O. Fujino, Vanishing theorems, preprint (2011).
J. Kollár, [*[Lectures on resolution of singularities]{}*]{}, Annals of Mathematics Studies, [**166**]{}. Princeton University Press, Princeton, NJ, 2007.
S. J. Kovács, Du Bois pairs and vanishing theorems, Kyoto J. Math. [**51**]{} (2011), no. 1, 47–69.
---
author:
- 'N. Castro, M. A. Urbaneja, A. Herrero, M. Garcia, S. Simón-Díaz, F. Bresolin, G. Pietrzy[ń]{}ski, R.-P. Kudritzki'
- 'W. Gieren'
bibliography:
- 'AA\_18253-11\_ncastro.bib'
title: 'The ARAUCARIA project: Grid-Based Quantitative Spectroscopic Study of Massive Blue Stars in NGC 55[^1]'
---
[The quantitative study of the physical properties and chemical abundances of large samples of massive blue stars at different metallicities is a powerful tool to understand the nature and evolution of these objects. Their analysis beyond the Milky Way is challenging; nonetheless it is doable and the best way to investigate their behavior in different environments. Fulfilling this task in an objective way requires the implementation of automatic analysis techniques that can perform the analyses systematically, minimizing at the same time any possible bias.]{} [As part of the ARAUCARIA project we carry out the first quantitative spectroscopic analysis of a sample of 12 B-type supergiants in the galaxy NGC 55, located at a distance of 1.94 Mpc. By applying the methodology developed in this work, we derive their stellar parameters and chemical abundances, and provide a characterization of the present-day metallicity of their host galaxy.]{} [Based on the characteristics of the stellar atmosphere/line formation code [fastwind]{}, we designed and created a grid of models for the analysis of massive blue supergiant stars. Along with this new grid, we implemented a spectral analysis algorithm. Both tools were specially developed to perform fully consistent quantitative spectroscopic analyses of low-resolution spectra of B-type supergiants in a fast and objective way.]{} [We present the main characteristics of our [[fastwind]{}]{} model grid and perform a number of tests to investigate the reliability of our methodology. The automatic tool is applied afterward to a sample of 12 B-type supergiant stars in NGC 55, deriving the stellar parameters, , , , and abundances. The results indicate that our stars are part of a young population evolving towards a red supergiant phase. For half of the sample we find a remarkable agreement between spectroscopic and evolutionary masses, whilst for the rest larger discrepancies are present, but still within the uncertainties.
The derived chemical composition hints at an average metallicity similar to that of the Large Magellanic Cloud, with no indication of a spatial trend across the galaxy.]{} [The consistency between the observed spectra and our stellar models supports the reliability of our methodology. This objective and fast approach allows us to deal with large samples in an accurate and more statistical way. These are two key issues to achieve an unbiased characterization of the stars and their host galaxies.]{}
Introduction
============
The latest generation of large telescopes has opened a wide range of possibilities in the study of massive blue stars, allowing for the first time analyses of resolved stars beyond the Magellanic Clouds, even to nearby galaxies beyond the limits of our Local Group. This new observational capability is especially important not only to reach a better knowledge of the nature of these objects, but also to understand the chemical and dynamical evolution of their host galaxies (e.g. @2009ApJ...704.1120U). The last two decades in particular have witnessed the use of massive stars as reliable metallicity tracers, complementing the classic approach based on H II regions (see for instance and @2009ApJ...700..309B), offering at the same time access to chemical species that are not accessible through H II region studies (like or ), and at distances where nebular results strongly rely on techniques which need to be carefully calibrated [the so-called strong-line methods, @1979MNRAS.189...95P]. Moreover, blue supergiant stars present themselves as very promising distance indicators, through the application of the wind-momentum–luminosity relationship (WLR, @1995svlt.conf..246K) and the flux-weighted gravity–luminosity relationship (FGLR, @2003ApJ...582L..83K).
In order to fully understand the nature of these objects, it is required to perform accurate analyses on large samples of massive stars both in the Milky Way and external galaxies. Whilst current multi-object spectrographs are certainly capable of producing such large collections of spectra (e.g. @2005Msngr.122...36E), the accurate modeling of their atmospheres is an intrinsically complex task, involving non-local thermodynamical equilibrium processes and strong stellar winds, hence requiring the use of highly sophisticated stellar model atmospheres. The important advances accomplished in the modeling of massive blue star atmospheres , together with the improvement in the computational facilities provide us with the tools to overcome these issues. With regard to the analysis technique, several alternatives have been proposed by different authors to minimize the subjective component present in the widely used [*by-eye*]{} techniques, by introducing objective, automatic and fast methods, such as: the methodology employed by based on the genetic algorithm PIKAIA [@1995ApJS..101..309C], the grid-based method proposed by @Lefever_2007 or the principal components analysis (PCA) algorithm designed by @2008ApJ...684..118U. In this work, we present a new automatic grid-based technique implemented over a grid of models calculated with the latest release of the model atmosphere/line formation code [fastwind]{} , specifically designed and optimized for the study of O- and B-type stars in the optical and infrared range. The combination of the [fastwind]{} code and a grid-based technique enable us to perform, in an objective and fast way, the analysis of B-type supergiants in the optical domain at low spectral resolution. This provides a very efficient way to handle the analysis of data collected by large spectroscopic surveys.
The growing interest in the nature of massive stars and their host galaxies has propelled the development of several studies within different members of the Local Group, for example: in the nearby Magellanic Clouds , M 31 , M 33 , NGC 6822 , NGC 3109 [@2007ApJ...659.1198E], WLM [@2003AJ....126.1326V; @2006ApJ...648.1007B; @2008ApJ...684..118U] or IC 1613 . Moreover, these studies have extended to galaxies beyond the Local Group, such as NGC 300 [@2002ApJ...567..277B; @2005ApJ...622..862U; @2008ApJ...681..269K] or NGC 3621 [@2001ApJ...548L.159B], the latter being at a distance of $\sim$6.7Mpc.
The ARAUCARIA project[^2] (P.I: W. Gieren, @2005Msngr.121...23G) is an ambitious project devoted to investigate the effects that the environment could have on different distance indicators. To that end, a number of nearby galaxies (NGC 6822, IC 1613, WLM, NGC 3109, NGC 55, NGC 247, NGC 300 and NGC 7793) have been targeted, both photometrically and spectroscopically. An important part of this ESO long-term project is focused on the young stellar population of these galaxies (see for instance @2007ApJ...659.1198E or @2008ApJ...684..118U), and one of its main results has been the discovery and partial exploitation of the FGLR of BA supergiant stars as a distance indicator [@2003ApJ...582L..83K]. Within the context of the ARAUCARIA project, we presented the first qualitative analysis of massive blue stars in NGC 55 (, hereafter C08) situated in the Sculptor filament at $1.94\, $Mpc [@2006AJ....132.2556P; @2008ApJ...672..266G]. We now present the first quantitative analysis on a sample of B-type supergiants in this galaxy. This is a key step for characterizing the prominent population of blue massive stars in NGC 55, noted by @2005Msngr.121...23G, particularly in terms of their evolutionary status.
The work presented in this paper is divided into two main parts. In the first one, Sect. \[tools\], we present a detailed description of the tools we have designed for the analysis of massive blue stars at low spectral resolution. The goodness-of-fit criteria and the parameter space covered by our model grid, along with several tests to identify the limitations and the reliability of our methodology, are also presented. In the second part, we analyze 12 early B-type supergiant stars observed in NGC 55, using the previously discussed tools (Sect. \[Quantitative\]). We will use the results to study the spatial distribution of the present-day metallicity of this galaxy, in addition to constraining the evolutionary status of the analyzed massive blue stars, in Sect. \[metallicity\]. Finally, Section \[Conclusions\] provides the final remarks and comments.
A grid-based quantitative analysis technique {#tools}
============================================
The quantitative analysis of optical spectra of B-type supergiant stars is based on well established methods (e.g. , ). At high spectral resolution, the determination of temperature and surface gravity is based on the ionization balance of different ionization stages of the same element (e.g. /), and the fit to Balmer lines wings respectively (see ). A slightly modified technique is applied in the analysis of low spectral resolution data. Although the temperature and gravity criteria are the same, restricting the analysis to individual (metal) lines at low spectral resolution is unreliable. The best approach is to reproduce the main features present in the spectrum simultaneously, as it was suggested by [@2003ApJ...584L..73U; @2005ApJ...622..862U] in the analysis of NGC 300 B-type supergiant stars. This technique has been also successfully applied in the analysis of massive blue stars in WLM [@2006ApJ...648.1007B], NGC 3109 [@2007ApJ...659.1198E] and IC 1613 [@2007ApJ...671.2028B].
The complete spectral analysis consists of two steps. In the first one, the fundamental stellar and wind parameters are derived by using a fixed set of models. The determination of the chemical abundances is carried out in a second step by computing tailored models. With the ultimate goal of performing quantitative studies of low resolution spectra of OB-type supergiants at different metallicities in a systematic and objective way, we have implemented an automatic algorithm to determine the stellar parameters by identifying those models in our grid that minimize the differences with respect to the observed optical spectra (a $\chi^2$ minimization). In the next sections we describe the main components of our automatic analysis method.
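Schematically, the first step amounts to a discrete $\chi^2$ search over the model grid. The sketch below is a minimal illustration; in practice the comparison is restricted to windows around the diagnostic lines and the synthetic spectra are first degraded to the observed resolution, details we omit here.

```python
import numpy as np

def best_grid_model(obs_flux, grid_fluxes, weights=None):
    """Return the index of the grid model minimizing the chi^2 distance
    to the observed spectrum. grid_fluxes has shape (n_models, n_pix),
    already resampled onto the observed wavelength grid."""
    w = np.ones_like(obs_flux) if weights is None else weights
    chi2 = ((grid_fluxes - obs_flux[None, :]) ** 2 * w).sum(axis=1)
    return int(np.argmin(chi2)), chi2
```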
[fastwind]{} grid of models {#Atmospheric}
---------------------------
The cornerstone of the analysis of massive blue stars is the grid of model atmospheres employed to reproduce the different features of the spectrum. Because of its high computational efficiency, we have used the model atmosphere/line formation code [fastwind]{} . This code takes into account NLTE effects in spherical symmetry with an explicit treatment of the stellar wind effects by considering a $\beta$-like wind velocity law , and by ensuring a smooth transition between the pseudo-static photosphere and the inner wind layers. The main advantage with respect to other similar codes is the possibility of generating realistic models in a short period of time, a crucial point for building large sets of synthetic spectra.
### Stellar parameters
Each [fastwind]{} model is described by nine main parameters (we assume that the winds are homogeneous). Ideally, all of them should be treated as free parameters in our grid of models. However, this would require computing a very large number of models to explore the full parameter space. Therefore, we fixed or constrained some of them based on previous knowledge of the physics of these objects. This saves a significant amount of computing time without introducing any relevant limitation in the analyses. A brief description of the criteria used to fix some of the parameters follows, together with the specific range explored in each case. As a general rule, the boundaries of the model grid were defined so that the analyzed stars would not fall too close to the grid's limits.
- [[**[Effective temperature ().]{}**]{} We computed models with temperatures between $9000$ and $35000\,$K, in steps of $1000\,$K. This interval covers objects with spectral types ranging from $\sim$A1 to O8 for solar metallicity.]{}
- [**[Surface gravity ().]{}**]{} Whilst our main interest here is the analysis of supergiant stars, we extended the calculations to higher gravity values. In order to select the gravities for each temperature, we considered the flux-weighted gravity, $g_F$, through the FGLR (@2003ApJ...582L..83K). As shown by [@2008ApJ...681..269K], this quantity is empirically related to the luminosity of normal blue supergiants, $$M_{\mathrm{bol}}^{\mathrm{FGLR}}=(3.41\pm0.16)\,(\log\,g_{F}-1.5)-(8.02\pm0.04)
\label{Eq:log_gf}$$
where $M_{\mathrm{bol}}^{\mathrm{FGLR}}$ is the bolometric magnitude and $\log\,g_{F}=\log\,g-4\, \log\,(T_{\rm{eff}}\times 10^{-4})$ is the flux-weighted gravity. Supergiant stars of luminosity classes Ia and Ib exhibit $\log\,g_{F}$ values between $\sim1.0$ and $1.5\,$dex. For objects with a temperature of $25000\,$K, this means $\log\,g\sim2.6-3.1\,$dex. Our models were calculated covering a $\log\,g_{F}$ range between $0.9$ and $2.5\,$dex, with an upper limit of $\log\,g=3.7\,$dex (i.e., main sequence stars are not considered in this grid).\
Figure \[Fig:red\_plot\] shows the location of our models in the Hertzsprung–Russell (HR) diagram, along with the evolutionary tracks from for an initial equatorial rotation of $300\,$kms$^{-1}$ and solar metallicity. As can be deduced from this figure, our parameter selection corresponds to stars with initial masses between $\sim8-60\,M_{\odot}$. Since we have constructed the grid with constant $\log\,g_{F}$ values instead of $\log\,g$, our models define constant luminosity sequences.
- [[**[Radius ().]{}**]{} For each given pair $\left[T_{\rm{eff}}, \log\,g\right]$ the radius was calculated from M$_{bol}$ by means of the FGLR (see Eq. \[Eq:log\_gf\]). Note that this relationship was observationally established for supergiant stars, but we have also used it for models that would represent stars that do not belong to this luminosity class. This, however, has no effect on the analysis.]{}
- [[**[Microturbulence ().]{}**]{} Three different values of the microturbulence, $7$, $17$ and $27\,\rm{km\,s^{-1}}$, were considered for the calculation of the model atmospheres. A larger number of values was used in the computation of the formal solutions. This procedure does not lead to inconsistencies in the spectrum as long as the value used in the formal solution does not depart far from the value used in the model atmosphere. Hence, the formal solutions were calculated with $\xi$ equal to $5$, $7$, $10$ and $12\,\rm{km\,s^{-1}}$ (in the case of model atmospheres computed for $7\,\rm{km\,s^{-1}}$), $15$, $17$, $20$ and $22\,\rm{km\,s^{-1}}$ (for $17\,\rm{km\,s^{-1}}$), and $25$, $27$ and $30\,\rm{km\,s^{-1}}$ (for $27\,\rm{km\,s^{-1}}$).]{}
- [[**[Helium abundance (/).]{}**]{} The helium abundance by number is sampled with four points $0.05$, $0.1$ (solar), $0.15$ and $0.2$. We note here that, although our lowest value is below the primordial He abundance and hence is physically not realistic, it was set to avoid boundary issues, as previously discussed. ]{}
- [[**[Metallicity (Z).]{}**]{} Five values were used, from $Z=0.25$ to $1.25\,Z_{\odot}$ in steps of $0.25$, with the solar references taken from . For each metallicity, the elemental abundances (excluding He) are scaled by these values. ]{}
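The radius determination via the FGLR described above amounts to a few lines of arithmetic. The following is a minimal Python sketch, not the actual grid code; the function names and the physical constants are ours, and the FGLR coefficients are the central values of Eq. \[Eq:log\_gf\]:

```python
import math

SIGMA_SB = 5.670374e-5   # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4
L_SUN = 3.828e33         # solar luminosity, erg s^-1
R_SUN = 6.957e10         # solar radius, cm
MBOL_SUN = 4.74          # solar bolometric magnitude

def flux_weighted_gravity(log_g, teff):
    """log g_F = log g - 4 log(Teff x 10^-4), with Teff in K."""
    return log_g - 4.0 * math.log10(teff * 1.0e-4)

def mbol_from_fglr(log_gf):
    """Bolometric magnitude from the FGLR calibration (central values)."""
    return 3.41 * (log_gf - 1.5) - 8.02

def radius_from_fglr(teff, log_g):
    """Stellar radius (in R_sun) implied by Teff, log g and the FGLR."""
    mbol = mbol_from_fglr(flux_weighted_gravity(log_g, teff))
    # L = L_sun * 10^{-0.4 (Mbol - Mbol_sun)}, then R from L = 4 pi R^2 sigma Teff^4
    lum = L_SUN * 10.0 ** (-0.4 * (mbol - MBOL_SUN))
    r_cm = math.sqrt(lum / (4.0 * math.pi * SIGMA_SB * teff ** 4))
    return r_cm / R_SUN
```

For $T_{\rm{eff}}=25000\,$K and $\log\,g=2.85$ (i.e. $\log\,g_F\simeq1.26$) this yields a radius of roughly $28\,R_\odot$, a sensible value for a B supergiant.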
<!-- -->
- **Terminal velocity ().** For each model, this parameter was obtained from the escape velocity ($v_\mathrm{esc}$) by using an empirical calibration based on the works by , and . According to the studies of [@1995ApJ...455..269L] and there is a bimodal relationship between both quantities, with an abrupt change at the location of the so-called *bi-stability jump*. Contrary to this idea, argued that there is no such break in this relationship; rather, these authors propose a smooth transition around $20000\,$K. With the goal of emulating the empirical values, a monotonic trend was adopted in the range of temperatures where the jump is located,
$${\footnotesize
\frac{v_\mathrm{\infty}}{v_\mathrm{esc}} =
\begin{cases}
1.10 & T_{\rm{eff}} \leq 15\,\rm{kK}\\
11.65\,\log\,T_{\rm{eff}}-47.62 & 15\,\rm{kK} < T_{\rm{eff}} < 24\,\rm{kK} \\
3.41 & T_{\rm{eff}} \geq 24\,\rm{kK}
\end{cases}
\label{Eq:Vinf}
}$$
To account for metallicity effects we assumed that the terminal velocity scales with metallicity as $v_\mathrm{\infty}(Z)\propto Z\,^{0.12}$ (@1992ApJ...401..596L, see also , and @2002ApJ...577..389K).
- **Wind velocity law, .** We adopted an empirical linear relationship between $\beta$ and $T_{\rm{eff}}$, based on results obtained by @2004Miguel_Tesis and for Galactic B-type supergiants with temperatures in the range $\sim$10000–$\sim$31000 K. Beyond those limits we used fixed values:
$${\footnotesize
\beta =
\begin{cases}
3.60 & T_{\rm{eff}} \leq 10\,\rm{kK}\\
-1.40\, (T_{\rm{eff}}\times 10^{-4})+5 & 10\,\rm{kK} < T_{\rm{eff}} < 30\,\rm{kK} \\
0.70 & T_{\rm{eff}} \geq 30\,\rm{kK}
\end{cases}
\label{Eq:beta}
}$$
- **Mass-loss rate .** For each combination of $T_{\rm{eff}}$, $\log\,g$, $\xi$, Z and He/H, we considered three different values of the mass-loss rate. showed that different combinations of mass-loss rate, terminal velocity and radius produce the same emergent synthetic profiles as long as the optical depth invariant, defined for smooth winds as $Q = \dot{M} \, / \, \left(R_{*}v_\mathrm{\infty}\right)^{1.5}$, remains constant. Since $R_{*}$ and $v_\mathrm{\infty}$ are not free parameters in our grid, variations of $\dot M$ are equivalent to variations of $Q$. In order to use realistic mass-loss rates for each model, we applied the empirical relationship between the wind momentum and the stellar luminosity found by @1995svlt.conf..246K and to define a proper $Q$-value. The wind momentum–luminosity relationship is defined by
$$\begin{aligned}
\log\,D_\mathrm{mom} &\cong x\,\log\,L_{*}/L_{\odot}+D_\mathrm{o} \nonumber \\
& \cong 2.31 \log\,L_{*}/L_{\odot}+15.94
\label{Eq:WLR}\end{aligned}$$
where $D_\mathrm{mom}$ is the modified wind momentum ($\log\,D_\mathrm{mom} = \log\,(\dot{M}v_\mathrm{\infty}R_{*}^{1/2})$). The values for $x$ and $D_\mathrm{o}$ used in Eq. \[Eq:WLR\] were derived from a linear regression to data from previous studies .
For each set of stellar parameters three different $Q$ values are considered: the one derived from Eq. \[Eq:WLR\], $\log\,Q$, and two others with $\log\,Q$ increased/decreased by $0.5\,$dex. Finally, we set a lower limit for $\dot{M}$ at $10^{-8}\,M_{\odot}\,\rm{yr}^{-1}$, since at such low mass-loss rates the wind effects on the optical profiles are negligible.
The metallicity dependence of the mass-loss rate is accounted for with a power-law, $\dot{M}(Z)\propto Z\,^\mathrm{m}$. For the exponent, we use the results from that found $m=0.83$ based on the analysis of Galactic, LMC and SMC stars.
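For reference, the piecewise wind recipes above can be collected in a few helper functions. This is an illustrative Python sketch under the calibrations quoted in the text (Eqs. \[Eq:Vinf\] and \[Eq:beta\] plus the metallicity exponents); the function names are ours:

```python
import math

def vinf_over_vesc(teff):
    """Terminal-to-escape velocity ratio, Eq. Vinf (Teff in K)."""
    if teff <= 15000.0:
        return 1.10
    if teff >= 24000.0:
        return 3.41
    return 11.65 * math.log10(teff) - 47.62

def beta_law_exponent(teff):
    """Velocity-law exponent beta, Eq. beta (Teff in K)."""
    if teff <= 10000.0:
        return 3.60
    if teff >= 30000.0:
        return 0.70
    return -1.40 * (teff * 1.0e-4) + 5.0

def scale_to_metallicity(vinf_solar, mdot_solar, z_over_zsun, m_exp=0.83):
    """Apply the power-law scalings v_inf ~ Z^0.12 and Mdot ~ Z^m."""
    return (vinf_solar * z_over_zsun ** 0.12,
            mdot_solar * z_over_zsun ** m_exp)
```

Between the fixed branches, both recipes interpolate monotonically in $T_{\rm{eff}}$, as intended by the grid design.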
### Atomic models {#atomo}
The atomic models used in the calculations play an important role not only in the determination of the stellar parameters but also in the computational time required per model. Detailed atomic models of , [@Jokuthy_2002], (N. Przybilla 2007, private communication) and of (; D. Kunze 1998, private communication) are explicitly considered (see below) during the stellar parameter determination. The inclusion of O is critical since, at low spectral resolution, the lines at $4116\,\AA$ and $4128-30\,\AA$ are blended with several transitions. During the chemical abundance analysis, detailed models for , and (K. Butler 1998, private communication) are also incorporated for the calculation of the tailored models.
We note here that the other species are treated in an implicit way to account for blanketing/blocking effects. For further details, the reader is referred to .
Stellar parameters determination {#metodo_auto}
--------------------------------
Ion $\lambda\,(\AA)$ Ion $\lambda\,(\AA)$
----- ------------------ ------- ------------------
HI 4101.74 HeII 4542.80
HI 4340.47 SiII 4128.07
HI 4861.33 SiII 4130.89
HeI 4026.19 SiIII 4552.62
HeI 4387.93 SiIII 4567.84
HeI 4471.48 SiIII 4574.76
HeI 4921.93 SiIV 4116.10
: Spectral features used in the determination of the fundamental parameters.
\[Tab:lineasLR\]
To avoid a subjective and time-consuming procedure we have implemented a straightforward $\chi^{2}$ technique, similar to the methodology proposed by . The observed spectrum is compared to a grid of synthetic models in a number of relevant optical lines. To characterize the goodness of fit, the differences are evaluated through the following expression:
$$\chi^2_\mathrm{i}=\frac{1}{n_\mathrm{lines}}\,\sum_{\mathrm{j=1}}^{n_\mathrm{lines}} \frac{1}{n_{\nu}}\,\sum_{\nu=1}^{n_{\nu}} \left(\frac {y_\mathrm{ij}-y_\mathrm{obs}}{\sigma}\right) ^{2}
\label{Eq:chi2}$$
where $n_{\nu}$ is the number of wavelength points in the spectral line $j$, $y_\mathrm{obs}$ and $y_\mathrm{ij}$ are the observed and synthetic fluxes respectively (the index $i$ runs over the set of models), and the uncertainty $\sigma$ is estimated according to the signal-to-noise ratio (SNR). Finally, the average over all the transitions is taken, with all the lines carrying the same weight. We also tested giving more weight to those lines that could have more impact on particular stellar parameters (e.g. silicon transitions for the effective temperature), but the results revealed a better match on average when no extra weights were imposed. The selected lines are listed in Table \[Tab:lineasLR\]. This selection is based on the observed range, the quality of the data and previous experience in modeling these spectral features.
The stellar parameters, and their uncertainties, are determined in two steps, which account for different sources of uncertainty:
- From the $\chi^2$ distribution generated by Eq. \[Eq:chi2\] we selected all those models with a $\chi^2$ value below the $\chi^2$ minimum plus $15\%$. This percentage was not chosen arbitrarily, but calibrated using a sample of high resolution spectra with SNR larger than 150 obtained as part of the IACOB[^3] spectroscopic database. We analyzed these spectra both with a classic method and with our new $\chi^2$ minimization. By comparing the results, we identified the percentage that recovered similar errors in both methods.
Those models with a $\chi^2$ that fulfill this criterion were chosen to derive the stellar parameters. The values and their errors were calculated by averaging all these models, weighted by $e^{-0.5\chi^2}$ (normal distribution probability), and their standard deviations respectively.
We investigated the effect that applying different percentage cuts would have on the derived parameters. For example, the difference between a 10% or 20% cut is very small when compared with the errors, with an increment of $\sim200\,$K in the effective temperature uncertainty.
- [This method was complemented with a Monte Carlo simulation. Given the SNR of the spectrum, a random array of 100 displacements around the continuum position was generated. The stellar continuum was re-rectified using each of these displacements, and the stellar parameters and errors were re-calculated (through the step described before). This allows us to evaluate the impact that the SNR and the continuum rectification have on the results.]{}
Both steps produce a set of errors in the final stellar parameters. We are aware that they are not independent and both estimations are linked to the spectral SNR and the intrinsic uncertainties introduced by the grid design. The final uncertainties result from the quadratic sum of these two uncertainty sources.
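Schematically, the first step (Eq. \[Eq:chi2\] together with the $15\%$ cut and the probability-weighted averaging) could be implemented as follows. This is a simplified Python sketch, not the production code; the data layout and function names are illustrative:

```python
import math

def chi2_per_model(synth_lines, obs_lines, sigma):
    """Eq. chi2: average over diagnostic lines of the per-line mean
    squared residual; synth_lines/obs_lines are lists of flux arrays."""
    line_terms = []
    for y_syn, y_obs in zip(synth_lines, obs_lines):
        res = [((s - o) / sigma) ** 2 for s, o in zip(y_syn, y_obs)]
        line_terms.append(sum(res) / len(res))
    return sum(line_terms) / len(line_terms)

def estimate_parameter(chi2_values, param_values, cut=0.15):
    """Keep models with chi2 <= chi2_min*(1 + cut); return the
    exp(-0.5*chi2)-weighted mean and standard deviation of the parameter."""
    chi2_min = min(chi2_values)
    sel = [(c, p) for c, p in zip(chi2_values, param_values)
           if c <= chi2_min * (1.0 + cut)]
    weights = [math.exp(-0.5 * c) for c, _ in sel]
    wsum = sum(weights)
    mean = sum(w * p for w, (_, p) in zip(weights, sel)) / wsum
    var = sum(w * (p - mean) ** 2 for w, (_, p) in zip(weights, sel)) / wsum
    return mean, math.sqrt(var)
```

In the Monte Carlo step, `estimate_parameter` would simply be re-run on each of the 100 re-rectified versions of the observed spectrum.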
### An example: HD 14818
To illustrate the determination of the stellar parameters, we present the application of our algorithm to the case of the Galactic B-type supergiant HD 14818. A high quality optical spectrum is first degraded to the characteristics of our FORS2 data, $R=1000$ and $SNR=100$. The $\chi^2$ calculations were applied on individual wavelength windows, covering the spectral features shown in Table \[Tab:lineasLR\]. Each wavelength range was carefully selected to include the entire line profile. Figure \[Fig:MT\] displays the results obtained from the Monte Carlo simulation for each individual parameter. As indicated, the spectral resolution, SNR and the set of lines were selected to represent the conditions of the spectra that will be analyzed in the forthcoming section (H$\alpha$ is not included in our analysis since it is not available in the case of the NGC 55 stars).
The quality of the final results is illustrated by Fig. \[Fig:hd14818\_MT\], where the [*observed*]{} spectrum of HD 14818 is compared with a model atmosphere computed for the parameters obtained by our automatic analysis algorithm, described in previous sections. Note that only the lines listed in Table \[Tab:lineasLR\] (marked in the figure) are considered and fitted for the analysis.
### Tests to the method {#test}
Before applying the routine to the analysis of real data, we carried out a number of tests to check the reliability of the proposed methodology. In the first one we analyzed a sample of three synthetic spectra generated with the [fastwind]{} code. For the second test, we considered the spectra of three Galactic stars (HD 209975 O9.5 Ib, HD 38771 B0.5 Ia and HD 14818 B2 Ia), originally from the IACOB spectroscopic database. In both cases, the test spectra were degraded to $R=1000$ and $SNR=100$ to simulate our FORS2 NGC 55 spectra. Taking into account that our FORS2 data do not include the H${\alpha}$ line, we decided to repeat the tests to evaluate the impact of including this line (columns ’Output’ and ’Output+H${\alpha}$’ in Tables \[Tab:Test\_Sint\]–\[Tab:Test\_Real\] respectively).
#### [Comparison to synthetic spectra]{}
\
Three sets of stellar parameters in the range of OB supergiant stars ($\log\,g_{F}\sim 1.0-1.5\,$dex) were randomly chosen from our model grid and analyzed according to the method presented in Sect. \[tools\]. Table \[Tab:Test\_Sint\] presents the input parameters of the models and the stellar parameters recovered by the method. At this spectral resolution, different combinations of parameters, such as microturbulence and the silicon and helium abundances, can produce similar profiles, which has a clear impact on the uncertainties. Nevertheless, our algorithm recovers the input values within the errors.
The analysis yields a better estimate of the wind parameter $Q$ when H$\alpha$ is incorporated. This also produces a slight variation in the rest of the parameters, but always within the uncertainties of the previous results obtained without H$\alpha$. It seems possible to constrain $Q$ even without H$\alpha$, relying only on the rest of the Balmer lines (mainly H$\beta$). Only for the very hot case ($30900\,$K) do we find large differences when H$\alpha$ is not included in the analysis, with a shift of $0.20\,$dex with respect to the input value. For the other two cases, the differences with respect to the input value do not exceed $0.05\,$dex.
Additional tests were performed with the spectra degraded to $SNR\,=\,50$ (see Table \[Tab:Test\_Sint\]). As expected, a wider set of models is compatible with the observations in this case, which translates into larger uncertainties. Nevertheless, the input parameters and the recovered values are in good agreement, always within 1$\sigma$.
#### [Comparison to Galactic stars]{}
\
The results obtained in the analysis of the three Galactic supergiants are collected in Table \[Tab:Test\_Real\], along with the parameters derived by and [@2004Miguel_Tesis]. We have restricted the comparison to these two works to minimize the possible effects introduced by the application of different stellar atmosphere codes; here, we are interested in the performance of our methodology. Both studies used [fastwind]{} models, although there are some unavoidable differences. For instance, kept the microturbulence fixed to $10\,\rm{km\,s^{-1}}$, considering it a secondary parameter, and HD 14818 was analyzed by [@2004Miguel_Tesis] with an early version of [fastwind]{} that did not include line blocking/blanketing, which explains the difference in temperature (and gravity) with respect to our values.
The main conclusion of these tests is that there is a good agreement between the values obtained by our algorithm applied to the low spectral resolution data and the previous studies based on high spectral resolution data.
We note that there are important differences in $\log\,Q$ for the three stars, although the values are consistent within the errors. Synthetic H$\alpha$ profiles depend not only on the wind parameters, but also on the effective temperature and surface gravity. Thus, changes in these two fundamental parameters will modify the profile shape, with the consequent adjustment of the value recovered for $Q$; the uncertainties in the stellar parameters will propagate to $\log\,Q$ as well. The differences we have found could also reflect real changes in the observed profile, since the H$\alpha$ profiles were collected in different observational campaigns (H$\alpha$ is variable in these kinds of stars, ). Finally, we should keep in mind that there could be (some) differences inherited from employing different versions of the same model atmosphere code. Nevertheless, in spite of these differences, the values recovered by our method are comparable with the ones obtained by these works based on high spectral resolution data.
As in the case of the synthetic models discussed above, we performed the analysis of the observed spectra degraded also to $SNR=50$. The outcome of this test is the same; the loss of information due to the lower SNR is reflected in larger uncertainties.
Both sets of tests confirm the reliability of the technique in finding the main stellar parameters. The quality of the data (SNR) and the available observed wavelength range define the accuracy that can be achieved. We have shown that, by using enough information (see Table \[Tab:lineasLR\]), it is possible to reach solid results, making our analysis algorithm a very promising tool for the analysis of large collections of optical spectra of OB stars.
Our main focus is the analysis of supergiant stars. Hence we have designed the grid and selected the spectral features for the analysis accordingly. However, this methodology can be applied to other spectral types/luminosity classes. Of course, this would require different diagnostic transitions, a different set of models, and would present different challenges. The reader is referred to for a thorough discussion and application of a very similar methodology in the case of Galactic dwarf and giant stars.
Abundance determination {#AbunDeter}
-----------------------
Once the fundamental parameters have been determined, we proceed with the analysis of the chemical abundances. As shown by @2003ApJ...584L..73U [@2005ApJ...622..862U], it is possible to derive individual chemical abundances from low resolution optical spectra of early B-type supergiants because of the relatively low number density of metal lines (which minimizes blends of lines from different species) and because a good number of relatively strong features can be easily detected. Following these considerations, the methodology applied in the chemical abundance analysis relies on simultaneously modeling all the diagnostic lines of a given species.
Using the derived stellar parameters, a new set of tailored models is computed for each star under analysis by varying the abundances of the relevant species (, , and ) in steps of $0.20\,$dex. The chemical analysis is performed in two complementary steps: first, an automatic $\chi^2$ fitting algorithm is applied; then, since the main features are sometimes weak or blended, a second, visual check is performed.
The abundance uncertainties are estimated from this new set of models, taking into consideration the SNR of the observed spectra. Therefore, the range of abundances defined by the uncertainties, for each individual species, accounts for the feature-to-feature scatter, in a similar way as would be done in a classic analysis.
The precision achieved in the chemical analysis depends not only on the quality of the spectra, but also on the spectral type and on the reliability of the atomic models. For a mid B-type, most of the considered transitions cannot be detected (either they are not present at these temperatures, or they are too weak to be detected at this low resolution and SNR). Note, however, that the and features are stronger for late B-type stars, so we can extract accurate information for those species. In a similar fashion, for a given spectral type and SNR, the abundance uncertainties will depend on the metallicity, with the expectation that errors become larger with decreasing metallicity, eventually reaching the limit where only upper limits could be placed.
For each individual element, we relied on a particular group of spectral features (see @2005ApJ...622..862U and @2007ApJ...659.1198E). Briefly, the lines considered for the chemical analysis, summarized in Table \[Tab:Lineasabun\], were:
----- ------------------ ----- ------------------ ----- ------------------ ----- ------------------ ----- ------------------
Ion   $\lambda\,(\AA)$   Ion   $\lambda\,(\AA)$   Ion   $\lambda\,(\AA)$   Ion   $\lambda\,(\AA)$   Ion   $\lambda\,(\AA)$
SiIV  $4116$             CII   $3919$             NII   $3995$             OII   $4076$             MgII  $4481$
SiIII $4552$             CII   $3921$             NII   $5007$             OII   $4319$
SiIII $4567$             CII   $4267$             NII   $5045$             OII   $4350$
SiIII $4574$                                                               OII   $4416$
SiII  $4128$
SiII  $4130$
----- ------------------ ----- ------------------ ----- ------------------ ----- ------------------ ----- ------------------

: Spectral features used in the chemical abundance determination.

\[Tab:Lineasabun\]
- [**Silicon.** We used the transitions of $4552-67-74\,\AA$, $4128-30\,\AA$ and $4116\,\AA$. The SiII lines are only available at low temperatures, for B2 and later spectral types. On the other hand, the SiIV profile is accessible in late O and early B-type stars. ]{}
- [**Oxygen.** The analysis looks for the best fit to the transitions around $4076$, $4319$, $4350$ and $4416\,\AA$; at low spectral resolution they are in fact blends. There are additional features around $4585$ and $4700\,\AA$, but they are blended with other elements. The latter will be used as a consistency check for the abundance derived using the other lines.]{}
- [**Nitrogen.** Some of the most prominent N features in the wavelength range covered by our FORS2 spectra are blended with other elements (for example around $4650\,\AA$), but there are still some isolated features that can be used to constrain the nitrogen abundance. The study was centered on $3995$, $5007$ and $5045\,\AA$. Note that, if the nebular subtraction is not accurate, the \[OIII\] $5007\,\AA$ transition could severely affect the overlapping nitrogen line.]{}
- [**Carbon.** Given the spectral quality and the wavelength range ($\sim3900-5000\,\AA$) the strongest transition is $4267\,\AA$. We consider that the results based (only) on this line are currently not as reliable as for the rest of the species (see ). Alternative lines are $3919-21\,\AA$, though too weak in many cases. The transitions around $4650\,\AA$, while blended with O and N, could serve as a secondary check. ]{}
- [**Magnesium.** Its abundance is determined using the line of $4481\,\AA$. Note that this transition could be blended with $4479\,\AA$ [@2005MNRAS.358..193L].]{}
Figure \[Fig:CNO\_ejemplo\] illustrates the results of the chemical analysis of HD 14818. The main diagnostic lines used in the elemental abundance determination, identified in the figure, show a good match with the final model.
Quantitative spectral analysis of NGC 55 B-type supergiant stars {#Quantitative}
================================================================
In C08 we presented the first spectral catalog of massive blue stars in NGC 55. Very briefly, optical spectra of $\sim$200 sources were collected with the FOcal Reducer/low dispersion Spectrograph 2 (FORS2, @1998Msngr..94....1A) at the Very Large Telescope (VLT-UT2). The instrument was equipped with the 600B grism, providing optical spectra in the range $\sim3900-6000\,\AA$ at a resolving power $R \sim1000$. A complete description of the data and the reduction process can be found in the aforementioned reference.
From this previous work, we selected twelve B-type supergiants for a detailed quantitative analysis. The selection was based on good SNR, spectral type, and the lack of obvious contamination by other sources, including strong nebular lines (when possible). The selection of spectral types between late O and early B guarantees that the main spectral features required for the analysis (see Table \[Tab:lineasLR\]) are available. The stars are spatially distributed across the galaxy (see Fig. \[Fig:NGC55\_cand\]), which will allow us to investigate the distribution of its chemical composition. Table \[catalog1\] summarizes all the relevant information and provides revised photometry. The stars are identified following C08.
\[catalog1\]
Stellar parameters
------------------
The stellar parameters of our sample of 12 NGC 55 B-type supergiants, obtained from the application of the analysis methodology described in the previous sections, are listed in Table \[StPa\]. The left sides of Figs. \[Fig:NGC55\_stars\_page2\] and \[Fig:NGC55\_stars\_page3\] show the comparison of the observations with tailored models computed for the parameters derived in this work. These final models provide a good match to the (in some cases rather noisy) observed spectra. The right sides of the figures display $\log\,\chi^2$ isocontours on the $T_{\rm{eff}}-\log\,g$ plane. Each black dot represents a model in the grid, whilst the white dots identify those models fulfilling the $\chi^2 - \chi^2_{\mathrm{min}}$ criterion defined in Sect. \[metodo\_auto\].
The impact that the SNR has on the derived parameters (uncertainties) can be gauged from these plots. For stars like A\_11 and B\_31, the models that reproduce the observations enclose a smaller area of the $T_{\rm{eff}}-\log\,g$ plane than in the case of C1\_45. The higher spectral quality is clearly reflected in the narrower range of stellar parameters, i.e. synthetic models, that are compatible with the observed spectra. Note also that the wide range of temperatures and gravities covered by our grid prevented, for our 12 stars, the presence of border effects, i.e. objects located too close to the limits of the model grid.
\[StPa\]
Chemical abundances
-------------------
With the stellar parameters in hand, we performed the analysis of the elemental chemical abundances as described in Sect. \[AbunDeter\]. The results are compiled in Table \[Abun\]. The characteristic metallicity, given in the last column, is obtained by averaging the differences of the derived O, Mg and Si abundances relative to the solar references, taken from . Projected distances to the galactic center are given in the second column, in units of the semi-major axis.
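For illustration, the characteristic metallicity of the last column reduces to a mean offset of the $\alpha$-element abundances with respect to the adopted solar values. A minimal Python sketch (the function name is ours; the solar references are those of the first row of Table \[Abun\]):

```python
SOLAR = {"Si": 7.51, "O": 8.69, "Mg": 7.60}  # solar reference row of Table Abun

def characteristic_metallicity(abundances, solar=SOLAR):
    """[Z/Z_sun] as the mean offset of the O, Mg and Si abundances from
    the solar references; missing measurements are given as None."""
    offsets = [abundances[el] - solar[el]
               for el in ("O", "Mg", "Si")
               if abundances.get(el) is not None]
    return sum(offsets) / len(offsets)
```

For A\_8 ($\epsilon_{\rm O}=8.28$, $\epsilon_{\rm Si}=7.18$, no Mg measurement) this gives $-0.37\,$dex, matching the table.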
The final models presented in Figures \[Fig:NGC55\_stars\_page2\] and \[Fig:NGC55\_stars\_page3\] include the derived elemental abundances. Overall, these figures show a good agreement between the final tailored models, computed for the final abundances, and the observed spectra. The average abundance uncertainty in these chemical analyses is $\sim0.25\,$dex, reaching $0.30\,$dex in the case of C1\_45, with its poor SNR. Our results indicate that it is possible to chemically characterize these stars with good precision, even for cases of relatively low SNR, such as C1\_53 or C1\_45.
We note here that the derived Mg abundances, based solely on the $4481\,\AA$ feature, could be affected by a blend with $4479\,\AA$. Following the arguments in the works by [@2005MNRAS.358..193L] and [@2011arXiv1109.6661D], only a small effect would be expected, given the low metallicity of NGC 55. Moreover, [@2005ApJ...635..311U] suggested that there is a negative luminosity dependence of the strength of the $4479\,\AA$ line, making its contribution even less relevant for supergiant stars, all the more so since $4481\,\AA$ presents the opposite behavior, with the feature strengthening with increasing luminosity.
ID $\rho/\rho_{o} $ $\epsilon_{\ion{Si}{}}$ $\epsilon_{\ion{C}{}}$ $\epsilon_{\ion{N}{}}$ $\epsilon_{\ion{O}{}}$ $\epsilon_{\ion{Mg}{}}$ \[Z/Z$_\odot$\]
--------- ------------------ ------------------------- ------------------------ ------------------------ ------------------------ ------------------------- ---------------
$\odot$ $ 7.51 $ $ 8.43 $ $ 7.83 $ $ 8.69 $ $7.60 $
A\_8 $ 0.64 $ $ 7.18 $ $ 7.27 $ $ 7.33 $ $ 8.28 $ $-- $ $ -0.37 $
C1\_44 $ 0.22 $ $ 7.08 $ $ 7.71 $ $ 7.56 $ $ 8.07 $ $-- $ $ -0.53 $
C1\_9 $-0.11 $ $ 6.80 $ $ 7.23 $ $ 7.95 $ $ 8.66 $ $6.80 $ $ -0.51 $
C1\_13 $-0.06 $ $ 7.07 $ $ 7.60 $ $ 7.63 $ $ 8.45 $ $6.92 $ $ -0.45 $
C1\_45 $ 0.22 $ $ 7.13 $ $ 8.07 $ $ 8.17 $ $ 8.34 $ $7.00 $ $ -0.44 $
A\_17 $ 0.75 $ $ 7.06 $ $ 7.77 $ $ 7.64 $ $ 8.37 $ $7.02 $ $ -0.45 $
D\_27 $-0.32 $ $ 7.01 $ $ 7.37 $ $ 8.28 $ $ 8.26 $ $7.27 $ $ -0.42 $
A\_27 $ 0.80 $ $ 7.04 $ $ 7.77 $ $ 8.52 $ $ 8.07 $ $6.98 $ $ -0.57 $
C1\_53 $ 0.28 $ $ 7.25 $ $ 7.52 $ $ 8.22 $ $ 8.63 $ $7.28 $ $ -0.21 $
B\_31 $ 0.50 $ $ 7.10 $ $ 7.22 $ $ 8.25 $ $ 8.59 $ $7.26 $ $ -0.28 $
A\_26 $ 0.78 $ $ 7.43 $ $ 7.48 $ $ 8.22 $ $ 8.60 $ $7.33 $ $ -0.15 $
A\_11 $ 0.68 $ $ 7.17 $ $ -- $ $ -- $ $ -- $ $7.20 $ $ -0.37 $
: Stellar abundances determined for Si, C, N, O, and Mg.
\[Abun\]
Comments on individual targets
------------------------------
Here we discuss some details of individual targets, along with problems encountered during their analysis.
- [**A\_8.** This is an O9.7 I star whose main features are well represented by our final model. At this hot temperature ($27700\,$K) the line has vanished and the lines are weak. At the same time, lines are present, allowing for a precise determination of the effective temperature. The line, blended with , is well reproduced. The apparent slight discrepancy around the $4079\,\AA$ line is most likely due to an effect of the normalization inside the H$\delta$ wing. Note also that the $4414-16\,\AA$ blend is weaker than the prediction. $4686\,\AA$ also shows a clear mismatch but, without a reliable wind estimation, we cannot go further in its analysis. ]{}
- [**C1\_44.** The main spectral features are well reproduced by the final model, with the exception of some transitions such as $4686\,\AA$, possibly affected by the wind and not considered in the analysis, or $4089\,\AA$, not included because of its blend with . The doublet at $4414-16\,\AA$ is also poorly reproduced, though the rest of the oxygen lines are consistent with the obtained abundance. At this temperature the line is quite weak. The transition at $5007\,\AA$ is filled in by nebular contamination.]{}
- [**C1\_9.** The final model shows a good global match to the majority of the considered transitions. The lines of O, Mg and Si are not particularly strong at this temperature, but they are well reproduced. The temperature derived corresponds to a spectral type cooler than the one assigned.]{}
- [**C1\_13.** This B1 supergiant presents a very nice match between the final model and the observed spectrum. The blends of $4079\ \AA$ and $4319\ \AA$ show the largest difference, but still within the abundance uncertainties of $\pm$0.25dex. Some contamination by nebular emission is still apparent in the $5007\ \AA$ line. The is weak, as expected for this temperature.]{}
- [**C1\_45.** This spectrum has the lowest SNR in the sample, which is clearly reflected in the errors derived for the stellar parameters. Figure \[Fig:NGC55\_stars\_page2\] shows a large spread of compatible models in the $T_{\rm{eff}}-\log\,g$ plane. Nonetheless, the model corresponding to our solution reproduces the main features well. The observed $3995\ \AA$ line is not well represented by the model, and the transition at $5007\ \AA$ is contaminated by nebular emission. As is typical for a B1 I star, the transition is weak.]{}
- [**A\_17.** This object shows a discrepancy in the continuum around H$\delta$, likely due to the rectification of the H$\delta$ wings. The derived parameters and abundances reproduce the rest of the spectrum well. The magnesium transition is weak. Note in Fig. \[Fig:NGC55\_stars\_page2\] the lack of symmetry in the $\chi^2$ distribution around the average values: there is a plume of suitable models towards high temperatures that also fulfill our goodness-of-fit criteria.]{}
- [**D\_27.** The line of at $4144\ \AA$ presents a mismatch. Note that this line is not included in the analysis.]{}
- [**A\_27.** This B2 I star is well represented by the final model. Aside from the mismatch around the $5007\ \AA$ region, the spectral features are well reproduced.]{}
- [**C1\_53.** In spite of the low SNR spectrum obtained for this star, the match between the main features analyzed and the best model is quite good. The spectrum also displays some residuals of nebular contamination at \[\] $5007\ \AA$.]{}
- [**B\_31.** This B2.5 supergiant star is nicely reproduced by our model, as it is shown in Fig. \[Fig:NGC55\_stars\_page3\]. The main transitions used in the analysis are well reproduced.]{}
- [**A\_26.** This B2.5 I star shows good agreement with the model. Its higher SNR results in a well constrained set of parameters. The oxygen lines are weaker at this temperature but they are all well modeled except for $4079\ \AA$; this issue may be caused by the continuum rectification, as in A\_17, or by residuals from cosmic-ray subtraction.]{}
- [**A\_11.** This star has the best SNR spectrum, and this is reflected in the small uncertainties. Its effective temperature suggests a spectral type slightly earlier than the one proposed by C08. At these cool temperatures the transitions of , and are very weak or have completely vanished. The lines are weak, but the transitions of (although blended with lines) become stronger and allow us to derive the effective temperature and constrain the silicon abundance.]{}
Discussion {#metallicity}
==========
In this section we discuss the results obtained for our sample of 12 B-type supergiant stars in the galaxy NGC 55. The physical characterization provided by the stellar parameters and chemical abundances supplies us with the information necessary to discuss their evolutionary status, as well as to carry out a comparison with the predictions of current evolutionary models. We adopt a distance modulus to NGC 55 of $\mu=26.434\pm0.037$ mag from @2008ApJ...672..266G.
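For reference, the adopted distance modulus converts to a linear distance through $\mu = 5\log_{10}(d/10\,\mathrm{pc})$; the short sketch below performs only this textbook conversion (it is not a new measurement):

```python
# Convert the adopted distance modulus of NGC 55 into a linear distance.
# mu = 5 * log10(d / 10 pc)  =>  d = 10**(mu/5 + 1) pc
mu = 26.434
d_pc = 10 ** (mu / 5.0 + 1.0)
print(f"d = {d_pc / 1e6:.2f} Mpc")  # ~1.94 Mpc
```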
Stellar properties {#Stellar_properties}
------------------
\[Photo\]
Table \[Photo\] gathers the fundamental stellar properties derived for the stars. The color excess of each object was calculated from the observed photometry (see Table \[catalog1\]) and the synthetic colors obtained from the final tailored models, adopting the extinction curve of [@1989ApJ...345..245C] and a total-to-selective extinction ratio R$_\mathrm{v}=3.1$, although several authors have shown that high $R_v$ values are not rare for massive stars (see, for instance, @2011ApJ...729L...9B or ). For only one of the objects, C1\_9, do we find a non-physical (i.e. negative) value, although it is very small and compatible with zero within the error bars. The high inclination of the galaxy, as well as the fact that these objects could be, to some extent, surrounded by ionized gas, could bias the observed photometry, which would explain this negative value, an effect also suggested by [@2007ApJ...659.1198E]. Three other objects, A\_17, A\_26 and C\_53, present high reddening values, $\sim$0.4 mag, whilst the rest of the sample show $E(B-V)$ values consistent with the mean value derived by [@2008ApJ...672..266G], $E(B-V)=0.127\pm0.019$ mag, from multi-wavelength observations of Cepheid stars.
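The color-excess step described above is simple arithmetic; the sketch below illustrates it with made-up placeholder colors (only the ratio $R_V=3.1$ comes from the text, the photometric values are not the actual measurements of Table \[catalog1\]):

```python
# Color excess from observed vs. synthetic (model) colors, and the implied
# V-band extinction for the adopted ratio R_V = 3.1.
# The input colors below are illustrative placeholders only.
R_V = 3.1
bv_observed = 0.08   # hypothetical observed (B - V)
bv_model = -0.12     # hypothetical synthetic (B - V) from the tailored model

ebv = bv_observed - bv_model   # E(B - V)
a_v = R_V * ebv                # A_V = R_V * E(B - V)
print(f"E(B-V) = {ebv:.2f} mag, A_V = {a_v:.2f} mag")
```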
@1983MNRAS.204..743W studied a sample of 7 regions distributed along the southern half of NGC 55, covering approximately the same range in galactocentric distances as our stellar sample. From their published $C\left(H\beta\right)$ values we can obtain reddening values by adopting $E(B-V) = 0.676\,C\left(H\beta\right)$ [@1979MNRAS.187P..73S]. The regions present reddening values in the range 0.16–0.32 mag, with a simple mean of $E(B-V)=0.24\pm0.07$ mag; three regions show $E(B-V)>0.3$ mag. The mean value of our sample, $E(B-V)=0.20\pm0.14$ mag, compares well with the nebular mean, albeit with a larger scatter.
The distribution of the stars in the HR diagram is shown in Fig. \[Fig:NGC55\_HR\], together with the evolutionary tracks by . According to the metallicity of the sample (Table \[Abun\]), we consider a linear interpolation between evolutionary tracks computed for the Small Magellanic Cloud metallicity and solar metallicity tracks . The evolutionary masses derived from the interpolation of these tracks are shown in the last column of Table \[Photo\]. These results were also checked with the recent evolutionary calculations for LMC metallicity by , finding very similar results.
The comparison of spectroscopic and evolutionary masses has been an important source of discrepancy between evolutionary theories and stellar atmosphere modeling for decades. Improvements in both fields have minimized this issue. Nonetheless, a systematic shift can still be measured in the analysis of B-type supergiant stars. For instance, the analysis carried out by [@2009ApJ...704.1120U] in M 33 revealed an average difference of $0.06\,$dex between spectroscopic and evolutionary masses. Although small, it is still a systematic issue. Figure \[Fig:MM\] displays the relationship between these two mass estimates for our NGC 55 B-type supergiant stars. The errors in the spectroscopic masses are large enough to make the full sample consistent with the one-to-one relation (left panel in Fig. \[Fig:MM\]); half of the sample shows very good agreement between both measurements and no systematic trend is evident, whilst four stars (A\_17, A\_26, C1\_45 and C1\_53, marked in the figure with gray squares) show significantly higher evolutionary than spectroscopic masses. Table \[Photo\] reveals that three of them show the highest color excesses in the sample, which could point to a nebular contamination effect in the observed photometry. On the other hand, A\_27 and B\_31 show the opposite behavior (marked with gray diamonds in Fig. \[Fig:MM\]). Given the spatial and spectroscopic resolution, we cannot rule out additional unresolved companions that could be affecting the photometry (but are not evident in the optical spectra).
Flux-weighted Gravity–Luminosity Relationship
---------------------------------------------
Figure \[Fig:FGLR\] shows the flux-weighted gravity–luminosity relationship for the NGC 55 stars, together with the results published by [@2008ApJ...681..269K], [@2008ApJ...684..118U] and [@2009ApJ...704.1120U]. The location of the NGC 55 stars shows good agreement with these studies, poorest for those stars for which we have found disagreements between the different mass estimates. The other six stars follow the same trend with a slight shift towards higher bolometric magnitudes, although they are within the observed scatter of the distribution [@2008ApJ...681..269K]. An independent determination of the distance to NGC 55 based on the FGLR is deferred to a future publication, since information from BA supergiant stars, with $\log\,g_F>1.5$ dex, is mandatory to properly determine the distance modulus.
Evolutionary chemical status
----------------------------
Current massive star evolutionary models, accounting for the effects of mass loss and rotation, predict a tight relationship between the / and / surface ratios as a consequence of mixing with CNO-processed material from inner layers during the stellar evolution (e.g. ). Previous studies have observationally found this relationship in Galactic stars (see for instance and references therein), although the large uncertainties leave open a broad range of interpretations. The detailed study presented by on a sample of Galactic B-type dwarfs and A-type supergiant stars of the solar neighborhood revealed very good agreement with the theoretical predictions of single-star evolutionary models, in the particular range of stellar masses sampled by these objects, $\sim$20–40$M_\odot$. It still has to be proven that this is also the case for other mass ranges. Our sample contains a heterogeneous group of stars distributed throughout the disc of NGC 55; nonetheless, the left panel of Fig. \[Fig:NGC55\_CNO\_evol\] shows that these stars follow the theoretical predictions in a qualitative sense.
Our derived N abundances appear to define two groups of stars, the first one clustering around 7.62$\pm$0.22 dex (simple mean and standard deviation) and a second one at 8.28$\pm$0.125 dex. Adopting as N baseline the region values derived by @1983MNRAS.204..743W, 6.63$\pm$0.10 dex and a mean N/O=0.015, all our stars show a high degree of N processing: N/O=0.15 and 0.69 (mean values for each group). Interestingly enough, similar levels of enrichment have recently been reported for O-type stars in the LMC by [@2011arXiv1110.5148R]. These authors find two distinct groups of strongly enriched objects, with N abundances of 7.5 dex and 8.1 dex, the LMC baseline N abundance being 6.9 dex. Our B-type supergiant stars in NGC 55 show a remarkable agreement with the N abundances of O-type stars in the LMC. This strongly supports the idea that our B supergiant stars belong to a young population evolving away from the Main Sequence towards the red part of the HR diagram. We see no indication of any object being in a blue loop, i.e. being a post-Red Supergiant object: besides the previous discussion, current evolutionary models predict that blue loops cannot reach the temperatures of our objects. Therefore, the main conclusion is that none of these objects is in an advanced evolutionary stage.
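The quoted N/O ratios follow from the logarithmic abundances by simple arithmetic. The sketch below illustrates the step; note that the baseline O abundance is *inferred* by us from the quoted baseline N abundance and N/O=0.015 (it is not a value stated in the text), and small offsets from the quoted group means may arise if individual stellar ratios were averaged instead:

```python
import math

# Baseline from the regions: logarithmic N abundance of 6.63 and N/O = 0.015,
# which together fix an implied baseline O abundance (our inference):
log_n_base = 6.63
log_o_base = log_n_base - math.log10(0.015)   # ~8.45

# Mean N abundances of the two stellar groups quoted in the text:
ratios = [10 ** (log_n - log_o_base) for log_n in (7.62, 8.28)]
print([round(r, 2) for r in ratios])  # close to the quoted N/O = 0.15 and 0.69
```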
Metallicity distribution in the disk of NGC 55
----------------------------------------------
Mean$\,\pm\,\sigma$ *a* *b*
-- --------------------- ------------------- -------------------
$ -0.40\pm0.15 $ $ 0.05\pm0.07 $ $ -0.42\pm0.03 $
$ -0.30\pm0.21 $ $ -0.06\pm0.18 $ $ -0.28\pm0.09 $
$ -0.49\pm0.18 $ $ 0.11\pm0.15 $ $ -0.53\pm0.08 $
$ -0.40\pm0.13 $ $ 0.09\pm0.10 $ $ -0.43\pm0.05 $
: Element abundances of , , and , averaged over the whole sample, together with the coefficients of the radial gradient fits.
\[grad\]
Previous works based on the study of the emission line spectra of regions have found that the present-day (oxygen) metallicity of NGC 55 is very similar to that of the LMC [@1983MNRAS.204..743W; @2005ApJ...622..279D]. From our sample of 12 B-type supergiant stars we find a mean metallicity of $-0.40\pm0.13\,$dex (see the first column in Table \[grad\]), a value quite close to the LMC metallicity. We reached a similar conclusion in our previous qualitative analysis (C08).
We also calculated the / abundance ratio for the four regions for which [@1983MNRAS.204..743W] reported the detection of the \[\] 4363$\AA$ auroral line. This allows us to provide ‘direct’ nebular abundances, which are independent of the various calibrations of strong-line methods. Electron temperatures were calculated with the [*temden*]{} program in IRAF[^4], using the \[\] $4363/(4959+5007)$ line ratio and updated atomic parameters, as in [@2009ApJ...700..309B]. The $^{+}$ and $^{++}$ ionic abundances were then calculated with the program [*ionic*]{}, and / was obtained as the sum of these two ionic abundances. Figure \[Fig:NGC55\_CNO\] displays the excellent agreement between these new abundances (shown as stars in the figure) and the ones derived from our sample of B-type supergiants.
The inclination of NGC 55, along with its apparently irregular shape, makes this galaxy a difficult object for morphological classification. The presence of a metallicity gradient across the galaxy would hint at a possible spiral disc. The study of [@1983MNRAS.204..743W] did not find any trace of spatial variations in the southern part of the galaxy. To investigate this issue, the distributions of silicon, oxygen and magnesium were measured from our sample of B-type supergiants. We show the spatial trends in Fig. \[Fig:NGC55\_CNO\]. The individual elemental abundances hint at an almost null gradient. In order to quantify these results, we fit radially dependent gradients, $ [\ion{X/H}{}]=a\,(\rho/\rho_{o} )+b$, to our stellar data, excluding from the fits those objects with values beyond $\pm2\,\sigma$ of the mean value. In this expression, $[\ion{X/H}{}]=\log\,(\ion{X/H}{})-\log\,(\ion{X/H}{})_{\odot}$, where $X/H$ represents the abundance of each element relative to H by number, $(X/H)_{\odot}$ is the solar reference, and $\rho$ is the projected galactocentric distance (the semi-major axis, $\rho_{o}\sim16'$ for NGC 55). The parameters of the linear regression, $a$ and $b$, are collected in Table \[grad\]. Considering the errors in the regression coefficients (Table \[grad\]), all the elements show a spatial distribution consistent with no gradient, supporting the results of [@1983MNRAS.204..743W], but in this case based not only on O, but also on Mg and Si. Nonetheless, it would be highly desirable to expand the sample towards the western half of the galaxy before drawing a definitive conclusion. The right side of Fig. \[Fig:NGC55\_CNO\] shows the 2D abundance distribution over NGC 55. No clear abundance pattern emerges when two dimensions are considered either. We cannot rule out projection effects that could blur a chemical gradient, given the high inclination of NGC 55.
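The gradient-fitting procedure described above (a linear fit after a $\pm2\,\sigma$ clip about the sample mean) can be sketched as follows; the data here are synthetic and purely illustrative, mimicking a flat abundance distribution with one outlier:

```python
import numpy as np

def fit_gradient(rho_norm, x_h, n_sigma=2.0):
    """Fit [X/H] = a * (rho/rho_o) + b, excluding points more than
    n_sigma standard deviations from the sample mean."""
    rho_norm = np.asarray(rho_norm, dtype=float)
    x_h = np.asarray(x_h, dtype=float)
    keep = np.abs(x_h - x_h.mean()) <= n_sigma * x_h.std()
    a, b = np.polyfit(rho_norm[keep], x_h[keep], 1)
    return a, b

# Synthetic example: a flat distribution around -0.40 dex with one outlier.
rho = np.array([0.1, 0.2, 0.3, 0.5, 0.6, 0.8])
xh = np.array([-0.42, -0.38, -0.41, -0.39, -0.40, 0.50])  # last point is an outlier
a, b = fit_gradient(rho, xh)
print(f"a = {a:.2f}, b = {b:.2f}")  # slope consistent with zero
```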
Summary and Conclusions {#Conclusions}
=======================
Motivated by the necessity of analyzing large samples of optical spectra of massive blue stars in an objective way, even for the case of low spectral resolution data, we have undertaken the steps to implement a grid-based methodology. We first computed an extensive model grid with the model atmosphere/line formation code [fastwind]{}. This new grid was specifically designed for the spectral analysis of blue supergiant stars of spectral types O9 to A0. Secondly, we implemented an algorithm that determines the stellar parameters by finding the subset of models in the grid fulfilling the criteria of minimizing the differences with respect to the observed spectrum.
We have shown, through a number of control tests, that our methodology is well suited for the analysis of optical spectra of B-type supergiants, even in the low spectral resolution case. The analysis of synthetic models (degraded to the expected observational conditions), showed that the stellar parameters are recovered with a high degree of fidelity. Furthermore, the study of three Galactic stars, degraded to low resolution and SNR, provided answers that are consistent with results based on high spectral resolution analysis present in the literature.
As a first application of our model grid and analysis algorithm we have analyzed a sample of 12 early B-type supergiants located in the Sculptor filament galaxy NGC 55. Our methodology allowed us to obtain a complete characterization of these stars, in terms of their stellar parameters and surface chemical composition, in spite of the low spectral resolution and SNR. The tailored final models provided an accurate match to the observations.
Half of the objects in our sample presented a good agreement between the evolutionary and spectroscopic masses. For the rest, the agreement is not so good, but the results could be considered in agreement within the uncertainties of the analysis. The location of the stars in the FGLR, for the adopted distance to NGC 55, showed a good correspondence with results obtained in previous studies.
The average metallicity of the sample is $\log\,\left(Z/Z_\odot\right)\sim-0.40$ dex. Our results indicate that NGC 55 does not sustain radial abundance gradients, thus confirming previous works based on regions. Nonetheless, the inclination and morphological structure of NGC 55 make this galaxy an interesting target for studies of the 2D metallicity distribution, key for understanding its chemical evolution. The derived CNO compositions show that our stars are evolving away from the Main Sequence, and that none of these objects is returning from an excursion to the red side of the HR diagram. We have found an apparent separation of the nitrogen abundances into two groups, both strongly enriched in comparison with the N baseline abundance defined by the regions. The derived values are in good agreement with a recent study of LMC O-type stars by [@2011arXiv1110.5148R], strongly supporting the idea that our objects are evolving directly from the Main Sequence.
We have shown the reliability of our new automatic, objective and fast methodology for the analysis of massive blue stars, even at low spectral resolution. Its application to large samples will enable us to tackle different issues in a statistical and systematic way. In future work, we will apply this method to an extended sample of massive blue stars in NGC 55. This analysis will provide us with additional information on the discrepancy of masses for B-type supergiant stars, their evolution, the FGLR and a detailed description of the 2D galactic chemical distribution.
The authors would like to thank the referee, I. Hunter, for his useful comments and very helpful suggestions to improve this paper. A. Z. Bonanos is also acknowledged for her careful reading of the manuscript. This project has been supported by Spanish grants AYA2008-06166-C03-01 and AYA2010-21697-C05-04, and was partially funded by the Spanish MICINN under the Consolider-Ingenio 2010 Program grant CSD2006-00070 (http://www.iac.es/consolider-ingenio-gtc) and the Gobierno Autónomo de Canarias under project PID2010119. NC acknowledges research and travel support from the European Commission Framework Program Seven under the Marie Curie International Reintegration Grant PIRG04-GA-2008-239335. WG and GP gratefully acknowledge financial support for this work from the Chilean Center for Astrophysics FONDAP 15010003, and from the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) PFB-06/2007. MAU, FB and RPK were supported by the National Science Foundation under grant AST-1008798. In addition, RPK acknowledges support by the Alexander-von-Humboldt Foundation and the hospitality of the MPI for Astrophysics and the University Observatory Munich, where part of this work was carried out. The authors would like to thank the Instituto de Astrofísica de Canarias computer network and CONDOR (http://www.cs.wisc.edu/condor) facilities. Support from the FOCUS and TEAM subsidies of the Foundation for Polish Science (FNP) and the Ideas Plus grant of the Ministry of Science and Higher Education is also acknowledged.
[^1]: Based on observations obtained at the ESO VLT Large Programme 171.D-0004.
[^2]: https://sites.google.com/site/araucariaproject/
[^3]: The IACOB spectroscopic database (Simón-Díaz et al., in preparation) presently contains $\sim200$ high quality spectra of O- and B-type Galactic stars ($R=46000$, $SNR>150$)
[^4]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
---
abstract: 'The X-ray transient XTE J1719–291 was discovered with *RXTE/PCA* during its outburst in 2008 March, which lasted at least 46 days. Its 2-10 keV peak luminosity is 7 $\times$ $10^{35}$ erg s$^{-1}$ assuming a distance of 8 kpc, which classifies the system as a very faint X-ray transient. The outburst was monitored with *Swift*, *RXTE*, *Chandra* and *XMM-Newton*. We analysed the X-ray spectral evolution during the outburst. We fitted the overall data with a simple power-law model corrected for absorption and found that the spectrum softened with decreasing luminosity. However, the *XMM-Newton* spectrum cannot be fitted with a simple one-component model, but it can be fitted with a thermal component (black body or disc black body) plus a power-law model affected by absorption. Therefore, the softening of the X-ray spectrum with decreasing X-ray luminosity might be due to a change in the photon index or, alternatively, to a change in the properties of the soft component. Assuming that the system is an X-ray binary, we estimated a long-term time-averaged mass accretion rate of $ \sim $ 7.7 $\times$ $10^{-13}$ $\sol$ yr$^{-1}$ for a neutron star as compact object and $ \sim $ 3.7 $\times$ $10^{-13}$ $\sol$ yr$^{-1}$ in the case of a black hole. Although no conclusive evidence is available about the nature of the accretor, based on the X-ray/optical luminosity ratio we tentatively suggest that a neutron star is present in this system.'
author:
- |
M. Armas Padilla$^{1}$ [^1], N. Degenaar$^{1}$, A. Patruno$^{1}$, D. M. Russell$^{1}$, M. Linares$^{2}$, T.J. Maccarone$^{3}$, J. Homan$^{2}$ and R. Wijnands$^{1}$\
$^{1}$Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands\
$^{2}$MIT Kavli Institute for Astrophysics and Space Research, 70 Vassar Street, Cambridge, MA 02139, USA\
$^{3}$School of Physics and Astronomy, University of Southampton, Hampshire SO17 1BJ,United Kingdom
bibliography:
- 'bibliography.bib'
title: 'X-ray softening in the new X-ray transient XTE J1719–291 during its 2008 outburst decay'
---
X-rays: binaries – stars: individual: XTE J1719–291 – accretion, accretion discs
Introduction
============
X-ray transients spend most of their time in a dim quiescent state, with an X-ray luminosity of $10^{31-34}$ erg s$^{-1}$. It is mostly during outbursts, when the luminosity increases by more than two orders of magnitude, that these systems are discovered. The nature of these X-ray transients is varied. Many of them harbour compact objects (black holes or neutron stars) accreting matter from a companion star. In these systems the outbursts are attributed to a strong increase in the accretion rate onto the compact object due to a hydrogen ionization instability in the accretion disc [@Lasota2001].
The peak luminosity reached during these accretion outbursts ($L^{peak}_{X}$; 2-10 keV) covers a wide range, from $10^{34}$ to $10^{39}$ erg s$^{-1}$. Depending on this luminosity, X-ray transients can be classified as *bright* ($L^{peak}_{X}\sim10^{37-39}$), *faint* ($L^{peak}_{X}\sim10^{36-37}$) or *very faint* ($L^{peak}_{X}\sim10^{34-36}$; see @wijnands2006). This classification is not strict since hybrid systems do exist which exhibit large variations in their peak $L_{X}$ from outburst to outburst (e.g. SAX J1747.0-2853; @Werner2004; @Wijnands2002).\
Very faint X-ray transients (VFXTs) have been discovered in the last decade thanks to the improved sensitivity of X-ray instruments. Currently several tens of VFXTs are known but, despite the reasonable number of sources detected, only very few of them have been studied in detail during outburst. Hence the characteristics of these peculiar sources, as well as their nature, are still poorly understood. Some of them are neutron stars accreting from, most likely, low-mass stars, since these systems have shown Type-I bursts (e.g. @Cornelisse2002; @DelSanto2007; @Chelovekov2007; @Degenaar2009). Classical novae are a possible class of these very faint transients too: @Mukai2008 have argued that such systems can constitute a small part of the X-ray transient population in the Galactic center, since they can reach peak luminosities in the 10$^{34-35}$ erg s$^{-1}$ range through nuclear fusion of the matter accreted onto the white dwarf surface. Another possibility is the symbiotic X-ray binaries, a small sub-class of low-mass X-ray binaries (LMXBs) in which the compact primary, most likely a neutron star, accretes matter from the wind of an M-type giant companion; only a few such symbiotic X-ray binaries have been identified so far (e.g., @Masetti2007). Also, several strongly magnetized neutron stars (B$\sim$10$^{14-15}G$, magnetars) have shown X-ray outbursts with peak luminosities of $\sim$ 10$^{35}$ erg s$^{-1}$ (e.g., @Ibrahim2004; @Muno2007). They are the only known non-accreting systems that can exhibit VFXT outbursts. The cause of these transient outbursts is not fully understood, but it is likely related to a decay of the strong magnetic field of the neutron star [@Ibrahim2004]. It is also possible that a fraction of these under-luminous transients are high-mass X-ray binaries (HMXBs), i.e., compact objects accreting from a circumstellar disc or the strong stellar wind of a star with a mass higher than 10$\sol$ (e.g., @Okazaki2001).\
The low luminosities of VFXTs in which a compact object accretes from a low-mass donor, in combination with duty cycles of $\lesssim$10% (as is common for the brighter X-ray transients), imply that the mean accretion rates in these systems are very low (e.g. @Degenaar2009). Therefore, such VFXTs provide us with new regimes in which to study accretion onto compact objects. For example, by studying the outburst properties of the systems that harbour a neutron star (e.g. those displaying X-ray pulsations or bursts), new ways of studying ultra-dense matter become available [@Wijnands2008]. Moreover, VFXTs yield new inputs for the outburst and evolution models that were developed to explain the bright systems, but are not able to account for all the VFXT manifestations (e.g., @King2006).\
In this work we present an extensive X-ray analysis of XTE J1719-291, which was discovered with Rossi X-ray Timing Explorer/Proportional Counter Array (*RXTE*/PCA) bulge scans on 2008 March 21 [@Markwardt2008]. Its outburst was monitored with [*Swift*]{}, which initially showed a decreasing X-ray flux (@Markwardt2008, @Degenaar2008), after which the source rebrightened [@Degenaar2008b]. A total of 46 days elapsed between the source’s discovery and the time when it was no longer detectable [@Degenaar2008a]. The most accurate position was obtained with *Chandra*: $\alpha$= 17h 19m 17.18s, $\delta$= -29d 04' 10.0'' with an uncertainty of 0.2'' (J2000, 90$\%$ confidence; @Greiner2008). At this position a source was detected with the MPI/ESO 2.2m telescope at La Silla, which likely represents the optical counterpart. In a second optical pointing performed 24 days later, the source was not detected [@Greiner2008]. From the upper limits of this observation, and assuming a distance of 8 kpc, an absolute V magnitude of $>$5.8 was derived, which suggests a companion star of spectral type K0V or later [@Greiner2008].\
Observations and analysis
=========================
XTE J1719-291 was observed over a sixty-day time span with *RXTE*, [*Chandra*]{}, [*XMM-Newton*]{} and [*Swift*]{} between 2008 March 24 and 2008 May 14. In total, nine pointed observations were carried out: six with [*Swift*]{}/XRT, one with [*XMM-Newton*]{}/EPIC, one with [*Chandra*]{}/HRC and one with *RXTE*/PCA. A log of the observations is given in Table \[t1\]. Apart from these pointed observations, we obtained from the literature four additional flux measurements from *RXTE*/PCA scans made in the period 2008 March 15-25 [@Markwardt2008].
------------- --------------------- ----------- --------------- ----------------- --
Observation Date and start time (UT) MJD (UTC) Exposure (ks) Instrument
1 2008-03-24 03:38 54549.152 1.7 *RXTE*/PCA
2 2008-03-30 08:54 54555.371 44.7 *XMM*/EPIC
3 2008-03-30 12:27 54555.519 5 *Swift*/XRT
4 2008-04-03 00:07 54559.005 2.5 *Swift*/XRT
5 2008-04-09 13:27 54565.561 1.9 *Swift*/XRT
6 2008-04-16 15:28 54572.645 1.7 *Swift*/XRT
7 2008-04-27 18:23 54583.766 2.2 *CHANDRA*/HRC-I
8 2008-04-30 00:44 54586.031 1.9 *Swift*/XRT
9 2008-05-14 13:20 54600.556 1.2 *Swift*/XRT
------------- --------------------- ----------- --------------- ----------------- --
*RXTE* data
-----------
We analysed the [*RXTE*]{} observation of XTE J1719–291 taken on March 24, 2008. We extracted a spectrum from the proportional counter array (including PCU2 only), using Standard 2 data of all layers. The background was estimated using [pcabackest]{} (v. 3.6) and the faint source model. A response matrix was created using [pcarsp]{} (v. 10.1), taking into account the $\sim$0.2 degree offset between the [*RXTE*]{} pointing and XTE J1719–291. We grouped the resulting spectrum to have a minimum of 20 counts per energy bin and applied a systematic error of 1%.\
*XMM-Newton* data
-----------------
XTE J1719-291 was observed with [*XMM-Newton*]{} on 2008 March 30, with an exposure time of 44 ks. The data were taken with the EPIC detectors, the two MOS and the pn CCD cameras, operated in full window mode with the medium and thick optical blocking filters, respectively. The data were processed with the standard [*XMM-Newton*]{} Science Analysis System (SAS v.9.0) to obtain calibrated event lists and scientific products. The observation was affected by a strong background flare. We excluded the data where the count rate exceeded 1 and 0.5 counts s$^{-1}$ for the pn and MOS data, respectively, which results in a total live time of 17 ks. The extraction of the spectra was carried out with the [xmmselect]{} task, and the associated response matrices (RMF) and ancillary response files (ARF) were generated following the standard analysis threads[^2]. The spectra were grouped to contain 20 counts per bin using the [FTOOL grppha]{}. Finally, we checked with the SAS task [epatplot]{} that the data were not affected by pile-up.\
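The flare-screening step amounts to cutting high-background time bins from a light curve and summing the surviving exposure; the generic sketch below illustrates that logic with synthetic rates (only the pn threshold of 1 count s$^{-1}$ comes from the text; this is not the SAS implementation):

```python
import numpy as np

# Illustrative background light curve: time bins of 100 s each, with a flare.
bin_size = 100.0  # seconds per bin (arbitrary choice for this sketch)
rate = np.array([0.3, 0.4, 2.5, 3.1, 0.5, 0.2, 0.6, 4.0, 0.3, 0.4])  # counts/s

threshold = 1.0   # pn screening threshold quoted in the text
good = rate < threshold               # bins kept as good time intervals
live_time = good.sum() * bin_size     # surviving exposure
print(f"kept {good.sum()}/{rate.size} bins -> live time {live_time:.0f} s")
```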
*Swift* data
------------
Six observations were carried out with the XRT, all in Photon Counting (PC) mode. The data were processed by running the [xrtpipeline]{} task, selecting standard event grades 0–12. For every observation, spectra, light curves and images were obtained with the [Xselect]{} (v.2.3) package. Source spectra were extracted from a circular region with a radius of 17 pixels. For the background, three circular regions of similar size to the source region were placed over nearby source-free areas. The spectra were grouped to have a minimum of 5 counts per energy bin with [grppha]{}.\
The spectra were corrected for the fractional exposure loss due to bad columns on the CCD. For this, we created exposure maps with the [xrtexpomap]{} task, which is used as input to generate ARF with the [xrtmkarf]{} task. For the RMF the latest version was used from HEASARC calibration database (v.11).\
Observations 5 and 6 (see Table 1) have the highest count rates (0.5-0.7 counts s$^{-1}$) and might be affected by pile-up. To test this, we used the [ximage]{} software[^3]: we compared the point spread function of the data with that expected for the XRT and found no evidence of pile-up.\
In the last XRT observation (Obs. 9) the source was not detected. The upper limit on the flux was calculated with the HEASARC WebPIMMS tool[^4]. An absorbed power-law model with a photon index of 2.74 and a hydrogen column density () of 0.53$\times 10^{22}$ cm$^{-2}$ (see Section 3) was assumed, and the count rate upper limit was calculated using the prescription for small numbers of counts given by @Gehrels1986.\
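The small-number prescription of @Gehrels1986 is commonly applied through his approximate formula for the $S$-sigma Poisson upper limit; a minimal sketch, assuming that approximation (we quote the formula from memory of that paper, and show only the 1$\sigma$ case, where it is accurate to a few per cent):

```python
import math

def gehrels_upper_limit(n_counts, s=1.0):
    """Approximate S-sigma Poisson upper limit for small counts
    (Gehrels 1986 approximation): N + S*sqrt(N + 3/4) + (S^2 + 3)/4."""
    return n_counts + s * math.sqrt(n_counts + 0.75) + (s * s + 3.0) / 4.0

# For a non-detection (0 counts), the 1-sigma (84.1% confidence) upper limit:
print(f"{gehrels_upper_limit(0):.2f} counts")  # ~1.87 (exact value: 1.84)
```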
*Chandra* data
--------------
The [*Chandra*]{} observation was performed with the High Resolution Camera (HRC-I) on 2008 April 27, for an exposure time of 2.1 ks (see also @Greiner2008). We obtained these data from the [*Chandra*]{} data archive. The intrinsic energy resolution of the HRC-I is poor, so no spectral fitting could be carried out.\
Data reduction was performed using the Chandra Interactive Analysis Software (CIAO v 4.1). We calculated the net source counts with the [dmextract]{} task over a circular region with a radius of 12 pixels, and the background was taken from an annulus around the source (inner radius of 56 pixels, outer radius of 98 pixels). The flux was calculated with the WebPIMMS HEASARC tool assuming a power-law model with a photon index of 2.74 and a hydrogen column density ($N_{\rm H}$) of 0.53$\times 10^{22}$ cm$^{-2}$ (see Section 3).\
----- ------------------ ----------------------------- --------------------------- ---------------------------- -- ------------------------- --------------------------- -------------------------
Obs $\Gamma$ $F_{\rm X,abs}\ ^{a}$ $F_{\rm X,unabs}\ ^{a}$ $\lx\ ^{b}$ $F_{\rm X,abs}\ ^{a}$ $F_{\rm X,unabs}\ ^{a}$ $\lx\ ^{b}$
1 $2.02 \pm 0.08$ $ 112\pm 11$ $173 \pm 11$ $133 \pm 8$ $ 86.8 \pm 10.0$ $92 \pm 11$ $70 \pm 8$
2 $2.74 \pm 0.05 $ $ 2.71 \pm 0.13 $ $6.21 \pm 0.1$ $4.75 \pm 0.07$ $ 1.62^{+0.12}_{-0.11}$ $1.72 \pm 0.12$ $1.33 \pm 0.09$
3 $2.83 \pm 0.25 $ $ 1.93 ^{+ 0.47 }_{-0.36 }$ $4.7^{+0.3}_{-0.2 }$ $3.6^{+ 0.3}_{-0.2 }$ $ 1.1^{+0.4}_{-0.3}$ $1.19 ^{+0.44 }_{-0.33 }$ $0.91 ^{+0.32}_{-0.25}$
4 $2.6 \pm 0.4$ $ 1.89 ^{+0.92 }_{-0.58}$ $3.97^{+0.81 }_{- 0.41 }$ $3.04^{+ 0.62 }_{- 0.31 }$ $ 1.19^{+0.85}_{-0.51}$ $1.28 ^{+0.89 }_{-0.54}$ $0.98 ^{+0.68}_{-0.41}$
5 $2.32 \pm 0.11 $ $ 20.5 ^{+ 2.7}_{-2.3 }$ $36.5^{+2.5 }_{- 2.1 }$ $27.9^{+ 1.9 }_{- 1.6 }$ $ 14.4^{+2.5}_{-2.1}$ $15.3 ^{+2.6 }_{-2.2 }$ $11.7 ^{+2.0}_{-1.7}$
6 $2.15 \pm 0.09 $ $ 34.9 ^{+ 4.0 }_{-3.5}$ $57.2^{+3.9 }_{- 3.3 }$ $43.8^{+ 2.9 }_{- 2.5 }$ $ 25.9^{+3.8}_{-3.3}$ $27.4 ^{+3.9 }_{-3.4 }$ $20 \pm 3$
7 2.74 (fix) $ 2.57 \pm 0.15$ $5.75^{+0.34 }_{- 0.33 }$ $4.4 \pm 0.3$ $ 1.5 \pm 0.1 $ $1.61 ^{+0.10 }_{-0.09 }$ $1.23 \pm 0.07$
8 $2.7 \pm 0.4$ $ 2.23 ^{+ 1.05 }_{-0.66 }$ $4.95^{+0.89}_{- 0.40 }$ $3.79^{+ 0.69}_{-0.31 }$ $ 1.36^{+0.95}_{-0.57}$ $1.46 ^{+1.00 }_{-0.60 }$ $1.12 ^{+0.76}_{-0.46}$
9 2.74 (fix) $<$0.05 $<$0.11 $<$0.08 $<$0.03 $<$0.03 $<$0.02
----- ------------------ ----------------------------- --------------------------- ---------------------------- -- ------------------------- --------------------------- -------------------------
Note.- $N_{\rm H}$ has been fixed to 0.53 $\times 10^{22}$ cm$^{-2}$, the value obtained from the *XMM-Newton* power-law fitting (see Section 3). The first group of flux and luminosity columns refers to the 0.5-10 keV band, the second to the 2-10 keV band.
$^{a}$ Flux in units of $10^{-12}$ erg cm$^{-2}$ s$^{-1}$.
$^{b}$ X-ray luminosity in units of $10^{34}$ erg s$^{-1}$ calculated from the unabsorbed flux by adopting a distance of 8 kpc.
Results
=======
To fit the spectra of the observations we used [XSPEC]{} (v 12.6.0). The spectra corresponding to the *XMM-Newton* observation (of the three EPIC cameras, the pn and the two MOS) are shown in Figure 1 and were fitted simultaneously, with all parameters tied between the three detectors, in order to provide the best constraints on the spectral parameters. The long effective exposure time ($\sim$ 17 ks) of this observation allows us to obtain the most accurate hydrogen column density, and provides good enough statistics to distinguish between fits using different models.\
Firstly, we tried a power-law continuum model affected by absorption. The returned photon index was 2.74 $\pm$ 0.05 and the obtained $N_{\rm H}$ was (0.53 $\pm$ 0.02)$\times 10^{22}$ cm$^{-2}$. However, this model led to a poor fit ($\chi^{2}_{\nu}$=1.2 for 544 d.o.f.). Adding a blackbody as a soft component improves the fit notably ($\chi^{2}_{\nu}$=1.06 for 541 d.o.f.; see Fig.1). The parameters obtained with this model are an $N_{\rm H}$ of (0.33 $\pm$ 0.03)$\times 10^{22}$ cm$^{-2}$, which is consistent with the value found by @Kalberla2005 at the source position, a photon index of 1.7 $\pm$ 0.1, and a temperature (*kT*) of 0.32 $\pm$ 0.02 keV. The soft component contributes nearly 30$\%$ of the 0.5-10 keV source flux. An F-test indicates a probability of 2.6 $\times 10^{-16}$ of achieving this level of improvement by chance.\
The result is almost identical if we use a multicolor disc blackbody as the soft component. The $N_{\rm H}$ was (0.37 $\pm$ 0.03) $\times 10^{22}$cm$^{-2}$, the photon index was 1.6 $\pm$ 0.1, and the temperature at the inner disc radius (T$_{in}$) was 0.45 $\pm$ 0.03 keV.
The soft component cannot be constrained with the *Swift* data, since their statistics are poorer, nor with the *RXTE* spectrum, because it is not sensitive to energies below 2 keV. In a first attempt to study the evolution of the outburst, we calculate the X-ray colour using the *Swift*/XRT data only, to avoid calibration uncertainties between the different instruments. The colour is defined as the ratio of counts between a hard band (2-10 keV) and a soft band (0.5-2 keV), and its values are shown in Fig.2 (c) and Fig.3 (b). We see that the spectrum becomes harder during the outburst, and turns soft again when the outburst decays. This plot of the hardness ratio (HR) shows the spectral behaviour independently of the assumed spectral model.\
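For concreteness, the hardness ratio used here is simply the ratio of net counts in the two bands. The sketch below illustrates the definition; the band edges come from the text, while the count values are placeholders, not measurements from this work:

```python
def hardness_ratio(hard_counts, soft_counts, hard_bkg=0.0, soft_bkg=0.0):
    """Ratio of net counts in the hard (2-10 keV) band to the soft
    (0.5-2 keV) band; larger values indicate a harder spectrum."""
    net_hard = hard_counts - hard_bkg
    net_soft = soft_counts - soft_bkg
    if net_soft <= 0:
        raise ValueError("non-positive net soft counts")
    return net_hard / net_soft

# Illustrative numbers only: a spectrum that softens between two epochs.
hr_bright = hardness_ratio(420, 350)   # harder near the outburst peak
hr_faint = hardness_ratio(60, 110)     # softer during the decay
assert hr_bright > hr_faint
```

Being a plain count ratio, this diagnostic carries no dependence on an assumed spectral model, which is why it is used as a model-independent check of the softening.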
To exclude the possibility that the observed spectral softening is due to pile-up (see also Section 2.3), we repeat the HR calculations using annular regions to exclude the photons coming from the center. We use annuli with an outer radius equal to the size of the circular region that was used previously (17 pixels; see Section 2.3), and three different sizes for the inner radius (7, 4 and 2 pixels). Our results using these different annuli are consistent with what is shown in Fig.2 (c) and Fig.3 (b), indicating that the softening is not related to pile-up.\
In order to investigate the nature of this softening, we have carried out different spectral fits. First, we tested whether the thermal component of the two component model varies. Since the poor statistics of the *Swift* data do not permit fitting with a two component model, we made some assumptions. We fixed the $N_{\rm H}$ and the photon index to the values obtained in the *XMM-Newton* fit. We took a power-law to represent the accretion flow, and the blackbody to represent the boundary layer. We fixed the power-law/blackbody ratio assuming that the relative efficiencies for the disc and the boundary layer do not vary, and we let the temperature vary freely. This was only possible for the two observations with the highest count rates, observations 5 and 6 (see Table \[t1\]). The resulting temperatures are 0.46 $\pm$ 0.06 keV and $0.56 ^{+ 0.05}_{-0.09 }$ keV, respectively. While the variation in temperature is not statistically significant, it is interesting to note that the data are consistent with the idea that only the blackbody temperature is varying.\
To test the evolution along the outburst, we use a single power-law with absorption, since this is the only model that can fit all observations. The two component model is less stable, so the error estimates are much larger. The $N_{\rm H}$ was fixed to the value obtained from the [*XMM-Newton*]{} data ($N_{\rm H}$=0.53 $\times 10^{22}$ cm$^{-2}$), while the photon index and normalization were left as free parameters. For the [*Chandra*]{} and the 6$^{th}$ [*Swift*]{} observation (Obs. 9 in Table 1) we used WebPIMMS to convert the obtained count rate into flux using the spectral parameters obtained in the [*XMM-Newton*]{} fitting.
For all cases, we calculated the absorbed and unabsorbed fluxes for both the 0.5-10 keV and the 2-10 keV energy ranges as well as the corresponding X-ray luminosities assuming a distance of 8 kpc, given the proximity of the source to the Galactic center. These results are reported in Table \[t2\].\
![(a) The light curve of XTE J1719-291 where the energy band is 2-10 keV. The first four white squares indicate *RXTE* values that are taken from @Markwardt2008. (b) Photon index evolution. (c) Hardness ratio evolution (ratio of counts in the hard, 2-10 keV, and soft, 0.5-2 keV energy bands) using only the *Swift* data.[]{data-label="f2"}](LalfaHR.ps)
The light curve (2-10 keV) is displayed in Fig. \[f2\] (a). In the plot we have included the four previous points from *RXTE*/PCA bulge scans reported by @Markwardt2008. There are two peaks in the curve and the luminosity varies by $\sim$2 orders of magnitude. The peak luminosity value is 7 $\times$ $10^{35}$erg s$^{-1}$ on 2008 May 24. This low luminosity justifies a classification as a VFXT. The upper limit on the quiescent 2-10 keV luminosity inferred from the non-detection by *Swift/XRT* on 2008 May 14 (Obs.9) is 2 $\times$ $10^{32}$erg s$^{-1}$. The outburst thus lasted at least 46 days.\
In Fig. \[f2\] (b) the evolution of $\Gamma$ in time is plotted. We see variation of $\Gamma$ along the outburst, with values between 2 and 2.8. Comparing this figure with Fig. \[f2\] (a) it can be seen that $\Gamma$ increases with decreasing luminosity. In order to see this softening more clearly, we show a plot of $\Gamma$ versus $\lx$ in Fig. \[f3\] (a).
![Photon index (a) and hardness ratio using only *Swift* data (b) (ratio of counts in the hard, 2-10 keV, and soft, 0.5-2 keV, energy bands) versus luminosity in the 2-10 keV energy band.[]{data-label="f3"}](gammahrL.ps)
Time-averaged accretion rate
----------------------------
From the mean unabsorbed outburst flux we can estimate the average mass-accretion rate during outburst, $\dot{M}$, following the relation $\dot{M}$ = $RL_\mathrm{acc}/GM$, where *G* is the gravitational constant. $L_\mathrm{acc}$ is the 0.1-100 keV accretion luminosity, which we estimate from the mean 2-10 keV unabsorbed outburst luminosity by applying a bolometric correction factor of 3 [@Zand2007]. *R* and *M* are the radius and mass of the compact object, respectively. We obtain $\dot{M}$ = 5.57 $\times$ $10^{-11}$ $\sol$ yr$^{-1}$ for a canonical neutron star (i.e. *M* = 1.4$\sol$, *R* = 10 km), and $\dot{M}$ = 2.68 $\times$ $10^{-11}$ $\sol$ yr$^{-1}$ for a black hole (assuming *M* = 10$\sol$, *R* = 34 km). Once $\dot{M}$ is obtained, we determine the long-term averaged value $\langle \dot{M} \rangle$ using the relation $\langle \dot{M} \rangle$ = $\dot{M}$ $\times$ $t_{\mathrm{ob}}$/$t_{\mathrm{rec}}$, where $t_{\mathrm{ob}}$ is the outburst duration and $t_{\mathrm{rec}}$ is the system’s recurrence time, i.e., the sum of the outburst and quiescence time-scales. The factor $t_{\mathrm{ob}}$/$t_{\mathrm{rec}}$ is the duty cycle of the system.
For XTE J1719–291, $t_{\mathrm{ob}}$ is at least 46 days (see Fig.2). However, we do not know the quiescence time-scale because no other outbursts have been observed so far. We will assume a quiescence time ($t_{\mathrm{q}}$) of at least 9 years, the time during which RXTE-PCA has monitored this region in the Galactic bulge scans, from 1999 February until the discovery of XTE J1719–291 in 2008 March. Taking this $t_{\mathrm{q}}$, the duty cycle is $<$ 1.3$\%$. This results in an estimated $\langle \dot{M} \rangle \lesssim$ 7.7 $\times$ $10^{-13}$ $\sol$ yr$^{-1}$ for a neutron star compact object and $\langle \dot{M} \rangle \lesssim$ 3.7 $\times$ $10^{-13}$ $\sol$ yr$^{-1}$ for a black hole. We note, however, that outbursts could have been missed during the periods that the source could not be observed due to solar constraints.\
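The two relations above amount to a pair of one-line formulas, which the sketch below chains together in cgs units. The masses, radii, bolometric correction, outburst duration and quiescence time are those quoted in the text; the mean 2-10 keV outburst luminosity passed in at the end is a placeholder value, not the measurement made for XTE J1719-291:

```python
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33   # solar mass, g
YEAR = 3.156e7    # s
DAY = 86400.0     # s

def mdot_outburst(L_x_mean, M_sun, R_km, bol_corr=3.0):
    """Mean outburst accretion rate (Msun/yr) from the mean 2-10 keV
    unabsorbed luminosity, via Mdot = R * L_acc / (G * M)."""
    L_acc = bol_corr * L_x_mean                        # 0.1-100 keV, erg/s
    mdot = (R_km * 1e5) * L_acc / (G * M_sun * MSUN)   # g/s
    return mdot * YEAR / MSUN

def mdot_longterm(mdot_ob, t_ob_days, t_q_years):
    """Long-term average: outburst rate scaled by the duty cycle
    t_ob / t_rec, with t_rec = t_ob + t_q."""
    t_rec = t_ob_days * DAY + t_q_years * YEAR
    duty = t_ob_days * DAY / t_rec
    return mdot_ob * duty, duty

# Placeholder mean luminosity (erg/s); not a measurement from this paper.
mdot_ns = mdot_outburst(2e35, M_sun=1.4, R_km=10)
mdot_avg, duty = mdot_longterm(mdot_ns, t_ob_days=46, t_q_years=9)
```

With these inputs the neutron-star rate lands in the few $\times 10^{-11}$ $M_\odot$ yr$^{-1}$ range and the long-term average near $10^{-12}$-$10^{-13}$ $M_\odot$ yr$^{-1}$, the same order as the values quoted above.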
We also have to consider the fact that black hole systems might be radiatively inefficient at low accretion rates. Part of the generated accretion energy could be advected into the black hole or converted into jet power (e.g. @Blandford1999; @Fender2003; @Narayan2008), and therefore $\dot{M}$ estimated from the X-ray luminosity could be an underestimate.\
Discussion
==========
We have presented *RXTE*, *Chandra*, *XMM-Newton* and *Swift* data analysis of the 2008 outburst of the newly discovered X-ray transient XTE J1719-291. The source was discovered on 2008 March 21 during *RXTE*-PCA bulge scans and the outburst duration was at least 46 days (see Fig.2). The outburst light curve shows two peaks; the unabsorbed flux varies between (1.3-92) $\times$ $10^{-12}$ erg cm$^{-2}$s$^{-1}$ (2-10 keV). Adopting a distance of 8 kpc, the inferred outburst peak luminosity is $\sim$ 7 $\times$ $10^{35}$ erg s$^{-1}$. This luminosity lies within the very faint X-ray regime, where $L^{peak}_{X}$(2-10 keV) $<$ $10^{36}$ erg s$^{-1}$. The nature of XTE J1719-291 is unknown. An accreting white dwarf is very unlikely because these systems generally exhibit outburst peak luminosities below $10^{34}$ erg s$^{-1}$. Some classical novae have reached values of a few times $10^{34-35}$ erg s$^{-1}$ for weeks to months, but none of them has reached a value as high as we find for XTE J1719–291 [@Mukai2008]. Therefore the most likely origin of this X-ray luminosity is an accreting neutron star or black hole system.\
X-ray spectral behaviour
------------------------
The high signal-to-noise of the *XMM-Newton* spectra permits us to try different models to fit them. We found that a two component model, a blackbody as the soft component and a power-law for the hard one, could fit the spectra more accurately than a single component model. The best fit returned a temperature (kT) of 0.33 keV, an $N_{\rm H}$ of 0.33 $\times~10^{22}$ cm$^{-2}$ and a photon index of 1.74. The blackbody component contributes 30$\%$ of the total flux (0.5-10 keV). This soft component could be thermal emission from the surface of a neutron star or the boundary layer. One possible cause is accretion onto the neutron star at very low rates [@Zampieri1995], but it could also be incandescent thermal emission from the neutron star surface resulting from deep crustal heating [@Brown1998], which could become visible when the accretion disc becomes smaller. It was also possible to fit the spectra with a multicolor disc blackbody as the soft component (see Section 3). Therefore the possibility that the emission comes from the accretion disc cannot be discarded. In fact, if the compact object is a black hole, the soft emission has to come from the disc.\
We could detect the soft component robustly only in the *XMM-Newton* data; the *Swift* data lack sufficient signal-to-noise, while *RXTE*’s lower energy threshold of 2 keV is too high to allow detection of such a soft component. Therefore, in order to study the outburst spectral evolution, we fit all the data with the same model, namely a power-law continuum model affected by an equivalent hydrogen column. The photon index evolution shows a spectral softening; in other words, luminosity and photon index are anti-correlated. As we saw in Section 3, we cannot rule out that the difference in the spectrum is produced by the blackbody soft component, i.e., the blackbody becomes stronger at lower $\lx$. In any case, the X-ray colour diagram (Fig.3b) confirms the softening independently of the model used.\
This behaviour differs from that of the bright transient systems, whose spectra evolve towards the low-hard state at the end of the outburst (see @Belloni2010; @Klis2006). However, such softening towards even lower luminosities has been observed before in some black hole transients returning to quiescence from the hard state. The photon index of XTE J1650-500 softens from 1.66 to 1.93 in the hard state at X-ray luminosities down to $L_{X}$=1.5 $\times~10^{34}$ erg s$^{-1}$ [@Tomsick2004]. XTE J1550-564 and XTE J1650-500 begin gradual softenings at low luminosities $L^{peak}_{X}$ $\lesssim$ $10^{36}$ erg s$^{-1}$ [@kalemci2002]. Also, @Corbel2008 found that the photon index of V404 Cyg is softer in quiescence than in the hard state. This behaviour is consistent with the advection-dominated accretion flow (ADAF) model [@Esin1997], which predicts a gradual softening of the power-law photon index as the luminosity drops (see e.g. discussion in @Tomsick2004). However, this is not always seen for all black holes in the last part of their outbursts. @Jonker2009 did not find any evidence for this softening in the decay during the 2008 outburst of H 1743-322. It is worth pointing out that the black hole systems are fully described by a simple power-law model at these low luminosities, whereas we also detect a disc component in our *XMM-Newton* spectrum. Therefore, the softening in our source might also be due to variations in the properties of the soft component.\
On the other hand, we also studied the evolution of the thermal component (see Section 3). We found hints that the temperature increases when the spectrum is brighter and harder. This could indicate that the softening is due to the variability of the temperature of the neutron star surface. According to the solutions of Medvedev & Narayan (2001), a hot optically thin region should be present in low L/L$_{\mathrm{edd}}$ neutron star systems, with a cooler boundary layer at the neutron star surface where the rotational energy is released.
Optical counterpart and orbital period
--------------------------------------
An optical/NIR counterpart of XTE J1719–291 was first observed by @Greiner2008 during an observation made on April 11, 2008. The counterpart was observed in several optical bands (i’, r’, g’, z’) with a magnitude between 22.3 and 23.0. The closest X-ray observation in time was made on April 9 by *Swift* (Obs. 5; Table \[t1\]), with the source at a 0.5–10 keV luminosity of $\sim 3 \times 10^{35}$ erg s$^{-1}$ (see Table 2). The optical counterpart was not detected in a subsequent observation made on May 4, 2008 when the X-ray luminosity was already below the sensitivity level of *Swift*/XRT. Therefore the counterpart observed on April 11 is very likely optical emission from the accretion disc.
It was shown [@Russell2006; @Russell2007a] that black holes and neutron stars occupy different regions of an optical–X-ray luminosity diagram when these transients are accreting at low luminosities ($L_{\rm X} \simlt 10^{36}$ erg s$^{-1}$). At a given X-ray luminosity, a neutron star transient is typically $\sim 20$ times optically fainter than a black hole. We can therefore use the above quasi-simultaneous optical and X-ray luminosities of XTE J1719–291 to investigate the nature of its compact object by placing these data on this diagram. We estimate the de-reddened optical flux density adopting an extinction A$_{\rm i'}=2.11$, which has been calculated using the tabulated value reported by @Schlegel1998 and by converting the value for the visual extinction of A$_{\rm V}=3.3$, as reported in @Greiner2008. To obtain the optical monochromatic luminosity $L_{\rm \nu, i'}$ [flux density scaled to distance; see @Russell2006] and the X-ray 2–10 keV luminosity $L_{\rm X}$ we assume a distance of 8 kpc and an X-ray power law with photon index $\Gamma = 2.32$ (as measured for observation 5; Table \[t2\]).
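For reference, the de-reddening and distance scaling used to place a source on this diagram follow the standard relations $F_{\rm dered}=F_{\rm obs}\,10^{0.4A}$ and $L_\nu = 4\pi d^2 F_\nu$. A minimal sketch with the extinction and distance adopted above; the observed flux density is a placeholder, not the measured value for this source:

```python
import math

PC_CM = 3.086e18  # cm per parsec

def deredden(flux_density, extinction_mag):
    """Correct an observed flux density for extinction A (magnitudes):
    F_dered = F_obs * 10^(0.4 * A)."""
    return flux_density * 10 ** (0.4 * extinction_mag)

def monochromatic_luminosity(flux_density_cgs, distance_kpc):
    """L_nu = 4 * pi * d^2 * F_nu, with F_nu in erg s^-1 cm^-2 Hz^-1."""
    d_cm = distance_kpc * 1e3 * PC_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_density_cgs

# A_i' = 2.11 mag and d = 8 kpc as adopted in the text;
# the flux density below is purely illustrative.
f_dered = deredden(1.0e-28, 2.11)
L_nu = monochromatic_luminosity(f_dered, 8.0)
```

Note that the assumed distance enters quadratically, so the factor-of-20 optical separation between the neutron star and black hole tracks dominates over plausible distance errors.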
In Fig. \[optx\] we plot the optical–X-ray luminosity diagram including data of all black holes, neutron stars and high-mass X-ray binaries (HMXBs) collected in @Russell2006 [@Russell2007a; @Russell2007b], and overplot our data for XTE J1719–291. Errors are propagated from those quoted with the i’-band magnitude reported by @Greiner2008 and the X-ray flux in Table \[t2\]. At an assumed distance of 8 kpc, XTE J1719–291 lies amongst the other neutron star transients in the optical–X-ray luminosity diagram (Fig. \[optx\]). At this X-ray luminosity, it is optically fainter than all the black holes in the sample, and $\sim 20$ times fainter in optical than a typical black hole. This provides evidence favouring a neutron star accretor in this VFXT, but this alone is no proof of the nature of the compact object; the source could indeed be an unusual black hole transient with a remarkably low optical/X-ray ratio.
The detection of an optical counterpart and the knowledge of the X-ray luminosity of the source are also useful to place some initial constraints on the orbital period of the system. According to @Paradijs1994, the absolute visual magnitude of LMXBs correlates with the orbital period of the binary and the X-ray luminosity. If we assume that M(i’) $\approx$ M(V), that the continuum spectral index is approximately flat ($F_{\nu} \propto \nu^{\sim 0.1}$), as may be expected for an LMXB disc at low luminosities (slightly redder than a typical LMXB in outburst, because the disc is probably cooler for this VFXT [@Hynes2005; @Maitra2008]), and again adopt A$_{\rm i'}=2.11$, we obtain M(i’) and reach the following orbital period constraints.
If the compact object is a neutron star of $M=1.4\,M_{\odot}$ then Log$\left(\frac{P_{\rm orb}}{1\,\mathrm{hr}}\right)=-0.3^{+0.8}_{-0.7}$, whereas Log$\left(\frac{P_{\rm orb}}{1\,\mathrm{hr}}\right)=0.0^{+1.1}_{-0.4}$ in the case of a black hole of $M=10\,M_{\odot}$. The orbital period is therefore in the range 0.4 $< P_{\rm orb}< 12$ hr for a 10$M_{\odot}$ black hole, and 0.1 $< P_{\rm orb} < 3$ hr in the case of a neutron star accretor (1$\sigma$ confidence intervals). If the system indeed hosts a neutron star, the binary is most likely to be compact or ultracompact since $P_{\rm orb} < 3$ hr, whereas this is not necessarily true for a black hole system, where $P_{\rm orb}< 12$ hr.
@Russell2006 [@Russell2007a] showed that the global empirical relations observed for a large sample of black holes and neutron stars can be approximated by the @Paradijs1994 model; however the black holes are on average 10 times more luminous in optical than neutron stars. The scatter in optical monochromatic luminosity, defined as the mean of the differences between the data and the model, is $\pm 0.29$ dex for black holes and $\pm 0.36$ dex for neutron stars (both a factor of $\sim 2$). This scatter may be due to uncertainties in the distance, inclination, interstellar absorption and masses of each system, and possibly real, intrinsic effects. These relations and their scatter can be used to further constrain the likely value of the orbital period of XTE J1719–291 if it harbours either a neutron star or a black hole. If we again assume a neutron star of mass $M_1=1.4\,M_{\odot}$ and a companion of mass $M_2=0.6\,M_{\odot}$ [typical values for the sample in @Russell2007a], XTE J1719–291 would be consistent with the empirical relation for neutron stars if its orbital period is $P_{\rm orb} = 5.0^{+12.1}_{-3.5}$ hours. Alternatively, if the compact object is a black hole, XTE J1719–291 would only be consistent with the relation for black holes if its orbital period is $P_{\rm orb} = 0.08^{+0.13}_{-0.05}$ hours. This assumes a combined mass of the black hole and companion star of $M_1 + M_2 =10\,M_{\odot}$ [typical for the sample in @Russell2006]. The significant differences between the orbital periods derived using the @Paradijs1994 and @Russell2006 relations result from the empirical systematic offset between black hole and neutron star sources found by the latter authors. 
The original @Paradijs1994 relation was normalized to a collection of data containing two data points from black holes, and this systematic offset between black hole and neutron star accretors was only identified in a larger collection of sources using many data points from each source [data from 15 black hole candidates and 19 neutron stars were used in @Russell2006; @Russell2007a].
These results favour a neutron star accretor in XTE J1719–291, with a likely orbital period of $1.5 \simlt P_{\rm orb} \simlt 17$ hr. It is also worth noting that XTE J1719–291 lies close to data of SAX J1808.4–3658 in the optical–X-ray luminosity diagram (Fig. \[optx\]) which has an orbital period of 2.0 hours.
Long-term average accretion rate
--------------------------------
We have calculated the long-term time-averaged accretion rate for both a neutron star and a black hole accretor (see Section 3.1). We find values of $10^{-13}$ to $10^{-12}$ $\sol$ yr$^{-1}$. These low accretion rates are difficult to explain with the current LMXB evolution models and it might be necessary to invoke exotic scenarios, such as neutron stars accreting from brown dwarfs or planetary companions (see @King2006), although detailed binary evolution calculations still need to be performed to support these conclusions. Other possibilities for these subluminous transients are the dissipation of the accretion power via radiatively inefficient flows for black holes (e.g. @Fender2003; @Narayan2008) or the propeller mechanism for neutron stars, where only a fraction of the mass transferred from the donor is accreted onto the neutron star (e.g. @Illarionov1975; @Alpar2001; @Romanova2005).\
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by an ERC starting grant awarded to RW. AP and D.M.R. acknowledge support from the Netherlands Organization for Scientific Research (NWO) Veni Fellowship.
[^1]: E-mail:[email protected]
[^2]: See http://xmm.esac.esa.int/sas/current/documentation/threads/
[^3]: See http://www.swift.ac.uk/pileupthread.shtml for *Swift* pile-up thread
[^4]: Available from http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html
---
abstract: 'The relation between the girth and the guaranteed error correction capability of $\gamma$-left regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least $3 \gamma/4$ is found based on the Moore bound. An upper bound on the guaranteed error correction capability is established by studying the sizes of smallest possible trapping sets. The results are extended to generalized LDPC codes. It is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. It is also shown that the bound cannot be improved when $\gamma$ is even by studying a class of trapping sets. A lower bound on the size of variable node sets which have the required expansion is established.'
author:
- 'Shashi Kiran Chilappagari, Dung Viet Nguyen, Bane Vasic, and Michael W. Marcellin, [^1] [^2] [^3]'
title: On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes
---
[Submitted to IEEE Transactions on Information Theory, May 2008]{}
**Index Terms**
[Low-density parity-check codes, bit flipping algorithms, trapping sets, error correction capability]{}
Introduction {#section1}
============
Iterative algorithms for decoding low-density parity-check (LDPC) codes [@gallager] have been the focus of research over the past decade and most of their properties are well understood [@richardsonurbanke; @richardsonurbankeshokrollahi]. These algorithms operate by passing messages along the edges of a graphical representation of the code known as the Tanner graph, and are optimal when the underlying graph is a tree. Message passing decoders perform remarkably well, which can be attributed to their ability to correct errors beyond the traditional bounded distance decoding capability. However, in contrast to bounded distance decoders (BDDs), the guaranteed error correction capability of iterative decoders is largely unknown.
The problem of recovering from a fixed number of erasures is solved for iterative decoding on the binary erasure channel (BEC). If the size of the minimum stopping set in the Tanner graph of a code is at least $t+1$, then the decoder is guaranteed to recover from any $t$ erasures. Orlitsky *et al.* [@orlitsky] studied the relation between stopping sets and girth and derived bounds on the smallest stopping set in any $d$-left regular Tanner graph with girth $g$.
An analogous result does not exist for decoding on other channels such as the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel. In this paper, we present such a result for hard decision decoding algorithms. Gallager [@gallager] proposed two binary message passing algorithms, namely Gallager A and Gallager B, for decoding over the BSC. He showed that for column weight $\gamma \geq 3$ and row weight $\rho >\gamma$, there exist $(n,\gamma,\rho)$ [^4] regular LDPC codes for which the bit error probability asymptotically tends to zero whenever we operate below the threshold. The minimum distance was shown to increase linearly with the code length, but correction of a linear fraction of errors was not shown. Zyablov and Pinsker [@zyablov] analyzed LDPC codes under a simpler decoding algorithm known as the bit flipping algorithm, and showed that almost all the codes in the regular ensemble with $\gamma \geq 5$ can correct a constant fraction of worst case errors. Sipser and Spielman [@spielman] used expander graph arguments to analyze two bit flipping algorithms, serial and parallel. Specifically, they showed that these algorithms can correct a fraction of errors if the underlying Tanner graph is a good expander. Burshtein and Miller [@burshtein] applied expander based arguments to show that message passing algorithms can also correct a fixed fraction of worst case errors when the degree of each variable node is more than five. Feldman *et al.* [@feldman] showed that the linear programming decoder [@feldman2] is also capable of correcting a fraction of errors. Recently, Burshtein [@burshteinisitpaper] showed that regular codes with variable nodes of degree four are capable of correcting a linear number of errors under the bit flipping algorithm. He also showed a tremendous improvement in the fraction of correctable errors when the variable node degree is at least five.
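The parallel bit flipping algorithm discussed throughout admits a very short description: in every iteration, all bits for which more than half of the incident checks are unsatisfied are flipped simultaneously. A minimal sketch of this generic version (not the exact variant analyzed in any one of the cited works) over a small left-regular example:

```python
import numpy as np

def parallel_bit_flip(H, y, max_iters=50):
    """Parallel bit flipping on parity-check matrix H over GF(2):
    in each iteration, simultaneously flip every bit for which more
    than half of its checks are unsatisfied."""
    x = y.copy()
    gamma = H.sum(axis=0)                 # variable node degrees
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2           # 1 = unsatisfied check
        if not syndrome.any():
            return x                      # valid codeword reached
        unsat = H.T.dot(syndrome)         # unsatisfied checks per bit
        flip = unsat > gamma / 2.0
        if not flip.any():
            break                         # fixed point: decoding failure
        x[flip] ^= 1
    return x

# A (3,3)-regular Tanner graph with girth 6: the point-line incidence
# matrix of the Fano plane.  A single error is corrected in one pass,
# since the erroneous bit sees 3 unsatisfied checks and every other
# bit sees at most 1 (no two bits share more than one check).
H = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 1, 0, 0, 1],
              [0, 0, 1, 0, 1, 1, 0]])
y = np.zeros(7, dtype=int)
y[0] = 1                                  # all-zero codeword + 1 error
decoded = parallel_bit_flip(H, y)
```

The `break` branch is exactly the fixed-point behaviour used later to define trapping sets: a nonzero syndrome from which no bit qualifies for flipping.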
Tanner [@tanner] studied a class of codes constructed based on bipartite graphs and short error correcting codes. Tanner’s work is a generalization of the LDPC codes proposed by Gallager [@gallager] and hence these codes are referred to as generalized LDPC (GLDPC) codes. Tanner proposed code construction techniques, decoding algorithms and complexity and performance analysis to analyze these codes and derived bounds on the rate and minimum distance for these codes. Sipser and Spielman [@spielman] analyzed a special case of GLDPC codes (which they termed as expander codes) using expansion arguments and proposed explicit constructions of asymptotically good codes capable of correcting a fraction of errors. Zemor [@zemor] improved the fraction of correctable errors under a modified decoding algorithm. Barg and Zemor in [@barg] analyzed the error exponents of expander codes and showed that expander codes achieve capacity over the BSC. Janwa and Lal [@janwa] studied GLDPC codes in the most general setting by considering unbalanced bipartite graphs. Miladinovic and Fossorier [@fossorier] derived bounds on the guaranteed error correction capability of GLDPC codes for the special case of failures only decoding.
The focus of this paper is to establish lower and upper bounds on the guaranteed error correction capability of LDPC codes and GLDPC codes as a function of their column-weight and girth. For the case of GLDPC codes, we also find the expansion required to guarantee correction of a fraction of errors under the parallel bit flipping algorithm, as a function of the error correction capability of the sub-code. Our approach can be summarized as follows: (a) to establish lower bounds, we determine the size of variable node sets in a left regular Tanner graph which are guaranteed to have the expansion required by bit flipping algorithms, based on the Moore bound [@biggs p.180] and (b) to find upper bounds, we study the sizes of smallest possible trapping sets [@rich] in a left regular Tanner graph.
It is well known that a random graph is a good expander with high probability [@spielman]. However, the fraction of nodes having the required expansion is very small and hence the code length to guarantee correction of a fixed number of errors must be large. Moreover, determining the expansion of a given graph is known to be NP hard [@alon], and spectral gap methods cannot guarantee an expansion factor of more than $1/2$ [@spielman]. On the other hand, code parameters such as column weight and girth can be easily determined or are assumed to be known for the code under consideration. We prove that for a given column-weight, the error correction capability grows exponentially in girth. However, we note that since the girth grows logarithmically in the code length, this result does not show that the bit flipping algorithms can correct a linear fraction of errors.
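The Moore bound invoked above is the classical lower bound on the order of a $d$-regular graph of girth $g$, obtained by counting the nodes of a breadth-first tree that must all be distinct when the girth is $g$. A short sketch of the standard formula (a textbook result, not specific to this paper):

```python
def moore_bound(d, g):
    """Minimum possible number of nodes in a d-regular graph with
    girth g (the Moore bound).  For fixed d >= 3 it grows like
    (d-1)^(g/2), i.e. exponentially in the girth."""
    if g % 2:                      # odd girth g = 2r + 1
        r = (g - 1) // 2
        return 1 + d * sum((d - 1) ** i for i in range(r))
    r = g // 2                     # even girth g = 2r
    return 2 * sum((d - 1) ** i for i in range(r))

# Graphs meeting the bound (cages): Petersen (3-regular, girth 5,
# 10 nodes) and Heawood (3-regular, girth 6, 14 nodes).
assert moore_bound(3, 5) == 10
assert moore_bound(3, 6) == 14
```

The exponential growth of this bound in $g$ is what drives the claim that, for a fixed column weight, the guaranteed error correction capability grows exponentially in the girth.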
To find an upper bound on the number of correctable errors, we study the size of sets of variable nodes which lead to decoding failures. A decoding failure is said to have occurred if the output of the decoder is not equal to the transmitted codeword [@rich]. The conditions that lead to decoding failures are well understood for a variety of decoding algorithms such as maximum likelihood decoding, bounded distance decoding and iterative decoding on the BEC. However, for iterative decoding on the BSC and AWGN channel, the understanding is far from complete. Two approaches have been taken in this direction, namely trapping sets [@rich] and pseudo-codewords [@koetter]. We adopt the trapping set approach in this paper to characterize decoding failures. Richardson [@rich] introduced the notion of trapping sets to estimate the error floor on the AWGN channel. In [@chilappagarione], trapping sets were used to estimate the frame error rate of column-weight-three LDPC codes. In this paper, we define trapping sets with the help of fixed points for the bit flipping algorithms (both serial and parallel). We then find bounds on the size of trapping sets based on extremal graphs known as cage graphs [@cage], thereby finding an upper bound on the guaranteed error correction capability. By saying that a code with column weight $\gamma$ and girth $2g'$ is not guaranteed to correct $k$ errors, we mean that there exists a code with column weight $\gamma$ and girth $2g'$ that fails to correct $k$ errors.
The rest of the paper is organized as follows. In Section \[section2\], we provide a brief introduction to LDPC codes, decoding algorithms and trapping sets [@rich]. In Section \[section3\], we prove our main theorem relating the column weight and girth to the size of variable node sets which expand by a factor of at least $3 \gamma/4$. We derive bounds on the size of trapping sets based on cage graphs in Section \[section4\]. In Section \[section5\], we extend the analysis to generalized LDPC (GLDPC) codes and prove that the parallel bit flipping algorithm can correct a fraction of errors if the underlying Tanner graph is a good expander. We conclude with a few remarks in Section \[section6\].
Preliminaries {#section2}
=============
In this section, we first establish the notation and then proceed to give a brief introduction to LDPC codes and hard decision decoding algorithms. We then give the relation between the error correction capability of the code and the expansion of the underlying Tanner graph. We finally describe trapping sets for the algorithms.
Graph Theory Notation
---------------------
We adopt the standard notation in graph theory (see [@bollobas] for example). $G=(U,E)$ denotes a graph with set of nodes $U$ and set of edges $E$. When there is no ambiguity, we simply denote the graph by $G$. An edge $e$ is an unordered pair $(u_1,u_2)$ of nodes and is said to be incident on $u_1$ and $u_2$. Two nodes $u_1$ and $u_2$ are said to be adjacent (neighbors) if there is an edge $e=(u_1,u_2)$ incident on them. The order of the graph is $|U|$ and the size of the graph is $|E|$. The degree of $u$, $d(u)$, is the number of its neighbors. A node with degree one is called a leaf or a pendant node. A graph is $d$-regular if all the nodes have degree $d$. The average degree $\overline{d}$ of a graph is defined as $\overline{d}=2|E|/|U|$. The girth $g(G)$ of a graph $G$ is the length of the smallest cycle in $G$. $H=(V \cup C,E')$ denotes a bipartite graph with two sets of nodes, variable (left) nodes $V$ and check (right) nodes $C$, and edge set $E'$. Nodes in $V$ have neighbors only in $C$ and vice versa. A bipartite graph is said to be $\gamma$-left regular if all variable nodes have degree $\gamma$, $\rho$-right regular if all check nodes have degree $\rho$ and $(\gamma,\rho)$ regular if all variable nodes have degree $\gamma$ and all check nodes have degree $\rho$. The girth of a bipartite graph is even.
LDPC Codes and Decoding Algorithms
----------------------------------
LDPC codes [@gallager] are a class of linear block codes which can be defined by sparse bipartite graphs [@shokrollahi]. Let $G$ be a bipartite graph with two sets of nodes: $n$ variable nodes and $m$ check nodes. This graph defines a linear block code $\mathcal{C}$ of length $n$ and dimension at least $n-m$ in the following way: The $n$ variable nodes are associated with the $n$ coordinates of codewords. A vector $\mathbf{v}=(v_1,v_2,\ldots,v_n)$ is a codeword if and only if for each check node, the modulo two sum of its neighbors is zero. Such a graphical representation of an LDPC code is called the Tanner graph [@tanner] of the code. The biadjacency matrix of $G$ gives a parity check matrix of $\cal{C}$. An $(n,\gamma,\rho)$ regular LDPC code has a Tanner graph with $n$ variable nodes each of degree $\gamma$ (column weight) and $n\gamma/ \rho$ check nodes each of degree $\rho$ (row weight). This code has length $n$ and rate $r \geq 1-\gamma/\rho$ [@shokrollahi].
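As a concrete illustration (ours, not from the paper), the codeword condition can be checked directly on the Tanner graph. The 4-check, 6-variable $(6,2,3)$ regular graph below is a hypothetical toy example:

```python
# Minimal sketch: codeword membership test on a Tanner graph.
# Each check node is given as the list of variable-node indices it touches.

def is_codeword(v, checks):
    """Return True iff every check node sees an even number of ones."""
    return all(sum(v[i] for i in nbrs) % 2 == 0 for nbrs in checks)

# Hypothetical toy (6,2,3) regular Tanner graph: 4 checks of degree 3,
# 6 variables of degree 2.
checks = [[0, 1, 2], [1, 3, 4], [2, 4, 5], [0, 3, 5]]

print(is_codeword([0, 0, 0, 0, 0, 0], checks))  # True: all-zero word
print(is_codeword([1, 0, 0, 0, 0, 0], checks))  # False: one flipped bit
```

The parity check matrix of this toy code is exactly the $4 \times 6$ biadjacency matrix of the graph, one row per entry of `checks`.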
We now describe a simple hard decision decoding algorithm known as the parallel bit flipping algorithm [@zyablov; @spielman] to decode LDPC codes. As noted earlier, each check node imposes a constraint on the neighboring variable nodes. A constraint (check node) is said to be satisfied by a setting of variable nodes if the sum of the variable nodes in the constraint is even; otherwise the constraint is unsatisfied.
[**Parallel Bit Flipping Algorithm**]{}
- In parallel, flip each variable that is in more unsatisfied than satisfied constraints.
- Repeat until no such variable remains.
A serial version of the algorithm is also defined in [@spielman] and all the results in this paper hold for the serial bit flipping algorithm also. The bit flipping algorithms are iterative in nature but do not belong to the class of message passing algorithms (see [@burshtein] for an explanation).
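The two steps above can be sketched in a few lines; the toy Tanner graph is a hypothetical example of ours, chosen only so that the block is self-contained and runnable:

```python
# Sketch of the parallel bit flipping algorithm: in each round, every
# variable in more unsatisfied than satisfied constraints is flipped
# simultaneously; the loop stops when no such variable remains.

def parallel_bit_flip(v, checks, max_rounds=100):
    v = list(v)
    n = len(v)
    for _ in range(max_rounds):
        unsat = [sum(v[i] for i in nbrs) % 2 == 1 for nbrs in checks]
        # Count unsatisfied (u) and satisfied (s) constraints per variable.
        u, s = [0] * n, [0] * n
        for c, nbrs in enumerate(checks):
            for i in nbrs:
                if unsat[c]:
                    u[i] += 1
                else:
                    s[i] += 1
        flips = [i for i in range(n) if u[i] > s[i]]
        if not flips:
            break
        for i in flips:
            v[i] ^= 1
    return v

# Hypothetical toy Tanner graph: 4 degree-3 checks over 6 variables.
checks = [[0, 1, 2], [1, 3, 4], [2, 4, 5], [0, 3, 5]]
received = [1, 0, 0, 0, 0, 0]          # all-zero codeword with one bit flipped
print(parallel_bit_flip(received, checks))  # -> [0, 0, 0, 0, 0, 0]
```

In this run the single corrupt variable sits in two unsatisfied and zero satisfied constraints, so it is flipped in the first round and the decoder halts.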
Expansion and Error Correction Capability
-----------------------------------------
Sipser and Spielman [@spielman] analyzed the performance of the bit flipping algorithms using the expansion properties of the underlying Tanner graph of the code. We summarize the results from [@spielman] below for the sake of completeness. We start with the following definitions from [@spielman].
Let $G=(U,E)$ with $|U|=n_1$. Then *every set of at most $m_1$ nodes expands by a factor of $\delta$* if, for all sets $S \subset U$ $$|S|\leq m_1 \Rightarrow |\{y: \exists x \in S \mbox{~such that~} (x,y) \in E \}| > \delta |S|.$$
We consider bipartite graphs and expansion of variable nodes only.
A graph is a $(\gamma,\rho,\alpha,\delta)$ expander if it is a $(\gamma,\rho)$ regular bipartite graph in which every subset of at most $\alpha$ fraction of the variable nodes expands by a factor of at least $\delta$.
The following theorem from [@spielman] relates the expansion and error correction capability of an $(n,\gamma,\rho)$ LDPC code with Tanner graph $G$ when decoded using the parallel bit flipping decoding algorithm.
[@spielman Theorem 11] Let $G$ be a $(\gamma, \rho, \alpha, (3/4 +\epsilon)\gamma)$ expander over $n$ variable nodes, for any $\epsilon > 0$. Then, the simple parallel decoding algorithm will correct any $\alpha_0 < \alpha(1 + 4\epsilon)/2$ fraction of errors after $\log_{1/(1-4\epsilon)}(\alpha_0 n)$ decoding rounds.
*Notes:*
1. The serial bit flipping algorithm can also correct $\alpha_0 < \alpha/2$ fraction of errors if $G$ is a $(\gamma, \rho, \alpha, (3/4)\gamma)$ expander.
2. The results hold for any left regular code as expansion is needed for variable nodes only.
From the above discussion, we see that a lower bound on the number of variable nodes which are guaranteed to expand by a factor of at least $3 \gamma/4$ yields a lower bound on the guaranteed error correction capability of LDPC codes.
Decoding Failures and Trapping Sets
-----------------------------------
We now characterize failures of the iterative decoders using fixed points and trapping sets. Some of the following discussion appears in [@colwtthreepaper], [@chilappagarione] and [@ucsdpaper], and we include it for the sake of completeness.
Consider an LDPC code of length $n$ and let $\mathbf{x}=(x_1, x_2, \ldots, x_n)$ be the binary vector which is the input to the iterative decoder. The support of $\mathbf{x}$, denoted $S(\mathbf{x})$, is defined as the set of all positions $i$ where $x_i \neq 0$. The variable nodes (bits) which differ from their correct value are referred to as corrupt variables.
[@colwtthreepaper] A decoder failure is said to have occurred if the output of the decoder is not equal to the transmitted codeword.
$\mathbf{x}$ is a fixed point of the bit flipping algorithm if the set of corrupt variables remains unchanged after one round of decoding.
[@chilappagarione] The support of a fixed point is known as a trapping set. A $(V,C)$ trapping set $\cal{T}$ is a set of $V$ variable nodes whose induced subgraph has $C$ odd degree checks.
If the variable nodes corresponding to a trapping set are in error, then a decoder failure occurs. However, not all variable nodes corresponding to a trapping set need to be in error for a decoder failure to occur.
[@chilappagarione] The minimal number of variable nodes that have to be initially in error for the decoder to end up in the trapping set $\cal{T}$ will be referred to as [*critical number*]{} $m$ for that trapping set.
[@colwtthreepaper] A set of variable nodes which if in error lead to a decoding failure is known as a *failure set*.
Column Weight, Girth and Expansion {#section3}
==================================
In this section, we prove our main theorem which relates the column weight and girth of a code to its error correction capability. We show that the size of variable node sets which have the required expansion is related to the well known Moore bound [@biggs p.180]. We start with a few definitions required to establish the main theorem.
Definitions
-----------
The *reduced graph* $H_r=(V \cup C_r, E'_r)$ of $H=(V \cup C,E')$ is a graph with vertex set $V \cup C_r$ and edge set $E'_r$ given by $$\begin{aligned}
C_r &=& C \setminus C_p, ~C_p =\{c \in C : \mbox{c is a pendant node}\} \nonumber \\
E'_r&=& E' \setminus E'_p,~ E'_p = \{(v_i,c_j) \in E' : c_j \in C_p\}. \nonumber \end{aligned}$$
Let $H=(V \cup C, E')$ be such that $\forall v \in V, d(v) \leq \gamma$. The *$\gamma$ augmented graph* $H_{\gamma}=(V \cup C_{\gamma}, E'_{\gamma})$ is a graph with vertex set $V \cup C_{\gamma}$ and edge set $E'_{\gamma}$ given by $$\begin{aligned}
C_{\gamma} &=& C \cup C_a, \mbox{~where~} C_a = \bigcup_{i=1}^{|V|}C_a^i \mbox{~and~} \nonumber \\
C_a^i &=& \{c_1^i,\ldots,c_{\gamma-d(v_i)}^i\}; \nonumber \\
E'_{\gamma}&=& E' \cup E'_{a}, \mbox{~where~} E'_a = \bigcup_{i=1}^{|V|} E_{a}^{'i} \mbox{~and} \nonumber \\
E_a^{'i} &=& \{(v_i,c_j)\in V \times C_{a}: c_j \in C_a^i\}. \nonumber\end{aligned}$$
[@spielman Definition 4] The *edge-vertex incidence graph* $G_{ev}=(U \cup E, E_{ev})$ of $G=(U,E)$ is the bipartite graph with vertex set $U \cup E$ and edge set $$E_{ev}=\{(e,u) \in E \times U : \mbox{$u$ is an endpoint of e}\}.$$
*Notes:*
1. The edge-vertex incidence graph is right regular with degree two.
2. $|E_{ev}|=2|E|$.
3. $g(G_{ev})=2g(G)$.
An *inverse edge-vertex incidence graph* $H_{iev}=(V, E'_{iev})$ of $H=(V \cup C, E')$ is a graph with vertex set $V$ and edge set $E'_{iev}$ which is obtained as follows. For $c \in C_r$, let $N(c)$ denote the set of neighbors of $c$. Label one node $v_i \in N(c)$ as a root node. Then $$\begin{aligned}
E'_{iev}&=&\{(v_i,v_j) \in V \times V: v_i \in N(c), v_j \in N(c), \nonumber \\
& &i \neq j,\mbox { $v_i$ is a root node, for some $c \in C_r$} \}. \nonumber\end{aligned}$$
*Notes:*
1. Given a graph, the inverse edge-vertex incidence graph is not unique.
2. $g(H_{iev}) \geq g(H)/2$, $|E'_{iev}| = |E'_r| - |C_r|$ and $|C_r| \leq |E'_r|/2$.
3. $|E'_{iev}| \geq |E'_r|/2$ with equality only if all checks in $C_r$ have degree two.
4. The term inverse edge-vertex incidence is used for the following reason. Suppose all checks in $H$ have degree two. Then the edge-vertex incidence graph of $H_{iev}$ is $H$.
The *Moore bound* [@biggs p.180] denoted by $n_0(d,g)$ is a lower bound on the least number of vertices in a $d$-regular graph with girth $g$. It is given by $$\begin{aligned}
n_0(d,g)=n_0(d,2r+1) &=& 1 + d \sum_{i=0}^{r-1} (d-1)^i, ~g~\mbox{odd} \nonumber\\
n_0(d,g)=n_0(d,2r)&=& 2 \sum_{i=0}^{r-1}(d-1)^i, ~g~\mbox{even}. \nonumber\end{aligned}$$
In [@mooreirreg], it was shown that a similar bound holds for irregular graphs.
[@mooreirreg] The number of nodes $n(\overline{d},g)$ in a graph of girth $g$ and average degree at least $\overline{d} \geq 2$ satisfies $$n(\overline{d},g) \geq n_0(\overline{d},g).$$
Note that $\overline{d}$ need not be an integer in the above theorem.
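The bound is easy to compute; the following sketch (ours) implements $n_0(d,g)$ directly from the two cases above, and since it only evaluates powers of $d-1$ it also accepts the non-integer average degrees $\overline{d} \geq 2$ allowed by the irregular version of the bound:

```python
def moore_bound(d, g):
    """Moore lower bound n_0(d, g) on the order of a graph with girth g
    and (average) degree at least d.
      g = 2r   (even): 2 * sum_{i=0}^{r-1} (d-1)^i
      g = 2r+1 (odd):  1 + d * sum_{i=0}^{r-1} (d-1)^i"""
    r = g // 2
    s = sum((d - 1) ** i for i in range(r))
    return 2 * s if g % 2 == 0 else 1 + d * s

print(moore_bound(3, 5))  # 10: met by the Petersen graph
print(moore_bound(3, 6))  # 14: met by the Heawood graph
print(moore_bound(2, 7))  # 7: for d = 2 the bound reduces to the girth itself
```

The $d=2$ case, where the bound equals $g$, is the one that enters the corollary for column weights three and four.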
The Main Theorem
----------------
We now state and prove the main theorem.
\[thm1\] Let $G$ be a $\gamma$-left regular Tanner graph with $\gamma \geq 4$ and $g(G)=2g'$. Then for all $k < n_0(\gamma/2,g')$, any set of $k$ variable nodes in $G$ expands by a factor of at least $3 \gamma/4$.
Let $G^{k}=(V^k \cup C^k, E^k )$ denote the subgraph induced by a set of $k$ variable nodes $V^{k}$. Since $G$ is $\gamma$-left regular, $|E^{k}|=\gamma k$. Let $G^{k}_r=(V^{k} \cup C^{k}_r ,E^{k}_r)$ be the reduced graph. We have $$\begin{aligned}
|C^{k}| &=& |C^{k}_r| + |C^{k}_p| \nonumber \\
|E^k| &=& |E^k_p| + |E^k_r| \nonumber \\
|E^k_p| &=& |C^{k}_p| \nonumber \\
|C^{k}_p| &=& \gamma k - |E^{k}_r|. \nonumber \end{aligned}$$ We need to prove that $|C^k| > 3\gamma k/4$.
Let $f(k,g')$ denote the maximum number of edges in an arbitrary graph of order $k$ and girth $g'$. By Theorem 2, for all $k < n_0 (\gamma/2,g')$, the average degree of a graph with $k$ nodes and girth $g'$ is less than $\gamma/2$. Hence, $f(k,g') < \gamma k/4$. We now have the following lemma.
The number of edges in $G^{k}_r$ cannot exceed $2f(k,g')$, i.e., $$|E^{k}_r| \leq 2 f(k,g').$$
The proof is by contradiction. Assume that $|E^{k}_r| > 2f(k,g')$. Consider $G^{k}_{iev}=(V^{k}, E^{k}_{iev})$, an inverse edge-vertex incidence graph of $G^{k}$. We have $$|E^{k}_{iev}| > f(k,g').$$ This is a contradiction as $G^{k}_{iev}$ is a graph of order $k$ and girth at least $g'$.
We now find a lower bound on $|C^k|$ in terms of $f(k,g')$. We have the following lemma.
$|C^{k}| \geq \gamma k - f(k,g')$.
Let $|E^{k}_{r}| = 2f(k,g') - j$ for some integer $j \geq 0$. Then $|E^{k}_{p}| = \gamma k - 2f(k,g') + j$. We claim that $|C^{k}_{r}| \geq f(k,g') - j$. To see this, we note that $$\begin{aligned}
|E^{k}_{iev}| &=& |E^{k}_{r}| - |C^{k}_{r}|, \mbox{~or} \nonumber \\
|C^{k}_{r}| &=& |E^{k}_{r}| - |E^{k}_{iev}|. \nonumber \end{aligned}$$ But $$\begin{aligned}
|E^{k}_{iev}| &\leq& f(k,g') \nonumber \\
\Rightarrow |C^{k}_{r}| &\geq& 2f(k,g') - j - f(k,g') \nonumber \\
\Rightarrow |C^{k}_{r}| &\geq& f(k,g') - j .\nonumber\end{aligned}$$ Hence we have, $$\begin{aligned}
|C^{k}| &=& |C^{k}_{r}| + |C^{k}_{p}| \nonumber \\
\Rightarrow |C^{k}| &\geq& f(k,g') - j + \gamma k - 2f(k,g') + j \nonumber \\
\Rightarrow |C^{k}| &\geq& \gamma k - f(k,g'). \nonumber\end{aligned}$$
The theorem now follows as $$f(k,g') < \gamma k/4$$ and therefore $$|C^{k}| > 3\gamma k/4.$$
Let $\mathcal{C}$ be an LDPC code with column-weight $\gamma \geq 4$ and girth $2g'$. Then the bit flipping algorithm can correct any error pattern of weight less than $n_0(\gamma/2,g')/2$.
Cage Graphs and Trapping Sets {#section4}
=============================
In this section, we first give necessary and sufficient conditions for a given set of variables to be a trapping set. We then proceed to define a class of interesting graphs known as cage graphs [@cage] and establish a relation between cage graphs and trapping sets. We then give an upper bound on the error correction capability based on the sizes of cage graphs. The proofs in this section are along the same lines as in Section \[section3\]. Hence, we only give a sketch of the proofs.
\[thm2\] Let $\mathcal{C}$ be an LDPC code with $\gamma$-left regular Tanner graph $G$. Let $\cal{T}$ be a set consisting of $V$ variable nodes with induced subgraph $\cal{I}$. Let the checks in $\cal{I}$ be partitioned into two disjoint subsets: $\cal{O}$ consisting of checks with odd degree and $\cal{E}$ consisting of checks with even degree. Then $\cal{T}$ is a trapping set for the bit flipping algorithm if and only if: (a) Every variable node in $\cal{I}$ has at least $\left\lceil \gamma/2 \right\rceil$ neighbors in $\cal{E}$, and (b) No $\left\lfloor \gamma/2 \right\rfloor + 1$ checks of $\cal{O}$ share a neighbor outside $\cal{I}$.
We first show that the conditions stated are sufficient. Let $\mathbf{x_{\mathcal{T}}}$ be the input to the bit flipping algorithm, with support $\mathcal{T}$. The only unsatisfied constraints are in $\mathcal{O}$. By the conditions of the theorem, we observe that no variable node is involved in more unsatisfied constraints than satisfied constraints. Hence, no variable node is flipped and by definition $\mathbf{x_{\mathcal{T}}}$ is a fixed point implying that $\mathcal{T}$ is a trapping set.
To see that the conditions are necessary, observe that for $\mathbf{x}_{\mathcal{T}}$ to be a fixed point, no variable node should be involved in more unsatisfied constraints than satisfied constraints.
*Remark:* Theorem \[thm2\] is a consequence of Fact 3 from [@rich].
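Conditions (a) and (b) can be verified mechanically for a candidate set of variable nodes. The following sketch (ours, exercised on a hypothetical toy Tanner graph with $\gamma=2$) does exactly that, given the adjacency lists of both sides of the graph:

```python
from math import ceil, floor

def is_trapping_set(T, var_nbrs, check_nbrs, gamma):
    """Check conditions (a) and (b) of the theorem for a candidate set T.
    var_nbrs[v]  : checks adjacent to variable v,
    check_nbrs[c]: variables adjacent to check c."""
    T = set(T)
    # Degree of each check in the subgraph induced by T.
    deg_in = {c: sum(1 for v in check_nbrs[c] if v in T) for c in check_nbrs}
    even = {c for c, d in deg_in.items() if d > 0 and d % 2 == 0}
    odd = {c for c, d in deg_in.items() if d % 2 == 1}
    # (a) every variable in T needs at least ceil(gamma/2) even-degree checks.
    if any(sum(1 for c in var_nbrs[v] if c in even) < ceil(gamma / 2) for v in T):
        return False
    # (b) no floor(gamma/2)+1 odd-degree checks may share a variable outside T.
    for v in var_nbrs:
        if v in T:
            continue
        if sum(1 for c in var_nbrs[v] if c in odd) >= floor(gamma / 2) + 1:
            return False
    return True

# Hypothetical toy graph: 4 degree-3 checks over 6 degree-2 variables.
check_nbrs = {0: [0, 1, 2], 1: [1, 3, 4], 2: [2, 4, 5], 3: [0, 3, 5]}
var_nbrs = {v: [c for c, nb in check_nbrs.items() if v in nb] for v in range(6)}

print(is_trapping_set({0, 1, 3}, var_nbrs, check_nbrs, gamma=2))  # True
print(is_trapping_set({0}, var_nbrs, check_nbrs, gamma=2))        # False
```

In the toy run, $\{0,1,3\}$ is the support of a codeword, so it satisfies both conditions trivially, while a single variable leaves only odd-degree checks and fails condition (a).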
To determine whether a given set of variables is a trapping set, it is necessary to know not only the induced subgraph but also the neighbors of the odd degree checks. However, in order to establish general bounds on the sizes of trapping sets given only the column weight and the girth, we consider only condition (a) of Theorem \[thm2\], which is a necessary condition. A set of variable nodes satisfying condition (a) is known as a *potential trapping set*. A trapping set is a potential trapping set that also satisfies condition (b). Hence, a lower bound on the size of a potential trapping set is a lower bound on the size of a trapping set. It is worth noting that a potential trapping set can always be extended to a trapping set by successively adding variable nodes until condition (b) is satisfied.
[@cage] A $(d,g)$-*cage graph*, $G(d,g)$, is a $d$-regular graph with girth $g$ having the minimum possible number of nodes.
A lower bound, $n_l(d,g)$, on the number of nodes $n_c(d,g)$ in a $(d,g)$-cage graph is given by the Moore bound. An upper bound $n_u(d,g)$ on $n_c(d,g)$ (see [@cage] and references therein) is given by $$\begin{aligned}
n_u(3,g)&=& \left\{\begin{array}{cl}\frac{4}{3} + \frac{29}{12}~2^{g-2} & \mbox{for g odd} \\
\frac{2}{3} + \frac{29}{12}~2^{g-2} & \mbox{for g even} \end{array} \right. \nonumber \\
n_u(d,g)&=& \left\{\begin{array}{cl} 2(d-1)^{g-2} & \mbox{for g odd} \\
4(d-1)^{g-3}& \mbox{for g even} \end{array} \right. . \nonumber\end{aligned}$$
\[thm3\] Let $\mathcal{C}$ be an LDPC code with $\gamma$-left regular Tanner graph $G$ and girth $2g'$. Let $\mathcal{T}(\gamma,2g')$ denote a smallest possible potential trapping set of $\mathcal{C}$ for the bit flipping algorithm. Then, $$|\mathcal{T}(\gamma,2g')| = n_c(\left\lceil \gamma/2 \right\rceil,g').$$
We first prove the following lemma and then exhibit a potential trapping set of size $n_c(\left\lceil \gamma/2 \right\rceil,g')$.
$|\mathcal{T}(\gamma,2g')| \geq n_c(\left\lceil \gamma/2 \right\rceil,g')$.
Let $\mathcal{T}_1$ be a potential trapping set with $|\mathcal{T}_1|< n_c(\left\lceil \gamma/2 \right\rceil,g')$ and let $G_1$ denote the induced subgraph of $\mathcal{T}_1$. We could then construct a $(\left\lceil \gamma/2 \right\rceil,g'')$-cage graph ($g'' \geq g'$) with fewer than $n_c(\left\lceil \gamma/2 \right\rceil,g')$ nodes by removing edges (if necessary) from an inverse edge-vertex incidence graph of $G_1$, which is a contradiction.
We now exhibit a potential trapping set of size $n_c(\left\lceil \gamma/2 \right\rceil,g')$. Let $G_{ev}(\left\lceil \gamma/2 \right\rceil,g')$ be the edge-vertex incidence graph of a $G(\left\lceil \gamma/2 \right\rceil,g')$. Note that $G_{ev}(\left\lceil \gamma/2 \right\rceil,g')$ is a left regular bipartite graph with $n_c(\left\lceil \gamma/2 \right\rceil,g')$ variable nodes of degree $\left\lceil \gamma/2 \right\rceil$ in which every check node has degree two. Now consider $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$, the $\gamma$ augmented graph of $G_{ev}(\left\lceil \gamma/2 \right\rceil,g')$. It can be seen that the set of variable nodes of $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$ is a potential trapping set.
There exists a code $\mathcal{C}$ with $\gamma$-left regular Tanner graph of girth $2g'$ which fails to correct $n_c(\left\lceil \gamma/2 \right\rceil,g')$ errors.
Let $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$ be as defined in Theorem \[thm3\]. Now construct a code $\mathcal{C}$ with column-weight $\gamma$ and girth $2g'$ starting from $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$ such that the set of variable nodes in $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$ also satisfies condition (b) of Theorem \[thm2\]. Then, by Theorem \[thm2\] and Theorem \[thm3\], the set of variable nodes in $G_{ev,\gamma}(\left\lceil \gamma/2 \right\rceil,g')$ with cardinality $n_c(\left\lceil \gamma/2 \right\rceil,g')$ is a trapping set and hence $\mathcal{C}$ fails to decode an error pattern of weight $n_c(\left\lceil \gamma/2 \right\rceil,g')$.
*Remark:* We note that for $\gamma=3$ and $\gamma=4$, the above bound is tight. Observe that for $d=2$, the Moore bound is $n_0(d,g)=g$ and that a cycle of length $2g$ with $g$ variable nodes is always a potential trapping set. In fact, for a code with $\gamma=3$ or $4$, and Tanner graph of girth greater than eight, a cycle of the smallest length is always a trapping set (see [@colwtthreepaper] for the proof).
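For concreteness, the Moore bound on the size of the smallest potential trapping set can be tabulated for a few parameter pairs. The sketch below is ours; note it gives only the lower bound on the cage order $n_c(\left\lceil \gamma/2 \right\rceil, g')$, which the actual cage may exceed:

```python
from math import ceil

def moore_bound(d, g):
    """Moore lower bound n_0(d, g) on the order of a d-regular girth-g graph."""
    r = g // 2
    s = sum((d - 1) ** i for i in range(r))
    return 2 * s if g % 2 == 0 else 1 + d * s

def trapping_set_lower_bound(gamma, girth):
    """Moore lower bound on the smallest potential trapping set of a
    column-weight-gamma code whose Tanner graph has the given girth 2g'."""
    return moore_bound(ceil(gamma / 2), girth // 2)

print(trapping_set_lower_bound(4, 16))  # d = 2, g' = 8  -> 8
print(trapping_set_lower_bound(3, 16))  # d = 2, g' = 8  -> 8
print(trapping_set_lower_bound(6, 12))  # d = 3, g' = 6  -> 14
```

For $\gamma = 3$ or $4$ the computed value equals $g'$, consistent with the remark that a shortest cycle, with $g'$ variable nodes, is then the smallest potential trapping set.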
Generalized LDPC Codes {#section5}
======================
In this section, we first consider two bit flipping decoding algorithms for GLDPC codes. We then establish a relation between expansion and error correction capability. We also establish a lower bound on the number of variable nodes that have the required expansion. We then exhibit a trapping set and as a consequence show that the bound on the required expansion cannot be improved when $\gamma$ is even. We also establish bounds on the size of trapping sets.
We begin with the definition of GLDPC codes by adopting the terminology from expander codes [@spielman].
Let $G$ be a $(\gamma,\rho)$ regular bipartite graph between $n$ variable nodes $(v_1,v_2,\ldots,v_n)$ and $n\gamma/\rho$ check nodes $(c_1,c_2,\ldots,c_{n\gamma/\rho})$. Let $b(i,j)$ be a function designed so that, for each check node $c_i$, the variables neighboring $c_i$ are $v_{b(i,1)}, v_{b(i,2)},\ldots,v_{b(i,\rho)}$. Let $\cal{S}$ be an error correcting code of block length $\rho$. The GLDPC code $\mathcal{C}(G,\mathcal{S})$ is the code of block length $n$ whose codewords are the words $(x_1,x_2,\ldots,x_n)$ such that, for $1 \leq i \leq n\gamma/\rho$, $(x_{b(i,1)},\ldots,x_{b(i,\rho)})$ is a codeword of $\mathcal{S}$.
The terms column-weight, row-weight, check nodes, variable nodes and trapping sets mean the same as in case of LDPC codes. The code $\mathcal{S}$ at each check node is sometimes referred to as the sub-code.
Decoding algorithms
-------------------
Tanner [@tanner] proposed different hard decision decoding algorithms to decode GLDPC codes. We now describe an iterative algorithm, known as the parallel bit flipping algorithm and originally described in [@tanner], which is employed when the sub-code is capable of correcting $t$ errors.
**Parallel bit flipping algorithm:** Each decoding round consists of the following steps.
- A variable node sends its current estimate to check nodes.
- A check node performs decoding on incoming messages and finds the nearest codeword. For all variable nodes which differ from the codeword, the check node sends a flip message. If the check node does not find a unique codeword, it does not send any flip messages.
- A variable node flips if it receives more than $\gamma/2$ flip messages.
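The steps above can be sketched as follows. The choice of a length-$\rho$ repetition code as the sub-code (so that nearest-codeword decoding is a majority vote and $t=\lfloor(\rho-1)/2\rfloor$) is ours, made only to keep the example self-contained; the toy Tanner graph is likewise hypothetical:

```python
def gldpc_bit_flip(v, checks, gamma, rounds=50):
    """Toy GLDPC parallel bit flipping with a repetition sub-code:
    each check decodes its neighborhood to the majority value and sends
    flip messages to the minority; on a tie there is no unique nearest
    codeword, so no messages are sent. A variable flips on > gamma/2
    flip messages."""
    v = list(v)
    for _ in range(rounds):
        flip_msgs = [0] * len(v)
        for nbrs in checks:
            ones = sum(v[i] for i in nbrs)
            if 2 * ones == len(nbrs):      # tie: decoder sends nothing
                continue
            majority = 1 if 2 * ones > len(nbrs) else 0
            for i in nbrs:
                if v[i] != majority:
                    flip_msgs[i] += 1
        flips = [i for i, m in enumerate(flip_msgs) if m > gamma / 2]
        if not flips:
            break
        for i in flips:
            v[i] ^= 1
    return v

# Hypothetical (gamma, rho) = (2, 3) Tanner graph; the length-3 repetition
# sub-code has d_min = 3 = 2t + 1 with t = 1.
checks = [[0, 1, 2], [1, 3, 4], [2, 4, 5], [0, 3, 5]]
print(gldpc_bit_flip([1, 0, 0, 0, 0, 0], checks, gamma=2))  # -> [0, 0, 0, 0, 0, 0]
```

Here both checks touching the corrupt variable majority-decode to zero and send it a flip message, so it receives $2 > \gamma/2$ messages and is corrected in one round.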
The set of variable nodes which differ from their correct value are known as corrupt variables. The rest of the variable nodes are referred to as correct variables. For the analysis of the algorithm, we have the following definition adapted from [@spielman]:
A check node is said to be *confused* if it sends flip messages to correct variable nodes, or if it does not send flip message to corrupt variable nodes, or both. Otherwise, a check node is said to be *helpful*.
*Remarks:*
1. For the parallel bit flipping decoding algorithm, a check node with sub-code of minimum distance at least $d_{min}=2t+1$ can be confused only if it is connected to more than $t$ corrupt variable nodes.
2. The parallel bit flipping algorithm is different from the algorithm presented by Sipser and Spielman in [@spielman] for expander codes, but is similar to the algorithm proposed by Zemor in [@zemor]. However, we note that the codes considered in [@zemor] are based on $d$-regular bipartite graphs and are a special case of doubly generalized LDPC codes, where each variable node is also associated with an error correcting code.
3. Apart from helpful checks and confused checks, Sipser and Spielman defined unhelpful checks. However, our definition of confused checks includes unhelpful checks as well.
4. Miladinovic and Fossorier in [@fossorier] considered a decoding algorithm where the decoding at every check either results in correct decoding or a failure but not miscorrection. While this assumption is reasonable when the sub-code is a long code, it is not true in general. We however, point out that the methodology we adopt can be applied to this case as well.
5. The work by Sipser and Spielman [@spielman], Zemor [@zemor], Barg and Zemor [@barg] and Janwa and Lal [@janwa] focused on asymptotic results and explicit construction of expander codes. The proofs and constructions are based on the spectral gap and, as noted earlier, such methods cannot guarantee an expansion factor of more than $1/2$. Our proofs require a greater expansion factor.
Expansion and Error Correction Capability
-----------------------------------------
We now prove that the above described algorithm can correct a fraction of errors if the underlying Tanner graph is a good expander.
\[thm4\] Let $\mathcal{C}(G,\mathcal{S})$ be a GLDPC code with a $\gamma$-left regular Tanner graph $G$. Assume that the sub-code $\mathcal{S}$ has minimum distance at least $d_{min}=2t+1$ and is capable of correcting $t$ errors. Let $G$ be a $(\gamma,\rho,\alpha,\beta\gamma)$ expander where $$\begin{aligned}
1> \beta>\frac{t+2}{2(t+1)}. \nonumber\end{aligned}$$ Then the parallel bit flipping decoding algorithm will correct any $\alpha_0 \leq \alpha$ fraction of errors.
Let $n$ be the number of variable nodes in $\mathcal{C}$. Let $V$ be the set of corrupt variables at the beginning of a decoding round. Assume that $|V|\leq\alpha n$. We will show that after the decoding round, the number of corrupt variables is strictly less than $|V|$.
Let $F$ be the set of corrupt variables that fail to flip in one decoding round, and let $C$ be the set of variables that were originally uncorrupt, but which become corrupt after one decoding round. After one decoding round, the set of corrupt variables is $F\cup C$. In the worst case scenario, a confused check sends $t$ flip messages to the uncorrupt variables and no flip message to the corrupt variables. We now have the following lemma:
Let $C_k$ be the set of confused checks, then $$\begin{aligned}
|C_k|<\frac{(1-\beta)\gamma|V|}{t}. \label{lm1}\end{aligned}$$
The total number of edges connected to the corrupt variables is $\gamma|V|$. Each confused check must have at least $t+1$ neighbors in $V$. Let $S$ be the set of helpful checks that have at least one neighbor in $V$. Then, $$\begin{aligned}
\gamma|V|\geq|C_k|(t+1)+|S|. \label{lm11}\end{aligned}$$ By expansion, $$\begin{aligned}
|S|+|C_k|>\beta\gamma|V|. \label{lm12}\end{aligned}$$ By (\[lm11\]) and (\[lm12\]), we obtain $$\begin{aligned}
|C_k|<\frac{(1-\beta)\gamma|V|}{t}. \nonumber\end{aligned}$$
We now prove that $|F\cup C|<|V|$. The proof is by contradiction. Assume that $|F\cup C|\geq|V|$. Then there exists a subset $C'\subseteq C$ such that $|F\cup C'|=|V|$. We observe that a variable node in $F$ can have at most $\lfloor\gamma/2\rfloor$ neighbors that are not in $C_k$. Also, a variable node in $C'$ must have at least $\lfloor\gamma/2\rfloor + 1$ neighbors in $C_k$, and hence can have at most $\lceil\gamma/2\rceil-1$ neighbors that are not in $C_k$. Let $N(F\cup C')$ be the set of neighbors of $F\cup C'$. Then, $$\begin{aligned}
|N(F\cup C')|&\leq&|C_k|+\left\lfloor\frac{\gamma}{2}\right\rfloor|F|+\left(\left\lceil\frac{\gamma}{2}\right\rceil-1\right)|C'| \nonumber\\
&<&|C_k|+\frac{\gamma}{2}|F|+\frac{\gamma}{2}|C'|=|C_k|+\frac{\gamma}{2}|V|.\label{th1}\end{aligned}$$ Substituting (\[lm1\]) into (\[th1\]), we obtain $$\begin{aligned}
|N(F\cup C')|<\left(\frac{1-\beta}{t}+\frac{1}{2}\right)\gamma|V|.\nonumber\end{aligned}$$ Now $$\begin{aligned}
&&\beta>\frac{t+2}{2(t+1)} \nonumber\\
&\Rightarrow&\frac{1-\beta}{t}<\frac{2\beta-1}{2} \nonumber\\
&\Rightarrow&\frac{1-\beta}{t}+\frac{1}{2}<\beta \nonumber\\
&\Rightarrow&|N(F\cup C')|<\beta\gamma|V|\nonumber\end{aligned}$$ which contradicts the expansion of $G$.
*Remark:* The above theorem proves that the parallel bit flipping algorithm can correct a fraction of errors in a number of decoding rounds linear in the code length. However, if we assume an expansion of $(\beta+\epsilon)\gamma$, it can be shown that the number of errors decreases by a constant factor with every iteration, resulting in convergence in a logarithmic number of rounds.
The following theorem establishes a lower bound on the number of nodes in a left regular graph which expand by the factor required by the above algorithm.
\[thm6\] Let $G$ be a $\gamma$-left regular bipartite graph with $g(G)=2g'$. Then for all $k < n_0(\gamma t/(t+1),g')$, any set of $k$ variable nodes in $G$ expands by a factor of at least $\beta \gamma$, where $$\begin{aligned}
\beta = \frac{t+2}{2(t+1)}. \nonumber\end{aligned}$$
The proof is similar to the proof of Theorem \[thm1\]. Following the notation from Theorem \[thm1\], we note that for all $k < n_0(\gamma t/(t+1),g')$, $$\begin{aligned}
f(k,g') < \frac{k\gamma t}{2(t+1)}. \nonumber\end{aligned}$$ Since $|C^k|\geq \gamma k-f(k,g')$, we have $$\begin{aligned}
|C^k|>\frac{t+2}{2(t+1)}\gamma k. \nonumber\end{aligned}$$
Note that the above theorem holds when $\gamma t/(t+1) \geq 2$.
Let $\mathcal{C}(G,\mathcal{S})$ be a GLDPC code with a $\gamma$-left regular Tanner graph $G$ and $g(G)=2g'$. Assume that the sub-code $\mathcal{S}$ has minimum distance at least $d_{min}=2t+1$ and is capable of correcting $t$ errors. Then the parallel bit flipping algorithm can correct any error pattern of weight less than $n_0(\gamma t/(t+1),g')$.
Trapping Sets of GLDPC Codes
----------------------------
We now exhibit a trapping set for the parallel bit flipping algorithm. By examining the expansion of the trapping set, we show that the bound given in Theorem \[thm4\] cannot be improved when $\gamma$ is even.
\[thm7\] Let $\mathcal{C}$ be a GLDPC code with $\gamma$-left regular Tanner graph $G$. Let $\cal{T}$ be a set consisting of $V$ variable nodes with induced subgraph $\cal{I}$ with the following properties: (a) The degree of each check in $\cal{I}$ is either $1$ or $t+1$; (b) Each variable node in $V$ is connected to $\left\lceil \gamma/2 \right\rceil$ checks of degree $t+1$ and $\left\lfloor \gamma/2 \right\rfloor$ checks of degree $1$; and (c) No $\left\lfloor \gamma/2 \right\rfloor + 1$ checks of degree $t+1$ share a variable node outside $\cal{I}$. Then, $\cal{T}$ is a trapping set.
Observe that all the checks of degree $t+1$ in $\cal{I}$ are confused. Further, each confused check does not send flip messages to variable nodes in $V$. Since any variable node in $V$ is connected to $\left\lceil \gamma/2 \right\rceil$ confused checks, it remains corrupt. Also, no variable node outside $\cal{I}$ can receive more than $\left\lfloor \gamma/2 \right\rfloor$ flip messages. Hence, no variable node which is originally correct can get corrupted. By definition, $\cal{T}$ is a trapping set.
It can be seen that the total number of checks in $\cal{I}$ is equal to $|V|(\left\lfloor \gamma/2 \right\rfloor + \left\lceil \gamma/2 \right\rceil/(t+1))$. Hence, the set of variable nodes $V$ expands by a factor of $\gamma(t+2)/(2(t+1))$ when $\gamma$ is even. Hence, the bound given in Theorem \[thm4\] cannot be improved in this case.
For a set of variable nodes to be a trapping set, it is necessary that every variable node in the set is connected to at least $\left\lceil \gamma/2 \right\rceil$ confused checks. This observation leads to the following bound on the size of trapping sets.
Let $\mathcal{C}$ be a GLDPC code with $\gamma$-left regular Tanner graph $G$ and $g(G)=2g'$. Let $n_c(d_l,d_r,2g')$ denote the minimum possible number of left vertices in a $(d_l,d_r)$ regular bipartite graph of girth $2g'$. Then the size of the smallest possible trapping set of $\cal{C}$ is $n_c(\left\lceil \gamma/2 \right\rceil, t+1 ,2g')$.
Follows from Theorem \[thm3\] and Theorem \[thm7\].
Let $\mathcal{C}(G,\mathcal{S})$ be a GLDPC code with a $\gamma$-left regular Tanner graph $G$ and $g(G)=2g'$. Assume that the sub-code $\mathcal{S}$ has minimum distance at least $d_{min}=2t+1$ and is capable of correcting $t$ errors. Then the parallel bit flipping algorithm cannot be guaranteed to correct all error patterns of weight greater than or equal to $n_c(\left\lceil \gamma/2 \right\rceil, t+1 ,2g')$.
Concluding Remarks {#section6}
==================
We derived lower bounds on the guaranteed error correction capability of LDPC and GLDPC codes by finding bounds on the number of nodes that have the required expansion. The bounds depend on two important code parameters, namely the column weight and the girth. Since the relations between rate, column weight, girth and code length are well explored in the literature (see [@gallager; @tanner] for example), bounds on the code length needed to achieve a certain error correction capability can be derived for different column weights and sub-codes (for GLDPC codes). The bounds presented in this paper serve as guidelines for choosing code parameters in practical scenarios.
The lower bounds derived in this paper are weak. However, extremal graphs avoiding three-, four- and five-cycles have been studied in great detail (see [@extremalone; @extremaltwo]), and these results can be used to derive tighter bounds when the girth is eight, ten or twelve. Also, since an expansion factor of $3 \gamma/4$ is not necessary for LDPC codes (see [@spielman Theorem 24]), it is possible that tighter lower bounds can be derived in some cases. The results can be extended to message passing algorithms as well. There remains a considerable gap between the lower and upper bounds on the error correction capability. Deriving lower bounds based on the sizes of trapping sets rather than on expansion may help bridge this gap.
Our approach can be used to derive bounds on the guaranteed erasure recovery capability for iterative decoding on the BEC by finding the number of variable nodes which expand by a factor of $\gamma/2$. In [@orlitsky], the bounds on the guaranteed erasure recovery capability were derived based on the size of the smallest stopping set. Both approaches give the same bounds, which also coincide with the bounds given by Tanner [@tanner] for the minimum distance. Results similar to the ones reported by Miladinovic and Fossorier [@fossorier] based on the size of generalized stopping sets can also be derived.
R. G. Gallager, *Low Density Parity Check Codes*. Cambridge, MA: M.I.T. Press, 1963.
T. J. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 599–618, Feb. 2001.
T. J. Richardson, M. Shokrollahi, and R. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 638–656, Feb. 2001.
A. Orlitsky, R. Urbanke, K. Viswanathan, and J. Zhang, “Stopping sets and the girth of [T]{}anner graphs,” in *Proc. of IEEE International Symposium on Information Theory*, 2002, p. 2.
R. M. Tanner, “A recursive approach to low complexity codes,” *IEEE Trans. Inform. Theory*, vol. 27, no. 5, pp. 533–547, Sept. 1981.
V. V. Zyablov and M. S. Pinsker, “Estimation of the error-correction complexity for [G]{}allager low-density codes,” *Problems of Information Transmission*, vol. 11, no. 1, pp. 18–28, 1976.
M. Sipser and D. Spielman, “Expander codes,” *IEEE Trans. Inform. Theory*, vol. 42, no. 6, pp. 1710–1722, Nov. 1996.
D. Burshtein and G. Miller, “Expander graph arguments for message-passing algorithms,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 782–790, Feb. 2001.
J. Feldman, T. Malkin, R. A. Servedio, C. Stein, and M. J. Wainwright, “L[P]{} decoding corrects a constant fraction of errors,” *IEEE Trans. Inform. Theory*, vol. 53, no. 1, pp. 82–89, Jan. 2007.
J. Feldman, M. J. Wainwright, and D. R. Karger, “Using linear programming to decode binary linear codes,” *IEEE Trans. Inform. Theory*, vol. 51, no. 3, pp. 954–972, March 2005.
D. Burshtein, “On the error correction of regular [LDPC]{} codes using the flipping algorithm,” in *Proc. of IEEE International Symposium on Information Theory*, June 2007, pp. 226–230.
G. Zemor, “On expander codes,” *IEEE Trans. Inform. Theory*, vol. 47, no. 2, pp. 835–837, Feb. 2001.
A. Barg and G. Zemor, “Error exponents of expander codes,” *IEEE Trans. Inform. Theory*, vol. 48, no. 6, pp. 1725–1729, Jun. 2002.
H. Janwa and A. K. Lal, “On [T]{}anner codes: minimum distance and decoding,” *Appl. Algebra Eng. Commun. Comput.*, vol. 13, no. 5, pp. 335–347, 2003.
N. Miladinovic and M. Fossorier, “Generalized [LDPC]{} codes with [R]{}eed-[S]{}olomon and [BCH]{} codes as component codes for binary channels,” vol. 3, 28 Nov.–2 Dec. 2005, pp. 6–10.
N. Biggs, *Algebraic graph theory*. Cambridge: Cambridge University Press, 1993.
T. J. Richardson, “Error floors of [LDPC]{} codes,” in *Proc. of 41st Annual Allerton Conf. on Communications, Control and Computing*, 2003, pp. 1426–1435.
N. Alon, “Spectral techniques in graph algorithms,” in *LATIN ’98: Proceedings of the Third Latin American Symposium on Theoretical Informatics*. London, UK: Springer-Verlag, 1998, pp. 206–215.
P. O. Vontobel and R. Koetter, “Graph-cover decoding and finite-length analysis of message-passing iterative decoding of [LDPC]{} codes,” May 2007, accepted for IEEE Transactions on Information Theory. \[Online\]. Available: <http://www.citebase.org/abstract?id=oai:arXiv.org:cs/0512078>
S. K. Chilappagari, S. Sankaranarayanan, and B. Vasic, “Error floors of [LDPC]{} codes on the binary symmetric channel,” in *Proc. of IEEE International Conference on Communications*, vol. 3, June 11-15 2006, pp. 1089–1094.
E. W. Weisstein, “Cage graph.” \[Online\]. Available: <http://mathworld.wolfram.com/CageGraph.html>
B. Bollobas, *Extremal graph theory*. London: Academic Press Inc., 1978.
A. Shokrollahi, “An introduction to low-density parity-check codes,” in *Theoretical aspects of computer science: advanced lectures*. New York, NY, USA: Springer-Verlag New York, Inc., 2002, pp. 175–197.
S. K. Chilappagari and B. Vasic, “Error correction capability of column-weight-three [LDPC]{} codes,” submitted to IEEE Trans. Inform. Theory. \[Online\]. Available: <http://arxiv.org/abs/0710.3427>
S. Sankaranarayanan, S. K. Chilappagari, R. Radhakrishnan, and B. Vasic, “Failures of the [G]{}allager [B]{} decoder: analysis and applications,” in *Proc. of UCSD Center for Information Theory and its Applications Inaugural Workshop*, Feb 6-9 2006. \[Online\]. Available: [http://ita.5i.net/papers/160.pdf](http://ita.5i.net/papers/160.pdf)
N. Alon, S. Hoory, and M. Linial, “The [M]{}oore bound for irregular graphs,” *Graphs and Combinatorics*, vol. 18, no. 1, pp. 53–57, 2002.
D. K. Garnick, Y. H. H. Kwong, and F. Lazebnik, “Extremal graphs without three-cycles or four-cycles,” *J. Graph Theory*, vol. 17, no. 5, pp. 633–645, 1993.
Y. Yuansheng, L. Xiaohui, D. Guocheng, and Z. Yongxiang, “Extremal graphs without three-cycles, four-cycles or five-cycles,” *Utilitas Mathematica*, vol. 66, pp. 249–266, 2004.
[^1]: Manuscript received . This work is funded by NSF under Grants CCF-0634969, ECCS-0725405, and ITR-0325979, and by the INSIC-EHDR program.
[^2]: S. K. Chilappagari, D. V. Nguyen, B. Vasic and M. W. Marcellin are with the Department of Electrical and Computer Engineering, University of Arizona, Tucson, Arizona, 85721 USA (emails: {shashic, nguyendv, vasic, marcellin}@ece.arizona.edu).
[^3]: Parts of this work have been accepted for presentation at the International Symposium on Information Theory (ISIT’08) and the International Telemetering Conference (ITC’08).
[^4]: Precise definitions will be given in Section \[section2\]; we follow standard terminology from [@gallager] and [@tanner].
---
abstract: 'We investigate the normal state of the superconducting compound PuCoGa$_5$ using the combination of density functional theory (DFT) and dynamical mean field theory (DMFT), with the continuous time quantum Monte Carlo (CTQMC) and the vertex-corrected one-crossing approximation (OCA) as the impurity solvers. Our DFT+DMFT(CTQMC) calculations suggest a strong tendency of the Pu-5$f$ orbitals to differentiate at low temperatures. The renormalized 5$f_{5/2}$ states exhibit a Fermi-liquid behavior whereas one electron in the 5$f_{7/2}$ states is at the edge of a Mott localization. We find that the orbital differentiation is manifested as the removal of 5$f_{7/2}$ spectral weight from the Fermi level relative to DFT. We corroborate these conclusions with DFT+DMFT(OCA) calculations which demonstrate that the 5$f_{5/2}$ electrons have a much larger Kondo scale than the 5$f_{7/2}$.'
address:
- 'Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, New York 11973, USA.'
- 'Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, New York 11973, USA.'
- 'Ames Laboratory-U.S. DOE and Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA.'
- 'Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854, USA.'
- 'Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, New York 11973, USA.'
author:
- 'W. H. Brito'
- 'S. Choi'
- 'Y. X. Yao'
- 'G. Kotliar'
title: 'Orbital-dependent correlations in PuCoGa$_5$'
---
Introduction
============
Orbital-dependent correlations have emerged as a key concept in understanding the physics of a large number of materials. Early on, Anisimov and coworkers [@anisimov_ruth] suggested an orbital-selective Mott transition in the ruthenates. Later, orbital-dependent correlations were observed in the normal state of iron-based superconductors. [@ziping; @miao] More recently, orbital differentiation has also been shown to play an important role in the 5$f$ manifold of UO$_2$. [@werner_uo2; @lanata_uo2] In this paper, we point out that orbital differentiation also occurs in Pu-5$f$ systems, by presenting a study of PuCoGa$_5$. This suggests that orbital differentiation is a very general phenomenon in multiorbital systems.
Among the group of Pu-based compounds, PuCoGa$_5$ has attracted major interest since its superconductivity develops at $T_{c}$ = 18.5 K, [@sarrao1] which is the record transition temperature among the family of heavy fermion superconductors. [@sarrao_review] Moreover, its superconducting properties indicate the existence of heavy quasiparticles [@javorsky] while its normal state exhibits a non-Fermi liquid resistivity up to 50 K. [@wastin] The complexity of elemental plutonium is also seen in the properties of PuCoGa$_5$. Analogous to what happens in $\delta$-Pu, neutron scattering measurements pointed out the absence of localized magnetic moments in the normal state, [@hiess1] which indicates an unconventional electron pairing mechanism. In fact, a comparison between the properties of PuCoGa$_5$ and PuCoIn$_5$ has suggested two distinct electron pairing mechanisms, one due to spin fluctuations and the other mediated by valence fluctuations. [@bauer1; @kout] Although the electron pairing mechanism is still under debate, more recent experiments clearly evidence d-wave superconductivity in PuCoGa$_5$. [@daghero]
Early theoretical works employed different methods to study the electronic structure of PuCoGa$_5$. Density functional theory (DFT) calculations showed that the states around the Fermi level come mainly from Pu-5$f$ states and that the paramagnetic Fermi surface is essentially two-dimensional. [@opahle] However, these calculations fail to describe the magnetic ground state, which was predicted to be antiferromagnetic or ferromagnetic due to the nearly identical total energies of both phases. This issue was later solved by LSDA+U calculations, which predicted Fermi surfaces very similar to the ones obtained within DFT. [@oppeneerLDAU] The normal state of PuCoGa$_5$ was also studied using the combination of DFT with dynamical mean field theory (DMFT). By means of DFT+DMFT calculations using the spin-orbit T-matrix and fluctuating exchange (SPFT) approximation, Pourovskii *et al.* [@pourovskii] obtained a nonmagnetic state with van Hove singularities in the spectral function at 500 K. Furthermore, the authors pointed out that these singularities can result in a strong $\mathbf{q}$ dependence of the magnetic susceptibility, which argues for d-wave superconductivity mediated by spin fluctuations. Moreover, DFT+DMFT calculations using the vertex-corrected one-crossing approximation (OCA) were used to compare the correlation effects in PuCoGa$_5$ to those in PuCoIn$_5$. [@zhu] In particular, the authors found a three-peak structure in the Pu-5$f$ density of states for both materials, wherein the central peak is associated with strongly renormalized quasiparticles.
With this motivation we reconsider the issue of orbital differentiation in PuCoGa$_5$ using the DFT+DMFT method [@review2] employing state-of-the-art impurity solvers. We find strong orbital differentiation in this material, with the 5$f_{7/2}$ states more renormalized than the 5$f_{5/2}$ states and, equivalently, the coherence scale of the 5$f_{7/2}$ states much smaller than that of the 5$f_{5/2}$ states. These conclusions were obtained using both CTQMC and OCA as impurity solvers; hence orbital differentiation is a robust property of this material, which had not been discussed previously in the literature.
Computational Methods {#method}
=====================
Our calculations were performed using the fully charge self-consistent DFT+embedded-DMFT approach, [@hauleWK] as implemented in K. Haule’s code. [@haulepage] The DFT calculations were performed within the Perdew-Burke-Ernzerhof generalized gradient approximation (PBE-GGA), [@pbe] as implemented in the Wien2K package. [@wien] To solve the DMFT effective impurity problem we used the continuous time quantum Monte Carlo (CTQMC) method [@ctqmc] and the vertex-corrected one-crossing approximation (OCA). [@pruschke] In particular, we use the same values of the on-site Coulomb repulsion $U = 4.5 $ eV and Hund’s coupling $J = 0.512$ eV which were used to describe the ground state of $\delta$-Pu. [@janoschek] For the double-counting correction term we use the standard fully localized-limit form [@anisimovEdc] with $n_{f}^{0}$ = 5.2, which is the average occupancy of the Pu-5$f$ states in $\delta$-Pu as reported in Ref. .
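For concreteness, the fully localized-limit double-counting term mentioned above has the standard form $\Sigma_{dc} = U\,(n_{f}^{0}-\tfrac{1}{2}) - \tfrac{J}{2}\,(n_{f}^{0}-1)$. The snippet below is a sketch of this textbook formula only, not of the internals of the DFT+DMFT code, evaluated for the parameters quoted above.

```python
def fll_double_counting(U, J, n0):
    """Fully localized-limit (FLL) double counting:
    Sigma_dc = U*(n0 - 1/2) - (J/2)*(n0 - 1), all energies in eV."""
    return U * (n0 - 0.5) - 0.5 * J * (n0 - 1.0)

# U = 4.5 eV, J = 0.512 eV, n_f^0 = 5.2 as used in the text
sigma_dc = fll_double_counting(4.5, 0.512, 5.2)   # about 20.07 eV
```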
Results and Discussions
=======================
Similar to the Ce-115 materials, PuCoGa$_5$ crystallizes in the HoCoGa$_5$ tetragonal structure, which can be viewed as composed of PuGa$_3$ and CoGa$_2$ layers, as shown in Fig. \[fig:fig1\_dosdft\](a). As pointed out by Sarrao *et al.*, [@sarrao_review] an interesting feature of the family of Pu-115 superconductors is that the T$_c$ is directly connected with the distance between these layers, with PuCoGa$_5$ being the member with the highest T$_c$ and the smallest lattice constant $c$. In our calculations we used the experimental lattice structure, with $a = 4.2$ Å and $c = 6.8$ Å as reported in Ref. . In Fig. \[fig:fig1\_dosdft\](b) we show the calculated DFT(GGA) total and projected density of states.
![(a) Crystal structure of PuCoGa$_5$ (space group $P4/mmm$). Plutonium, cobalt and gallium atoms are represented by black, blue and yellow spheres, respectively. In (b) we show the DFT calculated density of states. Shaded region indicates the total density of states while the lines in blue, red, indigo, dashed green, and dashed orange denote the Pu-5$f_{5/2}$, Pu-5$f_{7/2}$, Co-3d, Ga1-4p, and Ga2-4p projected density of states, respectively. The Ga-4p projected density of states were multiplied by a factor of 5 for clarity.[]{data-label="fig:fig1_dosdft"}](fig1_new.eps)
Our DFT calculations indicate that the bands near the Fermi level are mainly of Pu-5$f$ character, where the peak just below $E_{f}$ corresponds to the 5$f_{5/2}$ states, while the peak around 1 eV corresponds to the 5$f_{7/2}$ states, in agreement with previous DFT calculations. [@zhu] The Co-3d states give rise to peaks centered at -1.1 and -1.9 eV. The contribution of the Ga-4p states is rather small from -3 to 3 eV around $E_f$.
We now turn to the investigation of correlation effects in PuCoGa$_5$ within DFT+DMFT. In Fig. \[fig:fig2\_dosdmft\] we show the temperature evolution of DFT+DMFT based total, Pu-5$f$, 5$f_{5/2}$, and 5$f_{7/2}$ projected density of states calculated within CTQMC. In comparison with our calculated DFT density of states (see Fig. \[fig:fig1\_dosdft\](b)), we find Pu-5$f$ sharp peaks near $E_{f}$ and quite broad peaks at -1 and -1.9 eV, which come mainly from the Co-3d states. These findings are in good agreement with the valence band spectrum of PuCoGa$_5$ obtained from photoemission measurements. [@pes]

Looking at the Pu-5$f$ density of states (lower panel of Fig. \[fig:fig2\_dosdmft\](a)), we notice the appearance of a quasiparticle peak (Kondo resonance), mostly of 5$f_{5/2}$ character (see Fig.\[fig:fig2\_dosdmft\](b)), just below the Fermi level. At 500 K, we start to see the formation of these quasiparticle states, which are enhanced at low temperatures. These findings are a clear signature of the formation of heavy quasiparticles, since at low temperatures the Pu-5$f$ electrons strongly hybridize with the surrounding conduction electrons. It is worth mentioning that this feature was observed in early DMFT calculations using the OCA approximation, [@zhu] where the quasiparticle peak was found to be too sharp due to the overestimation of the renormalizations. In Table I we present the corresponding orbital occupations. Note that for all temperatures n$_{5/2}$ $\approx 4$ and n$_{7/2}$ $\approx 1$.
\[rutiles\_res\]
  T(K)    n$_{5/2}$   n$_{7/2}$
  ------- ----------- -----------
  CTQMC
  500     4.08        1.03
  232     4.06        1.02
  50      4.02        0.98
  OCA
  500     4.00        0.99
  232     4.01        1.00
  50      4.03        0.99
  25      4.13        1.02
: DFT+DMFT occupancies of 5$f_{5/2}$ and 5$f_{7/2}$ states obtained within CTQMC and OCA impurity solvers.
Furthermore, the dynamical correlations lead to the emergence of Hubbard bands at high energies. The upper Hubbard band, which comes mainly from the 5$f_{7/2}$ states, starts to appear around 1.1 eV at 500 K and shifts up to 1.3 eV at 50 K. The lower Hubbard band, mainly due to the 5$f_{5/2}$ states, is clearly seen at 232 K and 50 K: at 232 K it is centered at -0.8 eV, and it shifts up to -0.6 eV at 50 K. Surprisingly, the 5$f_{7/2}$ states, with occupancy close to unity, become gapped at 50 K, as can be seen in the lower panel of Fig. \[fig:fig2\_dosdmft\](b).
Next, we investigate how the dynamical electronic correlations modify the electronic states of PuCoGa$_5$. In Fig. \[fig:self\_imag\_T\](a)-(b), we show the imaginary parts of the 5$f_{5/2}$ and 5$f_{7/2}$ components of the self-energy for all the temperatures considered.
![Imaginary part of the 5$f_{5/2}$ and 5$f_{7/2}$ self-energies at 500 K (dashed red), 232 K (green), and 50 K (blue) on (a) the real frequency axis and (b) the imaginary frequency axis. In the inset we zoom in on the imaginary part of the 5$f_{7/2}$ self-energy in the low energy region.[]{data-label="fig:self_imag_T"}](self_imag_new.eps)
For temperatures of 500 and 232 K, we observe that the 5$f_{5/2}$ self-energy exhibits a Fermi-liquid-like behavior, with a prominent peak at around -0.45 eV, as seen in the upper panel of Fig. \[fig:self\_imag\_T\](a). This high energy feature is also captured in the 5$f_{5/2}$ self-energy computed using the OCA approximation. [@zhu] As can be seen in Fig. \[fig:self\_imag\_T\](b), the slope of the imaginary parts at these two temperatures, which is associated with the quasiparticle mass enhancement, is very similar. For the 5$f_{5/2}$ states we estimate a mass enhancement of $\frac{m^*}{m} \approx 5.8$ at 232 K. At 50 K, the correlations induce a change of behavior in the self-energies. For the 5$f_{5/2}$ states we still observe a Fermi-liquid-like behavior, with a mass enhancement of $\frac{m^*}{m} \approx 6.4$. However, the imaginary part of the 5$f_{7/2}$ self-energy presents two poles, at -0.04 and 0.03 eV, which are reminiscent of a Mott instability, as can be seen in the lower panel of Fig. \[fig:self\_imag\_T\](a). We emphasize that the 5$f_{7/2}$ occupancy close to unity, as shown in Table I, favors the appearance of a Mott state. We mention that the pole below the Fermi level starts to appear at 232 K, although in this case it is centered around -0.09 eV. Looking at this component on the imaginary frequency axis, we find that the 5$f_{7/2}$ self-energy exhibits a larger slope than the 5$f_{5/2}$ component. This large slope gives rise to a mass enhancement of $\frac{m^*}{m} \approx 21$, which indicates that the electrons in the 5$f_{7/2}$ states are at the edge of a Mott transition. As a result, the 5$f_{7/2}$ projected density of states presents a gap, as seen in the lower panel of Fig. \[fig:fig2\_dosdmft\](b). Therefore, our DFT+DMFT(CTQMC) calculations suggest the existence of orbital-dependent correlations in PuCoGa$_5$, with substantial differentiation at low temperatures.
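The mass enhancements quoted above are obtained from the low-frequency slope of the Matsubara self-energy, $m^*/m = 1 - \partial\,\mathrm{Im}\,\Sigma(i\omega)/\partial\omega\big|_{\omega\rightarrow 0^{+}}$. A minimal sketch of this estimate is given below on synthetic Fermi-liquid-like data; the numbers are illustrative and are not our computed self-energies.

```python
import numpy as np

def mass_enhancement(omega_n, im_sigma, npts=3):
    """m*/m = 1 - slope of Im Sigma(i omega_n) as omega_n -> 0+,
    with the slope estimated by a linear fit through the lowest
    npts Matsubara points."""
    slope = np.polyfit(omega_n[:npts], im_sigma[:npts], 1)[0]
    return 1.0 - slope

# synthetic linear-in-omega data with slope -4.8, i.e. m*/m close to 5.8
w = np.array([0.01, 0.03, 0.05, 0.07])
m_ratio = mass_enhancement(w, -4.8 * w)
```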
Another hallmark of orbital-dependent correlations in multiorbital systems is the difference between the coherence scales of the orbitals. In order to explore the buildup of coherence in PuCoGa$_5$ we employ the computationally less expensive OCA impurity solver down to 25 K. Similar, essentially temperature-independent orbital occupancies are obtained within this solver, as presented in Table I. In Fig. \[fig:pdos\_pu115\_oca\] we display the calculated temperature evolution of the Pu-5$f$ projected density of states from 500 to 25 K. As the temperature is reduced we observe the appearance of a quasiparticle peak near the Fermi energy, whose height increases upon decreasing temperature. This behavior was also observed in early OCA calculations for the heavy fermion Ce-115 materials [@shimScience] and is also in agreement with our CTQMC calculations. There are also additional peaks below the Fermi energy which are reminiscent of the atomic multiplets observed in the spectra of the $\delta$ phase of elemental Pu. [@shimOCAPu] More importantly, we find that even at 500 K a quasiparticle peak of 5$f_{5/2}$ character starts to develop, whereas there is no sign of a Kondo resonance associated with the 5$f_{7/2}$ states for temperatures down to 25 K. Furthermore, the 5$f_{7/2}$ spectral function is essentially temperature independent down to 25 K, the lowest temperature we could explore before the OCA solver breaks down. Hence the coherence scale of the 5$f_{7/2}$ states is less than 25 K. This indicates a drastic difference between the Kondo temperatures (T$_K$) of the electrons in the 5$f_{5/2}$ and 5$f_{7/2}$ states, with T$_{K}$ of the latter being very small. Therefore, our DFT+DMFT(OCA) calculations emphasize the existence of orbital-dependent correlations in PuCoGa$_5$: the 5$f_{5/2}$ coherence sets in at high temperatures, with no sign of a Kondo peak for the 5$f_{7/2}$ states down to 25 K.
![DFT+DMFT(OCA) based Pu-5$f_{5/2}$ (blue) and Pu-5$f_{7/2}$ (red) projected density of states at 500 K, 232 K, 50 K, and 25 K.[]{data-label="fig:pdos_pu115_oca"}](pdos_pu115_oca_new.eps)
Conclusions
===========
In summary, we have performed first-principles calculations at the level of fully charge self-consistent DFT+DMFT to investigate the orbital dependence of correlations in PuCoGa$_5$. From our calculations employing CTQMC as the impurity solver we find that the Pu-5$f$ electrons behave as heavy quasiparticles at low temperatures, with strongly orbital-dependent renormalizations. Our calculations at 50 K highlight the strongly orbital-dependent correlations in PuCoGa$_5$, wherein the electrons in the 5$f_{7/2}$ states are strongly renormalized and are at the edge of a Mott transition. In addition, our calculations within the OCA demonstrate the orbital differentiation of the coherence energy scales in PuCoGa$_5$, which is a hallmark of orbital-dependent correlations. Most importantly, our study points towards the universality of the phenomenon of orbital differentiation in multiorbital materials. It has been conjectured [@mediciPRL] that there is a connection between superconductivity and orbital differentiation. Our discovery of strong orbital differentiation in PuCoGa$_5$, which has the highest T$_c$ of the 5$f$ series, adds an important high temperature superconductor in support of that conjecture. Further microscopic studies are needed to investigate the interplay between the orbital differentiation found here and the spin fluctuations which are present in the family of Pu-based compounds. [@tanmoyPRL]
Acknowledgments
===============
W.B., S.C., and Y.X. Y. acknowledge support from the Center for Computational Design of Functional Strongly Correlated Materials and Theoretical Spectroscopy. G.K. was supported by U.S. DOE BES under Grant No. DE-FG02-99ER45761. Many useful discussions with K. Haule are gratefully acknowledged. An award of computer time was provided by the INCITE program. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
[100]{}
V. I. Anisimov, I. A. Nekrasov, D. E. Kondakov, T. M. Rice, and M. Sigrist, Eur. Phys. J. B [**25**]{}, 191 (2002).
Z. P. Yin, K. Haule, and G. Kotliar, Nat. Materials [**10**]{}, 932 (2011).
H. Miao, Z. P. Yin, S. F. Wu, J. M. Li, J. Ma, B.-Q. Lv, X. P. Wang, T. Qian, P. Richard, L.-Y. Xing, X.-C. Wang, C. Q. Jin, K. Haule, G. Kotliar, and H. Ding, Phys. Rev. B [**94**]{}, 201109 (2016).
L. Huang, Y. Wang, and P. Werner, arXiv:1506.06548 (2015).
N. Lanatà, Y. Yao, X. Deng, V. Dobrosavljević, and G. Kotliar, Phys. Rev. Lett. [**118**]{}, 126401 (2017).
J. L. Sarrao, L. A. Morales, J. D. Thompson, B. L. Scott, G. R. Stewart, F. Wastin, J. Rebizant, P. Boulet, E. Colineau, and G. H. Lander, Nature [**420**]{}, 297 (2002).
J. L. Sarrao, E. D. Bauer, J. N. Mitchell, P. H. Tobash, and J. D. Thompson, Physica C [**514**]{}, 184 (2015).
P. Javorský, F. Wastin, E. Colineau, J. Rebizant, P. Boulet, and G. Stewart, J. Nucl. Mater. [**344**]{}, 50 (2005).
F. Wastin, P. Boulet, J. Rebizant, E. Colineau, and G. H. Lander, J. Phys.: Condens. Matter [**15**]{}, S2279 (2003).
A. Hiess, A. Stunault, E. Colineau, J. Rebizant, F. Wastin, R. Caciuffo, and G. H. Lander, Phys. Rev. Lett. [**100**]{}, 076403 (2008).
E. D. Bauer, M. M. Altarawneh, P. H. Tobash, K. Gofryk, O. E. Ayala-Valenzuela, J. N. Mitchell, R. D. McDonald, C. H. Mielke, F. Ronning, J.-C. Griveau, E. Colineau, R. Eloirdi, R. Caciuffo, B. L. Scott, O. Janka, S. M. Kauzlarich, and J. D. Thompson, J. Phys.: Condens. Matter [**24**]{}, 052206 (2012).
G. Koutroulakis, H. Yasuoka, P. H. Tobash, J. N. Mitchell, E. D. Bauer, J. D. Thompson, Phys. Rev. B [**94**]{}, 165115 (2016).
D. Daghero, M. Tortello, G. A. Ummarino, J.-C. Griveau, E. Colineau, R. Eloirdi, A. B. Shick, J. Kolorenc, A. I. Lichtenstein, and R. Caciuffo, Nat. Comms. [**3**]{}, 786 (2012).
I. Opahle and P. M. Oppeneer, Phys. Rev. Lett. [**90**]{}, 157001 (2003).
P. M. Oppeneer, A. B. Shick, J. Rusz, S. Lebègue, and O. Eriksson, J. Alloys Compd. [**444**]{}, 109 (2007).
L. V. Pourovskii, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. B [**73**]{}, 060506 (2006).
J.-X. Zhu, P. H. Tobash, E. D. Bauer, F. Ronning, B. L. Scott, K. Haule, G. Kotliar, R. C. Albers, and J. M. Wills, Europhys. Lett. [**97**]{}, 57001 (2012).
G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti, Rev. Mod. Phys. [**78**]{}, 865 (2006).
N. Lanatà, Y. Yao, C.-Z. Wang, K.-M. Ho, and G. Kotliar, Phys. Rev. X [**5**]{}, 011008 (2015).
K. Haule, C.-H. Yee, and K. Kim, Phys. Rev. B [**81**]{}, 195107 (2010).
<http://hauleweb.rutgers.edu/tutorials>
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996).
P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, and J. Luitz, *WIEN2K, An Augmented Plane Wave + Local Orbitals Program for Calculating Crystal Properties* (Karlheinz Schwarz, Techn. Universität Wien, Austria, 2001).
K. Haule, Phys. Rev. B [**75**]{}, 155113 (2007).
Th. Pruschke and N. Grewe, Z. Phys. B [**74**]{}, 439 (1989).
M. Janoschek, P. Das, B. Chakrabarti, D. L. Abernathy, M. D. Lumsden, J. M. Lawrence, J. D. Thompson, G. H. Lander, J. N. Mitchell, S. Richmond, M. Ramos, F. Trouw, J.-X. Zhu, K. Haule, G. Kotliar, and E. D. Bauer, Sci. Adv. [**1**]{} (2015).
V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, J. Phys.: Condens. Matter [**9**]{}, 767 (1997).
J. J. Joyce, J. M. Wills, T. Durakiewicz, M. T. Butterfield, E. Guziewicz, J. L. Sarrao, L. A. Morales, A. J. Arko, and O. Eriksson, Phys. Rev. Lett. [**91**]{}, 176401 (2003).
J. H. Shim, K. Haule, and G. Kotliar, Science [**318**]{}, 1615 (2007).
J. H. Shim, K. Haule, and G. Kotliar, Nature [**446**]{}, 513 (2007).
L. de’ Medici, Phys. Rev. Lett. [**118**]{}, 167003 (2017).
T. Das, J.-X. Zhu, and M. J. Graf, Phys. Rev. Lett. [**108**]{}, 017001 (2012).
---
abstract: 'We analyze the continuous time zero-sum and cooperative controller-stopper games of Karatzas and Sudderth \[Annals of Probability, 2001\], Karatzas and Zamfirescu \[Annals of Probability, 2008\] and Karatzas and Zamfirescu \[Applied Mathematics and Optimization, 2005\] when the volatility of the state process is controlled as in Bayraktar and Huang \[SIAM Journal on Control and Optimization, 2013\] but additionally when the state process has controlled jumps. We perform this analysis by first resolving the stochastic target problems (of Soner and Touzi \[SIAM Journal on Control and Optimization, 2002; Journal of European Mathematical Society, 2002\]) with a cooperative or a non-cooperative stopper and then embedding the original problem into the latter set-up. Unlike in Bayraktar and Huang \[SIAM Journal on Control and Optimization, 2013\] our analysis relies crucially on the Stochastic Perron method of Bayraktar and Sîrbu \[SIAM Journal on Control and Optimization, 2013\] but not the dynamic programming principle, which is difficult to prove directly for games.'
address:
- |
Erhan Bayraktar\
University of Michigan, Ann Arbor
- |
Jiaqi Li\
Goldman Sachs
author:
- Erhan Bayraktar
- Jiaqi Li
bibliography:
- 'mybib\_T.bib'
title: 'On the controller-stopper problems with controlled jumps'
---
[^1]
[^2]
[^3]
Introduction
============
The zero-sum stochastic games between a controller (who controls the state dynamics) and a stopper (who chooses the termination time of the game) were introduced by [@MR1481781] in discrete time and were then resolved by [@KS2001] for one-dimensional diffusions. Later, [@KZ2008] and [@BKY2010] considered this problem when the underlying diffusion is multi-dimensional but only the drift is controlled. See also [@MR2746174; @MR2732930; @ECTA:ECTA939; @MR3023890; @MR3476637]. The problem was later solved in [@Bayraktar_Huang] for the case in which the volatility is controlled and can be degenerate, and was further generalized in [@MR3267150; @MR3375882; @BS-DG-2016]. The cooperative version of the game has received much attention as well. General theoretical results on cooperative controller-stopper problems (also called problems of stochastic control with discretionary stopping) were obtained in [@Krylov1980; @K1981; @BL1982; @MS1996; @BC2004]. A martingale treatment of the controller-stopper problems was later developed in [@KZ2006], and the more general case in which the volatility is also controlled was analyzed in [@MR3231620].
In this paper, we generalize these two types of stopping games to the case in which the controller can also control the jumps. We prove our results by embedding these two problems into what are called “stochastic target problems" *with a stopper* and solving a more general problem. These problems are more difficult because the goal is to drive the state process to a target almost surely. The original stochastic target problems were introduced by [@DP_FOR_STP_AND_G_FLOW; @SONER_TOUZI_STG] as a generalization of the super-replication problem in Mathematical Finance, in which the goal is to drive the controlled process to a given target at a given terminal time. There is an extensive literature on this subject that considers these problems with an increasing level of generality; see e.g. [@Bruno_jump_diffusion; @Bouchard_Elie_Touzi_ControlledLoss; @MR2585143; @STG_CONTROLLED_LOSS; @Moreau]. A survey of these results is given in Touzi’s book [@Touzi_book]. In the two versions analyzed in this paper, the terminal time is a stopping time which is either chosen by the controller (cooperative version) or against whose choice the controller has to be robust (non-cooperative version). We use the jump diffusion model presented in [@BayraktarLi-Jump] (see also [@Moreau]) for the evolution of the state process and first analyze the target problems, one of which involves a cooperative stopper (Section \[sec:subhedging\]) and the other a non-cooperative stopper who might play against the controller in a non-anticipative way (Section \[sec:superhedging\]). In each of these target problems, we use stochastic Perron’s method (of [@Bayraktar_and_Sirbu_SP_HJBEqn]), instead of relying on the geometric dynamic programming principle (see [@DP_FOR_STP_AND_G_FLOW]), to create a viscosity sub-solution and super-solution of the associated Hamilton-Jacobi-Bellman (HJB) equation.
Then by establishing an embedding result similar in spirit to [@Bouchard_Equivalence] between the controller-stopper problems and the stochastic target problems and assuming that a comparison principle holds, we show that the value functions of the original controller-stopper problems are unique viscosity solutions of the corresponding HJB equations. It is interesting to note that in the cooperative controller-stopper problem we observe the *face-lifting phenomenon*, i.e., there is a possible discontinuity at the terminal time, whereas there is no such occurrence in the non-cooperative version. In fact, this discontinuity is exactly characterized. The observation that there is discontinuity at the terminal time in the cooperative controller-stopper game goes back to [@Krylov1980], but there the magnitude was not identified. It is also worth recording that the face-lifting occurs in both of the corresponding stochastic target problems, but the reasons for the discontinuity are different.
Using the geometric dynamic programming principle, [@MR2585143] also considered the non-cooperative version of the stochastic target problem, in the context of pricing American options with investment constraints in a Brownian diffusion type financial market. Our focus, on the other hand, is the embedding result in the spirit of [@Bouchard_Equivalence] and the resolution of both the cooperative and zero-sum controller-stopper games of Karatzas, Sudderth and Zamfirescu. Moreover, our results rely on the stochastic Perron’s method of [@Bayraktar_and_Sirbu_SP_HJBEqn] in generating the sub- and super-solutions of the corresponding HJB equations, without relying on the dynamic programming principle, thus skipping the technical difficulties due to measurability issues. In general, the dynamic programming principle for stochastic differential games is quite complicated to establish, see e.g. [@Bayraktar_Huang; @Bouchard_Nutz_TargetGames; @MR997385]. Stochastic Perron’s method (a verification type result without smoothness), by working with appropriate envelopes instead of the value function itself, avoids having to prove a dynamic programming principle altogether. This method is similar in spirit to Perron’s construction of viscosity solutions presented in [@UserGuide]. The crucial difference is that stochastic Perron’s method constructs viscosity sub- and super-solutions that envelope the value function of the control problem. See [@Bayraktar_and_Sirbu_SP_LinearCase; @Bayraktar_and_Sirbu_SP_DynkinGames; @MR3274519; @MR3217159; @MR3295681; @Sirbu_SP_elementary_strategy; @BayraktarLi; @MR3535885; @BCP14; @BCS2016; @2016arXiv160807498B] for some recent applications of this method.
The rest of the paper is organized as follows: In Section \[sec:prob\], the two stochastic target problems and their associated HJB equations are introduced. In Section \[sec:superhedging\], using stochastic Perron’s method, we analyze the stochastic target problem in which the controller needs to be robust with respect to the choice of the stopping time by which the target needs to be reached. In Section \[subsec:equivalence\], we establish the relationship between this problem and the zero-sum controller-stopper game. Using the results of the previous section and a comparison principle, we demonstrate that the value function of the zero-sum controller-stopper game is the unique viscosity solution of the corresponding HJB equation. Sections \[sec:subhedging\] and \[sec:equivalence-subhedging\] do the same for the cooperative controller-stopper problem. The main results of the paper are Theorems \[thm: optimal control-superhedging\], \[thm: optimal control-subhedging\], and their corollaries. However, the auxiliary sections, Sections \[sec:superhedging\] and \[sec:subhedging\], where the bulk of the technical work on the stochastic target problems is done, contain some new results which we also designate as theorems. The Appendix contains technical results that are crucial in embedding the controller-stopper problems into stochastic target problems.
**Notation.** Throughout this paper, the superscript $^{\top}$ stands for transposition, $|\cdot|$ for the Euclidean norm of a vector in ${\mathbb{R}}^n$, and $] a,b [$ for the open interval in ${\mathbb{R}}$ from $a$ to $b$. For a subset $\mathcal{O}$ of ${\mathbb{R}}^n$, we denote by Int$(\mathcal{O})$ its interior and by $\text{cl}(\mathcal{O})$ its closure. We also denote the open ball of radius $r>0$ centered at $x\in {\mathbb{R}}^n$ by $B_r(x)$. Inequalities and inclusions between random variables and random sets, respectively, are understood in the almost sure sense unless otherwise stated.
Setting up the Stochastic Target Problems {#sec:prob}
=========================================
We will define the stochastic target problems that serve as auxiliary tools in characterizing the value functions of the controller-stopper problems as the unique viscosity solutions of HJB equations. We first have to introduce some relevant concepts and notation; the value functions and the HJB equations for the target problems will appear at the end of this section. Before we proceed, we emphasize that this set-up has also been used in [@Moreau] and [@BayraktarLi-Jump].
Given a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$, let $\{\lambda_i(\cdot,de)\}_{i=1}^I$ be a collection of independent integer-valued $E$-marked right-continuous point processes defined on this space. Here, $E$ is a Borel subset of $\mathbb{R}$ equipped with the Borel sigma field $\mathcal{E}$. Let $\lambda=(\lambda_1,\lambda_2,\cdots,\lambda_I)^{\top}$, and let $W = \{W_s\}_{0\leq s\leq T}$ be a $d$-dimensional Brownian motion defined on the same probability space such that $W$ and $\lambda$ are independent. Given $t\in[0,T]$, let $\mathbb{F}^t=\{\mathcal{F}^t_s, t\leq s\leq T\}$ be the $\mathbb{P}$-augmented filtration generated by $W_{\cdot}-W_t$ and $\lambda([0,\cdot],de)-\lambda([0,t],de)$. By convention, set $\mathcal{F}^t_s = \mathcal{F}^t_t $ for $0\leq s < t$. We will use $\mathcal{T}_t$ to denote the set of $\mathbb{F}^t$-stopping times valued in $[t,T]$. Given $\tau\in\mathcal{T}_t$, the set of $\mathbb{F}^t$-stopping times valued in $[\tau, T]$ will be denoted by $\mathcal{T}_{\tau}$.
\[assump: lambda intensity kernel\] $\lambda$ satisfies the following:
1. $\lambda(ds,de)$ has intensity kernel $m(de)ds$ such that $m_{i}$ is a Borel measure on $(E,\mathcal{E})$ for any $i=1,\cdots, I$ and $\hat{m}(E)<\infty$, where $m=(m_1,\cdots,m_I)^{\top}$ and $\hat{m}=\sum_{i=1}^{I} m_{i}$.
2. $E=\text{supp}(m_i)$ for all $i=1,2,\cdots,I$. Here, $\text{supp}(m_{i}):=\{e\in E: e\in N_{e}\in T_{E} \implies m_{i}(N_{e})>0\},$ where $T_{E}$ is the topology on $E$ induced by the Euclidean topology.
3. There exists a constant $C>0$ such that $$\mathbb{P}\left(\left\{\hat{\lambda}(\{s\}, E)\leq C \;\;\text{for all}\;\; s\in[0,T]\right\}\right)=1,\;\;\text{where}\;\;\hat{\lambda}=\sum_{i=1}^{I}\lambda_{i}.$$
The above assumption implies that there are a finite number of jumps during any finite time interval. Let $\tilde{\lambda}(ds,de):=\lambda(ds,de)-m(de)ds$ be the associated compensated random measure.
Let $\mathcal{U}^t_1$ be the collection of all the $\mathbb{F}^{t}$-predictable processes in $\mathbb{L}^2(\Omega\times[0,T], \mathcal{F}\otimes\mathcal{B}[0,T], \mathbb{P}\otimes \lambda_{L}; U_{1})$, where $\lambda_{L}$ is the Lebesgue measure on ${\mathbb{R}}$ and $U_{1}\subset {\mathbb{R}}^{q}$ for some $q\in{\mathbb{N}}$. Define $\mathcal{U}_2^{t}$ to be the collection of all the maps $\nu_{2}:\Omega\times[0,T]\times E\rightarrow {\mathbb{R}}^{n}$ which are $\mathcal{P}^{t}\otimes\mathcal{E}$ measurable such that $$\label{eq:integrablility}
\|\nu_{2}\|_{\mathcal{U}_2^{t}}:=\left(\mathbb{E}\left[\int_{t}^{T}\int_E|\nu_{2}(s,e)|^2 \hat{m}(de)ds\right]\right)^{\frac{1}{2}}<\infty,$$ where $\mathcal{P}^{t}$ is the $\mathbb{F}^t$-predictable sigma-algebra on $\Omega\times[0,T]$. $\nu=(\nu_{1},\nu_{2})\in\mathcal{U}_{0}^{t} := \mathcal{U}_{1}^{t}\times\mathcal{U}^{t}_{2}$ takes values in the set $U:=U_1\times \mathbb{L}^2(E,\mathcal{E}, \hat{m};{\mathbb{R}}^{n})$. Let $${\mathbb{D}}=[0,T]\times {\mathbb{R}}^d,\quad {\mathbb{D}_{i}}=[0,T[\;\times\;{\mathbb{R}}^d \quad \text{ and } {\mathbb{D}_{T}}=\{T\}\times {\mathbb{R}}^d.$$ Given $z = (x, y)\in{\mathbb{R}}^d\times{\mathbb{R}}$, $t \in [0, T]$ and $\nu\in\mathcal{U}_{0}^{t}$, we consider the stochastic differential equations (SDEs) $$\label{eq: SDEs}
\begin{array}{l}
dX(s)=\mu_{X}(s,X(s),\nu(s))ds+\sigma_{X}(s,X(s),\nu(s))dW_s+\int_{E} \beta(s,X(s-),\nu_1(s),\nu_2(s,e), e)\lambda(ds,de), \vspace{0.07in}\\
dY(s)=\mu_{Y}(s,Z(s),\nu(s))ds+\sigma_{Y}^{\top}(s,Z(s),\nu(s)) dW_s+ \int_{E} b^{\top}(s,Z(s-),\nu_1(s),\nu_2(s,e), e)\lambda(ds,de),
\end{array}$$ with $(X(t), Y(t))=(x,y)$. Here, $Z=(X,Y)$. In , $$\begin{array}{c}
\mu_X: {\mathbb{D}}\times U \rightarrow {\mathbb{R}}^d,\;\;\sigma_X: {\mathbb{D}}\times U\rightarrow {\mathbb{R}}^{d\times d},\;\; \beta: {\mathbb{D}}\times U_1\times{\mathbb{R}}^n\times E \rightarrow {\mathbb{R}}^{d\times I}, \\
\mu_Y: {\mathbb{D}}\times{\mathbb{R}}\times U \rightarrow \mathbb{R},\;\;\sigma_Y: {\mathbb{D}}\times{\mathbb{R}}\times U\rightarrow {\mathbb{R}}^{d},\;\; b: {\mathbb{D}}\times{\mathbb{R}}\times U_1\times{\mathbb{R}}^n\times E \rightarrow {\mathbb{R}}^{I}.
\end{array}$$
Let $\mathcal{U}_{{\text{unco}}}^{t}$ be the admissible control set for the stochastic target problem with a non-cooperative stopper, which consists of all $\nu \in \mathcal{U}^{t}_{0}$ such that for any compact set $C\subset {\mathbb{R}}^{d}\times {\mathbb{R}}$ and $\tau\in\mathcal{T}_{t}$, there exists a constant $K_{{\text{unco}}}^{C, \nu,\tau}>0$ such that $$\label{eq:admissibility_super}
\int_{E} b^{\top}(\tau,x, y, \nu_{1}(\tau), \nu_{2}(\tau, e), e ) \lambda(\{\tau\}, de) \geq - K_{{\text{unco}}}^{C, \nu, \tau}\;\;\text{for all}\;\; (x,y)\in C.$$ Let $\mathcal{U}_{{\text{co}}}^{t}$ be the admissible control set for the stochastic target problem with a cooperative stopper, which consists of all $\nu \in \mathcal{U}^{t}_{0}$ such that for any compact set $C\subset {\mathbb{R}}^{d}\times {\mathbb{R}}$ and $\tau\in\mathcal{T}_{t}$, there exists a constant $K_{{\text{co}}}^{C, \nu, \tau}>0$ such that $$\label{eq:admissibility_sub}
\int_{E} b^{\top}(\tau,x, y, \nu_{1}(\tau), \nu_{2}(\tau, e), e ) \lambda(\{\tau\}, de) \leq K_{{\text{co}}}^{C, \nu,\tau}\;\;\text{for all}\;\; (x,y)\in C.$$
\[assump: regu\_on\_coeff\] Let $z=(x,y)$ and $u=(u_1,u_2)\in U = U_1\times\mathbb{L}^2(E,\mathcal{E},\hat{m};{\mathbb{R}}^{n})$. We use the notation $\|u\|_{U}:=|u_1|+\|u_2\|_{\hat{m}}$ and $u(e):=(u_1,u_2(e))$ for the rest of the paper.
1. $\mu_X, \sigma_X$, $\mu_Y$ and $\sigma_Y$ are all continuous;
2. $\mu_X, \sigma_X$, $\mu_Y$, $\sigma_Y$ are Lipschitz in $z$ and locally Lipschitz in other variables. In addition, $$|\mu_X(t,x,u)|+|\sigma_X(t,x,u)|\leq L(1+|x|+\|u\|_{U}), \;\;|\mu_Y(t,x,y,u)|+|\sigma_Y(t,x,y,u)|\leq L(1+|y|+\|u\|_{U}).$$
3. $b$ and $\beta$ are Lipschitz continuous and of linear growth in all variables other than $e$, uniformly in $e$.
\[remark: facts from assumption 1\] Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\] guarantee that there exists a unique strong solution $(X_{t,x}^{\nu}, Y_{t,x,y}^{\nu})$ to \[eq: SDEs\] for any $\nu\in{\mathcal{U}}_{0}^{t}$. This follows from a simple argument using Gronwall’s Lemma; see e.g. [@Pham1998] for this result in a similar set-up. Also see Lemma 17.1.1 in [@MR3443368].
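For intuition, the $X$-dynamics in \[eq: SDEs\] can be simulated with a standard Euler scheme. The sketch below is ours, not the paper's: it is one-dimensional, the control is frozen, the marked point process is replaced by a compound Poisson process with uniform marks, and the coefficient choices are hypothetical placeholders satisfying the Lipschitz/linear-growth conditions of Assumption \[assump: regu\_on\_coeff\].

```python
import numpy as np

def simulate_jump_sde(x0, mu, sigma, beta, jump_rate, T=1.0, n_steps=200, seed=0):
    """Euler scheme for dX = mu dt + sigma dW + jump terms.

    Jumps arrive as a Poisson process with intensity `jump_rate`
    (a stand-in for the marked point process lambda(ds, de));
    marks e are drawn uniformly from [0, 1].
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        # a.s. finitely many jumps on [0, T], in line with Assumption 1
        n_jumps = rng.poisson(jump_rate * dt)
        jump = sum(beta(t, x[k], rng.uniform()) for _ in range(n_jumps))
        x[k + 1] = x[k] + mu(t, x[k]) * dt + sigma(t, x[k]) * dW + jump
    return x

path = simulate_jump_sde(
    x0=1.0,
    mu=lambda t, x: -0.5 * x,      # Lipschitz drift (hypothetical)
    sigma=lambda t, x: 0.2,        # bounded volatility (hypothetical)
    beta=lambda t, x, e: 0.1 * e,  # jump size, bounded uniformly in e
    jump_rate=5.0,
)
```

The Gronwall argument of the remark guarantees that the continuous-time limit of such a scheme is the unique strong solution; the sketch only illustrates the path structure (diffusion plus finitely many jumps).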
Now, we are ready to introduce the auxiliary stochastic target problems (with a stopper) that we will analyze in this paper. The main problems, the zero-sum and cooperative controller-stopper games, will be introduced in Sections \[subsec:equivalence\] and \[sec:equivalence-subhedging\].
**The controller and stopper problems and the HJB operators** {#subsection: HJB}
-------------------------------------------------------------
Let $g: {\mathbb{R}}^d\rightarrow {\mathbb{R}}$ be a continuous function with polynomial growth. The value functions of the two target problems are defined respectively by $$\label{eq: value_function_super}
u_{{\text{unco}}}(t,x):=\inf\left\{y:\; \exists \nu \in {\mathcal{U}}^t_{{\text{unco}}} \text{ such that } Y_{t,x,y}^{\nu}(\rho)\geq g(X_{t,x}^{\nu}(\rho)) \; \mathbb{P}-\text{a.s. for all }\rho\in\mathcal{T}_{t}\right\},$$ $$\label{eq: value_function_sub}
u_{{\text{co}}}(t,x):=\sup\left\{y:\; \exists \nu \in {\mathcal{U}}^t_{{\text{co}}}\text{ and }\rho\in\mathcal{T}_{t} \text{ such that } Y_{t,x,y}^{\nu}(\rho)\leq g(X_{t,x}^{\nu}(\rho)) \; \mathbb{P}-\text{a.s.}\right\}.$$ Denote $b=(b_1,b_2,\cdots,b_I)^{\top}$ and $\beta=(\beta_1,\beta_2,\cdots,\beta_I)$. For a given ${\varphi}\in C({\mathbb{D}})$, we define the relaxed semi-limits $$\label{eq: HJB operators-super}
H^{*}(\Theta, {\varphi}):=\limsup_{\begin{subarray}{c}{\varepsilon}\searrow 0,\; \Theta^{'}\rightarrow\Theta \\ \eta\searrow 0,\; \psi\overset{\text{u.c.}}{\longrightarrow} {\varphi}\end{subarray}} H_{{\varepsilon},\eta} (\Theta^{'}, \psi) \;\; \text{and} \;\; H_{*}(\Theta, {\varphi}):=\liminf_{\begin{subarray}{c}{\varepsilon}\searrow 0,\; \Theta^{'}\rightarrow\Theta \\ \eta\searrow 0,\; \psi \overset{\text{u.c.}}{\longrightarrow} {\varphi}\end{subarray}} H_{{\varepsilon}, \eta} (\Theta^{'}, \psi), \footnote{The convergence $\psi \overset{\text{u.c.}}{\longrightarrow} {\varphi}$ is understood in the sense that $\psi$ converges uniformly on compact subsets to ${\varphi}$.}$$ $$\label{eq: HJB operators-sub}
F^{*}(\Theta, {\varphi}):=\limsup_{\begin{subarray}{c}{\varepsilon}\searrow 0,\; \Theta^{'}\rightarrow\Theta \\ \eta\searrow 0,\; \psi\overset{\text{u.c.}}{\longrightarrow} {\varphi}\end{subarray}} F_{{\varepsilon},\eta} (\Theta^{'}, \psi) \;\; \text{and} \;\; F_{*}(\Theta, {\varphi}):=\liminf_{\begin{subarray}{c}{\varepsilon}\searrow 0,\; \Theta^{'}\rightarrow\Theta \\ \eta\searrow 0,\; \psi \overset{\text{u.c.}}{\longrightarrow} {\varphi}\end{subarray}} F_{{\varepsilon}, \eta} (\Theta^{'}, \psi).$$ Here, for $\Theta = (t,x,y,p,A)\in{\mathbb{D}}\times{\mathbb{R}}\times{\mathbb{R}}^d\times\mathbb{M}^d$ (where $\mathbb{M}^d:={\mathbb{R}}^{d\times d}$), ${\varphi}\in C({\mathbb{D}})$, ${\varepsilon}\geq 0$ and $\eta\in[-1,1]$, $$\begin{gathered}
H_{{\varepsilon}, \eta}(\Theta, {\varphi}):=\sup_{u\in\mathcal{N}_{{\varepsilon}, \eta}(t,x,y,p,{\varphi})} \mathbf{L}^u(\Theta),\; F_{{\varepsilon}, \eta}(\Theta, {\varphi}):=\inf_{u\in\mathcal{M}_{{\varepsilon}, \eta}(t,x,y,p,{\varphi})} \mathbf{L}^u(\Theta), \text{where},\\
\begin{array}{c}
\mathbf{L}^u(\Theta):=\mu_{Y}(t,x,y,u) -\mu_{X}^{\top}(t,x,u) p -\frac{1}{2}\text{Tr}[\sigma_X\sigma_X^{\top}(t,x,u)A],\;\;N^{u}(t,x,y,p):=\sigma_Y(t,x,y,u)-\sigma_X^{\top}(t,x,u)p, \\
\Delta^{u,e}(t,x,y,{\varphi}):=\min_{1\leq i \leq I} \{b_i(t,x,y,u(e),e)-{\varphi}(t,x+\beta_i(t,x,u(e),e))+{\varphi}(t,x) \}, \\
\Pi^{u,e}(t,x,y,{\varphi}):=\max_{1\leq i \leq I} \{b_i(t,x,y,u(e),e)-{\varphi}(t,x+\beta_i(t,x,u(e),e))+{\varphi}(t,x) \}, \\
\mathcal{N}_{{\varepsilon},\eta}(t,x,y,p, {\varphi}):=\{u\in U: |N^{u}(t,x,y,p)|\leq{\varepsilon}\text{ and }
\Delta^{u,e}(t,x,y,{\varphi})\geq\eta\;\text{for}\; \hat{m}-\text{a.s.}\; e\in E \}, \\
\mathcal{M}_{{\varepsilon},\eta}(t,x,y,p, {\varphi}):=\{u\in U: |N^{u}(t,x,y,p)|\leq{\varepsilon}\text{ and }
\Pi^{u,e}(t,x,y,{\varphi})\leq\eta\;\text{for}\; \hat{m}-\text{a.s.}\; e\in E \},
\end{array}
\end{gathered}$$ where $\hat{m}$ is as in Assumption \[assump: lambda intensity kernel\]. For later use, we also define the following: $$\begin{array}{c}
J^{u,e}_i(t,x,y,{\varphi}):= b_i(t,x,y,u(e),e)-{\varphi}(t,x+\beta_i(t,x,u(e),e))+{\varphi}(t,x), \nonumber \\
\overline{J}^{u,e}(t,x,y,{\varphi}):= (J^{u,e}_1(t,x,y,{\varphi}), \cdots, J^{u,e}_I(t,x,y,{\varphi}))^{\top}, \nonumber \\
\mathscr{L}^{u}{\varphi}(t,x):={\varphi}_t(t,x)+ \mu_{X}^{\top}(t,x,u)D{\varphi}(t,x)+\frac{1}{2}\text{Tr}[\sigma_X\sigma_X^{\top}(t,x,u)D^2{\varphi}(t,x)].
\end{array}$$
For simplicity, we denote $H^*(t,x,{\varphi}(t,x),D{\varphi}(t,x), D^2{\varphi}(t,x),{\varphi})$ by $H^*{\varphi}(t,x)$ for ${\varphi}\in C^{1,2}({\mathbb{D}})$. For ${\varphi}\in C^2({\mathbb{R}}^d)$, we denote $H^*(T,x,{\varphi}(x),D{\varphi}(x), D^2{\varphi}(x),{\varphi})$ by $H^*{\varphi}(x)$. We will use similar notation for $H_*, F^{*}, F_{*}$ and other operators in later sections.
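To make the operators above concrete: for a finite candidate control set, $H_{{\varepsilon},\eta}$ can be evaluated by brute force, discarding every $u$ that violates the constraints defining $\mathcal{N}_{{\varepsilon},\eta}$ and maximizing $\mathbf{L}^u$ over the remaining controls. The sketch below is ours, not the paper's: it is one-dimensional with a single jump mark ($I=1$), and all coefficient choices are hypothetical placeholders.

```python
import numpy as np

def H_eps_eta(t, x, y, p, A, phi, controls, eps=1e-9, eta=0.0):
    """Brute-force evaluation of H_{eps,eta} = sup of L^u over the
    constrained set N_{eps,eta} (one state dimension, one jump mark).

    Toy coefficients (hypothetical, for illustration only):
      mu_X = u, sigma_X = 0.3, mu_Y = 0.5*u*y, sigma_Y = 0.3*u,
      b = 0.2, beta = 0.1*u.
    """
    best = -np.inf
    for u in controls:
        N = 0.3 * u - 0.3 * p                            # N^u = sigma_Y - sigma_X * p
        Delta = 0.2 - phi(t, x + 0.1 * u) + phi(t, x)    # Delta^{u,e}
        if abs(N) <= eps and Delta >= eta:               # is u in N_{eps,eta}?
            L = 0.5 * u * y - u * p - 0.5 * 0.3**2 * A   # L^u(Theta)
            best = max(best, L)
    return best  # -inf when N_{eps,eta} is empty

val = H_eps_eta(t=0.0, x=0.0, y=1.0, p=0.5, A=0.0,
                phi=lambda t, x: x, controls=[-1.0, -0.5, 0.0, 0.5, 1.0])
```

Note how the gradient constraint $|N^u|\leq{\varepsilon}$ can empty the control set entirely, which is precisely why the relaxed semi-limits $H^*$ and $H_*$ over $({\varepsilon},\eta,\Theta',\psi)$ are needed for the viscosity-solution analysis.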
Let $\nu_{1}, \nu_{2} \in {\mathcal{U}}^t_{{\text{unco}}}$ (resp. ${\mathcal{U}}^t_{{\text{co}}}$ ), $\tau \in \mathcal{T}_t$. The concatenation of $\nu_1$ and $ \nu_2 $ at $\tau$ is defined as $
\nu_1\otimes_{\tau} \nu_2 := \nu_1 \mathbbm{1}_{[0,\tau[}+ \nu_2 \mathbbm{1}_{[\tau,T]} \in{\mathcal{U}}_{{\text{unco}}}^{t}$ (resp. ${\mathcal{U}}^t_{{\text{co}}}$).[^4]
We will carry out Perron’s method to study the stochastic target problems with a non-cooperative stopper and a cooperative stopper, respectively, in Section \[sec:superhedging\] and \[sec:subhedging\].
Analysis of $u_{{\text{unco}}}$ defined in \[eq: value\_function\_super\] {#sec:superhedging}
===========================================
In this section, we use stochastic Perron's method to prove that $u^+_{{\text{unco}}}$, the pointwise infimum of a class of carefully defined functions (the stochastic super-solutions introduced below), is a viscosity sub-solution of $$\label{eq: sub_HJB equation_interior-superhedging}
\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+ H_*{\varphi}(t,x)\}\leq 0 \;\;\text{in}\;\;{\mathbb{D}_{i}},$$ and that $u^-_{{\text{unco}}}$, the pointwise supremum of the class of stochastic sub-solutions, is a viscosity super-solution of $$\label{eq: super_HJB equation_interior-superhedging}
\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+ H^*{\varphi}(t,x)\}\geq 0 \;\;\text{in}\;\;{\mathbb{D}_{i}}.$$ The boundary conditions will be discussed in Theorem \[thm: bd\_viscosity\_property-superhedging\]. These envelopes will be defined in terms of the collections of stochastic super- and sub-solutions that we introduce next.
\[Stochastic super-solutions\] \[def: Stochasticsuper-solution-super\] A continuous function $w: {\mathbb{D}}\rightarrow \mathbb{R}$ is called a stochastic super-solution of if
1. $w(t, x)\geq g(x)$ and for some $C>0$ and $n\in{\mathbb{N}}$,[^5] $|w(t,x)|\leq C(1+|x|^{n})$ for all $(t,x)\in {\mathbb{D}}$.
2. Given $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, for any $\tau\in\mathcal{T}_t$ and $\nu\in {\mathcal{U}}^t_{{\text{unco}}}$, there exists $\tilde {\nu}\in {\mathcal{U}}^t_{{\text{unco}}}$ such that $$Y(\rho )\geq w(\rho, X(\rho )) \quad \mathbb{P}-\text{a.s.} \text{ on } \{ Y(\tau)\geq w(\tau, X(\tau)) \}$$ for all $\rho \in \mathcal{T}_{\tau}$, where $X:= X_{t,x}^{\nu\otimes_{\tau}\tilde{\nu}}$ and $Y:=Y_{t,x,y}^{\nu\otimes_{\tau}\tilde{\nu}}$.
\[Stochastic sub-solutions\] \[def: Stochasticsub-solution-super\] A continuous function $w: {\mathbb{D}}\rightarrow \mathbb{R}$ is called a stochastic sub-solution of if
1. $w(T, x)\leq g(x)$ and for some $C>0$ and $n\in{\mathbb{N}}$, $|w(t,x)|\leq C(1+|x|^{n})$ for all $(t,x)\in {\mathbb{D}}$.
2. Given $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, for any $\tau\in\mathcal{T}_t$, $\nu\in {\mathcal{U}}_{{\text{unco}}}^t$ and $B\subset \{Y(\tau)<w(\tau,X(\tau))\}$ satisfying $B\in\mathcal{F}_\tau^t$ and $\mathbb{P}(B)>0$, there exists $\rho\in\mathcal{T}_{\tau}$ such that $$\mathbb{P}(Y(\rho )< g(X(\rho ))|B)>0.$$ Here, we use the notation $X:= X_{t,x}^{\nu}$ and $Y:=Y_{t,x,y}^{\nu}$.
Denote the sets of stochastic super-solutions and sub-solutions by ${\mathbb{U}}^+_{{\text{unco}}}$ and ${\mathbb{U}}^-_{{\text{unco}}}$, respectively.
\[assump:semisolution\_not\_empty\_super\] ${\mathbb{U}}^+_{{\text{unco}}}$ and ${\mathbb{U}}^-_{{\text{unco}}}$ are not empty.
We will provide sufficient conditions which guarantee Assumption \[assump:semisolution\_not\_empty\_super\] in Appendix A. These conditions will be useful when we analyze the zero-sum controller-stopper game.
We are now ready to define the envelopes we mentioned above.
Let $u^+_{{\text{unco}}}:=\inf_{w \in \mathbb{U}^{+}_{{\text{unco}}}} w$ and $u^-_{{\text{unco}}}:=\sup_{w \in \mathbb{U}^{-}_{{\text{unco}}}} w$.
- For any stochastic super-solution $w$, choose $\tau=t$. If $y\geq w(t,x)$, then there exists $\tilde{\nu}\in {\mathcal{U}}_{{\text{unco}}}^t$ such that $Y_{t,x,y}^{\tilde{\nu}}(\rho)\geq w\left(\rho, X_{t,x}^{\tilde{\nu}}(\rho)\right)\geq g\left(X_{t,x}^{\tilde{\nu}}(\rho)\right)$ $\mathbb{P}$-a.s. for all $\rho\in\mathcal{T}_{t}$. Hence, $y\geq w(t,x)$ implies that $y\geq u_{{\text{unco}}}(t,x)$ by \[eq: value\_function\_super\]. This means that $w\geq u_{{\text{unco}}}$ and $u^+_{{\text{unco}}}\geq u_{{\text{unco}}}$. By the definition of ${\mathbb{U}}^{+}_{{\text{unco}}}$, we know that $u^+_{{\text{unco}}}(t,x)\geq g(x)$ for all $(t,x)\in{\mathbb{D}}$.
- For any stochastic sub-solution $w$, if $y<w(t,x)$, by choosing $\tau=t$, we get from the second property of Definition \[def: Stochasticsub-solution-super\] that for any $\nu\in {\mathcal{U}}^t_{{\text{unco}}}$, $\mathbb{P}\left(Y_{t,x,y}^{\nu}(\rho)< g(X_{t,x}^{\nu}(\rho))\right)> 0$ for some $\rho\in\mathcal{T}_{t}$. Therefore, by \[eq: value\_function\_super\], $y<w(t,x)$ implies that $y\leq u_{{\text{unco}}}(t,x)$. This means that $w\leq u_{{\text{unco}}}$ and $u^-_{{\text{unco}}}\leq u_{{\text{unco}}}$. By the definition of ${\mathbb{U}}^{-}_{{\text{unco}}}$, it holds that $u^{-}_{{\text{unco}}}(T,x)\leq g(x)$ for all $x\in{\mathbb{R}}^d$.
In short, $$\label{eq:intfvmavp-superhedging}
u^-_{{\text{unco}}} = \sup _{w\in \mathbb{U}^-_{{\text{unco}}}} w\leq u_{{\text{unco}}} \leq \inf _{w\in \mathbb{U}^+_{{\text{unco}}}}w= u_{{\text{unco}}}^+.$$
Viscosity Property in ${\mathbb{D}_{i}}$ {#subsec:interior}
----------------------------------------
As in [@Moreau], the proof of the sub-solution property requires a regularity assumption on the set-valued map $\mathcal{N}_{0,\eta}(\cdot, \psi)$.
\[assump: regularity-super\] For $\psi\in C({\mathbb{D}})$ and $\eta>0$, let $B$ be a subset of ${\mathbb{D}}\times{\mathbb{R}}\times{\mathbb{R}}^d$ such that $\mathcal{N}_{0,\eta}(\cdot, \psi)\neq\emptyset$ on $B$. Then for every ${\varepsilon}>0$, $(t_0,x_0,y_0,p_0)\in Int(B)$ and $u_0\in\mathcal{N}_{0, \eta}(t_0,x_0,y_0,p_0,\psi) $, there exist an open neighborhood $B'$ of $(t_0,x_0,y_0,p_0)$ and a locally Lipschitz continuous map $\hat{\nu}$ defined on $B'$ such that $\|\hat{\nu}(t_0,x_0,y_0,p_0)-u_0\|_{U}\leq {\varepsilon}$ and $\hat{\nu}(t,x,y,p)\in \mathcal{N}_{0, \eta}(t,x,y,p, \psi)$ for all $(t,x,y,p)\in B'$.
The following two lemmas can be easily checked. Hence, we omit the proofs.
${\mathbb{U}}^{+}_{{\text{unco}}}$ and ${\mathbb{U}}^{-}_{{\text{unco}}}$ are closed under pairwise minimization and maximization, respectively. That is,
1. if $w_1, w_2\in \mathbb{U}^+_{{\text{unco}}}$, then $w_1\wedge w_2\in \mathbb{U}^+_{{\text{unco}}}$;
2. if $w_1,w_2\in \mathbb{U}^-_{{\text{unco}}}$, then $w_1\vee w_2\in \mathbb{U}^-_{{\text{unco}}}$.
\[lem: monotone seq approaches v+ or v\_–superhedging\] There exists a non-increasing sequence $\{w_{n}\}_{n=1}^{\infty}\subset{\mathbb{U}}^{+}_{{\text{unco}}}$ such that $w_n\searrow u^+_{{\text{unco}}}$ and a non-decreasing sequence $\{v_{n}\}_{n=1}^{\infty}\subset{\mathbb{U}}^{-}_{{\text{unco}}}$ such that $v_n\nearrow u^-_{{\text{unco}}}$.
\[thm: main theorem\_interior-superhedging\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump:semisolution\_not\_empty\_super\] and \[assump: regularity-super\], $u^+_{{\text{unco}}}$ is an upper semi-continuous (USC) viscosity sub-solution of . On the other hand, under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\] and \[assump:semisolution\_not\_empty\_super\], $u^-_{{\text{unco}}}$ is a lower semi-continuous (LSC) viscosity super-solution of .
Since the proof of the viscosity sub-solution property of $u_{{\text{unco}}}^+$ is similar to Step 1 in the proof of Theorem 3.1 in [@BayraktarLi-Jump], we will only show below that $u^-_{{\text{unco}}}$ is a viscosity super-solution.
**Step A:** We show in this step that $u^{-}_{{\text{unco}}}(t,x)\geq g(x)$ for all $(t,x)\in{\mathbb{D}}$. Assume, on the contrary, that for some $(t_{0},x_{0})\in{\mathbb{D}}$, there exists $\eta>0$ such that $$\label{eq:superhedging-interior-u-minus-step2A-contra}
2\eta=g(x_{0})-u^{-}_{{\text{unco}}}(t_{0},x_{0})>0.$$ Choose an arbitrary $w\in{\mathbb{U}}_{{\text{unco}}}^{-}$. By the definition of ${\mathbb{U}}_{{\text{unco}}}^{-}$ and lower semi-continuity of $g$, there exists ${\varepsilon}>0$ such that $$g(x)-w(t,x)>\eta, \; g(x)-g(x_{0})>-\frac{\eta}{2}, \; |w(t,x)-w(t_{0},x_{0})|\leq \frac{\eta}{2}\text{ for all } (t,x)\in\text{cl}(B_{{\varepsilon}}(t_{0},x_{0})).$$ Define $$w'(t,x):=
\left \{
\begin{split}
&w(t,x)+(g(x_{0})-\eta-w(t_{0},x_{0}))\left(1-\text{dist}((t,x), (t_{0},x_{0}))/{\varepsilon}\right) &\text{ for } (t,x)\in\text{cl}(B_{{\varepsilon}}(t_0, x_0)),\\
&w(t,x) & \text{ for } (t,x)\notin\text{cl}(B_{{\varepsilon}}(t_0, x_0)).
\end{split}
\right.$$ Obviously, $w'\geq w$ and $w'$ is continuous with polynomial growth. In addition, $$\label{eq:superhedging-interior-u-minus-step2A-1}
\{(t,x):w(t,x)<w'(t,x)\}=B_{{\varepsilon}}(t_{0},x_{0}) \quad \text{and}$$ $$\label{eq:superhedging-interior-u-minus-step2A-2}
w'(t,x)\leq w(t,x)+ (g(x_{0})-\eta-w(t_{0},x_{0}))< g(x_{0})-\frac{\eta}{2}<g(x) \text{ for }(t,x)\in\text{cl}(B_{{\varepsilon}}(t_{0},x_{0})).$$ The equation above, along with the fact that $w\in{\mathbb{U}}^{-}_{{\text{unco}}}$, implies that $w'(T,x)\leq g(x)$ for all $x\in{\mathbb{R}}^{d}$. Noting that $w'(t_{0},x_{0})=g(x_{0})-\eta>u_{{\text{unco}}}^{-}(t_{0},x_{0})$ due to \[eq:superhedging-interior-u-minus-step2A-contra\], we would obtain a contradiction if we could show $w'\in{\mathbb{U}}^{-}_{{\text{unco}}}$.
To prove that $w'\in{\mathbb{U}}^{-}_{{\text{unco}}}$, fix $(t,x,y)\in {\mathbb{D}}_i\times\mathbb{R}$, $\tau \in \mathcal{T}_t$ and $\nu\in\mathcal{U}_{{\text{unco}}}^{t}$. For $w\in{\mathbb{U}}_{{\text{unco}}}^{-}$, let $\rho^{w,\tau,\nu} \in \mathcal{T}_{\tau}$ be the “optimal” stopping time satisfying the second item in Definition \[def: Stochasticsub-solution-super\]. In order to show that $w'\in{\mathbb{U}}^-_{{\text{unco}}}$, we want to construct an “optimal” stopping time $\rho$ which works in the sense of Definition \[def: Stochasticsub-solution-super\]. Let $A=\{w(\tau,X(\tau))=w'(\tau,X(\tau))\}\in\mathcal{F}^t_{\tau}$ and $$\rho=\mathbbm{1}_{A}\rho^{w,\tau,\nu}+\mathbbm{1}_{A^c}\tau.$$ Obviously, $\rho\in\mathcal{T}_{\tau}$. It suffices to show $
\mathbb{P}(Y(\rho)<g(X(\rho))|B)>0
$ for any $B\subset \{Y(\tau)<w'(\tau,X(\tau))\}$ satisfying $\mathbb{P}(B)>0$ and $B\in\mathcal{F}^t_{\tau}$. The following two scenarios together will yield the desired result.
\(i) If $\mathbb{P}(B\cap A)>0:$ We know that $B\cap A \subset \{Y(\tau)<w(\tau,X(\tau))\}$ and $B\cap A \in \mathcal{F}^t_{\tau}$. From the fact $w\in{\mathbb{U}}^{-}_{{\text{unco}}}$ and the definition of $\rho$ on $A$, it holds that $$\label{eq:case1_super-solution-property-superhedging}
\mathbb{P}(Y(\rho)<g(X(\rho))|B\cap A)=\mathbb{P}(Y(\rho^{w,\tau,\nu})<g(X(\rho^{w,\tau,\nu}))|B\cap A)>0.$$
\(ii) If $\mathbb{P}(B\cap A^c)>0$: $(\tau,X(\tau))\in B_{{\varepsilon}}(t_0,x_0)$ on $A^{c}$ from \[eq:superhedging-interior-u-minus-step2A-1\], which implies $w'(\tau,X(\tau))<g(X(\tau))$ from \[eq:superhedging-interior-u-minus-step2A-2\]. Since $\rho=\tau$ on $A^{c}$, $$\label{eq:case2_super-solution-property-superhedging}
\mathbb{P}(Y(\rho)<g(X(\rho))|B\cap A^c)\geq \mathbb{P}(Y(\tau)<w'(\tau,X(\tau))|B\cap A^c) = 1>0.$$ **Step B:** We claim that $u^{-}_{{\text{unco}}}$ is a viscosity super-solution of $$-\partial_t{\varphi}(t,x)+ H^*{\varphi}(t,x)\geq 0.$$ The proof is similar to the proof in Step 2 of Theorem 3.1 in [@BayraktarLi-Jump], but it is worth pointing out the following difference: after $w^{\kappa}$ is defined, we need to construct an optimal stopping time $\rho$ for $w^{\kappa}$ given $\tau$ and $\nu$ (as we did in Step A). In fact, it is easy to see that $\rho$ can be defined as follows: $$\rho=\mathbbm{1}_{A} \rho^{w,\tau,\nu} +\mathbbm{1}_{A^{c}} \rho^{w, \theta, \nu},$$ where $\rho^{w,\tau,\nu}$ (resp. $\rho^{w,\theta,\nu}$) is the “optimal” stopping time in Definition \[def: Stochasticsub-solution-super\] for $w$ given $\tau$ (resp. $\theta$) and $\nu$. Here, $\theta$ is the same as in Step 2 of the proof of Theorem 3.1 in [@BayraktarLi-Jump].
Boundary Conditions {#subsec: bdd cond}
-------------------
By the definition of $u_{{\text{unco}}}$, it holds that $u_{{\text{unco}}}(T,x)=g(x)$ for all $x\in{\mathbb{R}}^d$. However, $u^+_{{\text{unco}}}$ and $u^-_{{\text{unco}}}$ may not satisfy this boundary condition. Define $$\mathbf{N}(t,x,y,p, \psi):=\{(r,s)\in{\mathbb{R}}^d\times{\mathbb{R}}: \exists u \in U, \;\text{s.t.}\; r=N^{u}(t,x,y,p)\;\text{and}\;s\leq \Delta^{u,e}(t,x,y,\psi)\; \hat{m}-\text{a.s.}\}$$ and $$\label{eq:delta defi}
\delta:=\text{dist}(0, \mathbf{N}^c)-\text{dist}(0, \mathbf{N}),$$ where dist denotes the Euclidean distance. It holds that $$\label{eq: delta>0 equi int}
0\in\text{int}(\mathbf{N}(t,x,y,p,\psi))\;\;\text{iff}\;\;\delta(t,x,y,p,\psi)>0.$$ We refer the reader to [@Moreau] for a discussion of the boundary conditions.
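The sign characterization \[eq: delta>0 equi int\] can be sanity-checked numerically: approximate both distances in \[eq:delta defi\] on a finite grid and compare. The sketch below is ours, not the paper's, and takes $\mathbf{N}$ to be a half-plane $\{(r,s): s\leq c\}$, a hypothetical stand-in for the true set.

```python
import numpy as np

def delta_on_grid(indicator, lo=-2.0, hi=2.0, n=41):
    """Grid approximation of delta = dist(0, N^c) - dist(0, N)."""
    xs = np.linspace(lo, hi, n)
    pts = np.array([(r, s) for r in xs for s in xs])
    norms = np.linalg.norm(pts, axis=1)
    inside = np.array([indicator(r, s) for r, s in pts])
    d_in = norms[inside].min() if inside.any() else np.inf
    d_out = norms[~inside].min() if (~inside).any() else np.inf
    return d_out - d_in

# 0 is interior to {s <= 0.5}   =>  delta > 0
# 0 lies outside  {s <= -0.5}   =>  delta < 0
d_pos = delta_on_grid(lambda r, s: s <= 0.5)
d_neg = delta_on_grid(lambda r, s: s <= -0.5)
```

Only the sign of $\delta$ matters for the terminal-condition analysis; the grid spacing controls the accuracy of its magnitude.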
The upper (resp. lower) semi-continuous envelope of $\delta$ is denoted by $\delta^* \;(\text{resp}.\; \;\delta_*)$. Let $$\label{eq: boundary_u-_def}
u^+_{{\text{unco}}}(T-,x)=\limsup _{(t<T, x')\rightarrow (T,x)} u^+_{{\text{unco}}}(t,x'), \;\;u^-_{{\text{unco}}}(T-,x)=\liminf _{(t<T, x')\rightarrow (T,x)} u^-_{{\text{unco}}}(t,x').$$ The following theorem is an adaptation of Theorem 4.1 in [@BayraktarLi-Jump].
\[thm: bd\_viscosity\_property-superhedging\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump:semisolution\_not\_empty\_super\] and \[assump: regularity-super\], $u^{+}_{{\text{unco}}}(T-,\cdot)$ is a USC viscosity sub-solution of $\min\{{\varphi}(x)-g(x), \delta_{*}{\varphi}(x)\}\leq 0\text{ on } {\mathbb{R}}^d.$ On the other hand, under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\] and \[assump:semisolution\_not\_empty\_super\], $u^{-}_{{\text{unco}}}(T-,\cdot)$ is an LSC viscosity super-solution of $
\min\{{\varphi}(x)-g(x), \delta^{*}{\varphi}(x) \}\geq 0\text{ on }{\mathbb{R}}^d.
$
Zero-sum Controller-Stopper Game {#subsec:equivalence}
================================
In this section we show that the HJB equation associated to a stochastic controller-stopper game can be deduced from a stochastic target problem with a non-cooperative stopper. Given a bounded continuous function $g:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$, we define a stochastic controller-stopper game by $$\mathbf{u}_{{\text{unco}}}(t,x):=\inf_{\nu\in\mathcal{U}^t}\sup_{\rho\in\mathcal{T}_{t}}\mathbb{E}[g(X_{t,x}^{\nu}(\rho))].$$ We follow the setup of Section \[sec:prob\] with one exception: $\mathcal{U}^{t}$ is the collection of all the $\mathbb{F}^{t}$-predictable processes in $\mathbb{L}^2(\Omega\times[0,T], \mathcal{F}\otimes\mathcal{B}[0,T], \mathbb{P}\otimes \lambda_{L}; U)$, where $U\subset {\mathbb{R}}^{d}$ and $X$ follows the SDE $$dX(s)=\mu_{X}(s,X(s),\nu(s))ds+\sigma_{X}(s,X(s),\nu(s))dW_s+\int_{E} \beta(s,X(s-),\nu(s), e)\lambda(ds,de).$$ The following embedding lemma is an adaptation of a result in [@Bouchard_Equivalence].
\[eq:equivalence\_application-superhedging\] Suppose Assumption \[assump: lambda intensity kernel\] holds. Define $$\begin{gathered}
u_{{\text{unco}}}(t,x):=\inf\{y\in{\mathbb{R}}: \exists (\nu, \alpha, \gamma)\in \mathcal{U}^t\times\mathcal{A}^t\times\Gamma^t_{{\text{unco}}}\;\text{s.t.}\;Y_{t,y}^{\alpha,\gamma}(\rho)\geq g(X_{t,x}^{\nu}(\rho))\;\text{for all } \rho\in\mathcal{T}_{t}\}, \text{where} \\
Y_{t,y}^{\alpha,\gamma}(\cdot):=y+\int_t^{\cdot}\alpha^{\top}(s)dW_s+\int_t^{\cdot}\int_E\gamma^{\top}(s,e)\tilde{\lambda}(ds,de),
\end{gathered}$$ and $\mathcal{A}^t$ and $\Gamma^{t}_{{\text{unco}}}$ are the collections of ${\mathbb{R}}^{d}$-valued and $\mathbb{L}^{2}(E, \mathcal{E}, \hat{m}; {\mathbb{R}}^{I})$-valued processes, respectively, satisfying the the measurability and the integrablity condition in Section \[sec:prob\]. Then $u_{{\text{unco}}}=\mathbf{u}_{{\text{unco}}}$ on ${\mathbb{D}}$.
For fixed $\nu\in\mathcal{U}^{t}$, let $$A^{\nu}(s):=\esssup_{\tau\in\mathcal{T}_s}\mathbb{E}[g(X^{\nu}_{t,x}(\tau))|\mathcal{F}_s], \quad s \geq t.$$ Then $A^{\nu}$ is the Snell envelope (starting at $t$) of $g(X_{t,x}^{\nu})$ and thus a super-martingale. Moreover, $$\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[g(X^{\nu}_{t,x}(\tau))|\mathcal{F}_t] + A^{\nu}(\rho)-A^{\nu}(t)\geq g(X^{\nu}_{t,x}(\rho))\text{ for all }\rho\in\mathcal{T}_{t}.$$ By the Doob-Meyer decomposition theorem, $A^{\nu}(s)=M^{\nu}(s)-C^{\nu}(s)$ for $s\in[t,T]$, where $M^{\nu}$ is a martingale on $[t,T]$ and $C^{\nu}$ is an increasing adapted process with $C^{\nu}(t)=0$. Therefore, $$\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[g(X^{\nu}_{t,x}(\tau))|\mathcal{F}_t] + M^{\nu}(\rho)-M^{\nu}(t)\geq g(X^{\nu}_{t,x}(\rho))\text{ for all }\rho\in\mathcal{T}_{t}.$$ Denote $\mathcal{M}_{{\text{unco}}}=\{M^{\nu}: \nu\in{\mathcal{U}}^{t}\}$. In view of Lemma \[lem:stochastic\_target\_representation\_superhedging\], it suffices to check that $$\label{eq:inclusion_equi}
\mathcal{M}_{{\text{unco}}}\subset
\mathcal{M}:=\left\{Y_{t,y}^{\alpha,\gamma}(\cdot): y\in{\mathbb{R}}, \alpha\in\mathcal{A}^{t}, \gamma\in\Gamma^{t}_{{\text{unco}}} \right\}.$$ In fact, by the martingale representation theorem (see e.g. Theorem 14.5.7 in [@MR3443368]), for any $\nu\in\mathcal{U}^{t}$, $M^{\nu}$ can be represented in the form of $Y_{t,y}^{\alpha, \gamma}$ for some $\alpha \in\mathcal{A}^{t}$ and $\gamma\in\Gamma^{t}_{0}$, where $\Gamma_{0}^{t}$ is the collection of $\mathbb{L}^{2}(E, \mathcal{E}, \hat{m}; {\mathbb{R}}^{I})$-valued processes satisfying all of the admissibility conditions except for $\eqref{eq:admissibility_super}$. We now prove that $\Gamma_{0}^{t}$ in the claim above can be actually replaced by $\Gamma^{t}_{{\text{unco}}}$. Assume, contrary to , that there exists $\nu_{0}\in\mathcal{U}^{t}$ such that $$\mathcal{M}^{\nu_{0}}(\cdot)=y+\int_t^{\cdot}\alpha_{0}^{\top}(s)dW_s+\int_t^{\cdot}\int_E\gamma_{0}^{\top}(s,e)\tilde{\lambda}(ds,de)$$ for some $y\in\mathbb{R}$, $\alpha_{0}\in\mathcal{A}^{t}$ and $\gamma_{0}\in\Gamma^{t}_{0}$, but does not hold. This means that for $K>2\|g\|_{\infty}$, there exists $\tau_{0}\in\mathcal{T}_{t}$ such that $\mathbb{P}\left(\int_{E}\gamma_{0}^{\top}(\tau_{0},e) \lambda(\{\tau_{0}\},de)\leq -K\right)>0. $ Therefore, $$M^{\nu_{0}}(\tau_{0})-M^{\nu_{0}}(\tau_{0}-) = \int_{E}\gamma_{0}^{\top}(\tau_{0},e) \lambda(\{\tau_{0}\},de)\leq -K\;\;\text{with positive probability},$$ which further implies that $$A^{\nu_{0}}(\tau_{0})-A^{\nu_{0}}(\tau_{0}-)\leq -K \;\;\text{with positive probability}.$$ This contradicts the fact that $A^{\nu_{0}}$ is (strictly) bounded by $\frac{K}{2}$.
Let $\mathbf{H}^*$ be the USC envelope of the LSC map $\mathbf{H}:{\mathbb{D}}\times{\mathbb{R}}^d\times\mathbb{M}^d\times C({\mathbb{D}}) \rightarrow {\mathbb{R}}$ defined by $$\begin{array}{c}
\mathbf{H}: (t,x,p,A, {\varphi}) \rightarrow \sup_{u\in U}\{-I[{\varphi}](t,x,u)-\mu_X^{\top}(t,x,u)p-\frac{1}{2}\text{Tr}[\sigma_X\sigma_X^{\top}(t,x,u)A]\},\;\text{where} \\
I[{\varphi}](t,x,u)=\sum_{1\leq i\leq I}\int_E \left( {\varphi}(t,x+\beta_i(t,x,u,e))-{\varphi}(t,x)\right)m_i(de).
\end{array}$$
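For intuition only, the pointwise supremum defining $\mathbf{H}$ can be approximated by discretizing the control set $U$. The sketch below uses toy one-dimensional coefficients ($\mu_X(t,x,u)=u$, $\sigma_X\equiv 1$, $\beta\equiv 0$, $U=[-1,1]$), which are illustrative assumptions rather than the model of this paper; for these coefficients the supremum has the closed form $|p|-\tfrac{1}{2}A$.

```python
import numpy as np

def hamiltonian(p, A, n=2001):
    """Grid approximation of H(p, A) = sup_{u in U} { -mu_X(u)*p - 0.5*sigma_X(u)^2*A }
    for the toy coefficients mu_X(u) = u, sigma_X(u) = 1 and no jump part (I[phi] = 0)."""
    u = np.linspace(-1.0, 1.0, n)   # discretization of the control set U = [-1, 1]
    return np.max(-u * p - 0.5 * A)

# The grid contains the maximizer u = -sign(p), so the closed form |p| - A/2 is attained.
print(hamiltonian(2.0, 1.0))   # 1.5
```

The same discretization applies verbatim to the infimum operator $\mathbf{F}$ of the cooperative problem below, with `max` replaced by `min`.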
\[thm: optimal control-superhedging\] Under Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\], $u^+_{{\text{unco}}}$ is a USC viscosity sub-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+\mathbf{H}{\varphi}(t,x)\}\leq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^+_{{\text{unco}}}(T-,x)\leq g(x)$ for all $x\in {\mathbb{R}}^d$. On the other hand, $u^-_{{\text{unco}}}$ is an LSC viscosity super-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+\mathbf{H}^{*}{\varphi}(t,x)\}\geq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^-_{{\text{unco}}}(T-,x)\geq g(x)$ for all $x\in{\mathbb{R}}^{d}$.
It is easy to check Assumption \[assump: regularity-super\] for the stochastic target problem. Since $g$ is bounded, we can check that all of the assumptions in Appendix A are satisfied, which implies that Assumption \[assump:semisolution\_not\_empty\_super\] holds. From Theorem \[thm: main theorem\_interior-superhedging\], $u^+_{{\text{unco}}}$ is a USC viscosity sub-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+H_{*}{\varphi}(t,x)\}\leq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^-_{{\text{unco}}}$ is an LSC viscosity super-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+H^{*}{\varphi}(t,x)\}\geq 0\text{ on } {\mathbb{D}_{i}}.$$ From Proposition 3.3 in [@Bouchard_Equivalence], $H^*\leq \mathbf{H}^*$ and $H_*\geq \mathbf{H}$. This implies that the viscosity properties in the parabolic interior hold. From the definition of $\delta$ in , we know that $$\begin{array}{ll}
\mathbf{N}(t,x,y,p,{\varphi})=&\{(q,s)\in{\mathbb{R}}^d\times{\mathbb{R}}: \exists (u,a,r)\in U\times{\mathbb{R}}^d\times{\mathbb{L}}^2(E,\mathcal{E}, \hat{m};{\mathbb{R}}^I)\;\text{s.t.}
\;q=a-\sigma_X^{\top}(t,x,u)p \\ &\text{and }s\leq \min_{1\leq i\leq I}\{r_i(e)-{\varphi}(t,x+\beta_i(t,x,u,e))+{\varphi}(t,x)\} \; \hat{m}-\text{a.s.}\; e\in E\; \}.
\end{array}$$ Obviously, $\mathbf{N}={\mathbb{R}}^d\times{\mathbb{R}}$. Therefore, $\delta=\infty$ and the boundary conditions hold.
The following two corollaries show that $\mathbf{u}_{{\text{unco}}}$ is the unique viscosity solution to its associated HJB equation. We omit the proofs, as they are relatively simple given the above result.
\[coro2-superhedging\] Suppose that Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\] hold, $\mathbf{H}=\mathbf{H}^*$ on $\{\mathbf{H}<\infty\}$ and there exists an LSC function $\mathbf{G}:{\mathbb{D}}\times{\mathbb{R}}\times{\mathbb{R}}^d\times\mathbb{M}^d\times C({\mathbb{D}})\cap \{\mathbf{H}<\infty\}\rightarrow{\mathbb{R}}$ such that $$\begin{array}{c}
(a)\; \mathbf{H}(t,x,y,p,M,{\varphi})<\infty \implies \mathbf{G}(t,x,y,p,M,{\varphi})\leq 0, \\
(b)\;\mathbf{G}(t,x,y,p,M,{\varphi})<0\implies \mathbf{H}(t,x,y,p,M,{\varphi})<\infty.
\end{array}$$ Then $u^+_{{\text{unco}}}$ $(\text{resp. } u^-_{{\text{unco}}})$ is a USC $($resp. an LSC$)$ viscosity sub-solution $($resp. super-solution$)$ of $$\min\{{\varphi}(t,x)-g(x), \max\{-\partial_t{\varphi}(t,x)+\mathbf{H}{\varphi}(t,x), \mathbf{G}{\varphi}(t,x)\}\}=0\;\;\text{on}\;\;{\mathbb{D}}_{i}.$$
\[Remark:existence of G\] Suppose that $b = \beta = 0$ and that the state process $X$ follows the set-up in [@Bayraktar_and_Sirbu_SP_HJBEqn]. In the case of one-dimensional utility maximization, where $H(t,x,p,M) = p/2M^{2}$, one can see that $H(t,x,\cdot)$ is continuous and finite on ${\mathbb{R}}\times {\mathbb{R}}$ except at $(0,0)$. Then one can easily check that $G= - e^{-H}$ satisfies all the properties in Corollary \[coro2-superhedging\].
Suppose that all the assumptions in Corollary \[coro2-superhedging\] hold. Then $u^{+}_{{\text{unco}}}(T-,x)=u^{-}_{{\text{unco}}}(T-,x)=g(x)$. Moreover, if the comparison principle holds for $$\min\{{\varphi}(t,x)-g(x), \max\{-\partial_t{\varphi}(t,x)+\mathbf{H}{\varphi}(t,x), \mathbf{G}{\varphi}(t,x)\}\}=0\;\;\text{on}\;\;{\mathbb{D}}_{i},$$ then $u_{{\text{unco}}}$$(=\mathbf{u}_{{\text{unco}}})$ is the unique continuous viscosity solution with $u_{{\text{unco}}}(T,x)=g(x)$.
As for the assumptions needed for the comparison principle to hold, we refer the reader to Theorem 4.1 in [@Pham1998] for a similar comparison principle result; this yields an example in which the comparison principle holds (up to slight modifications). A more general result for controlled jumps is provided in [@Barles2008].
Analysis of $u_{{\text{co}}}$ defined in {#sec:subhedging}
=========================================
In this section, using stochastic Perron’s method, we prove that an appropriate upper bound of $u_{{\text{co}}}$ is a viscosity sub-solution of $$\label{eq: sub_HJB equation_interior-subhedging}
\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+ F_*{\varphi}(t,x)\}\leq 0 \;\;\text{in}\;\;{\mathbb{D}_{i}}$$ and an appropriate lower bound is a viscosity super-solution of $$\label{eq: super_HJB equation_interior-subhedging}
\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+ F^*{\varphi}(t,x)\}\geq 0 \;\;\text{in}\;\;{\mathbb{D}_{i}}.$$ The boundary conditions will be deferred to Theorem \[thm: bd\_viscosity\_property\_subhedging\]. To construct the aforementioned upper and lower envelopes, we next introduce two classes of functions.
\[Stochastic super-solutions\] \[def: Stochasticsuper-solution\_subhedging\] A continuous function $w: {\mathbb{D}}\rightarrow \mathbb{R}$ is called a stochastic super-solution of if
1. $w(t, x)\geq g(x)$ and for some $C>0$ and $n\in{\mathbb{N}}$, $|w(t,x)|\leq C(1+|x|^{n})$ for all $(t,x)\in {\mathbb{D}}$.
2. Given $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, for any $\tau\in\mathcal{T}_t$, $\rho \in \mathcal{T}_{\tau}$ and $\nu\in {\mathcal{U}}_{{\text{co}}}^{t}$, we have $$\mathbb{P}(Y(\rho )>w(\rho, X(\rho))|B)>0$$ for any $B\subset \{Y(\tau)>w(\tau,X(\tau))\}$ satisfying $B\in\mathcal{F}_\tau^t$ and $\mathbb{P}(B)>0$. Here, $X:= X_{t,x}^{\nu}$ and $ Y:=Y_{t,x,y}^{\nu}$.
\[Stochastic sub-solutions\] \[def: Stochasticsub-solution\_subhedging\] A continuous function $w: {\mathbb{D}}\rightarrow \mathbb{R}$ is called a stochastic sub-solution of if
1. $w(T, x)\leq g(x)$ for all $x\in\mathbb{R}^d$ and for some $C>0$ and $n\in{\mathbb{N}}$, $|w(t,x)|\leq C(1+|x|^{n})$ for all $(t,x)\in {\mathbb{D}}$.
2. Given $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, for any $\tau\in\mathcal{T}_t$ and $\nu\in {\mathcal{U}}_{{\text{co}}}^t$, there exist $\rho \in \mathcal{T}_{\tau}$ and $\tilde{\nu}\in\mathcal{U}_{{\text{co}}}^t$ such that $$Y(\rho )\leq g(X(\rho)) \text{ on } \{Y(\tau)\leq w(\tau,X(\tau))\},$$ where $X:= X_{t,x}^{\nu\otimes_{\tau}\tilde{\nu}}$ and $ Y:=Y_{t,x,y}^{\nu\otimes_{\tau}\tilde{\nu}}$.
Denote the sets of stochastic super-solutions and sub-solutions by ${\mathbb{U}}^+_{{\text{co}}}$ and ${\mathbb{U}}^-_{{\text{co}}}$, respectively.
\[assump:semisolution\_not\_empty\_sub\] ${\mathbb{U}}^+_{{\text{co}}}$ and ${\mathbb{U}}^-_{{\text{co}}}$ are not empty.
Sufficient conditions for the above assumption are given in Appendix A. These conditions will be useful once we analyze the cooperative controller-stopper problem. We are ready to define the aforementioned envelopes.
Let $u^+_{{\text{co}}}:=\inf_{w \in \mathbb{U}^{+}_{{\text{co}}}} w$ and $u^-_{{\text{co}}}:=\sup_{w \in \mathbb{U}^{-}_{{\text{co}}}} w$.
- For $w\in{\mathbb{U}}^{+}_{{\text{co}}}$, choose $\tau=t$. Then for any $\nu\in{\mathcal{U}}^{t}_{{\text{co}}}$ and $\rho\in\mathcal{T}_{t}$, it holds that ${\mathbb{P}}(Y(\rho) >g\left(X(\rho)\right))\geq{\mathbb{P}}(Y(\rho)>w(\rho, X(\rho)))>0
\text{ if } y>w(t,x).$ Hence, $y>w(t,x)$ implies that $y\geq u_{{\text{co}}}(t,x)$ from . This means that $w\geq u_{{\text{co}}}$ and $u^+_{{\text{co}}}\geq u_{{\text{co}}}$. By the definition of ${\mathbb{U}}^{+}_{{\text{co}}}$, we know that $u^+_{{\text{co}}}(t,x)\geq g(x)$ for all $(t,x)\in{\mathbb{D}}$.
- For $w\in{\mathbb{U}}^{-}_{{\text{co}}}$, if $y\leq w(t,x)$, by choosing $\tau=t$, we get that there exist $\tilde{\nu}\in{\mathcal{U}}^{t}_{{\text{co}}}$ and $\rho\in\mathcal{T}_{t}$ such that $Y_{t,x,y}^{\tilde{\nu}}(\rho)\leq g(X_{t,x}^{\tilde\nu}(\rho))\; {\mathbb{P}}{\mbox{-a.s.}}$ Therefore, from , $y\leq w(t,x)$ implies that $y\leq u_{{\text{co}}}(t,x)$. This means that $w\leq u_{{\text{co}}}$ and $u^-_{{\text{co}}}\leq u_{{\text{co}}}$. By the definition of ${\mathbb{U}}^{-}_{{\text{co}}}$, it holds that $u^{-}_{{\text{co}}}(T,x)\leq g(x)$ for all $x\in{\mathbb{R}}^d$.
In short, $$\label{eq:intfvmavp-subhedging}
u^-_{{\text{co}}} = \sup _{w\in \mathbb{U}^-_{{\text{co}}}} w\leq u_{{\text{co}}} \leq \inf _{w\in \mathbb{U}^+_{{\text{co}}}}w= u_{{\text{co}}}^+.$$
Viscosity Property in ${\mathbb{D}_{i}}$ {#viscosity-property-in-mathbbd_i}
----------------------------------------
Before we state the main results, we need the following assumption which is crucial to the super-solution property of $u^{+}_{{\text{co}}}$.
\[assump: regularity\_sub\] For $\psi\in C({\mathbb{D}})$, $\eta>0$, let $B$ be a subset of ${\mathbb{D}}\times{\mathbb{R}}\times{\mathbb{R}}^d$ such that $\mathcal{M}_{0,-\eta}(\cdot, \psi)\neq\emptyset$ on $B$. Then for every ${\varepsilon}>0$, $(t_0,x_0,y_0,p_0)\in Int(B)$ and $u_0\in\mathcal{M}_{0, -\eta}(t_0,x_0,y_0,p_0,\psi) $, there exists an open neighborhood $B'$ of $(t_0,x_0,y_0,p_0)$ and a locally Lipschitz continuous map $\hat{\nu}$ defined on $B'$ such that $\|\hat{\nu}(t_0,x_0,y_0,p_0)-u_0\|_{U}\leq {\varepsilon}$ and $\hat{\nu}(t,x,y,p)\in \mathcal{M}_{0, -\eta}(t,x,y,p, \psi)$ for all $(t,x,y,p)\in B'$.
As before we have the following two results whose proofs will be omitted.
${\mathbb{U}}^{+}_{{\text{co}}}$ and ${\mathbb{U}}^{-}_{{\text{co}}}$ are closed under pairwise minimization and maximization, respectively.
\[lem: monotone seq approaches v+ or v\_–subhedging\] There exists a non-increasing sequence $\{w_{n}\}_{n=1}^{\infty}\subset{\mathbb{U}}^{+}_{{\text{co}}}$ such that $w_n\searrow u^+_{{\text{co}}}$ and a non-decreasing sequence $\{v_{n}\}_{n=1}^{\infty}\subset{\mathbb{U}}^{-}_{{\text{co}}}$ such that $v_n\nearrow u^-_{{\text{co}}}$.
\[thm: main theorem\_interior-subhedging\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump:semisolution\_not\_empty\_sub\] and \[assump: regularity\_sub\], $u^+_{{\text{co}}}$ is an upper semi-continuous (USC) viscosity sub-solution of . On the other hand, under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\] and \[assump:semisolution\_not\_empty\_sub\], $u^-_{{\text{co}}}$ is a lower semi-continuous (LSC) viscosity super-solution of .
[**Step 1 ($u_{{\text{co}}}^+$ is a viscosity sub-solution).**]{} The proof of this claim is similar to Step 2 of the proof of Theorem 3.1 in [@BayraktarLi-Jump]. The difference is that the proof uses the sub-martingale property, since the target is $Y \leq g(X)$ instead of $Y \ge g(X)$.
[**Step 2 ($u^-_{{\text{co}}}$ is a viscosity super-solution).**]{}\
**Step A:** We show in this step that $u^{-}_{{\text{co}}}(t,x)\geq g(x)$ for all $(t,x)\in{\mathbb{D}}$. Assume, on the contrary, that for some $(t_{0},x_{0})\in{\mathbb{D}}$, there exists $\eta>0$ such that $$\label{eq:subhedging-interior-u-minus-step2A-contra}
2\eta=g(x_{0})-u^{-}_{{\text{co}}}(t_{0},x_{0})>0.$$ Choose an arbitrary $w\in{\mathbb{U}}_{{\text{co}}}^{-}$. By the definition of ${\mathbb{U}}_{{\text{co}}}^{-}$ and lower semi-continuity of $g$, there exists ${\varepsilon}>0$ such that $$g(x)-w(t,x)>\eta, \; g(x)-g(x_{0})>-\frac{\eta}{2}, \; |w(t,x)-w(t_{0},x_{0})|\leq \frac{\eta}{2}\text{ for all } (t,x)\in\text{cl}(B_{{\varepsilon}}(t_0,x_0)).$$ Define $$w'(t,x):=
\left \{
\begin{split}
&w(t,x)+(g(x_{0})-\eta-w(t_{0},x_{0}))\left(1-\text{dist}((t,x), (t_{0},x_{0}))/{\varepsilon}\right) &\text{ for } (t,x)\in\text{cl}(B_{{\varepsilon}}(t_0, x_0)),\\
&w(t,x) & \text{ for } (t,x)\notin\text{cl}(B_{{\varepsilon}}(t_0, x_0)).
\end{split}
\right.$$ Obviously, $w'\geq w$ and $w'$ is continuous with polynomial growth. In addition, $$\label{eq:subhedging-interior-u-minus-step2A-1}
\{(t,x):w(t,x)<w'(t,x)\}=B_{{\varepsilon}}(t_{0},x_{0}) \quad \text{and}$$ $$\label{eq:subhedging-interior-u-minus-step2A-2}
w'(t,x)\leq w(t,x)+ (g(x_{0})-\eta-w(t_{0},x_{0}))< g(x_{0})-\frac{\eta}{2}<g(x) \text{ for }(t,x)\in\text{cl}(B_{{\varepsilon}}(t_{0},x_{0})).$$ The equation above, along with the fact that $w\in{\mathbb{U}}^{-}_{{\text{co}}}$, implies that $w'(T,x)\leq g(x)$ for all $x\in{\mathbb{R}}^{d}$. Noting that $w'(t_{0},x_{0})=g(x_{0})-\eta>u_{{\text{co}}}^{-}(t_{0},x_{0})$ due to , we would obtain a contradiction if we could show $w'\in{\mathbb{U}}^{-}_{{\text{co}}}$. We now prove that $w'\in{\mathbb{U}}^{-}_{{\text{co}}}$.
Fix $(t,x,y)\in {\mathbb{D}}_i\times\mathbb{R}$, $\tau \in \mathcal{T}_t$ and $\nu\in\mathcal{U}_{{\text{co}}}^{t}$. For $w\in{\mathbb{U}}_{{\text{co}}}^{-}$, let $\rho^{w,\tau,\nu} \in \mathcal{T}_{\tau}$ and $\tilde{\nu}^{w,\tau,\nu}$ be the “optimal” stopping time and control satisfying the second item in Definition \[def: Stochasticsub-solution\_subhedging\]. In order to show that $w'\in{\mathbb{U}}^-_{{\text{co}}}$, we want to construct an “optimal” stopping time $\rho$ and “optimal” control $\tilde{\nu}$ which work for $w'$ in the sense of Definition \[def: Stochasticsub-solution\_subhedging\]. Let $A=\{w(\tau,X(\tau))=w'(\tau,X(\tau))\}\in\mathcal{F}^t_{\tau}$, $$\rho=\mathbbm{1}_{A}\rho^{w,\tau,\nu}+\mathbbm{1}_{A^c}\tau \;\text{and}\; \tilde{\nu}=(\mathbbm{1}_{A}\tilde{\nu}^{w,\tau,\nu}+\mathbbm{1}_{A^c} u_{0})\mathbbm{1}_{[\tau, T]},$$ where $u_{0}$ is an arbitrary element in $U$. Obviously, $\rho\in\mathcal{T}_{\tau}$ and $\tilde{\nu}\in\mathcal{U}_{{\text{co}}}^{t}$. It suffices to show $$Y(\rho)\leq g(X(\rho))\;\; {\mathbb{P}}{\mbox{-a.s.}}\;\;\text{on}\;\;\{Y(\tau)\leq w'(\tau, X(\tau))\}.$$ (i) On $A\cap\{Y(\tau)\leq w'(\tau, X(\tau))\}$: Note that $A\cap\{Y(\tau)\leq w'(\tau, X(\tau))\} \subset \{Y(\tau)\leq w(\tau,X(\tau))\}$. From the fact that $w\in{\mathbb{U}}^{-}_{{\text{co}}}$ and the definition of $\rho$ and $\tilde{\nu}$ on $A$, it holds that $$\label{eq:case1_super-solution-property-subhedging}
Y(\rho)=Y(\rho^{w,\tau,\nu})\leq g(X(\rho^{w,\tau,\nu}))= g(X(\rho)) \text{ on } A\cap\{Y(\tau)\leq w'(\tau, X(\tau))\}.$$ (ii) On $A^{c}\cap\{Y(\tau)\leq w'(\tau, X(\tau))\}$: $(\tau,X(\tau))\in B_{{\varepsilon}}(t_0,x_0)$ on $A^{c}$ from , which implies $w'(\tau,X(\tau))<g(X(\tau))$ from . This, together with the fact that $\rho=\tau$ on $A^{c}$, implies that $$\label{eq:case2_super-solution-property-subhedging}
Y(\rho)\leq w'(\rho, X(\rho))\leq g(X({\rho}))\text{ on }A^{c}\cap\{Y(\tau)\leq w'(\tau, X(\tau))\}.$$ **Step B:** We claim that $u^{-}_{{\text{co}}}$ is a viscosity super-solution to $$-\partial_t{\varphi}(t,x)+ F^*{\varphi}(t,x) \geq 0.$$ We omit this proof, which is rather long, in the interest of space. This follows the outline of Step 1 in the proof of Theorem 3.1 of [@BayraktarLi-Jump]. It is worth noting that after the construction of $w^{\kappa}$ in that proof, given $(t,x,y)\in {\mathbb{D}}_i\times\mathbb{R}$, $\tau \in \mathcal{T}_t$ and $\nu\in\mathcal{U}_{{\text{co}}}^{t}$, we need to construct an “optimal” stopping time $\rho$ and “optimal” control $\tilde{\nu}$ which work for $w^{\kappa}$ in the sense of Definition \[def: Stochasticsub-solution\_subhedging\] (as we did above in Step A).
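The bump perturbation $w'$ used in Step A is elementary but worth isolating. The following sketch (plain Python, with an arbitrary continuous $w$ and placeholder values for $(t_0,x_0)$, ${\varepsilon}$ and the lift $c=g(x_{0})-\eta-w(t_{0},x_{0})>0$, all of which are illustrative assumptions) checks the two displayed properties: $w'$ agrees with $w$ exactly off the open ball, and the lift is bounded by $c$ on it.

```python
import math

def bump(w, t0, x0, c, eps):
    """w'(t,x) = w(t,x) + c * max(0, 1 - dist((t,x),(t0,x0))/eps): equals w outside
    the open ball B_eps(t0,x0) and attains the maximal lift c at the center."""
    def w_prime(t, x):
        d = math.hypot(t - t0, x - x0)
        return w(t, x) + c * max(0.0, 1.0 - d / eps)
    return w_prime

w = lambda t, x: math.sin(t) + x * x            # an arbitrary continuous w
wp = bump(w, t0=0.5, x0=1.0, c=0.3, eps=0.2)    # c plays the role of g(x0)-eta-w(t0,x0)

assert wp(0.5, 1.0) == w(0.5, 1.0) + 0.3        # maximal lift at (t0, x0)
assert wp(2.0, -1.0) == w(2.0, -1.0)            # unchanged outside the ball
assert wp(0.5, 1.1) <= w(0.5, 1.1) + 0.3        # lift bounded by c everywhere
```

Continuity of $w'$ and the set identity $\{w<w'\}=B_{{\varepsilon}}(t_{0},x_{0})$ follow directly from this positive-part form, since $c>0$.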
Boundary condition for $u_{{\text{co}}}$
----------------------------------------
As for the boundary conditions, rather than studying $u_{{\text{co}}}^+(T,x)$ and $u_{{\text{co}}}^-(T,x)$, we again consider $$\label{eq: boundary_u-_def_subhedging}
u_{{\text{co}}}^+(T-,x)=\limsup _{(t<T, x')\rightarrow (T,x)} u_{{\text{co}}}^+(t,x'), \;\;u_{{\text{co}}}^-(T-,x)=\liminf _{(t<T, x')\rightarrow (T,x)} u_{{\text{co}}}^-(t,x').$$
\[thm: bd\_viscosity\_property\_subhedging\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump:semisolution\_not\_empty\_sub\] and \[assump: regularity\_sub\], $u_{{\text{co}}}^{+}(T-,\cdot)$ is a USC viscosity sub-solution of $$\left({\varphi}(x)-g(x)\right)\mathbbm{1}_{\{F_*{\varphi}(x)>-\infty\}}\leq 0\text{ on } {\mathbb{R}}^d.$$ Moreover, $
u_{{\text{co}}}^{-}(T-,x)\geq g(x) \text{ for all }x\in{\mathbb{R}}^d.
$
Since $u_{{\text{co}}}^-(t,x)\geq g(x)$ for any $(t,x)\in{\mathbb{D}}$ due to Step A of the proof of Theorem \[thm: main theorem\_interior-subhedging\], it directly follows that $u^-_{\text{co}}(T-,x)\geq g(x)$. The proof of the sub-solution property is longer, but the proof of Theorem 4.1 in [@BayraktarLi-Jump] can be adapted to the present case.
Cooperative Controller-Stopper Game {#sec:equivalence-subhedging}
===================================
In this section, we prove that a cooperative controller-stopper problem can be expressed in terms of a stochastic target problem with a cooperative stopper. Given a bounded continuous function $g:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$, we define $$\mathbf{u}_{{\text{co}}}(t,x):=\sup_{\nu\in\mathcal{U}^t}\sup_{\rho\in\mathcal{T}_{t}}\mathbb{E}[g(X_{t,x}^{\nu}(\rho))].$$ We follow the setup of Section \[sec:prob\] with one exception: $\mathcal{U}^{t}$ is the collection of all the $\mathbb{F}^{t}$-predictable processes in $\mathbb{L}^2(\Omega\times[0,T], \mathcal{F}\otimes\mathcal{B}[0,T], \mathbb{P}\otimes \lambda_{L}; U)$, where $U\subset {\mathbb{R}}^{d}$ and $X$ follows the SDE $$dX(s)=\mu_{X}(s,X(s),\nu(s))ds+\sigma_{X}(s,X(s),\nu(s))dW_s+\int_{E} \beta(s,X(s-),\nu(s), e)\lambda(ds,de).$$
\[eq:equivalence\_application\] Suppose Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\] hold. Define a stochastic target problem as follows: $$\begin{gathered}
u_{{\text{co}}}(t,x):=\sup\{y\in{\mathbb{R}}: \exists (\nu, \alpha, \gamma)\in \mathcal{U}^t\times\mathcal{A}^t\times\Gamma^t_{{\text{co}}}\text{ and } \rho\in\mathcal{T}_{t}\;\text{s.t.}\;Y_{t,y}^{\alpha,\gamma}(\rho)\leq g(X_{t,x}^{\nu}(\rho))\}, \text{where} \\
Y_{t,y}^{\alpha,\gamma}(\cdot):=y+\int_t^{\cdot}\alpha^{\top}(s)dW_s+\int_t^{\cdot}\int_E\gamma^{\top}(s,e)\tilde{\lambda}(ds,de)
\end{gathered}$$ and $\mathcal{A}^t$ and $\Gamma^{t}_{{\text{co}}}$ are the sets of ${\mathbb{R}}^{d}$-valued and $\mathbb{L}^{2}(E, \mathcal{E}, \hat{m}; {\mathbb{R}}^{I})$-valued processes, respectively, satisfying the admissibility conditions in Section \[sec:prob\]. Then $u_{{\text{co}}}=\mathbf{u}_{{\text{co}}}$ on ${\mathbb{D}}$.
In view of Lemma \[lem:stochastic\_target\_representation\_subhedging\] and Remark \[re:subhedging\], it suffices to check that $$\label{eq:inclusion_equi-subhedging}
\mathcal{M}_{{\text{co}}}\subset
\mathcal{M}:=\left\{Y_{t,y}^{\alpha,\gamma}(\cdot): y\in{\mathbb{R}}, \alpha\in\mathcal{A}^{t}, \gamma\in\Gamma^{t} \right\},$$ where $\mathcal{M}_{{\text{co}}}$ is defined as in Remark \[re:subhedging\]. In fact, by the martingale representation theorem, for any $\nu\in\mathcal{U}^{t}$ and $\rho\in\mathcal{T}_{t}$, $\mathbb{E}[g(X_{t,x}^{\nu}(\rho))|\mathcal{F}^{t}_{\cdot}]$ can be represented in the form of $Y_{t,y}^{\alpha, \gamma}$ for some $\alpha \in\mathcal{A}^{t}$ and $\gamma\in\Gamma^{t}_{0}$,[^6] where $\Gamma_{0}^{t}$ is the set of $\mathbb{L}^{2}(E, \mathcal{E}, \hat{m}; {\mathbb{R}}^{I})$-valued processes satisfying all of the admissibility conditions except $\eqref{eq:admissibility_sub}$. We now prove that such $\gamma$ satisfies the condition in , thus finishing the proof.
Assume, contrary to , that there exist $\nu_{0}\in\mathcal{U}^{t}$ and $\rho\in\mathcal{T}_{t}$ such that $$\mathbb{E}[g(X_{t,x}^{\nu_{0}}(\rho))|\mathcal{F}^{t}_{\cdot}]=y+\int_t^{\cdot}\alpha_{0}^{\top}(s)dW_s+\int_t^{\cdot}\int_E\gamma_{0}^{\top}(s,e)\tilde{\lambda}(ds,de)$$ for some $y\in\mathbb{R}$, $\alpha_{0}\in\mathcal{A}^{t}$ and $\gamma_{0}\in\Gamma^{t}_{0}$, but does not hold for $\gamma_{0}$. In the equation above, $\mathbb{E}[g(X_{t,x}^{\nu_{0}}(\rho))|\mathcal{F}^{t}_{\cdot}]$ can be chosen to be càdlàg, thanks to Theorem 1.3.13 in [@ShreveKaratzas]. Then for $K>2\|g\|_{\infty}$, there exists $\tau_{0}\in\mathcal{T}_{t}$ such that $\mathbb{P}\left(\int_{E}\gamma_{0}^{\top}(\tau_{0},e) \lambda(\{\tau_{0}\},de)>K\right)>0.$ Letting $M_{0}(\cdot)=\mathbb{E}\left[g(X_{t,x}^{\nu_{0}}(\rho))|\mathcal{F}^{t}_{\cdot}\right]$, we get that $$M_{0}(\tau_{0})-M_{0}(\tau_{0}-) = \int_{E}\gamma_{0}^{\top}(\tau_{0},e) \lambda(\{\tau_{0}\},de)>K\;\;\text{with positive probability}.$$ Since $|M_{0}|$ is bounded by $\|g\|_{\infty}<K/2$, we obtain a contradiction.
Let $\mathbf{F}_{*}$ be the LSC envelope of the USC map $\mathbf{F}:{\mathbb{D}}\times{\mathbb{R}}^d\times\mathbb{M}^d\times C({\mathbb{D}}) \rightarrow {\mathbb{R}}$ defined by $$\begin{array}{c}
\mathbf{F}: (t,x,p,A, {\varphi}) \rightarrow \inf_{u\in U}\{-I[{\varphi}](t,x,u)-\mu_X^{\top}(t,x,u)p-\frac{1}{2}\text{Tr}[\sigma_X\sigma_X^{\top}(t,x,u)A]\},\;\text{where} \\
I[{\varphi}](t,x,u)=\sum_{1\leq i\leq I}\int_E \left( {\varphi}(t,x+\beta_i(t,x,u,e))-{\varphi}(t,x)\right)m_i(de).
\end{array}$$
\[thm: optimal control-subhedging\] Under Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\], $u^+_{{\text{co}}}$ is a USC viscosity sub-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+\mathbf{F}_{*}{\varphi}(t,x)\}\leq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^+_{{\text{co}}}(T-,\cdot)$ is a USC viscosity sub-solution of $$\left({\varphi}(x)-g(x)\right)\mathbbm{1}_{\{\mathbf{F}_{*}{\varphi}(x)>-\infty\}}\leq 0\text{ on } {\mathbb{R}}^d.$$ On the other hand, $u^-_{{\text{co}}}$ is an LSC viscosity super-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+\mathbf{F}{\varphi}(t,x)\}\geq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^-_{{\text{co}}}(T-,x)\geq g(x)$ for all $x\in{\mathbb{R}}^{d}$.
It is easy to check Assumption \[assump: regularity\_sub\] for the stochastic target problem. Since $g$ is bounded, we can check that all of the assumptions in Appendix A are satisfied, which implies that Assumption \[assump:semisolution\_not\_empty\_sub\] holds. From Theorem \[thm: main theorem\_interior-subhedging\], $u^+_{{\text{co}}}$ is a USC viscosity sub-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+F_{*}{\varphi}(t,x)\}\leq 0\text{ on } {\mathbb{D}_{i}}$$ and $u^-_{{\text{co}}}$ is an LSC viscosity super-solution of $$\min\{{\varphi}(t,x)-g(x), -\partial_t{\varphi}(t,x)+F^{*}{\varphi}(t,x)\}\geq 0\text{ on } {\mathbb{D}_{i}}.$$ From Proposition 3.3 in [@Bouchard_Equivalence], $F^*\leq \mathbf{F}$ and $F_*\geq \mathbf{F}_{*}$. Thus, we obtain the desired results.
The following two corollaries (whose proofs are omitted) show that $\mathbf{u}_{{\text{co}}}$ is the unique viscosity solution to its associated HJB equation.
\[coro2\] Suppose that Assumptions \[assump: lambda intensity kernel\] and \[assump: regu\_on\_coeff\] hold, $\mathbf{F}=\mathbf{F}_{*}$ on $\{\mathbf{F}>-\infty\}$ and there exists a USC function $\mathbf{G}:{\mathbb{D}}\times{\mathbb{R}}\times{\mathbb{R}}^d\times\mathbb{M}^d\times C({\mathbb{D}})\cap \{\mathbf{F}>-\infty\}\rightarrow{\mathbb{R}}$ such that $$\begin{array}{c}
(a)\; \mathbf{F}(t,x,y,p,M,{\varphi})>-\infty \implies \mathbf{G}(t,x,y,p,M,{\varphi})\geq 0, \\
(b)\;\mathbf{G}(t,x,y,p,M,{\varphi})>0\implies \mathbf{F}(t,x,y,p,M,{\varphi})>-\infty.
\end{array}$$ Then $u^+_{{\text{co}}}$ $(\text{resp. } u^-_{{\text{co}}})$ is a USC $($resp. an LSC$)$ viscosity sub-solution $($resp. super-solution$)$ of $$\min\{{\varphi}(t,x)-g(x), \max\{-\partial_t{\varphi}(t,x)+\mathbf{F}{\varphi}(t,x), \mathbf{G}{\varphi}(t,x)\}\}=0\;\;\text{on}\;\;{\mathbb{D}}_{i}.$$
A remark similar to Remark \[Remark:existence of G\] applies regarding the verifiability of the assumption above.
Suppose that all the assumptions in Corollary \[coro2\] hold. Additionally, assume that there is a comparison principle between USC sub-solutions and LSC super-solutions for the PDE $$\label{eq:bd_pde_control}
\min\{{\varphi}(x)-g(x), \mathbf{G}{\varphi}(x)\}=0\;\;\text{on}\;\;{\mathbb{R}}^{d}.$$ Then $u^{+}_{{\text{co}}}(T-,x)=u^{-}_{{\text{co}}}(T-,x)=\hat{g}(x)$, where $\hat{g}$ is the unique continuous viscosity solution to . Moreover, if the comparison principle holds for $$\min\{{\varphi}(t,x)-g(x), \max\{-\partial_t{\varphi}(t,x)+\mathbf{F}{\varphi}(t,x),\; \mathbf{G}{\varphi}(t,x)\}\}=0 \text{ on }{\mathbb{D}_{i}},$$ then $u_{{\text{co}}}$$(=\mathbf{u}_{{\text{co}}})$ is the unique continuous viscosity solution with $u_{{\text{co}}}(T,x)=\hat{g}(x)$.
To get a comparison principle, we can adapt the proof in [@Pham1998] appropriately, as in [@Bayraktar_Huang].
Appendix A {#sec:appendix .unnumbered}
==========
We provide sufficient conditions for the nonemptiness of ${\mathbb{U}}^+_{{\text{unco}}}$, ${\mathbb{U}}^-_{{\text{unco}}}$, ${\mathbb{U}}^{+}_{{\text{co}}}$ and ${\mathbb{U}}^{-}_{{\text{co}}}$.
\[assump: g bounded\] $g$ is bounded.
\[assump: existence of no\_investing\_strategy\] There exists $u_0 \in U$ such that $\sigma_Y(t,x,y,u_0)=0$ and $b(t,x,y,u_0(e),e)=0$ for all $(t,x,y,e)\in {\mathbb{D}}\times \mathbb{R}\times E$.
\[prop: U\^+ is not empty\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump: g bounded\] and \[assump: existence of no\_investing\_strategy\], ${\mathbb{U}}^{+}_{{\text{unco}}}$ and ${\mathbb{U}}^{-}_{{\text{co}}}$ are not empty.
We will only show that ${\mathbb{U}}^{+}_{{\text{unco}}}$ is not empty. A very similar proof applies to ${\mathbb{U}}^{-}_{{\text{co}}}$.\
**Step 1.** In this step we assume that $\mu_{Y}$ is non-decreasing in its $y$-variable. We will show that $w(t,x)=\gamma-e^{kt}$ is a stochastic super-solution for some choice of $k$ and $\gamma$.
By the linear growth condition on $\mu_Y$ in Assumption \[assump: regu\_on\_coeff\], there exists $L>0$ such that $$|\mu_Y(t,x,y,u_0)|\leq L(1+|y|),$$ where $u_0$ is the element in $U$ in Assumption \[assump: existence of no\_investing\_strategy\]. Choose $k\geq 2L$ and $\gamma$ such that $-e^{k T}+\gamma\geq \|g\|_{\infty}$. Then $w(t,x)\geq w(T,x)\geq g(x)$ for all $(t,x)\in{\mathbb{D}}$. It suffices to show that for any $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, $\tau\in\mathcal{T}_t$, $\nu\in {\mathcal{U}}^t_{{\text{unco}}}$ and $\rho\in \mathcal{T}_{\tau}$, $$\label{eq: property of stochastic super-solution}
Y(\rho )\geq w(\rho, X(\rho )) \;\; \mathbb{P}\text{-a.s.}\;\; \text{on}\;\;\{Y(\tau )\geq w(\tau, X(\tau))\}, \text{where } X:= X_{t,x}^{\nu\otimes_{\tau}u_0}, Y:=Y_{t,x,y}^{\nu\otimes_{\tau}u_0}.$$ Let $A= \{Y(\tau )\geq w(\tau, X(\tau))\}$, $V(s)=w(s,X(s))$ and $\Gamma(s)=\left(V(s)-Y(s)\right)\mathbbm{1}_{A}.$ Therefore, for $s\geq \tau$, $$\begin{gathered}
\label{eq: Gamma_integral}
dY(s)= \mu_{Y}\left(s,X(s),Y(s), u_0\right)ds, \;dV(s)= -ke^{ks}ds, \;
\Gamma(s)=\mathbbm{1}_{A}\int_{\tau}^{s} ( \xi(q)+ \Delta(q) ) dq + \mathbbm{1}_{A}\Gamma(\tau) , \text{where} \\
\Delta(s):=-ke^{ks}-\mu_Y(s,X(s),V(s),u_0)\leq -ke^{ks}-\mu_Y(s,X(s),-e^{ks},u_0)\leq -ke^{ks}+L(1+e^{ks})\leq 0, \nonumber\\
\xi(s):=\mu_{Y}(s,X(s),V(s),u_0) -\mu_{Y}(s,X(s),Y(s),u_0). \nonumber\end{gathered}$$ Therefore, from and the definitions of $\Gamma$ and $A$, it holds that $$\Gamma(s)\leq \mathbbm{1}_A\int_{\tau}^{s} \xi(q) dq \;\; \text{and}\;\; \Gamma^{+}(s)\leq \mathbbm{1}_A\int_{\tau}^{s} \xi^{+}(q) dq
\;\; \text{for} \;\; s\geq \tau.$$ From the Lipschitz continuity of $\mu_Y$ in $y$-variable in Assumption \[assump: regu\_on\_coeff\], $$\label{eq: fun_gronwall}
\Gamma^{+}(s)\leq \mathbbm{1}_A \int_{\tau}^{s} \xi^{+}(q) dq \leq \int_{\tau}^{s} L_0 \Gamma^{+}(q) dq \;\; \text{for} \;\; s\geq \tau,$$ where $L_0$ is the Lipschitz constant of $\mu_Y$ with respect to $y$. Note that we use the assumption that $\mu_Y$ is non-decreasing in its $y$-variable to obtain the second inequality. Since $\Gamma^+(\tau)=0$, an application of Grönwall’s Inequality implies that $\Gamma^+(\rho)\leq 0$, which further implies that holds.\
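The Grönwall step can also be checked numerically. The toy discretization below (an illustrative sketch, not part of the proof) iterates the integral inequality $f(s)\le a+\int_{\tau}^{s} L f(q)\,dq$ with equality via an Euler scheme: with $a=0$, as for $\Gamma^{+}$, the iterates stay at zero, and with $a>0$ they stay below the Grönwall bound $a e^{L(s-\tau)}$.

```python
import math

def gronwall_iterate(a, L, T, n=100_000):
    """Euler iteration of the worst case f(s) = a + int_0^s L f(q) dq:
    f_{k+1} = f_k + h * L * f_k with f_0 = a on a grid of step h = T/n."""
    h = T / n
    f = a
    for _ in range(n):
        f += h * L * f
    return f

# a = 0 (the case Gamma^+(tau) = 0 in the proof): the iterates are identically zero.
assert gronwall_iterate(0.0, 5.0, 1.0) == 0.0
# a > 0: (1 + h*L)^n <= e^{L*T}, which is the Gronwall bound.
assert gronwall_iterate(1.0, 2.0, 1.0) <= math.exp(2.0)
```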
**Step 2.** We get rid of our assumption on $\mu_{Y}$ from Step 1 by following a proof similar to those in [@BayraktarLi] and [@Bouchard_Nutz_TargetGames]. For $c>0$, define $\widetilde Y_{t,x,y}^{\nu}$ as the strong solution of $$\begin{split}
d\widetilde{Y}(s)&=\tilde \mu_{Y}(s,X_{t,x}^{\nu}(s),\widetilde{Y}(s),\nu(s)) ds +\tilde \sigma_{Y}^{\top}(s,X_{t,x}^{\nu}(s),\widetilde{Y}(s),\nu(s))dW_{s} \\ &+ \int_{E} \widetilde{b}^{\top}(s,X_{t,x}^{\nu}(s-),\widetilde{Y}(s-), \nu_1(s),\nu_2(s,e), e)\lambda(ds,de)
\end{split}$$ with initial data $\widetilde{Y}(t)=y$, where $$\begin{array}{c}
\widetilde{\mu}_{Y}(t,x,y,u):= c y+e^{ct} \mu_{Y}(t,x,e^{-c t}y,u), \;
\widetilde{\sigma}_{Y}(t,x,y,u):= e^{c t} \sigma_{Y}(t,x,e^{-c t} y,u), \\
\widetilde{b}(t,x,y,u(e),e):= e^{c t} b(t,x,e^{-c t} y,u(e),e).
\end{array}$$ Therefore, $$\label{eq:trans to increasing}
\widetilde{Y}_{t,x,y}^{\nu}(s)e^{-cs}=Y_{t,x,ye^{-ct}}^{\nu}(s), \;t\leq s\leq T.$$ Let $$\label{eq:new super stg}
\tilde u_{{\text{unco}}}(t,x)= \inf\{y\in {\mathbb{R}}: \exists \; \nu\in \mathcal{U}^t_{{\text{unco}}}, \mbox{ s.t.}\; \widetilde{Y}^{\nu}_{t,x,y}(\rho)\ge \tilde g(\rho, X^{\nu}_{t,x}(\rho))\;{\mathbb{P}}{\mbox{-a.s.}}\text{ for all }\rho\in\mathcal{T}_{t}\},$$ where $\tilde{g}(t, x)=e^{ct} g(x)$. Therefore, from , $\tilde{u}_{{\text{unco}}}(t,x)=e^{ct}u_{{\text{unco}}}(t,x).$ Since $\mu_{Y}$ is Lipschitz in $y$, we can choose $c>0$ so that $
\widetilde {\mu}_{Y}: (t,x,y,u) \mapsto cy + e^{c t}\mu_{Y}(t,x,e^{-c t}y,u)
$ is non-decreasing in $y$. Moreover, all the properties of $\widetilde{\mu}_{Y}, \widetilde{\sigma}_{Y}$ and $\widetilde{b}$ in Assumption \[assump: regu\_on\_coeff\] still hold. Replacing $\mu_Y$, $\sigma_Y$ and $b$ in all of the equations and definitions in Section \[sec:prob\] with $\widetilde{\mu}_{Y}, \widetilde{\sigma}_{Y}$ and $\widetilde{b}$, we get $\widetilde{H}^*$ and $\widetilde{H}_*$. Let $\widetilde{{\mathbb{U}}}^+_{{\text{unco}}}$ be the set of stochastic super-solutions of the new target problem . It is easy to see that $w\in{\mathbb{U}}^+_{{\text{unco}}}$ if and only if $\widetilde{w}(t,x):=e^{ct}w(t,x)\in \widetilde{{\mathbb{U}}}^+_{{\text{unco}}}$. From Step 1, $\widetilde{{\mathbb{U}}}^+_{{\text{unco}}}$ is not empty. Thus, ${\mathbb{U}}^+_{{\text{unco}}}$ is not empty.
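In the degenerate deterministic case ($\sigma_{Y}\equiv 0$, $b\equiv 0$), the effect of this exponential scaling can be checked numerically; the drift $\mu_{Y}(y)=-2y$ (Lipschitz but decreasing in $y$) and the constant $c=3$ below are illustrative choices, not taken from the text:

```python
import math

# Deterministic sanity check of the scaling step: with mu_Y(y) = -2y and
# c = 3, the transformed drift  c*y + e^{c t} mu_Y(e^{-c t} y) = y  is
# non-decreasing in y, and  Y~(s) e^{-c s}  should coincide with the
# original Y(s) started from the rescaled initial value y e^{-c t}.
def euler(drift, y0, t0, t1, n):
    y, h = y0, (t1 - t0) / n
    for i in range(n):
        y += h * drift(t0 + i * h, y)
    return y

c, t0, t1, y = 3.0, 0.0, 1.0, 2.0
mu = lambda t, v: -2.0 * v
mu_tilde = lambda t, v: c * v + math.exp(c * t) * mu(t, math.exp(-c * t) * v)
Y_tilde = euler(mu_tilde, y, t0, t1, 50000)
Y = euler(mu, y * math.exp(-c * t0), t0, t1, 50000)
assert abs(Y_tilde * math.exp(-c * t1) - Y) < 1e-3
```

Here the transformed drift is non-decreasing in $y$, as required by Step 1, even though the original drift is not.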
\[assump: linear growth in y\] There is $C\in {\mathbb{R}}$ such that for all $(t,x,y,u,e)\in{\mathbb{D}}\times{\mathbb{R}}\times U\times E$, $$\left|\mu_Y(t,x,y,u)+\int_E b^{\top}(t,x,y,u(e),e) m(de)\right|\leq C(1+|y|).$$
\[Prop: U\^- is not empty\] Under Assumptions \[assump: lambda intensity kernel\], \[assump: regu\_on\_coeff\], \[assump: g bounded\] and \[assump: linear growth in y\], $\mathbb{U}_{{\text{unco}}}^{-}$ and ${\mathbb{U}}^{+}_{{\text{co}}}$ are not empty.
We will only show that $\mathbb{U}_{{\text{unco}}}^{-}$ is not empty. Assume that $$\mu_{Y}(t,x,y,u)+\int_E b^{\top}(t,x,y,u(e),e)m(de)$$ is non-decreasing in its $y$-variable. This assumption can be removed by the argument used in the previous proposition.
Choose $k\geq 2C$ ($C$ is the constant in Assumption \[assump: linear growth in y\]) and $\gamma>0$ such that $e^{k T}-\gamma<- \|g\|_{\infty}$. Let $w(t,x)=e^{kt}-\gamma$. Notice that $w$ is continuous, has polynomial growth in $x$ and $w(T,x)\leq g(x)$ for all $x\in{\mathbb{R}}^{d}$. It suffices to show that for any $(t,x,y)\in {\mathbb{D}}\times\mathbb{R}$, $\tau\in\mathcal{T}_{t}$ and $\nu\in {\mathcal{U}}^t_{{\text{unco}}}$, there exists $\rho\in\mathcal{T}_{t}$ such that $\mathbb{P}(Y(\rho )< g( X(\rho ))|B)>0$ for $B\subset \{Y(\tau)<w(\tau,X(\tau))\}$ satisfying $B\in\mathcal{F}_\tau^t$ and ${\mathbb{P}}(B)>0$, where $X:= X_{t,x}^{\nu}$ and $Y:=Y_{t,x,y}^{\nu}$. Define $$\begin{gathered}
M(\cdot)=Y(\cdot)-\int_{\tau}^{\cdot}K(s)ds,\; V(s)=w(s,X(s)),\;
A= \{Y(\tau )<w(\tau, X(\tau))\},\; \Gamma(s)=\left(Y(s)-V(s)\right)\mathbbm{1}_{A}, \\
\text{where } K(s):=\mu_{Y}(s,X(s),Y(s),\nu(s))+\int_{E}b^{\top}(s,X(s-),Y(s-),\nu_1(s),\nu_2(s,e),e)m(de),\\
\widetilde{K}(s):=\mu_{Y}(s,X(s),V(s),\nu(s))+\int_{E}b^{\top}(s,X(s-),V(s-),\nu_1(s),\nu_2(s,e),e)m(de).
\end{gathered}$$ It is easy to see that $M$ is a martingale after $\tau.$ Due to the facts that $A\in\mathcal{F}_\tau^t$ and $dV(s)= ke^{ks}ds$, we further know $$\label{eq: supermar1_nonemptyness of U+}
\mathbbm{1}_{A}\left(Y(\cdot)-V(\cdot)+\int_{\tau}^{\cdot} \big(ke^{ks}-K(s)\big) ds \right)\;\; \text{is a martingale after}\;\;\tau.$$ Since Assumption \[assump: linear growth in y\] holds and $\mu_{Y}(t,x,y,u)+\int_E b^{\top}(t,x,y,u(e),e)m(de)$ is non-decreasing in $y$, $$\widetilde{K}(s)\leq \mu_Y(s,X(s),e^{ks}, \nu(s))+\int_{E}b^{\top}(s,X(s-),e^{ks},\nu_1(s),\nu_2(s,e),e)m(de)\leq 2C e^{ks}.$$ Therefore, it follows from , the inequality above and the fact that $k\geq 2C$ that $$\label{eq: supermar2_nonemptyness of U+}
\widetilde{M}(\cdot):=\mathbbm{1}_{A}\left(Y(\cdot)-V(\cdot)-\int_{\tau}^{\cdot}\xi(s)ds\right) \;\;\text{is a super-martingale after }\tau,$$ where $\xi(s):=K(s)-\widetilde{K}(s)$. Since $\widetilde{M}(\tau)<0$ on $B$, there exists a non-null set $F\subset B$ such that $\widetilde{M}(\rho)<0$ on $F$ for any $\rho\in\mathcal{T}_{\tau}$. By the definition of $\widetilde{M}$ in , we get $$\label{eq: Gamma_rho_strict_ineq on F}
\Gamma(\rho)< \mathbbm{1}_{A}\int_{\tau}^{\rho}\xi(s)ds \;\;\text{on}\;\;F.$$ Therefore, $$\label{eq: ineq for gronwall ineq}
\Gamma^{+}(\rho)\leq \mathbbm{1}_A\int_{\tau}^{\rho} \xi^{+}(s) ds
\leq \int_{\tau}^{\rho} L_0 \Gamma^{+}(s) ds\;\;\text{on}\;\;F.$$ By Grönwall’s Inequality, $\Gamma^+(\tau)=0$ implies that $\Gamma^+(\rho)=0$ on $F$. More precisely, for $\omega\in F$ (${\mathbb{P}}-\text{a.s.}$), $\Gamma^{+}(s)(\omega)=0$ for $s\in [\tau(\omega),\rho(\omega)]$. This implies that we can replace the inequalities with equalities in . Therefore, by , $\Gamma(\rho)<0$ on $F$, which yields $\mathbb{P}(Y(\rho )< g( X(\rho ))|B)>0.$
Appendix B {#sec:appendixB .unnumbered}
==========
Let $T$ be a finite time horizon and let $(\Omega, \mathcal{F},\mathbb{P})$ be a general probability space endowed with a filtration $\mathbb{F} = \{\mathcal{F}_t\}_{0\leq t \leq T}$ satisfying the usual conditions. Let $\mathcal{T}_t$ be the set of $\mathbb{F}$-stopping times valued in $[t,T]$. In particular, let $\mathcal{T}:=\mathcal{T}_0$. We assume that $\mathcal{F}_0$ is trivial. Let $\mathcal{U}$ be the collection of all $\mathbb{F}$-predictable processes valued in $U\subset{\mathbb{R}}^k$ and $\{G^{\nu},\nu\in\mathcal{U}\}$ be a collection of bounded, right-continuous processes valued in ${\mathbb{R}}$. Given $(t,\nu)\in[0,T]\times\mathcal{U}$, we consider two optimal stopping control problems: $$V_{{\text{unco}}}^{\nu}(t)=\essinf_{\mu\in\mathcal{U}(t,\nu)}\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[G^{\mu}(\tau)|\mathcal{F}_t],$$ and $$V_{{\text{co}}}^{\nu}(t)=\esssup_{\mu\in\mathcal{U}(t,\nu)}\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[G^{\mu}(\tau)|\mathcal{F}_t],$$ where $\mathcal{U}(t,\nu)=\{\mu\in\mathcal{U}, \mu= \nu\;\text{on}\;[0,t]\;\;\mathbb{P}-\text{a.s.}\}$.
\[lem:stochastic\_target\_representation\_superhedging\] Given $t\in[0,T]$ and $\nu\in{\mathcal{U}}_{t}$, let $\mathcal{M}$ be any family of martingales which satisfies the following: $$\label{eq: property of martinagle set_appendix_superhedging}
\begin{array}{c}
\text{For any}\;\; \mu\in\mathcal{U}(t,\nu),\;\text{there exists an}\;\; M\in\mathcal{M} \;\text{such that}\;\; \\ \esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[G^{\mu}(\tau)|\mathcal{F}_t] + M(\rho)-M(t)\geq G^{\mu}(\rho)\;\;\text{for all }\rho\in\mathcal{T}_t.
\end{array}$$ Then $V_{{\text{unco}}}^{\nu}(t)=Y_{{\text{unco}}}^{\nu}(t),$ where $$Y_{{\text{unco}}}^{\nu}(t)=\essinf\left\{ Y\in L^1(\Omega, \mathcal{F}_t, \mathbb{P})\;|\; \exists (M,\mu)\in \mathcal{M}\times\mathcal{U}(t,\nu), \text{s.t.}\; Y+M(\rho)-M(t)\geq G^{\mu}(\rho)\;\;\text{for all }\rho\in \mathcal{T}_t \right\}.$$
\(1) $Y_{{\text{unco}}}^{\nu}(t)\geq V_{{\text{unco}}}^{\nu}(t)$: Fix $Y\in L^1(\Omega, \mathcal{F}_t, \mathbb{P})$ and $(M,\mu)\in\mathcal{M}\times\mathcal{U}(t,\nu)$ such that $$Y+M(\rho)-M(t)\geq G^{\mu}(\rho) \;\;\text{for all }\rho\in\mathcal{T}_t.$$ By taking the conditional expectation, we get that $$Y \geq \mathbb{E}[G^{\mu}(\rho)|\mathcal{F}_t]\;\;\text{for all }\rho\in\mathcal{T}_t,$$ which implies that $Y\geq V_{{\text{unco}}}^{\nu}(t)$. Therefore, $Y_{{\text{unco}}}^{\nu}(t)\geq V_{{\text{unco}}}^{\nu}(t)$.\
(2) $V_{{\text{unco}}}^{\nu}(t)\geq Y_{{\text{unco}}}^{\nu}(t)$: From , for each $\mu\in\mathcal{U}(t,\nu)$, there exists an $M\in\mathcal{M}$ such that $$\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[G^{\mu}(\tau)|\mathcal{F}_t]+M(\rho)-M(t)\geq G^{\mu}(\rho)\text{ for all }\rho\in\mathcal{T}_t.$$ This implies that $$\esssup_{\tau\in\mathcal{T}_t}\mathbb{E}[G^{\mu}(\tau)|\mathcal{F}_t]\geq Y_{{\text{unco}}}^{\nu}(t),$$ which further implies $V_{{\text{unco}}}^{\nu}(t)\geq Y_{{\text{unco}}}^{\nu}(t)$.
\[lem:stochastic\_target\_representation\_subhedging\] Let $\mathcal{M}$ be any family of martingales which satisfies the following: $$\label{eq: property of martinagle set_appendix_subhedging}
\text{For any}\;\; \nu\in\mathcal{U} \;\text{and}\; \rho\in\mathcal{T},\;\text{there exists an}\;\; M\in\mathcal{M} \;\text{such that}\;\; G^{\nu}(\rho) = M(\rho).$$ Then for each $(t,\nu)\in[0,T]\times\mathcal{U}$, $V_{{\text{co}}}^{\nu}(t)=Y_{{\text{co}}}^{\nu}(t),$ where $$Y_{{\text{co}}}^{\nu}(t)=\esssup\left\{ Y\in L^1(\Omega, \mathcal{F}_t, \mathbb{P})\;| \exists (M,\mu,\rho)\in \mathcal{M}\times\mathcal{U}(t,\nu)\times\mathcal{T}_t,\text{s.t.}\;\;Y+M(\rho)-M(t)\leq G^{\mu}(\rho) \right\}.$$
\(1) $Y_{{\text{co}}}^{\nu}(t)\leq V_{{\text{co}}}^{\nu}(t)$: Fix $Y\in L^1(\Omega, \mathcal{F}_t, \mathbb{P})$ and $(M,\mu,\rho)\in\mathcal{M}\times\mathcal{U}(t,\nu)\times\mathcal{T}_t$ such that $$Y+M(\rho)-M(t)\leq G^{\mu}(\rho).$$ Then by taking the conditional expectation, we get that $$Y \leq \mathbb{E}[G^{\mu}(\rho)|\mathcal{F}_t]\leq V_{{\text{co}}}^{\nu}(t),$$ which implies that $Y_{{\text{co}}}^{\nu}(t)\leq V_{{\text{co}}}^{\nu}(t)$.\
(2) $Y_{{\text{co}}}^{\nu}(t)\geq V_{{\text{co}}}^{\nu}(t)$: From , for each $\mu\in\mathcal{U}(t,\nu)$ and $\rho\in\mathcal{T}_t$, there exists an $M\in\mathcal{M}$ such that $$\mathbb{E}[G^{\mu}(\rho)|\mathcal{F}_t]+M(\rho)-M(t)=G^{\mu}(\rho).$$ In particular, $$\mathbb{E}[G^{\mu}(\rho)|\mathcal{F}_t]+M(\rho)-M(t)\leq G^{\mu}(\rho).$$ Therefore, $\mathbb{E}[G^{\mu}(\rho)|\mathcal{F}_t]\leq Y_{{\text{co}}}^{\nu}(t)$, which implies $V_{{\text{co}}}^{\nu}(t)\leq Y_{{\text{co}}}^{\nu}(t)$.
\[re:subhedging\] It is clear that a collection of martingales which satisfies always exists. In particular, one can take $$\mathcal{M}_{{\text{co}}}=\{\{\mathbb{E}[G^{\nu}(\rho)|\mathcal{F}_t]\}_{0\leq t\leq T}, \nu\in\mathcal{U}, \rho\in\mathcal{T}\}.$$
[^1]: E. Bayraktar is supported in part by the National Science Foundation under grant DMS-1613170 and the Susan M. Smith Professorship.
[^4]: This can be easily checked.
[^5]: $C$ and $n$ may depend on $w$ and $T$. This also applies to Definition \[def: Stochasticsub-solution-super\], \[def: Stochasticsuper-solution\_subhedging\] and \[def: Stochasticsub-solution\_subhedging\].
[^6]: Such $\alpha$ and $\gamma$ are unique.
---
abstract: 'Treating optimization methods as dynamical systems, in order to comprehend their notions and behaviors, can be traced back centuries. Lately, this mindset has become the driving force behind the design of new optimization methods. Inspired by the recent dynamical-system viewpoint of Nesterov’s fast method, we propose two classes of fast methods, formulated as hybrid control systems, to obtain a pre-specified exponential convergence rate. In contrast to the existing fast methods, which are parametric-in-time second-order differential equations, we dynamically synthesize feedback controls in a state-dependent manner. Namely, in the first class the damping term is viewed as the control input, while in the second class the amplitude with which the gradient of the objective function impacts the dynamics serves as the controller. The objective function is required to satisfy the so-called Polyak–[Ł]{}ojasiewicz inequality, which effectively implies no local optima and a certain gradient-domination property. Moreover, we establish that both hybrid structures possess Zeno-free solution trajectories. We finally provide a mechanism to determine the discretization step size to attain an exponential convergence rate.'
author:
- 'Arman Sharifi Kolarijani, Peyman Mohajerin Esfahani, Tamás Keviczky'
bibliography:
- './mybref.bib'
title: 'Continuous-Time Accelerated Methods via a Hybrid Control Lens'
---
Introduction {#sec:intro}
============
There is a renewed surge of interest in gradient-based algorithms in many computational communities such as machine learning and data analysis. The following non-exhaustive list of references indicates typical application areas: clustering analysis [@lashkari2008convex], neuro-computing [@bottou1991stochastic], statistical estimation [@salakhutdinov2003optimization], support vector machines [@allen2016katyusha], signal and image processing [@becker2011nesta], and networked-constrained optimization [@ghadimi2013multi]. This interest primarily stems from the low computational and memory loads of these algorithms (making them exceptionally attractive in large-scale problems where the dimension of decision variables can be enormous). As a result, a deeper understanding of how these algorithms function has become a focal point of many studies.
One research direction that has been recently revitalized is the application of ordinary differential equations (ODEs) to the analysis and design of optimization algorithms. Consider an iterative algorithm that can be viewed as a discrete dynamical system, with the scalar $s$ as its step size. As $s$ decreases, one can observe that the iterative algorithm in fact recovers a differential equation, e.g., in the case of gradient descent method applied to an unconstrained optimization problem $\min_{X\in\mathbb{R}^n}~{\small f(X)}$, one can inspect that $$\begin{array}{c}
X^{k+1}=X^k-s \nabla f(X^k) ~ \leadsto ~ \dot{X}(t)=-\nabla f\big(X(t)\big)
\end{array}$$ where $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is a smooth function, $X$ is the decision variable, $k\in \mathbb{Z}_{\geq 0}$ is the iteration index, and $t\in \mathbb{R}_{\geq 0}$ is the time. The main motivation behind this line of research has to do with well-established analysis tools in dynamical systems described by differential equations.
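This limiting behavior is easy to observe numerically; the quadratic objective $f(X)=X^2/2$ below is an illustrative choice, for which the gradient flow is $X(t)=X(0)e^{-t}$:

```python
import math

# For f(X) = X^2/2, grad f(X) = X, so gradient descent gives
# X^k = (1 - s)^k X(0) while the gradient flow gives X(t) = X(0) e^{-t}.
# Shrinking s with t = k*s fixed, the iterate approaches the ODE solution.
def gd(x0, s, k):
    x = x0
    for _ in range(k):
        x = x - s * x          # X^{k+1} = X^k - s * grad f(X^k)
    return x

x0, t = 1.0, 2.0
errors = [abs(gd(x0, s, int(t / s)) - x0 * math.exp(-t))
          for s in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]   # error shrinks with the step size
```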
The slow rate of convergence of the gradient descent algorithm ($\mathcal{O}(\frac{1}{t})$ in continuous and $\mathcal{O}(\frac{1}{k})$ in discrete time) limits its application in large-scale problems. In order to address this shortcoming, many researchers resort to the following class of 2nd-order ODEs, which is also the focus of this study: $$\label{dyn2}
\ddot{X}(t)+\gamma(t)\dot{X}(t)+\nabla f\big(X(t)\big)=0.$$ Increasing the order of the system dynamics interestingly helps improve the convergence rate of the corresponding algorithms to $\mathcal{O}(\frac{1}{k^2})$ in the discrete-time domain or to $\mathcal{O}(\frac{1}{t^2})$ in the continuous-time domain. Such methods are called *momentum*, *accelerated*, or *fast* gradient-based iterative algorithms in the literature. The time-dependent function $\gamma:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{>0}$ is a *damping* or a *viscosity* term, which has also been referred to as the *asymptotically vanishing viscosity* since $\lim_{t\rightarrow \infty}~\gamma(t)=0$ [@Cabot2004steepest].
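A naive forward-Euler integration of (\[dyn2\]) with the damping $\gamma(t)=\frac{3}{t}$ of [@Su2016differential], on the illustrative objective $f(X)=X^2/2$, exhibits the accelerated yet oscillatory decay toward the minimizer:

```python
# Forward-Euler sketch of  X'' + (3/t) X' + grad f(X) = 0  for the
# illustrative objective f(X) = X^2/2 (so grad f(X) = X); starting at
# t0 > 0 avoids the singularity of the damping 3/t at t = 0.
def nesterov_ode(x0, t0, T, h):
    x, v, t = x0, 0.0, t0
    while t < T:
        x, v = x + h * v, v + h * (-(3.0 / t) * v - x)
        t += h
    return x

x_end = nesterov_ode(1.0, 0.1, 20.0, 1e-3)
assert abs(x_end) < 0.05   # oscillatory but decaying toward X* = 0
```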
**Chronological developments of fast algorithms:** It is believed that the application of (\[dyn2\]) to speed up optimization algorithms originated in [@Polyak1964], in which Polyak was inspired by a physical point of view (i.e., a heavy ball moving in a potential field). Later on, Nesterov introduced his celebrated accelerated gradient method in [@Nesterov1983] using the notion of “[estimate sequences]{}" and guaranteeing a convergence rate of $\mathcal{O}(\frac{1}{k^2})$. Despite several extensions of Nesterov’s method [@Nesterov2004; @Nesterov2005smooth; @Nesterov2013gradient], the approach has not yet been fully understood. In this regard, many have tried to study the intrinsic properties of Nesterov’s method, see e.g. [@Drusvyatskiy2016; @Bubeck2015geometric; @Drori2014; @Lessard2016]. Recently, the authors in [@Su2014differential] and, in detail, in [@Su2016differential] surprisingly discovered that Nesterov’s method recovers (\[dyn2\]) in its continuous limit, with the time-varying damping term $\gamma (t)=\frac{3}{t}$.
**A dynamical systems perspective:** Based on the observation suggested by [@Su2014differential], several novel fast algorithms have been developed. Inspired by the mirror descent approach [@nemirovskii1983problem], the ODE (\[dyn2\]) has been extended to non-Euclidean settings using the Bregman divergence in [@krichene2015accelerated]. Then, the authors in [@Wibisono2016variational] further generalized the approach in [@krichene2015accelerated] to higher order methods using instead the Bregman Lagrangian. Following [@Wibisono2016variational], a “[rate-matching]{}" Lyapunov function is proposed in [@wilson2016lyapunov] with its monotonicity property established for both continuous and discrete dynamics. Recently, the authors in [@Lessard2016] make use of an interesting semidefinite programming framework developed by [@Drori2014] and use tools from robust control theory to analyze the convergence rate of optimization algorithms. More specifically, the authors exploit the concept of integral quadratic constraints (IQCs) [@megretski1997system] to design iterative algorithms under the strong convexity assumption. Later, the authors in [@fazlyab2017analysis] extend the results of IQC-based approaches to quasi-convex functions. The authors in [@hu2017dissipativity] use dissipativity theory [@willems1972dissipative] along with the IQC-based analysis to construct Lyapunov functions enabling rate analyses. In [@attouch2016fast], the ODE is amended with an extra Hessian driven damping $\beta \nabla^2 f(X(t))$ for some positive scalar $\beta$. It is shown that the proposed dynamics can be generalized to the case of lower-semicontinuous functions via an appropriate reparameterization of the dynamics. The authors in [@krichene2016adaptive] propose an averaging approach to construct a broad family of fast mirror descent methods. They also introduce a state-dependent, heuristic method to adaptively update the averaging function.
**Restarting schemes:** A characteristic feature of fast methods is the non-monotonicity in the suboptimality measure $f-f^*$, where $f^*$ refers to the optimal value of function $f$. The reason behind such an undesirable behavior can be intuitively explained in two ways: (i) a momentum based argument indicating as the algorithm evolves, the algorithm’s momentum gradually increases to a level that it causes an oscillatory behavior [@o2015adaptive]; (ii) an acceleration-based argument indicating that the asymptotically vanishing damping term becomes so small that the algorithm’s behavior drifts from an over-damped regime into an under-damped regime with an oscillatory behavior [@Su2016differential]. To prevent such an undesirable behavior in fast methods, an optimal fixed restart interval is determined in terms of the so-called condition number of function $f$ such that the momentum term is restarted to a certain value, see e.g., [@Nesterov2004; @nemirovski2005efficient; @gu2013parnes; @lan2013iteration; @Nesterov2013gradient]. It is worth mentioning that [@o2015adaptive] proposes two heuristic adaptive restart schemes. It is numerically observed that such restart rules practically improve the convergence behavior of a fast algorithm.
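A minimal sketch of the gradient-based adaptive restart heuristic of [@o2015adaptive]: run Nesterov's method and reset the momentum counter whenever the momentum direction opposes $-\nabla f$. The quadratic test problem, step size, and iteration budget below are illustrative choices, not taken from that reference:

```python
import numpy as np

def nesterov_restart(grad, x0, s, iters):
    """Nesterov's method with gradient-based adaptive restart (sketch)."""
    x, y, k = x0.copy(), x0.copy(), 0
    for _ in range(iters):
        g = grad(y)
        x_new = y - s * g
        if g @ (x_new - x) > 0:      # momentum opposes -grad f: restart
            k = 0
        k += 1
        y = x_new + (k - 1) / (k + 2) * (x_new - x)   # momentum step
        x = x_new
    return x

d = np.array([1.0, 100.0])           # ill-conditioned diagonal quadratic
x_opt = nesterov_restart(lambda z: d * z, np.ones(2), 1.0 / d.max(), 3000)
assert np.linalg.norm(x_opt) < 1e-6
```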
**Regularity for exponential convergence:** Generally speaking, the exponential convergence rate and the corresponding regularity requirements on the function $f$ are two crucial metrics in fast methods. In what follows, we discuss these metrics for three popular fast methods in the literature. (Notice that these fast methods are in general designed for wider classes of functions and not limited to the specific cases reported below.) When the objective functions are strongly convex with a constant $\sigma_f$ and their gradient is Lipschitz with a constant $L_f$, [@Su2016differential] proposes the “[speed restarting]{}" scheme $$\text{sup}\Big\{ t>0:~\forall \tau\in(0,t),{\small \frac{d\| \dot{X}(\tau) \|^2}{d\tau}}>0 \Big\},$$ to achieve the convergence rate $$f\big(X(t)\big)-f^* \leq d_1 e^{-d_2 t} \| X(0) -X^* \|^2.$$ The positive scalars $d_1$ and $d_2$ depend on the constants $\sigma_f$ and $L_f$. Assuming the convexity of the function $f$ with a certain choice of parameters in their “[ideal scaling]{}" condition, [@Wibisono2016variational] uses the dynamics $$\begin{aligned}
\ddot{X}(t)+c\dot{X}(t)
+c^2e^{ct} \Big( \nabla^2 h\big(X(t)+\frac{1}{c}\dot{X}(t)\big) \Big)^{-1}\nabla f\big(X(t)\big)=0,\end{aligned}$$ and guarantees the convergence rate of $\mathcal{O}(e^{-ct})$ for some positive scalar $c$, where the function $h$ is a distance generating function. Under a uniform convexity assumption with a constant $\nu_f$, it is further shown that $$f\big(X(t)\big)-f^* \leq \Big(f\big(X(0)\big)-f^*\Big) e^{-\nu_f \frac{1}{p-1}t},$$ where $p-1$ is the order of smoothness of $f$. The authors in [@wilson2016lyapunov] introduce the Lyapunov function $$\mathcal{E}(t)=e^{\beta(t)}\left( f\big(X(t)\big)-f^*+\frac{\sigma_f}{2} \| X^*-Z(t) \|^2 \right),$$ to guarantee the rate of convergence $$\mathcal{E}(t) \leq \mathcal{E}(0) e^{-\int \dot{\beta}(s) ds},$$ where $Z(t)=X(t)+\frac{1}{\dot{\beta}(t)}\dot{X}$, $\dot{Z}(t)=-\dot{X}(t)-\frac{1}{\sigma_f} \dot{\beta}(t)\nabla f\big(X(t) \big)$, and $\beta(t)$ is a user-defined function.
**Statement of hypothesis:** Many of the references reviewed above (excluding, e.g., [@attouch2016fast] and [@krichene2016adaptive]) primarily deal with constructing a time-dependent damping term $\gamma(t)$ that is sometimes tied to a Lyapunov function. Furthermore, due to the underlying oscillatory behavior of the corresponding 2nd-order ODE, researchers utilize restarting schemes to overwrite the steady-state non-monotonic regime with the transient monotonic regime of the dynamics. In general, notice that these schemes are based on time-dependent schedulers.
With the above argument in mind, let us view an algorithm as a unit point mass moving in a potential field caused by an objective function $f$ under a parametric (or possibly constant) viscosity, similar to the second order ODE . In this view, we aim to address the following two questions:
Is it possible to
1. synthesize the damping term $\gamma$ as a state-dependent term (i.e., $\gamma(X,\dot X)$), or
2. dynamically control the magnitude of the potential force $\nabla f(X)$,
such that the underlying properties of the optimization algorithm are improved?
**Contribution:** In this paper, we answer these questions by amending the 2nd-order ODE in two ways as follows: $$\begin{aligned}
\text{(I)} &~ \ddot{X}(t)+u_{\textbf{I}}\big(X(t),\dot{X}(t)\big)~\dot{X}(t)+\nabla f(X(t))=0,\\
\text{(II)} &~ \ddot{X}(t)+\dot{X}(t)+u_{\textbf{II}}\big(X(t),\dot{X}(t)\big)~\nabla f(X(t))=0,\end{aligned}$$ where the indices indicate to which question each structure is related to in the above hypothesis. Evidently, in the first structure, the state-dependent input $u_{\textbf{I}}$ replaces the time-dependent damping $\gamma$ in (\[dyn2\]). While in the second structure, the feedback input $u_{\textbf{II}}$ dynamically controls the magnitude with which the potential force enters the dynamics (we assume for simplicity of exposition that $\gamma (t)=1$, however, one can modify our proposed framework and following a similar path develop the corresponding results for the case $\gamma(t)\neq 1$). Let $f$ be a twice differentiable function that satisfies the so-called Polyak–[Ł]{}ojasiewicz (PL) inequality (see Assumption (\[d\_1\])). Given a positive scalar $\alpha$, we seek to achieve an exponential rate of convergence $\mathcal{O}(e^{-\alpha t})$ for an unconstrained, smooth optimization problem in the suboptimality measure $f\big(X(t)\big)-f^*$. To do so, we construct the state-dependent feedback laws for each structure as follows: $$\begin{aligned}
u_{\textbf{I}}\big(X(t),\dot{X}(t)\big) :=
\alpha + \frac{\| \nabla f(X(t)) \|^2 - \langle \nabla^2 f\big(X(t)\big) \dot{X}(t), \dot{X}(t) \rangle}{\langle \nabla f\big(X(t)\big), -\dot{X}(t) \rangle},\end{aligned}$$ $$\begin{aligned}
u_{\textbf{II}}\big(X(t),\dot{X}(t)\big) :=
\frac{ \langle \nabla^2 f\big(X(t)\big) \dot{X}(t), \dot{X}(t) \rangle +(1 - \alpha) \langle \nabla f\big(X(t)\big), -\dot{X}(t) \rangle}{\| \nabla f(X(t)) \|^2 }.\end{aligned}$$ Motivated by restarting schemes, we further extend the class of dynamics to hybrid control systems (see Definition \[def\_hyb\] for further details) in which both of the above ODE structures play the role of the *continuous flow* in their respective hybrid dynamical extension. We next suggest an admissible control input range $[u_{\min},u_{\max}]$ that determines the *flow set* of each hybrid system. Based on the model parameters $\alpha$, $u_{\min}$, and $u_{\max}$, we then construct the *jump map* of each hybrid control system by the mapping $\big(X^\top,-\beta \nabla^\top f(X)\big)^\top$ guaranteeing that the range space of the jump map is contained in its respective flow set. Notice that the velocity restart schemes take the form of $\dot{X}=-\beta \nabla f(X)$.
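To see the role of the feedback $u_{\textbf{I}}$, one can verify that substituting it into $\ddot{X}+u_{\textbf{I}}\dot{X}+\nabla f(X)=0$ forces the quantity $\langle \nabla f(X), -\dot{X}\rangle$ to decay at exactly the rate $\alpha$, wherever the denominator $\langle \nabla f(X), -\dot{X}\rangle$ is nonzero. A scalar symbolic sketch of this computation, with the illustrative (non-quadratic) test objective $f(X)=X^4+X^2$:

```python
import sympy as sp

# Scalar check: along  X'' + u_I(X, V) V + f'(X) = 0  with V = X', the
# feedback u_I makes  E = <f'(X), -V>  satisfy  dE/dt = -alpha * E  on the
# set where <f'(X), -V> is nonzero. The objective below is illustrative.
X, V, alpha = sp.symbols('X V alpha')
f = X**4 + X**2
fp, fpp = sp.diff(f, X), sp.diff(f, X, 2)

u_I = alpha + (fp**2 - fpp * V**2) / (-fp * V)   # Structure I feedback
A = -u_I * V - fp                                # closed-loop acceleration
E = -fp * V
dE = sp.diff(E, X) * V + sp.diff(E, V) * A       # chain rule along the flow

assert sp.simplify(dE + alpha * E) == 0
```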
This paper extends the results of [@armanICML] in several ways which are summarized as follows:
- We synthesize a state-dependent gradient coefficient ($u_{\textbf{II}}(x)$) given a prescribed control input bound and a desired convergence rate (Theorem \[theo\_step\_conv\]). This is a complementary result to our earlier study \[30\] which is concerned with a state-dependent damping coefficient ($u_{\textbf{I}}(x)$). Notice that the state-dependent feature of our proposed dynamical systems differs from commonly time-dependent methodologies in the literature.
- We derive a lower bound on the time between two consecutive jumps for each hybrid structure. This ensures that the constructed hybrid systems admit the so-called Zeno-free solution trajectories. It is worth noting that the regularity assumptions required by the proposed structures are different (Theorems \[theo\_zeno\] and \[theo\_step\_zeno\]).
- The proposed frameworks are general enough to include a subclass of non-convex problems. Namely, the critical requirement is that the objective function $f$ satisfies the Polyak–[Ł]{}ojasiewicz (PL) inequality (Assumption (\[d\_1\])), which is a weaker regularity assumption than the strong convexity that is often assumed in this context for exponential convergence.
- We utilize the *forward-Euler* method to discretize both hybrid systems (i.e., obtain optimization algorithms). We further provide a mechanism to compute the step size such that the corresponding discrete dynamics have an exponential rate of convergence (Theorem \[theo\_2\]).
The remainder of this paper is organized as follows. In Section \[sec:notation\], the mathematical preliminaries are presented. The main results of the paper are introduced in Section \[sec:mainres\]. Section \[sec:proofs\] contains the proofs of the main results. We introduce a numerical example in Section \[sec:examp\]. This paper is finally concluded in Section \[sec:conc\].
**Notations:** The sets $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the $n$-dimensional Euclidean space and the space of $m\times n$ dimensional matrices with real entries, respectively. For a matrix $M\in\mathbb{R}^{m\times n}$, $M^\top$ is the transpose of $M$, $M\succ0$ ($\prec0$) refers to $M$ positive (negative) definite, $M\succeq0$ ($\preceq0$) refers to $M$ positive (negative) semi-definite, and $\lambda_{\max}(M)$ denotes the maximum eigenvalue of $M$. The $n\times n$ identity matrix is denoted by $I_n$. For a vector $v\in\mathbb{R}^n$ and $i\in\{1,\cdots,n \}$, $v_i$ represents the $i$-th entry of $v$ and $\| v \|:=\sqrt{\Sigma_{i=1}^n~v_i^2}$ is the Euclidean 2-norm of $v$. For two vectors $x,y\in\mathbb{R}^n$, $\langle x,y \rangle:=x^\top y$ denotes the Euclidean inner product. For a matrix $M$, $\| M \|:=\sqrt{\lambda_{\max}(M^\top M)}$ is the induced 2-norm. Given the set $S\subseteq \mathbb{R}^n$, $\partial S$ and $\text{int}(S)$ represent the boundary and the interior of $S$, respectively.
Preliminaries {#sec:notation}
=============
We briefly recall some notions from hybrid dynamical systems that we will use to develop our results. We state the standing assumptions related to the optimization problem to be tackled in this paper. The problem statement is then introduced. We adapt the following definition of a hybrid control system from [@goebel2012hybrid] that is sufficient in the context of this paper.
\[def\_hyb\] A time-invariant hybrid control system $\mathcal{H}$ comprises a controlled ODE and a jump (or a reset) rule introduced as: $$\tag{$\mathcal{H}$}
\label{p1}
\left\lbrace
\begin{array}{lllc}
\dot{x} & = & F\big(x,u(x)\big), & x \in \mathcal{C}\\
x^+ & = & G(x), & \text{otherwise},
\end{array}
\right.$$ where $x^+$ is the state of the hybrid system after a jump, the function $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ denotes a feedback signal, the function $F:\mathbb{R}^n\times\mathbb{R}^m\rightarrow\mathbb{R}^n$ is the flow map, the set $\mathcal{C}\subseteq \mathbb{R}^n$ is the flow set, and the function $G:\partial \mathcal{C}\rightarrow$ *int*$(\mathcal{C})$ represents the jump map.
Notice that the jump map $G(x)$ will be activated as soon as the state $x$ reaches the boundary of the flow set $\mathcal{C}$, that is $\partial \mathcal{C}$. In hybrid dynamical systems, the notion of *Zeno behavior* refers to the phenomenon that an infinite number of jumps occur in a bounded time interval. We then call a solution trajectory of a hybrid dynamical system Zeno-free if the number of jumps within any finite time interval is bounded. The existence of a lower bound on the time interval between two consecutive jumps suffices to guarantee the Zeno-freeness of a solution trajectory of a hybrid control system. Nonetheless, there exist solution concepts in the literature that accept Zeno behaviors, see for example [@aubin2002impulse; @goebel2012hybrid; @goebel2006solutions; @lygeros2003dynamical] and the references therein.
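The definition above translates into a simple simulation loop: flow forward while the state remains in $\mathcal{C}$, and apply the jump map on exit. The toy flow map, jump map, and flow set below are illustrative (unit-speed flow on $\mathcal{C}=\{x<1\}$ with reset to the origin), not the optimization dynamics of this paper:

```python
# Minimal forward-Euler simulator for a hybrid control system (H):
# integrate x' = F(x, u(x)) inside the flow set C, apply x+ = G(x) on exit.
def simulate_hybrid(flow, jump, in_flow_set, x0, h, T):
    x, t, jumps = x0, 0.0, 0
    while t < T:
        if in_flow_set(x):
            x = x + h * flow(x)        # continuous flow
        else:
            x = jump(x)                # reset rule
            jumps += 1
        t += h
    return x, jumps

x_end, jumps = simulate_hybrid(lambda x: 1.0, lambda x: 0.0,
                               lambda x: x < 1.0, 0.0, 1e-3, 3.5)
assert jumps == 3                      # finitely many jumps: Zeno-free here
```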
Consider the following class of unconstrained optimization problems: $$%\tag{P}
\label{pf1}
f^*:=\underset{X\in\mathbb{R}^n}{\min} f(X),$$ where $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is an objective function.
\[def\_10\] We stipulate that the objective function $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is twice differentiable and fulfills the following
- (Bounded Hessian) The Hessian of function $f$, denoted by $\nabla^2 f(x)$, is uniformly bounded, i.e., $$\tag{A1}
\label{p2}
-\ell_f I_n \preceq \nabla^2 f(x) \preceq L_f I_n,$$ where $\ell_f$ and $L_f$ are non-negative constants.
- (Gradient dominated) The function $f$ satisfies the Polyak-[Ł]{}ojasiewicz inequality with a positive constant $\mu_f$, i.e., for every $x$ in $\mathbb{R}^n$ we have $$\tag{A2}
\label{d_1}
\frac{1}{2} \big\| \nabla f(x) \big\|^2 \geq \mu_f \big(f(x)-f^*\big),$$ where $f^*$ is the minimum value of $f$ on $\mathbb{R}^n$.
- (Lipschitz Hessian) The Hessian of the function $f$ is Lipschitz, i.e., for every $x,y$ in $\mathbb{R}^n$ we have $$\begin{aligned}
\tag{A3}
\label{z6}
\big\| \nabla^2 f(x) - \nabla^2 f(y) \big\| \leq H_f \| x - y \|,\end{aligned}$$ where $H_f$ is a positive constant.
We now formally state the main problem to be addressed in this paper:
\[prob1\] Consider the unconstrained optimization problem (\[pf1\]) where the objective function $f$ is twice differentiable. Given a positive scalar $\alpha$, design a fast gradient-based method in the form of a hybrid control system (\[p1\]) with $\alpha$-exponential convergence rate, i.e. for any initial condition $X(0)$ and any $t \geq 0$ we have $$f\big(X(t)\big)-f^*\leq e^{-\alpha t} \Big(f\big(X(0)\big)-f^* \Big),$$ where $\{X(t)\}_{t \geq0}$ denotes the solution trajectory of the system (\[p1\]).
\[rem\_lip\] Since the function $f$ is twice differentiable, Assumption (\[p2\]) implies that the function $\nabla f$ is also Lipschitz with a positive constant $L_f$, i.e., for every $x, y$ in $\mathbb{R}^n$ we have $$\label{p2_g}
\big\| \nabla f(x)-\nabla f(y) \big\| \leq L_f \| x-y\|.$$
We now collect two remarks underlining some features of the set of functions that satisfy (\[d\_1\]).
The PL inequality in general does not imply the convexity of a function but rather the invexity of it. The notion of invexity was first introduced by [@Hanson1981]. The PL inequality (\[d\_1\]) implies that the suboptimality measure $f-f^*$ grows at most as a quadratic function of $\nabla f$.
While the PL inequality does not require the uniqueness of the stationary points of a function (i.e., $\{x: \nabla f(x)=0 \}$), it ensures that all stationary points of the function $f$ are global minimizers [@CravenGlover1985].
We close our preliminary section with a couple of popular examples borrowed from [@Karimi2016].
The composition of a strongly convex function and a linear function satisfies the PL inequality. This class includes a number of important problems such as least squares, i.e., $f(x)=\| Ax -b \|^2 $ (obviously, strongly convex functions also satisfy the PL inequality). Any strictly convex function over a compact set satisfies the PL inequality. As such, the log-loss objective function in logistic regression, i.e., $f(x)=\sum_{i=1}^n\log\big(1+\text{exp}(b_ia_i^\top x)\big)$, locally satisfies the PL inequality.
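The least-squares claim can also be verified numerically. The sketch below uses illustrative random data; for a full-column-rank $A$, a valid PL constant is $\mu_f = 2\sigma_{\min}(A)^2$, which follows from $\|A^\top r\| \geq \sigma_{\min}(A)\|r\|$ for $r$ in the range of $A$.

```python
import numpy as np

# f(x) = ||Ax - b||^2 satisfies the PL inequality with mu_f = 2*sigma_min(A)^2
# when A has full column rank; illustrative data, fixed seed for reproducibility.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

f = lambda x: np.sum((A @ x - b) ** 2)
grad = lambda x: 2 * A.T @ (A @ x - b)

f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])           # minimum value of f
mu_f = 2 * np.linalg.svd(A, compute_uv=False)[-1] ** 2     # 2*sigma_min(A)^2

for _ in range(1000):
    x = 10 * rng.standard_normal(3)
    # PL inequality: 0.5*||grad f(x)||^2 >= mu_f * (f(x) - f*)
    assert 0.5 * np.sum(grad(x) ** 2) >= mu_f * (f(x) - f_star) - 1e-9
```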
Main Results {#sec:mainres}
============
In this section, the main results of this paper are provided. We begin by introducing two types of structures for the hybrid system (\[p1\]), motivated by the dynamics of fast gradient methods [@Su2016differential]. Given a positive scalar $\alpha$, these structures, indexed by **I** and **II**, enable achieving the rate of convergence $\mathcal{O}(e^{-\alpha t})$ in the suboptimality measure $f\big(x_1(t)\big)-f^*$. We then collect several remarks highlighting the shared implications of the two structures, along with a naive time-discretization of these structures. The technical proofs are presented in Section \[sec:proofs\]. For notational simplicity, we write $x = (x_1,x_2)$, where the variables $x_1$ and $x_2$ represent the system trajectories $X$ and $\dot{X}$, respectively.
Structure **I**: state-dependent damping coefficient {#sec:par_I}
----------------------------------------------------
The description of the first structure follows. We start with the flow map $F_{\textbf{I}}:\mathbb{R}^{2n}\times \mathbb{R}\rightarrow\mathbb{R}^{2n}$ defined as
\[sH\] $$\begin{aligned}
\label{s1}
F_{\textbf{I}}\big(x,u_{\textbf{I}}(x)\big)=
\left(
\begin{aligned}
x& _2\\
-\nabla f &(x_1)
\end{aligned}
\right)+\left(
\begin{aligned}
0~&\\
-x& _2
\end{aligned}
\right)u_{\textbf{I}}(x).\end{aligned}$$ Notice that $F_{\textbf{I}}(\cdot,\cdot)$ is the state-space representation of a 2nd-order ODE. The feedback law $u_{\textbf{I}}:\mathbb{R}^{2n}\rightarrow \mathbb{R}$ is given by $$%\tag{S1}
\label{s8_1}
u_{\textbf{I}}(x) = \alpha + \frac{\| \nabla f(x_1) \|^2 - \langle \nabla^2 f(x_1) x_2, x_2 \rangle}{\langle \nabla f(x_1), -x_2 \rangle}.$$ Intuitively, the control input $u_{\textbf{I}}(x)$ is designed such that the flow map $F_{\textbf{I}}\big(x,u_{\textbf{I}}(x)\big)$ renders a level set $\sigma(t):=\langle \nabla f\big(x_1(t)\big),x_2(t) \rangle +\alpha\big(f\big(x_1(t)\big)-f^*\big)$ invariant, i.e., $\frac{d}{dt}\sigma(t)=0$. Next, the candidate flow set $\mathcal{C}_{\textbf{I}} \subset \mathbb{R}^{2n}$ is characterized by an admissible input interval $[\ul{u}_{\textbf{I}}~\ol{u}_{\textbf{I}}]$, i.e., $$\label{s8_2}
\mathcal{C}_{\textbf{I}} = \big\{x\in\mathbb{R}^{2n}:~ u_{\textbf{I}}(x)\in [\ul{u}_{\textbf{I}},\ol{u}_{\textbf{I}}] \big\},$$ where the interval bounds $\ul{u}_{\textbf{I}},\ol{u}_{\textbf{I}}$ represent the range of admissible control values. Notice that the flow set $\mathcal{C}_{\textbf{I}}$ is the domain in which the hybrid system (\[p1\]) can evolve continuously. Finally, we introduce the jump map $G_{\textbf{I}}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}$ parameterized by a constant $\beta_{\textbf{I}}$ $$\begin{aligned}
%\tag{S2}
\label{s8}
G_{\textbf{I}}(x)=\left(\begin{aligned}
x& _1 \\
-\beta_{\textbf{I}} \nabla & f(x_1)
\end{aligned}\right).\end{aligned}$$ The parameter $\beta_{\textbf{I}}$ ensures that the range space of the jump map $G_{\textbf{I}}$ is a strict subset of $\text{int}(\mathcal{C}_{\textbf{I}})$. By construction, one can verify that any neighborhood of the optimizer $x_1^*$ has a non-empty intersection with the flow set $\mathcal{C}_{\textbf{I}}$. That is, there always exist paths in the set $\mathcal{C}_{\textbf{I}}$ along which the continuous evolution of the hybrid system approaches arbitrarily close to the optimizer.
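For completeness, the invariance property used to design (\[s8\_1\]) can be verified by a direct computation along the flow (\[s1\]): since $\dot{x}_2 = -\nabla f(x_1) - u_{\textbf{I}}(x) x_2$, we have $$\begin{aligned}
\frac{d}{dt}\sigma(t) &= \langle \nabla^2 f(x_1) x_2, x_2 \rangle + \langle \nabla f(x_1), \dot{x}_2 \rangle + \alpha \langle \nabla f(x_1), x_2 \rangle \\
&= \langle \nabla^2 f(x_1) x_2, x_2 \rangle - \big\| \nabla f(x_1) \big\|^2 - \big( u_{\textbf{I}}(x) - \alpha \big) \langle \nabla f(x_1), x_2 \rangle,\end{aligned}$$ so that imposing $\frac{d}{dt}\sigma(t) = 0$ and solving for $u_{\textbf{I}}(x)$ recovers exactly the feedback law (\[s8\_1\]).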
We are now in a position to formally present the main results related to the structure **I** given in . For the sake of completeness, we borrow the first result from [@armanICML]. This theorem provides a framework to set the parameters $\ul{u}_{\textbf{I}}$, $\ol{u}_{\textbf{I}}$, and $\beta_{\textbf{I}}$ in (\[s8\_2\]) and (\[s8\]) in order to ensure the desired exponential convergence rate $\mathcal{O}(e^{-\alpha t})$.
\[theo\_1b\] Consider a positive scalar $\alpha$ and a smooth function $f: \mathbb{R}^n\rightarrow \mathbb{R}$ satisfying Assumptions (\[p2\]) and (\[d\_1\]). Then, the solution trajectory of the hybrid control system (\[p1\]) with the respective parameters (\[sH\]) starting from any initial condition $x_1(0)$ satisfies $$\label{eqt_8b}
f\big(x_1(t)\big)-f^* \leq e^{-\alpha t} \Big( f\big(x_1(0)\big)-f^* \Big), \quad \forall t \geq0,$$ if the scalars $\ul{u}_{\textbf{I}}$, $\ol{u}_{\textbf{I}}$, and $\beta_{\textbf{I}}$ are chosen such that
\[eqt\_1b\] $$\begin{aligned}
\ul{u}_{\textbf{I}} & < \alpha+ \beta_{\textbf{I}}^{-1}-L_f\beta_{\textbf{I}}, \label{eqt_1b1}\\
\label{eqt_1b2}
\ol{u}_{\textbf{I}} & > \alpha+ \beta_{\textbf{I}}^{-1}+\ell_f\beta_{\textbf{I}}, \\
\alpha & \leq 2 \mu_f \beta_{\textbf{I}}. \label{eqt_1b3}\end{aligned}$$
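As an illustration of the theorem above, the following self-contained sketch simulates the hybrid system with structure **I** by forward-Euler integration. The quadratic objective, parameter values, and step size are illustrative choices (not from the paper), selected so that the conditions (\[eqt\_1b\]) hold with $\mu_f=1$, $\ell_f=L_f=4$.

```python
import numpy as np

# Forward-Euler sketch of the hybrid system with structure I on an
# illustrative quadratic f(x) = 0.5 x^T Q x, so f* = 0, mu_f = 1, l_f = L_f = 4.
Q = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

alpha, beta = 1.0, 1.0    # alpha <= 2*mu_f*beta
u_lo, u_hi = -3.0, 7.0    # u_lo < alpha + 1/beta - L_f*beta,  u_hi > alpha + 1/beta + l_f*beta
dt, T = 1e-3, 5.0

x1 = np.array([1.0, -1.0])
x2 = -beta * grad(x1)     # post-jump initial condition
f0 = f(x1)
for _ in range(int(T / dt)):
    g = grad(x1)
    denom = g @ (-x2)
    # feedback law u_I(x); an out-of-range value triggers a jump
    u = alpha + (g @ g - x2 @ Q @ x2) / denom if denom > 1e-12 else np.inf
    if u_lo <= u <= u_hi:   # flow: dx1/dt = x2, dx2/dt = -grad f(x1) - u*x2
        x1, x2 = x1 + dt * x2, x2 + dt * (-g - u * x2)
    else:                   # jump: x2 reset by the jump map G_I
        x2 = -beta * grad(x1)

assert f(x1) <= 0.2 * f0    # decay consistent with f(x1(T)) <= exp(-alpha*T)*f0, up to Euler error and jump overhead
```

Jumps reset $x_2$ to $-\beta_{\textbf{I}}\nabla f(x_1)$, which by construction lands strictly inside the admissible input interval, so flow always resumes after a jump.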
The next result establishes a key feature of the solution trajectories generated by the dynamics (\[p1\]) with the respective parameters (\[sH\]), namely that the solution trajectories are indeed *Zeno*-free.
\[theo\_zeno\] Consider a smooth function $f: \mathbb{R}^n\rightarrow \mathbb{R}$ satisfying Assumption \[def\_10\], and the corresponding hybrid control system with the respective parameters satisfying . Given the initial condition $\Big(x_1(0),-\beta_{\textbf{I}} \nabla f\big(x_1(0)\big) \Big)$ the time between two consecutive jumps of the solution trajectory, denoted by $\tau_{\textbf{I}}$, satisfies for any scalar $r>1$ $$\begin{aligned}
\label{z_t2_03}
\tau_{\textbf{I}} \ge \log \left(\min{\bigg\{\frac{a_1}{a_2+ a_3 \big\|\nabla f\big(x_{1}(0)\big)\big\| } +1, r \bigg\}^{1/\delta}}\right), \end{aligned}$$ where the involved constants are defined as
$$\begin{aligned}
C & :=\frac{(\ol{u}_{\textbf{I}} - \alpha) + \sqrt{(\ol{u}_{\textbf{I}} - \alpha)^2 + 4L_f} }{2}, \label{z_t2_2}\\
\delta & := C + \max \{\ol{u}_{\textbf{I}}, -\ul{u}_{\textbf{I}} \}, \label{z_t2_3} \\
\mathcal{L}_f & := \max\{\ell_f, L_f \}, \label{z_t2_30} \\
a_1 &:= \min \{ \ol{u}_{\textbf{I}}-(\alpha + \beta_{\textbf{I}}^{-1}+\ell_f \beta_{\textbf{I}}), (\alpha+\beta_{\textbf{I}}^{-1}-L_f \beta_{\textbf{I}})-\ul{u}_{\textbf{I}} \}, \label{z_t2_00} \\
a_2 &:= r L_f \delta^{-1} (r \beta_{\textbf{I}} C + 1 ) + \beta_{\textbf{I}}^{-1} + (r^2+r+1) \beta_{\textbf{I}} \mathcal{L}_f, \label{z_t2_01}\\
a_3 &:= r^3 \beta_{\textbf{I}}^2 H_f \delta^{-1}. \label{z_t2_02}
\end{aligned}$$
Consequently, the solution trajectories are Zeno-free.
\[rem\_41\] Notice that Theorem \[theo\_zeno\] suggests a lower-bound for the inter-jump interval $\tau_{\textbf{I}}$ that depends on $\| \nabla f\big(x_1\big)\|$. Since the solution trajectories converge to the optimal solutions, and as such $\nabla f\big(x_1\big)$ tends to zero, one can expect the frequency at which the jumps occur to decrease as the hybrid control system evolves in time.
Structure **II**: state-dependent potential coefficient {#sec:par_II}
-------------------------------------------------------
\[sH\_step\] In this subsection, we first provide the structure **II** for the hybrid control system . We skip the details of the differences from the structure **I** and defer them to Subsection \[sec:par\_I\] and Section \[sec:proofs\]. Consider the flow map $F_{\textbf{II}}:\mathbb{R}^{2n}\times \mathbb{R}\rightarrow\mathbb{R}^{2n}$ given by $$\begin{aligned}
\label{step_01}
F_{\textbf{II}}\big(x,u_{\textbf{II}}(x)\big)=
\left(
\begin{aligned}
~x&_2\\
-&x_2
\end{aligned}
\right)+\left(
\begin{aligned}
0~&\\
-\nabla f &(x_1)
\end{aligned}
\right)u_{\textbf{II}}(x),\end{aligned}$$ and the feedback law $u_{\textbf{II}}:\mathbb{R}^{2n}\rightarrow \mathbb{R}$ given by $$\begin{aligned}
\label{step_02}
u_{\textbf{II}} (x) = \frac{ \langle \nabla^2 f(x_1) x_2, x_2 \rangle + ( 1- \alpha) \langle \nabla f(x_1), - x_2 \rangle }{ \| \nabla f(x_1) \|^2 }.\end{aligned}$$ Notice that here the input $u_{\textbf{II}} (x)$ is derived along the same lines as in structure **I**. The feedback input $u_{\textbf{II}} (x)$ is synthesized such that the level set $\sigma(t):=\langle \nabla f\big(x_1(t)\big),x_2(t) \rangle +\alpha\big(f\big(x_1(t)\big)-f^*\big)$ remains constant as the state $x$ evolves according to the flow map $F_{\textbf{II}}\big(x,u_{\textbf{II}}(x)\big)$. The candidate flow set $\mathcal{C}_{\textbf{II}} \subset \mathbb{R}^{2n}$ is parameterized by an admissible interval $[\ul{u}_{\textbf{II}}~\ol{u}_{\textbf{II}}]$ as follows: $$\begin{aligned}
\label{step_03}
\mathcal{C}_{\textbf{II}} = \left\lbrace x\in \mathbb{R}^{2n}:~ u_{\textbf{II}}(x) \in [\ul{u}_{\textbf{II}},\ol{u}_{\textbf{II}}] \right\rbrace.\end{aligned}$$ Parameterized in a constant $\beta_{\textbf{II}}$, the jump map $G_{\textbf{II}}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}$ is given by $$\begin{aligned}
\label{step_04}
G_{\textbf{II}}(x)= \left(
\begin{aligned}
x&_1 \\
-\beta_{\textbf{II}} \nabla & f(x_1)
\end{aligned}
\right).\end{aligned}$$
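The invariance of $\sigma$ under the feedback law (\[step\_02\]) can be checked numerically. The quadratic objective and state values below are illustrative; since $\sigma$ is invariant along the exact flow, a single forward-Euler step perturbs it only at second order in the step size.

```python
import numpy as np

# Check d(sigma)/dt = 0 under the structure-II flow on an illustrative
# quadratic f(x) = 0.5 x^T Q x (so f* = 0 and the Hessian is Q).
Q = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
alpha = 0.7

sigma = lambda x1, x2: grad(x1) @ x2 + alpha * f(x1)

x1 = np.array([1.0, -1.0])
x2 = np.array([0.3, 0.2])
g = grad(x1)
# feedback law (step_02)
u = (x2 @ Q @ x2 + (1 - alpha) * (g @ (-x2))) / (g @ g)

dt = 1e-5
x1_next = x1 + dt * x2                # flow map F_II, first block
x2_next = x2 + dt * (-x2 - u * g)     # flow map F_II, second block

# sigma is invariant along the exact flow, so the Euler step drifts only O(dt^2)
assert abs(sigma(x1_next, x2_next) - sigma(x1, x2)) < 1e-8
```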
\[theo\_step\_conv\] Consider a positive scalar $\alpha$ and a smooth function $f: \mathbb{R}^n\rightarrow \mathbb{R}$ satisfying Assumptions (\[p2\]) and (\[d\_1\]). Then, the solution trajectory of the hybrid control system (\[p1\]) with the respective parameters starting from any initial condition $x_1(0)$ satisfies the inequality if the scalars $\ul{u}_{\textbf{II}}$, $\ol{u}_{\textbf{II}}$, and $\beta_{\textbf{II}}$ are chosen such that
\[step\_05\] $$\begin{aligned}
\ul{u}_{\textbf{II}} & < -\ell_f \beta_{\textbf{II}}^2 + (1 - \alpha) \beta_{\textbf{II}}, \label{step_05_2} \\
\ol{u}_{\textbf{II}} & > L_f \beta_{\textbf{II}}^2 + (1 - \alpha) \beta_{\textbf{II}}, \label{step_05_1} \\
\alpha & \leq 2 \mu_f \beta_{\textbf{II}}. \label{step_05_3} \end{aligned}$$
\[theo\_step\_zeno\] Consider a smooth function $f: \mathbb{R}^n\rightarrow \mathbb{R}$ satisfying Assumptions and , and the hybrid control system with the respective parameters satisfying . Given the initial condition $\Big(x_1(0),-\beta_{\textbf{II}} \nabla f\big(x_1(0)\big) \Big)$ the time between two consecutive jumps of the solution trajectory, denoted by $\tau_{\textbf{II}}$, satisfies for any scalar $r \in (0,1)$ $$\begin{aligned}
\label{step_3}
\tau_{\textbf{II}} \geq \min\left\{ r\omega^{-1}, \delta ( b_1+b_2)^{-1} \right\},\end{aligned}$$ where the involved scalars are defined as $$\begin{aligned}
\delta &:=\min\big\{\ol{u}_{\textbf{II}}-(L_f\beta_{\textbf{II}}^2+(1-\alpha)\beta_{\textbf{II}}), (-\ell_f\beta_{\textbf{II}}^2+(1-\alpha)\beta_{\textbf{II}})-\ul{u}_{\textbf{II}} \big\}, \\
U &:=\max \{\ol{u}_{\textbf{II}}, -\ul{u}_{\textbf{II}}\},\\
\mathcal{L}_f &:=\max \{ \ell_f, L_f \},\\
\omega &:=\mathcal{L}_f (\beta_{\textbf{II}}^2+\beta_{\textbf{II}} U)^{\frac{1}{2}},\\
b_1 & := \frac{2 \mathcal{L}_f \beta_{\textbf{II}} \big( U + \omega (\beta_{\textbf{II}} +U) \big) }{(1-r)^3}, \\
b_2& := |\alpha - 1| \frac{ 2 \omega \beta_{\textbf{II}} }{(1-r)^3} + |\alpha - 1| \alpha \beta_{\textbf{II}} (1+r).\end{aligned}$$ Thus, the solution trajectories are Zeno-free.
\[rem\_42\] Notice that unlike Theorem \[theo\_zeno\], the derived lower-bound for the inter-jump interval $\tau_{\textbf{II}}$ is uniform in the sense that the bound is independent of $\| \nabla f\big(x_1\big)\|$. Furthermore, the regularity requirement on the function $f$ is weaker than the one used in Theorem \[theo\_zeno\], i.e., the function $f$ is not required to satisfy the Assumption .
Notice that the main differences between the two structures lie in the flow maps and the feedback laws. On the other hand, these structures share the key feature of enabling an $\alpha$-exponential convergence rate for the hybrid system through their corresponding control inputs. The explanation of these points is deferred to Section \[sec:proofs\].
Further Discussions {#sec:further}
-------------------
In what follows, we collect several remarks regarding the common features of the proposed structures. Then, we apply the *forward-Euler* method of time-discretization to these structures of the hybrid control system . The proposed discretizations guarantee an exponential rate of convergence in the suboptimality measure $f(x_1^k)-f^*$, where $k$ is the iteration index.
\[rem\_2\] The PL inequality is a weaker requirement than strong convexity. Notice that although functions satisfying the PL inequality are in general non-convex, their set of minimizers is still a convex set.
\[rem\_3\] The hybrid frameworks intrinsically capture restarting schemes through the jump map. The restarting scheme is a weighted gradient step whose weight factor $\beta_{\textbf{I}}$ or $\beta_{\textbf{II}}$ is essentially characterized by the given data $\alpha$, $\mu_f$, $\ell_f$, and $L_f$. One may verify that the constant $\beta_{\textbf{I}}$ or $\beta_{\textbf{II}}$ can in fact be introduced as a state-dependent weight factor to potentially improve the performance. Nonetheless, for the sake of simplicity of exposition, we do not pursue this level of generality in this paper.
\[rem\_40\] Although our proposed frameworks require 2nd-order information, i.e., the Hessian $\nabla^2 f$, this requirement only appears in a mild form as an evaluation in the same spirit as the modified Newton step proposed in [@nesterov2006cubic]. Furthermore, we emphasize that our results still hold true if one replaces $\nabla^2 f(x_1)$ with its upper-bound $L_f I_n$ following essentially the same analysis. For further details we refer the reader to the proof of Theorem \[theo\_step\_conv\].
\[rem\_step\_input\] An implication of Theorem \[theo\_step\_conv\] is that if the desired convergence rate $\alpha > \big(\frac{2\mu_f}{2\mu_f+\ell_f}\big)$, it is then required to choose $\ul{u}_{\textbf{II}}<0$, indicating that the system may need to receive energy through a negative damping. On a similar note, Theorem \[theo\_1b\] asserts that the upper bound requires $\ol{u}_{\textbf{I}}>\alpha$, and if $\alpha > \big(\frac{2 \mu_f}{\sqrt{\max\{L_f-2\mu_f,0\}}}\big)$, we then have to set $\ul{u}_{\textbf{I}}<0$ [@armanICML Remark 3.4].
Discrete-Time Dynamics
----------------------
In the next result, we show that if one applies the forward-Euler method on the two proposed structures properly, the resulting discrete-time hybrid control systems possess exponential convergence rates. Suppose $i\in\{\textbf{I},\textbf{II}\}$ and let us denote by $s$ the time-discretization step size. Consider the discrete-time hybrid control system $$\label{d1}
x^{k+1} =\left\lbrace
\begin{array}{lc}
F_{d,i}\big(x^k,u_{d,i}(x^k)\big), & x^k \in \mathcal{C}_{d,i}\\
G_{d,i}(x^k), & \text{otherwise},
\end{array}
\right.$$ where $F_{d,i}$, $G_{d,i}$, and $\mathcal{C}_{d,i}$ are the flow map, the jump map, and the flow set, respectively. The discrete flow map $F_{d,i}:\mathbb{R}^{2n}\times \mathbb{R}\rightarrow\mathbb{R}^{2n}$ is given by
\[dHd\] $$\begin{aligned}
\label{d1_1}
F_{d,i}\big(x^k,u_{d,i}(x^k) \big)=x^k+sF_i\big(x^k,u_i(x^k)\big),\; i \in \{ \mathbf{I}, \mathbf{II} \},\end{aligned}$$ where $F_i$ and $u_i$ are defined in and , or and based on the considered structure $i$. The discrete flow set $\mathcal{C}_{d,i}\subset \mathbb{R}^{2n}$ is defined as $$\begin{aligned}
\label{d2_1}
\mathcal{C}_{d,i} = \big\{(x_1^k,x_2^k)\in\mathbb{R}^{2n}:
c_1 \| x^k_2 \|^2 \leq \| \nabla f(x^k_1) \|^2 \leq c_2 \langle \nabla f(x^k_1), -x^k_2 \rangle \big\},\end{aligned}$$ where $c_1$ and $c_2$ are two positive scalars. The discrete jump map $G_{d,i}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}$ is given by $G_{d,i}(x^k)=\big((x_1^k)^\top,-\beta \nabla f(x_1^k)^\top\big)^\top$.
It is evident from the flow sets $\mathcal{C}_{d,i}$ of the discrete-time dynamics that these sets are no longer defined via admissible input intervals. The reason has to do with the difficulties that arise from appropriately discretizing the control inputs $u_{\textbf{I}}$ and $u_{\textbf{II}}$. Nonetheless, the next result guarantees an exponential rate of convergence of the discrete-time control system with either of the respective structures **I** or **II**, by introducing a mechanism to set the scalars $c_1$, $c_2$, and $\beta$.
\[theo\_2\] Consider a smooth function $f: \mathbb{R}^n\rightarrow \mathbb{R}$ satisfying Assumptions and . The solution trajectory of the discrete-time hybrid control system (\[d1\]) with the respective structure $i \in \{\mathbf{I},\mathbf{II}\}$ and starting from any initial condition $x_1^0$, satisfies $$\begin{aligned}
\label{d4}
f(x_1^{k+1})-f^* \leq \lambda(s,c_1,c_2,\beta) \big( f(x_1^k) - f^*\big), \end{aligned}$$ with $\lambda(s,c_1,c_2,\beta)\in (0,1)$ given by $$\begin{aligned}
\label{d_r}
\lambda(s,c_1,c_2,\beta):=1+ 2 \mu_f \big( - \frac{s}{c_2} + \frac{L_f}{2 c_1} s^2 \big),\end{aligned}$$ if the parameters $s$, $c_1$, $c_2$, and $\beta$ satisfy
\[d4\_d\] $$\begin{aligned}
& \sqrt{c_1} \leq c_2, \label{d4_d1}\\
&\beta^2 c_1 \leq 1 \leq \beta c_2 ,\label{d4_d2}\\
&c_2 L_f s < 2 c_1.\end{aligned}$$
\[rem\_5\] We would like to emphasize that the exponential convergence of the proposed discretization method solely depends on the dynamics $x_1$ and the properties of the objective function $f$. Thus, we deliberately avoid labeling the scalars $c_1$, $c_2$, and $\beta$ by the structure index $i$. Crucially, the structures of the control laws do not impact the relations in Theorem \[theo\_2\], see Subsection \[subsec:proof\_2\] for more details. In light of the above facts, we believe that a more in-depth analysis of the dynamics along with the control structures may provide a more intelligent way to improve the discretization result of Theorem \[theo\_2\].
\[Cor\_1\] The optimal convergence rate guaranteed by Theorem \[theo\_2\] for the discrete-time dynamics is $\lambda^*:=\big(1-\frac{\mu_f}{L_f} \big)$, attained with $$\begin{aligned}
&\sqrt{c_1^*} = c^*_2 = \frac{1}{\beta^*} = L_f s^*.\end{aligned}$$
The pseudocode to implement the above corollary is presented in Algorithm \[alg:example\] using the discrete-time dynamics with the respective parameters **I** or **II**.
**Data:** $x_1^0$, $\ell_f$, $L_f$, $\mu_f$, $\alpha \in \mathbb{R}^+$, $k_{\max} \in \mathbb{N}^+$, $i\in\{ \mathbf{I}, \mathbf{II} \}$.\
**Initialize:** $\sqrt{c_1} = c_2 = \beta^{-1} = L_f s$, $x_2^0=-\beta \nabla f(x_1^0)$, $x^0=(x_1^0,x_2^0)$.\
**For** $k=0,\dots,k_{\max}-1$: **if** $x^k \in \mathcal{C}_{d,i}$ **then** $x^{k+1} \leftarrow F_{d,i}(x^k)$, **else** $x^{k+1} \leftarrow G_{d,i}(x^k)$.
Notice that the rate $1-\frac{\mu_f}{L_f}$ in Corollary \[Cor\_1\] is equal to the rate guaranteed by the gradient descent method for functions that satisfy the PL inequality, see e.g., [@Karimi2016]. This is in fact another indication of the inefficiency of a straightforward application of the forward-Euler method to the continuous-time hybrid control systems proposed in this paper. Moreover, it is worth emphasizing that Nesterov’s fast method achieves the optimal rate $1-\sqrt{\frac{\sigma_f}{L_f}}$ for strongly convex functions with the strong convexity constant $\sigma_f$ [@Nesterov2004].
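The discrete-time scheme of Corollary \[Cor\_1\] can be exercised on a toy problem. The sketch below uses an illustrative quadratic and structure **I** for the flow map; per Remark \[rem\_5\], the contraction depends only on the $x_1$-update, so every flow iteration contracts the suboptimality measure by at least $\lambda^* = 1-\mu_f/L_f$, while jump iterations leave it unchanged.

```python
import numpy as np

# Forward-Euler discretization (d1) with the parameter choice of Corollary
# Cor_1, on an illustrative quadratic f(x) = 0.5 x^T Q x (mu_f = 1, L_f = 4).
Q = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
mu_f, L_f, alpha = 1.0, 4.0, 1.0

s = 0.125                      # illustrative step size
c2 = L_f * s                   # sqrt(c1) = c2 = 1/beta = L_f * s
c1, beta = c2 ** 2, 1.0 / c2
lam_star = 1.0 - mu_f / L_f    # guaranteed per-flow-step contraction factor

x1 = np.array([1.0, 1.0])
x2 = -beta * grad(x1)
vals = [f(x1)]
for _ in range(60):
    g = grad(x1)
    if c1 * (x2 @ x2) <= g @ g <= c2 * (g @ (-x2)):      # flow set C_{d,i}
        u = alpha + (g @ g - x2 @ Q @ x2) / (g @ (-x2))  # structure-I input
        x1, x2 = x1 + s * x2, x2 + s * (-g - u * x2)     # discrete flow
    else:
        x2 = -beta * grad(x1)                            # discrete jump
    vals.append(f(x1))

# flow steps contract f - f* by at least lam_star; jumps leave it unchanged
assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
assert vals[-1] <= lam_star ** 20 * vals[0]
```

At least every other iteration is a flow step (a jump lands strictly inside the flow set), which is why the trajectory still contracts geometrically overall.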
Technical Proofs {#sec:proofs}
================
Proof of Theorem \[theo\_zeno\] {#subsec:pro_zeno1}
-------------------------------
In this subsection, we first set the stage by providing two intermediate results regarding the dynamics of the hybrid control system (\[p1\]) with the respective parameters (\[sH\]). We then employ these facts to formally state the proof of Theorem \[theo\_zeno\]. The next lemma reveals a relation between $\nabla f(x_1)$ and $x_2$ along the trajectories of the hybrid control system. In this subsection, for the sake of brevity we denote $x_1(t)$ and $x_1(0)$ by $x_1$ and $x_{1,0}$, respectively. We adopt the same notation for $x_2$ and $x$, as well.
\[lem\_z1\] Consider the continuous-time hybrid control system (\[p1\]) with the respective parameters (\[sH\]) satisfying (\[eqt\_1b\]) where the function $f$ satisfies Assumptions (\[p2\]) and (\[d\_1\]). Then, we have $$\begin{aligned}
\label{z1}
\big\| \nabla f(x_1) \big\| \leq C \| x_2 \| ,
\end{aligned}$$ where $C$ is given by (\[z\_t2\_2\]).
Notice that, by the definition of the control law and the upper bound condition $u_{\textbf{I}}(x) \leq \ol{u}_{\textbf{I}}$, we have $$\begin{aligned}
\big\| \nabla f(x_1) \big\|^2 - \langle \nabla^2 f(x_1) x_2 , x_2 \rangle
\leq (\ol{u}_{\textbf{I}} - \alpha) \langle \nabla f(x_1), -x_2 \rangle \leq
(\ol{u}_{\textbf{I}} - \alpha) \big\| \nabla f(x_1) \big\| \cdot \| x_2 \|,\end{aligned}$$ where the second inequality follows from the Cauchy-Schwarz inequality. Since the function $f$ satisfies Assumption (\[p2\]), one can infer that $$\begin{aligned}
\big\| \nabla f(x_1) \big\|^2 - L_f \| x_2 \|^2 \leq (\ol{u}_{\textbf{I}} - \alpha) \big\| \nabla f(x_1) \big\| \cdot \| x_2\|,\end{aligned}$$ which in turn can be reformulated into $$\begin{aligned}
\label{z3}
\frac{\big\| \nabla f(x_1) \big\|^2}{\| x_2 \|^2} - (\ol{u}_{\textbf{I}} - \alpha) \frac{\big\| \nabla f(x_1) \big\|}{\| x_2 \|} - L_f \leq 0.\end{aligned}$$ Defining the variable $y:= \big\| \nabla f(x_1) \big\| / \| x_2 \|$, the inequality (\[z3\]) becomes the quadratic inequality $y^2 - (\ol{u}_{\textbf{I}} - \alpha) y -L_f \leq 0$. Taking into account that $y\geq 0$ and solving this quadratic inequality, it then follows that $$\begin{aligned}
y=\frac{\big\| \nabla f(x_1) \big\|}{\| x_2 \|} \leq \frac{(\ol{u}_{\textbf{I}} - \alpha) + \sqrt{(\ol{u}_{\textbf{I}} - \alpha)^2 + 4L_f} }{2} =: C.\end{aligned}$$ This concludes the proof of Lemma \[lem\_z1\].
In the following, we provide a result indicating that the variations of the norms of $x_1$ and $x_2$ along the trajectories of the hybrid control system are bounded in time while the system evolves in the continuous mode. Since the hybrid control system is time-invariant, such bounds can be generalized to all inter-jump intervals.
\[lem\_z2\] Suppose that the same conditions as specified in Lemma \[lem\_z1\] hold, and the hybrid control system (\[p1\]), (\[sH\]) starts from the initial condition $\big(x_{1,0},-\beta_{\textbf{I}} \nabla f(x_{1,0}) \big)$ for some $x_{1,0} \in \mathbb{R}^n$. Then
\[z4\_m\] $$\begin{aligned}
\label{z4_0}
& \| x_1 - x_{1,0} \| \leq \delta^{-1} \| x_{2,0} \| \big( e^{\delta t} -1 \big),\\
\label{z4_1}
&\| x_2 - x_{2,0} \| \leq \| x_{2,0} \| \big( e^{\delta t} - 1 \big),\end{aligned}$$
where $\delta$ is given by (\[z\_t2\_3\]).
Using the flow dynamics (\[s1\]) we obtain $$\label{z4_30}
\begin{aligned}
\frac{d}{dt} \| x_2 \| \leq \Big\| \frac{d}{dt} x_2 \Big\| \leq \big\| \nabla f( x_1 ) \big\| + \big| u_{\textbf{I}} (x) \big| \cdot \| x_2 \|
\leq (C + \max\{\ol{u}_{\textbf{I}},-\ul{u}_{\textbf{I}}\} ) \| x_2 \| = \delta \| x_2 \|.
\end{aligned}$$ The inequality (\[z4\_30\]) implies that $$\begin{aligned}
\label{z4_3}
\| x_2 \| \leq \| x_{2,0} \| e^{\delta t}.\end{aligned}$$ Furthermore, notice that $$\begin{aligned}
\frac{d}{dt} \| x_1 - x_{1,0} \| &\leq \Big\| \frac{d}{dt} ( x_1 - x_{1,0} ) \Big\| = \| x_2 \|.\end{aligned}$$ Integrating the two sides of the above inequality leads to $$\begin{aligned}
\| x_1 - x_{1,0} \| \leq \int_0^{t}~ \big\| x_2(s) \big\| ~ds \leq \int_0^{t}~ \| x_{2,0} \| e^{\delta s} ~ds
= \frac{\| x_{2,0} \|}{\delta} \big( e^{\delta t} -1 \big),\end{aligned}$$ in which we made use of (\[z4\_3\]). Hence, the inequality (\[z4\_0\]) in Lemma \[lem\_z2\] is concluded. Next, we shall establish the inequality (\[z4\_1\]). Note that $$\begin{aligned}
\frac{d}{dt} \| x_2 - x_{2,0} \| \leq \Big\| \frac{d}{dt} ( x_2 - x_{2,0} ) \Big\| = \Big\| \frac{d}{dt} x_2 \Big\| \leq \delta \big\| x_2 \big\|
\leq \delta \| x_2 -x_{2,0} \| + \delta \| x_{2,0} \|.
\end{aligned}$$ Applying Grönwall’s inequality [@khalil1996noninear Lemma A.1] then leads to the desired inequality (\[z4\_1\]). The claims in Lemma \[lem\_z2\] follow.
**Proof of Theorem \[theo\_zeno\]:** The proof comprises five steps, and the key part is to guarantee that during the first inter-jump interval the quantity $\big|u_{\textbf{I}}( x ) - u_{\textbf{I}}( x_{,0} )\big|$ is bounded by a continuous function $\phi\Big(t,\big\|\nabla f(x_{1,0})\big\|\Big)$, which is exponential in its first argument and linear in its second argument. Then, it follows from the continuity of the function $\phi$ that the solution trajectories of the hybrid control system are Zeno-free.
**Step 1:** Let us define $g(t):=\langle \nabla f(x_1), -x_2 \rangle$. We now compute the derivative of $g(t)$ along the trajectories of the hybrid control system (\[p1\]), (\[sH\]) during the first inter-jump interval, i.e., $$\begin{aligned}
\frac{d}{dt} g(t)&
= \langle \nabla^2 f( x_1) x_2, -x_2 \rangle + \langle \nabla f( x_1), u_{\textbf{I}} ( x ) x_2 + \nabla f( x_1 ) \rangle \\
&= - \langle \nabla^2 f( x_1) x_2, x_2 \rangle + \big\| \nabla f( x_1) \big\|^2 + u_{\textbf{I}} ( x) \langle \nabla f( x_1), x_2 \rangle \\
& = - \alpha \langle \nabla f(x_1), -x_2 \rangle = - \alpha ~ g(t).
\end{aligned}$$ According to the above discussion and considering the initial state $x_{2,0}=- \beta_{\textbf{I}} \nabla f(x_{1,0})$, it follows that $$\begin{aligned}
\label{z5}
\langle \nabla f(x_1), -x_2 \rangle = \beta_{\textbf{I}} \big\| \nabla f( x_{1,0}) \big\|^2 e ^{-\alpha t}.\end{aligned}$$
**Step 2:** The quantity $\Big| e^{\alpha t} \big\| \nabla f( x_1) \big\|^2 - \big\| \nabla f( x_{1,0}) \big\|^2 \Big|$ is bounded along the trajectories of the hybrid control system (\[p1\]) with the respective parameters (\[sH\]) during the first inter-jump interval, i.e., $$\label{z7}
\begin{aligned}
\Big| e^{\alpha t} \big\| \nabla f( x_1 ) \big\|^2 - \big\| \nabla f( x_{1,0}) \big\|^2 \Big|
& = \Big| e^{\alpha t} \big\| \nabla f( x_1) \big\|^2 - (e^{\alpha t}-e^{\alpha t} + 1) \big\| \nabla f( x_{1,0}) \big\|^2 \Big| \\
& \overset{\text{(i)}}{\leq} e^{\alpha t} \Big|\big\| \nabla f( x_1) \big\|^2 - \big\| \nabla f( x_{1,0}) \big\|^2 \Big| + (e^{\alpha t} - 1) \big\| \nabla f( x_{1,0}) \big\|^2 \\
& = e^{\alpha t} \Big|\big\langle \nabla f( x_1) - \nabla f( x_{1,0}),\nabla f( x_1) + \nabla f( x_{1,0}) \big\rangle \Big| \\
&\qquad\quad + (e^{\alpha t} - 1) \big\| \nabla f( x_{1,0}) \big\|^2 \\
& \overset{\text{(ii)}}{\leq} e^{\alpha t} \big\| \nabla f( x_1) - \nabla f( x_{1,0})\big\|\cdot \big\| \nabla f( x_1) + \nabla f( x_{1,0}) \big\| \\
& \qquad\quad + (e^{\alpha t} - 1) \big\| \nabla f( x_{1,0}) \big\|^2 \\
& \overset{\text{(iii)}}{\leq} e^{\alpha t} L_f \| x_1 - x_{1,0} \| \cdot
\big(\beta_{\textbf{I}} C e^{\delta t} + 1 \big) \frac{\| x_{2,0} \|}{\beta_{\textbf{I}}} + \big(e^{\alpha t} - 1\big) \frac{\| x_{2,0} \|^2}{\beta_{\textbf{I}}^2} \\
& \overset{\text{(iv)}}{\leq} e^{\alpha t} L_f \big(e^{\delta t} - 1 \big) \frac{\| x_{2,0} \|}{\delta} \cdot
\big(\beta_{\textbf{I}} C e^{\delta t} + 1 \big) \frac{\| x_{2,0} \|}{\beta_{\textbf{I}}} + \big(e^{\alpha t} - 1\big) \frac{\| x_{2,0} \|^2}{\beta_{\textbf{I}}^2} \\
&= \left( \frac{L_f}{\delta \beta_{\textbf{I}}} e^{\alpha t} \big(\beta_{\textbf{I}} C e^{\delta t} + 1 \big) \big(e^{\delta t} - 1 \big) +
\frac{1}{\beta_{\textbf{I}}^2} \big(e^{\alpha t} - 1\big) \right) \| x_{2,0}\|^2,
\end{aligned}$$ where we made use of the triangle inequality in the inequality (i), the Cauchy-Schwarz inequality in the inequality (ii), Assumption (\[p2\]) and its consequence in Remark \[rem\_lip\] along with the triangle inequality in the inequality (iii), and the inequality (\[z4\_0\]) in the inequality (iv), respectively.
**Step 3:** Observe that $$\label{z8}
\begin{aligned}
& \big| e^{\alpha t} \langle \nabla^2 f(x_1) x_2,x_2 \rangle
- \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \big| \\
& \quad = \Big| e^{\alpha t} \big\langle \big[\nabla^2 f(x_1)-\nabla^2 f(x_{1,0})+\nabla^2 f(x_{1,0})\big] x_2,x_2 \big\rangle - \big(e^{\alpha t}-e^{\alpha t} + 1\big) \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \Big| \\
&\quad = \Big| e^{\alpha t} \big\langle \big[\nabla^2 f(x_1)-\nabla^2 f(x_{1,0})\big] x_2,x_2 \big\rangle + e^{\alpha t} \langle \nabla^2 f(x_{1,0}) x_2,x_2 \rangle - e^{\alpha t} \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \\
&\qquad\quad + \big(e^{\alpha t}- 1 \big) \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \Big| \\
&\quad \overset{\text{(i)}}{\leq} e^{\alpha t} \Big| \big\langle \big[\nabla^2 f(x_1)-\nabla^2 f(x_{1,0})\big] x_2,x_2 \big\rangle \Big| + e^{\alpha t} \Big| \langle \nabla^2 f(x_{1,0}) x_2,x_2 \rangle - \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \Big| \\
&\qquad\quad + \big(e^{\alpha t}- 1 \big) \Big| \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \Big| \\
&\quad \overset{\text{(ii)}}{\leq} e^{\alpha t} H_f \| x_1 - x_{1,0} \| \cdot \| x_2 \|^2 + e^{\alpha t} \Big| \big\langle \nabla^2 f(x_{1,0}) \big[ x_2 - x_{2,0}\big], x_2 + x_{2,0} \big\rangle \Big| + \mathcal{L}_f \| x_{2,0} \|^2 \big(e^{\alpha t} - 1 \big),
\end{aligned}$$ where the inequality (i) follows from the triangle inequality, and the inequality (ii) is an immediate consequence of Assumptions and , recalling $\mathcal{L}_f=\max\{\ell_f,L_f \}$. According to the above analysis, one can deduce that $$\label{z8_b}
\begin{aligned}
& \big| e^{\alpha t} \langle \nabla^2 f(x_1) x_2,x_2 \rangle
- \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle \big| \\
& \overset{\text{(i)}}{\leq} e^{\alpha t} H_f \frac{\| x_{2,0} \|}{\delta} \big(e^{\delta t} - 1 \big) \cdot e^{2\delta t} \| x_{2,0} \|^2 + e^{\alpha t} \mathcal{L}_f \| x_2 - x_{2,0} \| \cdot \| x_2 + x_{2,0} \| + \big( e^{\alpha t} -1 \big) \mathcal{L}_f \| x_{2,0} \|^2 \\
& \overset{\text{(ii)}}{\leq} \frac{ H_f }{\delta} e^{(\alpha+2\delta) t} \| x_{2,0} \|^3 \cdot (e^{\delta t} - 1) + e^{\alpha t} \mathcal{L}_f \big(e^{\delta t} - 1\big) \| x_{2,0} \| \cdot \big(e^{\delta t} + 1\big) \| x_{2,0} \| + \mathcal{L}_f \| x_{2,0} \|^2 \big( e^{\alpha t} -1 \big) \\
& = \Big(
( H_f / \delta) ~ e^{(\alpha+2\delta) t} \| x_{2,0} \| \cdot \big( e^{\delta t} - 1 \big) + \mathcal{L}_f \big( e^{(\alpha+\delta) t} + e^{\alpha t} \big) \big( e^{\delta t} - 1 \big) + \mathcal{L}_f (e^{\alpha t} -1) \Big) \| x_{2,0} \|^2,
\end{aligned}$$ where we made use of the inequality (\[z4\_0\]), the inequality (\[z4\_1\]), and the triangle inequality in the inequality (i), and the inequality (\[z4\_1\]) and the triangle inequality in the inequality (ii), respectively.
**Step 4:** We now study the input variation $\big| u_{\textbf{I}}( x) - u_{\textbf{I}}( x_{,0} ) \big|$ along the solution trajectories of the hybrid control system , during the first inter-jump interval. Observe that $$\label{z9}
\begin{aligned}
& \big|u_{\textbf{I}}( x ) - u_{\textbf{I}}( x_{,0} )\big| \\
&\quad = \Big| \frac{\big\| \nabla f(x_1) \big\|^2 - \langle \nabla^2 f(x_1) x_2(t),x_2 \rangle}
{\langle \nabla f(x_1), -x_2 \rangle} - \frac{\big\| \nabla f( x_{1,0}) \big\|^2 - \langle \nabla^2 f(x_{1,0}) x_{2,0},x_{2,0} \rangle}
{\langle \nabla f(x_{1,0}), -x_{2,0} \rangle}
\Big|\\
&\quad = \Big| \frac{\big\| \nabla f(x_1) \big\|^2 }
{\beta_{\textbf{I}} \big\| \nabla f(x_{1,0}) \big\|^2 e ^{-\alpha t}} -\frac{ \langle \nabla^2 f(x_1) x_2 ,x_2 \rangle}
{\beta_{\textbf{I}} \big\| \nabla f( x_{1,0}) \big\|^2 e ^{-\alpha t}} - \frac{ \big\| \nabla f(x_{1,0}) \big\|^2 }
{\beta_{\textbf{I}} \big\| \nabla f( x_{1,0}) \big\|^2}
+ \frac{ \langle \nabla^2 f(x_{1,0}) x_{2,0}, x_{2,0} \rangle}
{\beta_{\textbf{I}} \big\| \nabla f( x_{1,0}) \big\|^2}
\Big| \\
&\quad \overset{\text{(i)}}{\leq} \frac{1}{\beta_{\textbf{I}} \big\| \nabla f( x_{1,0}) \big\|^2} \Big| e^{\alpha t} \big\| \nabla f( x_1) \big\|^2 - \big\| \nabla f(x_{1,0}) \big\|^2 \Big| \\
& \qquad +
\frac{1}{\beta_{\textbf{I}} \big\| \nabla f(x_{1,0}) \big\|^2} \Big| e^{\alpha t} \big\langle \nabla^2 f(x_1) x_2 , x_2 \big\rangle
- \langle \nabla^2 f(x_{1,0}) x_{2,0}, x_{2,0} \rangle \Big| \\
&\quad \overset{\text{(ii)}}{=} \frac{\beta_{\textbf{I}}}{ \| x_{2,0} \|^2} \Big| e^{\alpha t} \big\| \nabla f(x_1) \big\|^2 - \big\| \nabla f(x_{1,0}) \big\|^2 \Big| +
\frac{\beta_{\textbf{I}}}{ \| x_{2,0} \|^2} \Big| e^{\alpha t} \langle \ \nabla^2 f(x_1) x_2 ,x_2 \rangle
- \langle \nabla^2 f(x_{1,0}) x_{2,0} , x_{2,0} \rangle \Big| ,
\end{aligned}$$ where we made use of the triangle inequality in the inequality (i) and the relation (\[z5\]) in the equality (ii), respectively. Based on the above discussion, we then conclude that $$
\begin{aligned}
& \big|u_{\textbf{I}}( x ) - u_{\textbf{I}}( x_{,0} )\big| \\
& \overset{\text{(i)}}{\leq} \frac{\beta_{\textbf{I}}}{ \| x_{2,0} \|^2}
\left( \frac{L_f}{\delta \beta_{\textbf{I}}} e^{\alpha t} \big(\beta_{\textbf{I}} C e^{\delta t} + 1 \big) \big(e^{\delta t} - 1 \big) +
\frac{1}{\beta^2_{\textbf{I}}} \big(e^{\alpha t} - 1 \big) \right) \| x_{2,0} \|^2\\
&\quad + \frac{\beta_{\textbf{I}}}{ \| x_{2,0} \|^2} \left(
\frac{ H_f }{\delta} e^{(\alpha+2\delta) t} \| x_{2,0} \| \cdot \big(e^{\delta t} - 1 \big) + \mathcal{L}_f \big( e^{(\alpha+\delta) t} + e^{\alpha t} \big) \big(e^{\delta t} - 1 \big)
+ \mathcal{L}_f \big(e^{\alpha t} -1 \big) \right) \| x_{2,0} \|^2 \\
& \overset{\text{(ii)}}{\leq}
\frac{L_f}{\delta} e^{\delta t} (\beta_{\textbf{I}} C e^{\delta t} + 1 ) (e^{\delta t} - 1 ) +
\frac{1}{\beta_{\textbf{I}}} (e^{\delta t} - 1)
\\
&\quad + \beta_{\textbf{I}} \Big(
\beta_{\textbf{I}} H_f \delta^{-1} \cdot e^{3 \delta t} \big\|\nabla f(x_{1,0})\big\| \cdot \big(e^{\delta t} - 1 \big) + \mathcal{L}_f \big(e^{2 \delta t} + e^{\delta t} \big) \big(e^{\delta t} - 1 \big) + \mathcal{L}_f \big(e^{\delta t} -1 \big) \Big) \\
& = \Big( L_f \delta^{-1} \cdot e^{\delta t} (\beta_{\textbf{I}} C e^{\delta t} + 1 ) + \frac{1}{\beta_{\textbf{I}}} +
\frac{\beta_{\textbf{I}}^2 H_f }{\delta} e^{3 \delta t} \big\|\nabla f(x_{1,0})\big\| + \beta_{\textbf{I}} \mathcal{L}_f (e^{2 \delta t} + e^{\delta t})
+ \beta_{\textbf{I}} \mathcal{L}_f \Big) \big( e^{\delta t} - 1 \big)\\
& =: \phi\Big(t,\big\|\nabla f(x_{1,0})\big\|\Big),
\end{aligned}$$ where the inequality (i) follows from the implications of Steps 2 and 3, and the inequality (ii) is an immediate consequence of the relation $\alpha < \delta$ and the equality $x_{2,0}=-\beta_{\textbf{I}} \nabla f( x_{1,0})$.
**Step 5:** Consider $a_1$ defined in (\[z\_t2\_00\]) and recall that $u_{\textbf{I}} ( x_{,0})$ by design lies inside the input interval $[\ul{u}_{\textbf{I}} , \ol{u}_{\textbf{I}}]$. The quantity $a_1$ is a lower bound on the distance of $u_{\textbf{I}}( x_{,0})$ to the boundaries of the interval $[\ul{u}_{\textbf{I}} , \ol{u}_{\textbf{I}}]$. Thus, the inter-jump interval $\tau_{\textbf{I}}$ satisfies $$\begin{aligned}
\tau_{\textbf{I}} \geq \max \left\{t\geq 0 :~ \big|u_{\textbf{I}}( x ) - u_{\textbf{I}}( x_{,0} )\big| \leq a_1 \right\}
\geq \max \left\{t\geq 0 :~ \phi\Big(t,\big\|\nabla f(x_{1,0})\big\|\Big) \leq a_1 \right\},\end{aligned}$$ where the second inequality is implied by the analysis provided in Step 4. Consider a constant $r>1$. One can infer for every $t\in \big[0, \delta^{-1}{\log r}\big]$ that $$\begin{aligned}
\phi\Big(t,\big\|\nabla f(x_{1,0})\big\|\Big)
& \leq \Big( r L_f \delta^{-1} (r \beta_{\textbf{I}} C + 1 ) + \beta_{\textbf{I}}^{-1}
+ r^3 \beta_{\textbf{I}}^2 H_f \delta^{-1} \big\|\nabla f(x_{1,0})\big\| \\
&\quad
+ (r^2+r) \beta_{\textbf{I}} \mathcal{L}_f + \beta_{\textbf{I}} \mathcal{L}_f \Big) (e^{\delta t} - 1)\\
&= \Big(a_2 + a_3 \big\|\nabla f(x_{1,0})\big\| \Big)(e^{\delta t} - 1) \\
& =: \phi'\Big(t,\big\|\nabla f(x_{1,0})\big\|\Big),\end{aligned}$$ where the constants $a_2$ and $a_3$ are defined in , , respectively, and the inequality $e^{\delta t} \leq r$ is used. Suppose now that $\tau'$ is the lower bound on the inter-jump interval in . Then $\phi'\Big(\tau',\big\|\nabla f(x_{1,0})\big\|\Big)=a_1$, where the constant $a_1$ is defined in . It is straightforward to establish the assertion made in .
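Since $\phi'\big(t,\|\nabla f(x_{1,0})\|\big)=\big(a_2+a_3\|\nabla f(x_{1,0})\|\big)\big(e^{\delta t}-1\big)$, the equation $\phi'\big(\tau',\|\nabla f(x_{1,0})\|\big)=a_1$ can be inverted in closed form, $$\tau' = \frac{1}{\delta}\log\left(1+\frac{a_1}{a_2+a_3\big\|\nabla f(x_{1,0})\big\|}\right),$$ which is strictly positive whenever $a_1>0$ and shrinks as $\big\|\nabla f(x_{1,0})\big\|$ grows.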
For the second part of the assertion, we need to show that the proposed lower bound in is uniformly bounded away from zero along any trajectory of the hybrid system. To this end, we only need to focus on the term $\|\nabla f\big(x_{1}(t)\big)\|$. Recall that Theorem \[theo\_1b\] effectively implies that $\lim_{t\rightarrow \infty}~\|\nabla f\big(x_{1}(t)\big)\| = 0$, though possibly not in a monotone manner. This observation allows us to deduce that $M:= \sup_{t \ge 0}\|\nabla f\big(x_{1}(t)\big)\| < \infty$. Using the uniform bound $M$, we obtain a positive minimum inter-jump interval, which rules out Zeno behavior for all solution trajectories.
Proof of Theorem \[theo\_step\_conv\]
-------------------------------------
The proof follows a similar idea as in [@armanICML Theorem 3.1], but the required technical steps are somewhat different, leading to another set of technical assumptions. In the first step, we begin by describing how the chosen input $u_{\textbf{II}}(x)$ in ensures the desired exponential convergence rate $\mathcal{O}\big(e^{-\alpha t}\big)$. Let us define the set $\mathcal{E}_{\alpha}:=\Big\{x\in \mathbb{R}^{2n}: \alpha \big(f(x_1)-f^*\big) < \langle \nabla f(x_1), -x_2 \rangle \Big\}$. We demonstrate that as long as a solution trajectory of the continuous flow is contained in the set $\mathcal{E}_{\alpha}$, the function $f$ obeys the exponential decay . To this end, observe that if $\big(x_1(t),x_2(t)\big) \in \mathcal{E}_{\alpha}$, $$\begin{aligned}
\frac{d}{dt}\Big(f\big(x_1(t)\big)-f^*\Big) = \big\langle \nabla f\big(x_1(t)\big),x_2(t) \big\rangle \le -\alpha \big(f(x_1)-f^*\big).\end{aligned}$$ The direct application of Gronwall’s inequality, see [@khalil1996noninear Lemma A.1], to the above inequality yields the desired convergence claim . Hence, it remains to guarantee that the solution trajectory renders the set $\mathcal{E}_{\alpha}$ invariant. Let us define the quantity $$\begin{aligned}
\sigma(t) := \langle \nabla f\big(x_1(t)\big),x_2(t) \rangle +\alpha\Big(f\big(x_1(t)\big)-f^*\Big).\end{aligned}$$ By construction, if $\sigma(t) < 0$, it follows that $\big(x_1(t),x_2(t)\big) \in \mathcal{E}_{\alpha}$. As a result, if we synthesize the feedback input $u_{\textbf{II}}(x)$ such that $\dot\sigma(t) \le 0$ along the solution trajectory of , the value of $\sigma(t)$ does not increase, and as such $$\begin{aligned}
\big(x_1(t),x_2(t)\big) \in \mathcal{E}_{\alpha}, ~\forall t \ge 0 ~ \Longleftrightarrow ~ \big(x_1(0),x_2(0)\big) \in \mathcal{E}_{\alpha}.\end{aligned}$$ To ensure non-positivity property of $\dot{\sigma}(t)$, note that we have $$\begin{aligned}
\dot{\sigma}(x)
&= \langle \nabla^2 f(x_1) x_2, x_2 \rangle + \langle \nabla f(x_1), \dot{x}_2 \rangle +\alpha \langle \nabla f(x_1), x_2 \rangle \\
& = \langle \nabla^2 f(x_1) x_2, x_2 \rangle + \langle \nabla f(x_1), - x_2 - u_{\textbf{II}} ( x ) \nabla f(x_1) \rangle + \alpha \langle \nabla f(x_1), x_2 \rangle \\
& = \langle \nabla^2 f(x_1) x_2, x_2 \rangle + \langle \nabla f(x_1), - x_2 \rangle - u_{\textbf{II}} ( x ) \| \nabla f(x_1) \|^2 - \alpha \langle \nabla f(x_1), - x_2 \rangle \\
& = \langle \nabla^2 f(x_1) x_2, x_2 \rangle + ( 1- \alpha) \langle \nabla f(x_1), - x_2 \rangle - u_{\textbf{II}} ( x ) \| \nabla f(x_1) \|^2 = 0,\end{aligned}$$ where the last equality follows from the definition of the proposed control law . It is worth noting that one can simply replace the Hessian information $\nabla^2 f\big(x_1(t)\big)$ with the upper bound $L_f$ and still arrive at the desired inequality; see also Remark \[rem\_40\] with regards to the first-order information oracle. Up to this point, we have shown that the structure of the control feedback guarantees the $\alpha$-exponential convergence. It then remains to ensure that $x(0) \in \mathcal{E}_{\alpha}$. Consider the initial state $x_2(0) =-\beta_{\text{II}} \nabla f\big(x_1(0)\big)$. Notice that $$\begin{aligned}
\alpha \Big(f\big(x_1(0)\big)-f^*\Big) & \leq \frac{\alpha}{2 \mu_f} \big\| \nabla f\big(x_1(0)\big) \big\|^2
= \frac{\alpha}{2 \mu_f \beta_{\textbf{II}}} \langle -x_2(0), \nabla f\big(x_1(0)\big) \rangle
\leq \langle \nabla f\big(x_1(0)\big) , -x_2(0) \rangle,\end{aligned}$$ where in the first inequality we use the gradient-dominated assumption , and in the second inequality the condition is employed. Consider $\big(x_1^{\top}(0),x^{\top}_2(0)\big)^{\top}$ as the jump state $x^+$. It is evident that the range space of the jump map lies inside the set $\mathcal{E}_\alpha$. Lastly, it is required to show that the jump policy is well-defined in the sense that the trajectory lands in the interior of the flow set $\mathcal{C}_{\textbf{II}}$, i.e., the control values also belong to the admissible set $[\ul{u}_{\textbf{II}},\ol{u}_{\textbf{II}}]$. To this end, we only need to take into account the initial control value since the switching law is continuous in the states and serves the purpose by design. Supposing that $x^+ \in \mathcal{C}_{\textbf{II}}$, we then have the sufficient requirements $$\begin{gathered}
\ul{u}_{\textbf{II}} < \frac{-\ell_f\beta_{\text{II}}^2 \|\nabla f(x_1^+) \|^2+ (1-\alpha) \beta_{\text{II}} \| \nabla f(x_1^+) \|^2 }{ \| \nabla f(x_1^+) \|^2 }
\\ \le u_{\textbf{II}}(x^+) \leq \\
\frac{L_f\beta_{\text{II}}^2 \|\nabla f(x_1^+) \|^2+ (1-\alpha) \beta_{\text{II}} \| \nabla f(x_1^+) \|^2 }{\| \nabla f(x_1^+) \|^2 } < \ol{u}_{\textbf{II}},\end{gathered}$$ where the relations and are considered. Factoring out the term $\| \nabla f(x_1^+) \|^2$ leads to the sufficiency requirements given in and . Hence, the claim of Theorem \[theo\_step\_conv\] follows.
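The computation above determines the feedback uniquely: setting $\dot{\sigma}=0$ yields $u_{\textbf{II}}(x)=\big(\langle \nabla^2 f(x_1) x_2, x_2\rangle + (1-\alpha)\langle \nabla f(x_1), -x_2\rangle\big)\big/\|\nabla f(x_1)\|^2$. The following is a minimal numerical sketch of this mechanism for a quadratic objective; the parameter values are illustrative, and input saturation and jumps are ignored, so this is a sanity check of the flow analysis rather than an implementation of the full hybrid scheme.

```python
# Check that the flow x1' = x2, x2' = -x2 - u_II(x) grad f(x1), with u_II
# chosen so that sigma-dot = 0, yields f(x1(t)) - f* <= e^{-alpha t}(f(x1(0)) - f*).
# f(x) = 0.5 x'Qx (so f* = 0); Q, alpha, beta are illustrative choices.
import numpy as np

Q = np.diag([1.0, 4.0])                  # constant Hessian of the quadratic f
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

alpha, beta = 0.2, 0.05
x1 = np.array([2.0, -1.0])
x2 = -beta * grad(x1)                    # jump-map initialization x2(0) = -beta grad f(x1(0))

dt, T = 1e-4, 5.0
f0 = f(x1)
for _ in range(int(T / dt)):
    g = grad(x1)
    u = (x2 @ Q @ x2 + (1 - alpha) * (g @ -x2)) / (g @ g)   # sigma-dot = 0 feedback
    x1, x2 = x1 + dt * x2, x2 + dt * (-x2 - u * g)

# alpha-exponential decay (1% slack for the forward-Euler integration error)
assert f(x1) <= 1.01 * np.exp(-alpha * T) * f0
```

Here $x_2(0)=-\beta\nabla f\big(x_1(0)\big)$ places the initial state in $\mathcal{E}_\alpha$ (indeed $\sigma(0)=-\beta\|\nabla f(x_1(0))\|^2+\alpha f\big(x_1(0)\big)<0$ for these values), and $\sigma$ stays constant along the flow, so $f$ decays at least at rate $\alpha$.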
Proof of Theorem \[theo\_step\_zeno\]
-------------------------------------
In order to facilitate the argument regarding the proof of Theorem \[theo\_step\_zeno\], we begin by providing a lemma describing the behavior of $\langle \nabla f(x_1), - x_2 \rangle$, $\|x_2\|$, and $\|\nabla f(x_1)\|$. For the sake of brevity, we employ the same notation as in Subsection \[subsec:pro\_zeno1\].
\[lem\_step1\] Consider the continuous-time hybrid control system (\[p1\]) with the respective parameters (\[sH\_step\]) satisfying (\[step\_05\]), where the function $f$ satisfies Assumptions (\[p2\]) and (\[d\_1\]). Suppose the hybrid control system is initiated from $\big(x_{1,0},-\beta_{\textbf{II}} \nabla f(x_{1,0})\big)$ for some $x_{1,0}\in \mathbb{R}^n$. Then,
\[step\_1\] $$\begin{aligned}
& \langle \nabla f(x_1), - x_2 \rangle =\beta_{\textbf{II}} e^{-\alpha t} \| \nabla f(x_{1,0})\|^2, \label{step_1_0} \\
& \| x_2 \| \leq D(t) \| \nabla f ( x_{1,0}) \|, \label{step_1_1}\\
& \ul{\eta}(t) \| \nabla f ( x_{1,0}) \| \leq \| \nabla f ( x_{1}) \| \leq \ol{\eta}(t) \| \nabla f ( x_{1,0}) \|, \label{step_1_2}\end{aligned}$$
with the time-varying scalars $D$, $\ul{\eta}$, and $\ol{\eta}$ given by
\[step\_2\] $$\begin{aligned}
& D(t):= \Big( \beta_{\textbf{II}} ^2 e^{-2t} +\beta_{\textbf{II}} U \big( 1 - e^{-2t} \big) \Big)^{\frac{1}{2}}, \label{step_2_1}\\
& \ul{\eta}(t) := 1 - \mathcal{L}_f (\beta_{\textbf{II}}^2+\beta_{\textbf{II}} U)^{\frac{1}{2}} t , \label{step_2_2}\\
& \ol{\eta}(t) := 1 + \mathcal{L}_f (\beta_{\textbf{II}}^2+\beta_{\textbf{II}} U)^{\frac{1}{2}} t , \label{step_2_3}\end{aligned}$$
respectively, where $U:=\max \{\ol{u}_{\textbf{II}}, -\ul{u}_{\textbf{II}}\}$ and $\mathcal{L}_f:= \max \{ \ell_f, L_f \}$.
Considering the flow dynamics and the feedback input , one obtains $$\begin{aligned}
\frac{d}{dt}\langle \nabla f(x_1), - x_2 \rangle
&= \langle \nabla^2 f(x_1)x_2, -x_2 \rangle + \langle \nabla f(x_1),-\dot{x}_2 \rangle \\
& = \langle \nabla^2 f(x_1)x_2, -x_2 \rangle + \langle \nabla f(x_1),x_2 +u_{\textbf{II}}(x) \nabla f(x_1) \rangle \\
& = \langle \nabla^2 f(x_1)x_2, -x_2 \rangle + \langle \nabla f(x_1),x_2 \rangle +u_{\textbf{II}}(x) \|\nabla f(x_1) \|^2 \\
& = \langle \nabla^2 f(x_1)x_2, -x_2 \rangle + \langle \nabla f(x_1),x_2 \rangle +\langle \nabla^2 f(x_1)x_2, x_2 \rangle - (1-\alpha) \langle \nabla f(x_1),x_2 \rangle \\
&=-\alpha \langle \nabla f(x_1), - x_2 \rangle,\end{aligned}$$ and as a result given the initial state $\big(x_{1,0}, -\beta_{\textbf{II}} \nabla f(x_{1,0})\big)$, the equality given in is valid. We next turn to establish that holds. Let us define $h(t)=\| x_2 \|^2$. Hence, $$\begin{aligned}
\frac{d}{dt}h(t) & \overset{\text{(i)}}{=} 2 \langle x_2, -x_2 -u_{\textbf{II}} (x)\nabla f(x_1) \rangle = -2\|x_2 \|^2 + 2 u_{\textbf{II}} (x) \langle \nabla f(x_1),-x_2 \rangle\\
&\overset{\text{(ii)}}{=} -2 h(t) + 2 u_{\textbf{II}} (x) \beta_{\textbf{II}} e^{-\alpha t} \| \nabla f(x_{1,0}) \|^2 \leq -2 h(t) + 2 U \beta_{\textbf{II}} \| \nabla f(x_{1,0}) \|^2,\end{aligned}$$ where we made use of the flow dynamics in the equality (i) and the equation in the equality (ii), respectively. We then apply Gronwall’s inequality to infer that $$\begin{aligned}
\|x_2\|^2
& \leq e^{-2t} \| x_{2,0} \|^2 + \int_0^t e^{-2(t-\tau)}2U\beta_{\textbf{II}} \big\| \nabla f(x_{1,0}) \big\|^2 d \tau\\
& = e^{-2t} \beta_{\textbf{II}}^2 \big\| \nabla f(x_{1,0}) \big\|^2 +e^{-2t}2U\beta_{\textbf{II}} \big\| \nabla f(x_{1,0}) \big\|^2 \int_0^t e^{2\tau} d \tau\\
& = \big\| \nabla f(x_{1,0}) \big\|^2 \Big( \beta_{\textbf{II}}^2 e^{-2t} +\beta_{\textbf{II}} U \big( 1 - e^{-2t} \big) \Big)\\
& =: D^2(t) \big\| \nabla f(x_{1,0}) \big\|^2,\end{aligned}$$ where $D(t)$ is defined in . As a result, the claim in holds. The argument to show the last claim in Lemma \[lem\_step1\] is discussed now. Let us define $g(t):=\big\| \nabla f(x_1) \big\|^2$. Observe that $$\begin{aligned}
\frac{d}{dt} g(t) = 2 \langle \nabla^2 f(x_1) x_2 , \nabla f(x_1) \rangle,\end{aligned}$$ and as a result $$\begin{aligned}
\left| \frac{d}{dt} g(t) \right|
\overset{\text{(i)}}{\leq} 2 \mathcal{L}_f \| x_2 \| \cdot \big\| \nabla f(x_1) \big\|
= 2 \mathcal{L}_f \| x_2 \| \sqrt{g(t)}
\overset{\text{(ii)}}{\leq}
2 \mathcal{L}_f D(t) \big\| \nabla f(x_{1,0}) \big\| \sqrt{g(t)} ,\end{aligned}$$ where the inequalities (i) and (ii) are implied by Assumption and the inequality , respectively. Hence, we deduce that $$\begin{aligned}
\frac{d}{dt} g(t) \geq - 2 \mathcal{L}_f D(t) \big\| \nabla f(x_{1,0}) \big\| \sqrt{g(t)} ,\end{aligned}$$ and as a consequence $$\begin{aligned}
\frac{d g(t)}{\sqrt{g(t)}} \geq - 2 \mathcal{L}_f D(t) \big\| \nabla f(x_{1,0}) \big\| dt.\end{aligned}$$ Integrating the two sides of the above inequality results in $$\begin{aligned}
\sqrt{g(t)} -\sqrt{g(0)}
& \geq - \mathcal{L}_f \big\| \nabla f(x_{1,0}) \big\| \int_0^t D(\tau) d\tau\\
& = - \mathcal{L}_f \big\| \nabla f(x_{1,0}) \big\| \int_0^t \Big( \beta_{\textbf{II}} ^2 e^{-2\tau} +\beta_{\textbf{II}} U \big( 1 - e^{-2\tau} \big) \Big)^{\frac{1}{2}} d\tau\\
& \geq - \mathcal{L}_f \big\| \nabla f(x_{1,0}) \big\| \int_0^t \big( \beta_{\textbf{II}} ^2 +\beta_{\textbf{II}} U \big)^{\frac{1}{2}} d\tau\\
& = - \mathcal{L}_f \big\| \nabla f(x_{1,0}) \big\| \big( \beta_{\textbf{II}} ^2 +\beta_{\textbf{II}} U \big)^{\frac{1}{2}} t.\end{aligned}$$ Based on the above analysis and the definition of $g(t)$, it follows that $$\begin{aligned}
\big\| \nabla f(x_1) \big\| \geq \ul{\eta}(t) \big\| \nabla f(x_{1,0}) \big\|,\end{aligned}$$ where $\ul{\eta}(t)$ is given in . Proceeding with a similar approach to the one presented above, one can use the inequality $$\begin{aligned}
\frac{d}{dt} g(t) \leq 2 \mathcal{L}_f D(t) \big\| \nabla f(x_{1,0}) \big\| \sqrt{g(t)} ,\end{aligned}$$ and infer that $$\begin{aligned}
\big\| \nabla f(x_1) \big\| \leq \ol{\eta}(t) \big\| \nabla f(x_{1,0}) \big\|,\end{aligned}$$ where $\ol{\eta}(t)$ is defined in . Thus, the last claim in Lemma \[lem\_step1\] also holds.
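The three estimates of Lemma \[lem\_step1\] can be checked numerically along a simulated flow. The sketch below uses a quadratic $f$ and the feedback $u_{\textbf{II}}(x)=\big(\langle \nabla^2 f(x_1)x_2,x_2\rangle+(1-\alpha)\langle\nabla f(x_1),-x_2\rangle\big)/\|\nabla f(x_1)\|^2$ reconstructed from the $\tfrac{d}{dt}\langle\nabla f(x_1),-x_2\rangle$ computation in the proof; $U$ is an assumed input bound and all parameter values are illustrative.

```python
# Numerical sanity check of the Lemma's three bounds along a simulated flow,
# with quadratic f(x) = 0.5 x'Qx.  U is an assumed input bound; all values
# are illustrative, not those of the paper.
import numpy as np

Q = np.diag([1.0, 4.0])
grad = lambda x: Q @ x
alpha, beta, U = 0.2, 0.05, 5.0
caL = 4.0                                  # caL_f = max{ell_f, L_f}; here ||Q||

x1 = np.array([2.0, -1.0])
x2 = -beta * grad(x1)                      # initialization from the jump map
g0 = np.linalg.norm(grad(x1))

dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    g = grad(x1)
    u = (x2 @ Q @ x2 + (1 - alpha) * (g @ -x2)) / (g @ g)
    x1, x2 = x1 + dt * x2, x2 + dt * (-x2 - u * g)

# (step_1_0): exact exponential decay of <grad f, -x2>
assert abs(grad(x1) @ -x2 - beta * np.exp(-alpha * T) * g0**2) < 1e-2
# (step_1_1): ||x2|| <= D(t) ||grad f(x_{1,0})||
D = np.sqrt(beta**2 * np.exp(-2 * T) + beta * U * (1 - np.exp(-2 * T)))
assert np.linalg.norm(x2) <= D * g0 + 1e-6
# (step_1_2): upper envelope ||grad f(x1)|| <= eta_bar(t) ||grad f(x_{1,0})||
eta_hi = 1 + caL * np.sqrt(beta**2 + beta * U) * T
assert np.linalg.norm(grad(x1)) <= eta_hi * g0 + 1e-6
```

At this horizon the lower envelope $\ul{\eta}(t)$ is already negative and therefore vacuous, which is consistent with the lemma: the bounds are conservative.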
**Proof of Theorem \[theo\_step\_zeno\]:** We are now in a position to formally state the proof of Theorem \[theo\_step\_zeno\]. Consider the parameter $\delta$ as defined in Theorem \[theo\_step\_zeno\]. Intuitively, this quantity represents a lower bound on the distance of $u_{\textbf{II}} (0)$ from the endpoints of the flow set interval. Thus, one can obtain a lower bound on the inter-jump interval $\tau_{\textbf{II}}$ as follows $$\begin{aligned}
\label{step_4_0}
\tau_{\textbf{II}} \geq \sup~\{t>0:~|u_{\textbf{II}}(t)-u_{\textbf{II}}(0)|\leq \delta \}.\end{aligned}$$ On the other hand, given the structure of $u_{\textbf{II}}$ in , $$\begin{aligned}
%\label{step_4}
-\frac{\ell_f \| x_2 \|^2}{\| \nabla f(x_1) \|^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} \| \nabla f(x_{1,0}) \|^2}{\| \nabla f(x_1) \|^2}
\leq u_{\textbf{II}}(t) \leq
\frac{L_f \| x_2 \|^2}{\| \nabla f(x_1) \|^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} \| \nabla f(x_{1,0}) \|^2}{\| \nabla f(x_1) \|^2},\end{aligned}$$ since the function $f$ satisfies Assumption . In light of Lemma \[lem\_step1\] and considering the above relation, one can infer that, for $\alpha\leq 1$ (which we name Case (i)),
\[step\_5\] $$\begin{aligned}
\ul{e}(t):=-\frac{\ell_f D(t)^2}{\ul{\eta}(t)^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} }{\ol{\eta}(t)^2}
\leq u_{\textbf{II}}(t) \leq
\frac{L_f D(t)^2}{\ul{\eta}(t)^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} }{\ul{\eta}(t)^2} =: \ol{e}(t), \label{step_5_1}\end{aligned}$$ and that, for $\alpha >1$ (which we denote by Case (ii)), $$\begin{aligned}
\ul{p}(t):=-\frac{\ell_f D(t)^2}{\ul{\eta}(t)^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} }{\ul{\eta}(t)^2}
\leq u_{\textbf{II}}(t) \leq
\frac{L_f D(t)^2}{\ul{\eta}(t)^2} + (1-\alpha) \frac{\beta_{\textbf{II}} e^{-\alpha t} }{\ol{\eta}(t)^2} =: \ol{p}(t). \label{step_5_2}\end{aligned}$$
According to the above discussion, we employ to obtain a lower bound on $\tau_{\textbf{II}}$ instead of using . Consider a time instant $t_\circ$ such that $t_\circ < 1/ \omega$, where $\omega$ is defined in Theorem \[theo\_step\_zeno\].
**Case (i) ($\alpha \leq 1$):** Let us denote $\sup_{t\in [0,t_\circ]}\dot{\ol{e}}(t)$ by $b_1$. Observe that $$\begin{aligned}
\dot{\ol{e}}(t)
&= \frac{2 L_f \beta_{\textbf{II}} e^{-2t} (-\beta_{\textbf{II}}+U )(1-\omega t)^2 + 2 \omega (1 - \omega t) L_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big) }{(1-\omega t)^4}\\
& + (1-\alpha) \frac{-\alpha \beta_{\textbf{II}} e^{-\alpha t}(1-\omega t)^2 + 2 \omega (1 - \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1-\omega t)^4}\\
& \leq \frac{2 L_f \beta_{\textbf{II}} U e^{-2t} (1-\omega t)^2 +
2 \omega (1 - \omega t) L_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big)}{(1-\omega t)^4}\\
& + (1-\alpha) \frac{ 2 \omega (1 - \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1-\omega t)^4} \\
& \leq \frac{2 L_f \beta_{\textbf{II}} \big( U + \omega ( \beta_{\textbf{II}} +U) \big)}{(1-\omega t)^3} + (1-\alpha) \frac{ 2 \omega \beta_{\textbf{II}} }{(1-\omega t)^3} \\
& \leq \frac{2 L_f \beta_{\textbf{II}} \big( U + \omega ( \beta_{\textbf{II}} +U) \big)}{(1-\omega t_\circ)^3} + (1-\alpha) \frac{ 2 \omega \beta_{\textbf{II}} }{(1-\omega t_\circ)^3} =: b_1,\end{aligned}$$ considering . Hence, $\ol{e}(t) \leq b_1 t + \ol{e}(0)$ and as a result $$\begin{aligned}
\label{step_6}
\tau_{\textbf{II}} &\geq \max \{t\in (0,t_\circ]:~ b_1 t \leq \delta \}
= \min\{ t_\circ, \delta/b_1 \},\end{aligned}$$ by virtue of the fact that $b_1 t +\ol{e}(0)$ is a monotonically increasing function that upper bounds $u_{\textbf{II}}(t)$. Now, let us define $b_2:= \inf_{t\in (0,t_\circ]}\dot{\ul{e}}(t)$. Notice that $$\begin{aligned}
\dot{\ul{e}}(t)
& = \frac{2 \ell_f \beta_{\textbf{II}} e^{-2t} (\beta_{\textbf{II}}-U )(1-\omega t)^2 -
2 \omega (1 - \omega t) \ell_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big) }{(1-\omega t)^4} \\
& + (1-\alpha) \frac{-\alpha \beta_{\textbf{II}} e^{-\alpha t}(1+\omega t)^2 - 2 \omega (1 + \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1+\omega t)^4}\\
& \geq \frac{-2 \ell_f \beta_{\textbf{II}} e^{-2t} U (1-\omega t)^2 -
2 \omega (1 - \omega t) \ell_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big) }{(1-\omega t)^4} \\
& - (1-\alpha) \frac{\alpha \beta_{\textbf{II}} e^{-\alpha t}(1+\omega t)^2 + 2 \omega (1 + \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1+\omega t)^4}\\
& \geq -\frac{2 \ell_f \beta_{\textbf{II}} \big(U + \omega ( \beta_{\textbf{II}} +U ) \big) }{(1-\omega t_\circ)^3} - (1-\alpha) \big( \alpha \beta_{\textbf{II}} (1+\omega t_\circ) + 2 \omega \beta_{\textbf{II}} \big) =: -b_2.\end{aligned}$$ Thus, $\ul{e}(t)\geq -b_2 t + \ul{e}(0)$ and as a consequence $$\begin{aligned}
\label{step_7}
\tau_{\textbf{II}} & \geq \max \{t\in (0,t_\circ]:~ b_2 t \leq \delta \}
= \min \{ t_\circ, \delta/b_2 \},\end{aligned}$$ because the function $-b_2t + \ul{e}(0)$ is a monotonically decreasing function that lower bounds $u_{\textbf{II}}(t)$.
**Case (ii) ($\alpha > 1$):** Much of this case follows the same line of reasoning used in Case (i). We thus provide only the main mathematical derivations and refer the reader to the previous case for the argumentation. Define $b_3:= \sup_{t\in(0,t_\circ]}\dot{\ol{p}}(t)$. One can deduce from that $$\begin{aligned}
\dot{\ol{p}}(t)
& = \frac{2 L_f \beta_{\textbf{II}} e^{-2t} (-\beta_{\textbf{II}}+U )(1-\omega t)^2 +
2 \omega (1 - \omega t) L_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big) }{(1-\omega t)^4}\\
& + (1-\alpha) \frac{-\alpha \beta_{\textbf{II}} e^{-\alpha t}(1+\omega t)^2 - 2 \omega (1 + \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1+\omega t)^4}\\
& \leq \frac{2 L_f \beta_{\textbf{II}} \big( U + \omega (\beta_{\textbf{II}} +U) \big) }{(1-\omega t_\circ)^3} + (\alpha - 1) \big( \alpha \beta_{\textbf{II}} (1+\omega t_\circ) + 2 \omega \beta_{\textbf{II}} \big) =: b_3.\end{aligned}$$ Hence, $\ol{p}(t)\leq b_3 t + \ol{p}(0)$ and as a result $$\begin{aligned}
\label{step_8}
\tau_{\textbf{II}} \geq \min \{ t_\circ, \delta/b_3 \}.\end{aligned}$$ Finally, bounding the derivative $\dot{\ul{p}}(t)$ from below over $(0,t_\circ]$, it follows that $$\begin{aligned}
\dot{\ul{p}}(t)
& = \frac{2 \ell_f \beta_{\textbf{II}} e^{-2t} (\beta_{\textbf{II}}-U )(1-\omega t)^2 - 2 \omega (1 - \omega t) \ell_f \beta_{\textbf{II}} \big( \beta_{\textbf{II}} e^{-2t} +U (1 - e^{-2t}) \big) }{(1-\omega t)^4} \\
& + (1-\alpha) \frac{-\alpha \beta_{\textbf{II}} e^{-\alpha t}(1-\omega t)^2 + 2 \omega (1 - \omega t) \beta_{\textbf{II}} e^{-\alpha t}}{(1-\omega t)^4}\\
& \geq -\frac{2 \ell_f \beta_{\textbf{II}} \big( U + \omega (\beta_{\textbf{II}} +U) \big) }{(1-\omega t_\circ)^3} - (\alpha-1) \frac{ 2 \omega \beta_{\textbf{II}} }{(1-\omega t_\circ)^3} =: -b_4,\end{aligned}$$ considering . Now, since $\ul{p}(t) \geq -b_4 t + \ul{p}(0)$, it is implied that $$\begin{aligned}
\label{step_9}
\tau_{\textbf{II}} \geq \min\{ t_\circ, \delta/b_4 \}.\end{aligned}$$ Notice that based on the relations derived in -, $$\begin{aligned}
\tau_{\textbf{II}} \geq
\min\Big\{t_\circ, \delta\Big/\Big(\frac{2 \mathcal{L}_f \beta_{\textbf{II}} \big( U + \omega (\beta_{\textbf{II}} +U) \big) }{(1-\omega t_\circ)^3} + |\alpha - 1| \frac{ 2 \omega \beta_{\textbf{II}} }{(1-\omega t_\circ)^3}
+ |\alpha - 1| \alpha \beta_{\textbf{II}} (1+\omega t_\circ) \Big)\Big\}.\end{aligned}$$ Suppose now that, for some scalar $r\in (0,1)$, $t_\circ$ is chosen such that $t_\circ \leq \frac{r}{\omega}$. It is evident that $$\begin{aligned}
\tau_{\textbf{II}} \geq
\min\Big\{\frac{r}{\omega}, \delta\Big/\Big(\frac{2 \mathcal{L}_f \beta_{\textbf{II}} \big( U + \omega (\beta_{\textbf{II}} +U) \big) }{(1-r)^3} + |\alpha - 1| \frac{ 2 \omega \beta_{\textbf{II}} }{(1-r)^3}
+ |\alpha - 1| \alpha \beta_{\textbf{II}} (1+r)\Big) \Big\}.\end{aligned}$$ Hence, the relation in Theorem \[theo\_step\_zeno\] is valid, which concludes the proof.
Proof of Theorem \[theo\_2\] {#subsec:proof_2}
----------------------------
In what follows, we provide the proof for the structure **II** and refer the interested reader to [@armanICML Theorem 3.7] for the structure **I**. We emphasize that the technical steps to establish a stable discretization for both structures are similar. According to the forward-Euler method, the velocity $\dot{x}_1$ and the acceleration $\dot{x}_2$ in the dynamics with are discretized as follows:
$$\begin{aligned}
\label{dx1}
\frac{x_1^{k+1}- x_1^k} {s} &= x_2^k,\\
\frac{x_2^{k+1}-x_2^k}{s} &= - u_{d,\textbf{II}}(x^k) \nabla f(x_1^{k}) - x_2^k,\end{aligned}$$
where the discrete input $u_{d,\textbf{II}}(x^k)=u_{\textbf{II}}(x^k)$. Now, observe that the definition of the flow set $\mathcal{C}_{d, \textbf{II}}$ (\[d2\_1\]) implies $$\begin{aligned}
c_1 \| x^k_2 \|^2 \leq \| \nabla f(x^k_1) \|^2 \leq c_2 \langle \nabla f(x^k_1),-x^k_2 \rangle
\leq c_2 \| \nabla f(x_1^k) \| \cdot \| x_2^k\|,\end{aligned}$$ where the extra inequality follows from the Cauchy-Schwarz inequality ($\forall~ a,b\in\mathbb{R}^n$, $\langle a ,b\rangle\leq \| a \|\cdot \| b \|$). In order to guarantee that the flow set $\mathcal{C}_{d,\textbf{II}}$ is non-empty, the relation (\[d4\_d1\]) should hold between the parameters $c_1$ and $c_2$, since $\sqrt{c_1}\leq \frac{\| \nabla f(x_1^k) \|}{\| x_2^k \|}\leq c_2$. Next, suppose that the parameters $c_1$, $c_2$, and $\beta$ satisfy (\[d4\_d2\]). Multiplying (\[d4\_d2\]) by $\| \nabla f(x_1^k)\|$, one can observe that the range space of the jump map $G_{d,\textbf{II}}(x^k)=\big((x^k)^\top,-\beta \nabla^\top f(x^k)\big)^\top$ is inside the flow set $\mathcal{C}_{d,\textbf{II}}$ (\[d2\_1\]). From the fact that the discrete dynamics (\[d1\]) evolves respecting the flow set $\mathcal{C}_{d,\textbf{II}}$ defined in (\[d2\_1\]), we deduce $$\begin{aligned}
f(x_1^{k+1}) -f(x_1^k)
& \leq \langle \nabla f(x_1^k), x_1^{k+1} - x_1^k \rangle + \frac{L_f}{2} \| x_1^{k+1} - x_1^k \|^2 \\
& \leq -s \langle \nabla f(x_1^k), -x_2^{k} \rangle + \frac{L_f s^2}{2} \| x_2^k \|^2 \\
& < - \frac{s}{c_2} \| \nabla f(x_1^k) \|^2 + \frac{L_f s^2}{2 c_1} \| \nabla f(x_1^k) \|^2 \\
& = \big( - \frac{s}{c_2} + \frac{L_f}{2 c_1} s^2 \big) \| \nabla f(x_1^k) \|^2 \leq 2 \mu_f \big( - \frac{s}{c_2} + \frac{L_f}{2 c_1} s^2 \big) \big( f(x_1^k)-f^* \big),\end{aligned}$$ where we made use of the relation (\[p2\_g\]), the definition (\[dx1\]), the relation (\[d2\_1\]), and the assumption (\[d\_1\]), respectively. Then, considering the inequality implied by the first and last terms given above and adding $f(x_1^k)-f^*$ to both sides of the considered inequality, we arrive at $$f(x_1^{k+1})-f^*\leq \lambda(s,c_1,c_2,\beta) \left( f(x_1^k)-f^* \right)$$ where $\lambda(s,c_1,c_2,\beta)$ is given by (\[d\_r\]). As a result, if the step size $s$ is chosen such that $s<\frac{2c_1}{c_2 L_f}$ then $\lambda(s,c_1,c_2,\beta) \in (0,1)$. The claim of Theorem \[theo\_2\] follows.
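The arithmetic behind the step-size restriction can be isolated in a few lines. Assuming the contraction factor takes the form implied by the chain of inequalities above, $\lambda(s)=1+2\mu_f\big(-\tfrac{s}{c_2}+\tfrac{L_f}{2c_1}s^2\big)$ (presumably the content of (\[d\_r\])), the following sketch with illustrative parameter values confirms that $\lambda(s)<1$ precisely when $0<s<\tfrac{2c_1}{c_2 L_f}$ (and that $\lambda$ stays positive for these values):

```python
# Quick check of the step-size condition: the contraction factor implied by
# the descent chain above, lambda(s) = 1 + 2*mu_f*(-s/c2 + L_f*s^2/(2*c1)),
# lies in (0,1) exactly when 0 < s < 2*c1/(c2*L_f).  Values are illustrative;
# the closed form of lambda is assumed to match the paper's (d_r).
mu_f, L_f, c1, c2 = 1.0, 10.0, 1.0, 4.0
lam = lambda s: 1.0 + 2.0 * mu_f * (-s / c2 + L_f * s**2 / (2.0 * c1))

s_max = 2.0 * c1 / (c2 * L_f)           # critical step size, 0.05 here
assert 0.0 < lam(0.5 * s_max) < 1.0     # admissible step: strict contraction
assert abs(lam(s_max) - 1.0) < 1e-12    # boundary: contraction is lost
assert lam(1.5 * s_max) > 1.0           # oversized step: certificate fails
```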
Numerical Examples {#sec:examp}
==================
In this section, a numerical example illustrating the results of this paper is presented. The example is a least mean square error (LMSE) problem $f(X)=\|A X -b \|^2$, where $X\in\mathbb{R}^5$ denotes the decision variable, $A\in\mathbb{R}^{50\times 5}$ with $L_f=2\lambda_{\max}(A^\top A)=136.9832$ and $\mu_f=2\lambda_{\min}(A^\top A)=3.6878$, and $b\in\mathbb{R}^{50}$. Since the LMSE function is convex (in our case, strongly convex), we take $\ell_f=0$. We begin with the results concerning the continuous-time case; the results for the discrete-time case follow.
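The constants above come directly from the spectrum of the constant Hessian $2A^\top A$ of the LMSE objective. A short sketch with a random stand-in for $A$ (the paper's actual data are not reproduced here, so the values differ):

```python
# The LMSE objective f(X) = ||A X - b||^2 has constant Hessian 2 A'A, so
# L_f and mu_f are twice the extreme eigenvalues of A'A.  A random matrix
# stands in for the paper's (unreported) data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
eigs = np.linalg.eigvalsh(2.0 * A.T @ A)   # ascending eigenvalues of the Hessian
mu_f, L_f = eigs[0], eigs[-1]

assert mu_f > 0.0          # A has full column rank (a.s.) => f strongly convex
assert L_f / mu_f >= 1.0   # condition number of the problem
```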
**Continuous-time case:** In what follows, we compare the behaviors of the proposed structures **I** and **II** (denoted by **Struct I** and **Struct II**, respectively) with the following fast methods:
- (**NWR**): Nesterov’s fast method with $\gamma(t)=\frac{3}{t}$ and without any restarting scheme,
- (**NSR**): Nesterov’s fast method with $\gamma(t)=\frac{3}{t}$ with the speed restarting scheme proposed in [@Su2016differential Section 5],
- (**AA-AMD**): the adaptive averaging accelerated mirror descent method proposed in [@krichene2016adaptive Section 2] with the choice of parameters given in [@krichene2016adaptive Example 1], $\beta=3$, and the adaptive heuristic $a(t)=\frac{3}{t}+\text{sign}\big(\max\big\{0,-\langle \nabla f(X(t)),\dot{X}(t)\rangle\big\}\big)\times\frac{1}{t^2}$,
- (**HDA**): the Hessian driven accelerated method proposed in [@attouch2016fast] with $\alpha=3$ and $\beta=1$.
(Some parameter names coincide across the above methods, e.g., $\beta$ appears in both **AA-AMD** and **HDA**, but the parameters are not necessarily the same; we refer the reader to the cited references for details.) We set the desired convergence rates $\alpha_{\textbf{I}}$ and $\alpha_{\textbf{II}}$ equal to each other. We then select $\beta_{\textbf{I}}$ and $\beta_{\textbf{II}}$ such that the corresponding flow sets $[\ul{u}_{\textbf{I}}, \ol{u}_{\textbf{I}}]$ and $[\ul{u}_{\textbf{II}}, \ol{u}_{\textbf{II}}]$ are relatively close, using Theorem \[theo\_1b\] and Theorem \[theo\_step\_conv\], respectively. The corresponding parameters of **Struct I** and **Struct II** are as follows: $\alpha_{\textbf{I}} = 0.2$, $\beta_{\textbf{I}} = 0.1356$, $ \ul{u}_{\textbf{I}} = -14.352$, $\ol{u}_{\textbf{I}} = 15.1511$; $\alpha_{\textbf{II}} = 0.2$, $\beta_{\textbf{II}} = 0.0298$, $ \ul{u}_{\textbf{II}} = -0.1861$, $\ol{u}_{\textbf{II}} = 5.7457$.
In Figure \[fig:obj\_con\], the behaviors of the suboptimality measure $f\big(X(t)\big)-f^*$ of the considered methods are depicted. The corresponding control inputs of **Struct I**, **Struct II**, and **NSR** are represented in Figure \[fig:input\_con\]. With regards to **Struct I**, observe that the length of the inter-jump intervals is small during the early stages of the simulation. As time progresses and the value of $\nabla f(X)$ decreases, the length of the inter-jump intervals increases (echoing the message conveyed in Theorem \[theo\_zeno\]). Furthermore, in the case of **Struct I**, where $u_{\textbf{I}}$ plays the role of damping, the input $u_{\textbf{I}}$ can take negative values, unlike most of the approaches in the literature.
**Discrete-time case:** We employ Algorithm \[alg:example\] for **Struct I** and **Struct II**.
In Figure \[fig:obj\_disc\_all\], we compare these two structures with the discrete-time methods:
- (**NWR**): Algorithm 1 in [@o2015adaptive] with $q=0$ and $t_k=\frac{1}{L_f}$,
- (**NSR**): Algorithm 1 in [@Su2016differential] with $k_{\min}=1$ and $s=\frac{1}{L_f}$,
- (**AA-AMD**): Algorithm 1 in the supplementary material of [@krichene2016adaptive] with $\beta=\beta^{\max}=3$,
- (**NGR**): Nesterov’s method with the gradient restarting scheme proposed in [@o2015adaptive Section 3.2] with $q=0$ and $t_k=\frac{1}{L_f}$.
It is evident that the discrete counterparts of our proposed structures perform poorly compared to these algorithms, reinforcing the assertion of Remark \[rem\_5\] calling for a smarter discretization technique. Observe that **NGR** provides the best convergence among the considered methods. In Figure \[fig:obj\_disc\_bests\], we depict the best behavior of the considered methods (excluding **NGR**) for this specific example. Interestingly, **NGR** still outperforms all other methods.
Consider the three methods **Struct I**, **Struct II**, and **NSR** in Figure \[fig:obj\_disc\_all\]. The results depicted in Figure \[fig:obj\_disc\_all\] correspond to the standard parameters involved in each algorithm, i.e., the step size $s = 1/L_f$ for the proposed methods in Corollary \[Cor\_1\], and the parameter $k_{\min} = 1$ in **NSR**. As we saw in Figure \[fig:obj\_disc\_bests\], these parameters can also be tuned depending on the application at hand. In the case of **NSR**, the role of the parameter $k_{\min}$ is to prevent unnecessary restarting instants that may degrade the overall performance. On the other hand, setting $k_{\min}>1$ may cause the algorithm to lose its monotonicity property. Figure \[fig:obj\_disc\_NSR\] shows how changing $k_{\min}$ affects the performance. The best performance is achieved by setting $k_{\min}=19$, and the algorithm becomes non-monotonic for $k_{\min}>19$. With regards to our proposed methods, we observe that increasing the step size $s$ improves the performance; see Figure \[fig:obj\_disc\_1\] for **Struct I** and Figure \[fig:obj\_disc\_2\] for **Struct II**. Moreover, the discrete-time counterparts of **Struct I** and **Struct II** behave in a very similar fashion, which has to do with the lack of a proper discretization that can fully exploit the properties of the corresponding feedback input; see Remark \[rem\_5\].
Conclusions {#sec:conc}
===========
Inspired by a control-oriented viewpoint, we proposed two hybrid dynamical structures that achieve exponential convergence rates for a certain class of unconstrained optimization problems in a continuous-time setting. The distinctive feature of our methodology is the synthesis of certain inputs in a state-dependent fashion, as opposed to the time-dependent approach followed by most results in the literature. Due to this state-dependency, the time-discretization of the proposed continuous-time hybrid dynamical systems is in fact difficult (and to some extent even more involved than for the time-varying dynamics commonly used in the literature). In this regard, we have not been able to show that one can simply apply the forward-Euler method to discretize the continuous-time dynamics and still guarantee an exponential rate of convergence. Thus, a more in-depth analysis is due. We expect that, because of the state-dependency of our methods, a promising direction to explore is geometric discretization techniques.
|
---
abstract: |
We prove global existence and uniqueness of solutions to a Cahn-Hilliard system with nonlinear viscosity terms and nonlinear dynamic boundary conditions. The problem is highly nonlinear, characterized by four nonlinearities and two separate diffusive terms, all acting in the interior of the domain or on its boundary. Through a suitable approximation of the problem based on the abstract theory of doubly nonlinear evolution equations, existence and uniqueness of solutions are proved using compactness and monotonicity arguments. The asymptotic behaviour of the solutions as the diffusion operator on the boundary vanishes is also shown.\
[**AMS Subject Classification:**]{} 35D30, 35D35, 35K52, 35K61, 80A22\
[**Key words and phrases:**]{} Cahn-Hilliard system, dynamic boundary conditions, nonlinear viscosity, existence of solutions, uniqueness
author:
- |
[Luca Scarpa]{}\
[Department of Mathematics, University College London]{}\
[Gower Street, London WC1E 6BT, United Kingdom]{}\
[E-mail: `[email protected]`]{}
title: |
Existence and uniqueness of solutions\
to singular Cahn-Hilliard equations\
with nonlinear viscosity terms\
and dynamic boundary conditions[^1]\
---
Introduction
============
\[intro\]
The viscous Cahn-Hilliard equation can be written in its general form as $$\partial_t u - \Delta \mu = 0\,, \qquad
\mu = \alpha(\partial_t u) - \Delta u + \beta(u) + \pi(u) - g \qquad \text{in }(0,T)\times\Omega\,,$$ where the unknowns $u$ and $\mu$ represent the so-called order parameter and chemical potential, respectively. This equation is fundamental in the phase separation of a binary alloy, for example, and describes important qualitative behaviour such as the so-called spinodal decomposition: we refer to the classical works [@cahn-hill; @novick-cohen; @maier-stan1; @maier-stan2] for a physical derivation of the model and some studies on the spinodal decomposition process. Here, $\Omega$ is a smooth bounded domain in $\erre^N$ ($N=2,3$) with smooth boundary $\Gamma$, and $T>0$ is the final time. As usual, the term $\beta+\pi$ represents the derivative of a double-well potential, $g$ is a given source and $\alpha$ is a monotone function acting on $\partial_t u$. While in the original model $\alpha$ is a linear function, some generalizations have been proposed where the behaviour of $\alpha$ is of nonlinear type: see in this direction [@gurtin].
In the present contribution, we study the equation above coupled with the homogenous Neumann boundary condition for $\mu$ $$\partial_{\bf n}\mu = 0 \qquad\text{in } (0,T)\times\Gamma\,,$$ which is very natural and ensures the conservation of the mass in the bulk, and a second-order doubly nonlinear dynamic boundary condition for $u$ $$\alpha_\Gamma(\partial_t u) + \partial_{\bf n}u - \eps\Delta_\Gamma u + \beta_{\Gamma}(u) + \pi_\Gamma(u) = g_\Gamma
\qquad\text{in } (0,T)\times\Gamma\,.$$ Here, $\eps>0$ is a fixed constant, $\Delta_\Gamma$ is the usual Laplace-Beltrami operator on $\Gamma$, $g_\Gamma$ is a prescribed source on the boundary and the term $\beta_\Gamma + \pi_\Gamma$ represents the derivative of a double-well potential on the boundary, which may possibly differ from the one in the interior of the domain $\Omega$. Similarly, $\alpha_\Gamma$ is a generic monotone function. Dynamic boundary conditions have been recently proposed by physicists in order to take into account also possible interactions with the walls of a confined system: for a physical motivation of this choice and some studies on parabolic-type equations with dynamic boundary conditions we mention the works [@fish-spinod; @kenz-spinod] and [@gal-DBC; @gal-DBC2; @gal-grass-ACDBC].
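Let us record in passing the formal computation behind the mass conservation mentioned above: testing the first equation by $1$ and using the divergence theorem together with the no-flux condition for $\mu$, one finds $$\frac{d}{dt}\int_\Omega u = \int_\Omega\Delta\mu = \int_\Gamma\partial_{\bf n}\mu = 0\,,$$ so that the spatial mean of the order parameter is preserved in time.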
Cahn-Hilliard equations with dynamic boundary conditions have been widely studied in recent years in the classical setting in which the viscosity terms depend linearly on the time-derivative of the order parameter. This framework corresponds in our notation to the choices $\alpha=a I$ and $\alpha_\Gamma= b I$, with $a,b>0$ given constants and $I$ the identity on $\erre$. Let us mention in this direction the works [@mir-zel; @col-fuk-eqCH; @colli-fuk-CHmass; @col-gil-spr; @gil-mir-sch; @gil-mir-sch-longtime; @col-scar] dealing with well-posedness, regularity, long-time behaviour of solutions and asymptotics, [@col-far-hass-gil-spr; @col-gil-spr-contr; @col-gil-spr-contr2] for some corresponding optimal control problems, and [@cal-colli; @colli-sprek-optACDBC] focused specifically on Allen-Cahn equations.
On the other hand, an equally important line of research has developed around Cahn-Hilliard equations with possibly nonlinear viscosity terms: the reader can refer to the contribution [@mir-sch] for existence-uniqueness and long-time behaviour under classical homogeneous Neumann conditions, and to [@bcst1] for a detailed thermodynamical derivation of the model and well-posedness in the case of Dirichlet conditions for the chemical potential. Let us also mention the work [@mir-zel2] dealing with a doubly nonlinear Cahn-Hilliard equation with a different type of nonlinearity in the viscosity, and the classical contributions [@colli-visin; @sch-seg-stef] on a variational approach to abstract doubly nonlinear equations. As the reader may notice, in this case the attention is mainly focused on the presence of a double nonlinearity in the governing equation, and, consequently, the prescription of the boundary conditions remains quite broad and classical (of homogeneous Neumann or Dirichlet type).
The aim of this paper is to provide some unifying existence and uniqueness results for the more general case when both dynamic boundary conditions and nonlinear viscosity terms are present in the system. From the physical point of view, the presence of dynamic boundary conditions and nonlinear viscosity terms is more accurate, and allows for a more faithful description of the process. On the other hand, from the mathematical perspective, the model is much more difficult to handle and to study. Indeed, this specific description gives rise to a system with $4$ nonlinearities: $\alpha$ and $\alpha_\Gamma$ acting on the time-derivatives and representing the viscosities, and $\beta$ and $\beta_\Gamma$ acting on the order parameter. Besides making the model non-trivial, the presence of several nonlinearities is strongly stimulating and challenging. In order to include also possibly non-smooth potentials in our analysis, the nonlinearities are assumed to be possibly multivalued (maximal monotone) graphs.
To summarize, we are concerned with the following system $$\begin{aligned}
\label{eq1}
\partial_t u - \Delta \mu = 0 \qquad&\text{in } (0,T)\times\Omega\,,\\
\label{eq2}
\mu \in \alpha(\partial_t u) - \Delta u + \beta(u) + \pi(u) - g \qquad&\text{in } (0,T)\times\Omega\,,\\
\label{bound}
u=v\,, \quad \partial_{\bf n}\mu=0 \qquad&\text{in } (0,T)\times\Gamma\,,\\
\label{eq3}
\alpha_\Gamma(\partial_tv) + \partial_{\bf n}u - \eps\Delta_\Gamma v + \beta_\Gamma(v) + \pi_\Gamma(v) \ni g_\Gamma
\qquad&\text{in } (0,T)\times\Gamma\,,\\
\label{init}
u(0)=u_0\,, \quad v(0)=v_0 \qquad&\text{in } \Omega\,.\end{aligned}$$ The paper is organized as follows. In Section \[main\_results\] we state the main hypotheses of the work and the main results, commenting on the different sets of assumptions that are in play. Section \[approx\] is entirely focused on the construction of suitable approximated solutions, and is based on some abstract results on doubly nonlinear evolution equations. In Sections \[proof1\], \[proof2\] and \[proof3\] we present the proofs of the three existence results of the paper, while Section \[proof4\] contains the proof of the uniqueness result. Finally, in Section \[proof5\] we give a proof of the asymptotic limit as $\eps\to0$, recovering in this way a solution to the system corresponding to the case $\eps=0$.
Setting, assumptions and main results
=====================================
\[main\_results\]
Throughout the paper, $\Omega\subseteq\erre^N$ ($N=2,3$) is a smooth bounded domain with smooth boundary $\Gamma$ and $T>0$ is a fixed final time. We use the notation $Q_t:=(0,t)\times\Omega$ and $\Sigma_t:=(0,t)\times\Gamma$ for every $t\in(0,T]$, with $Q:=Q_T$ and $\Sigma:=\Sigma_T$. The outward normal unit vector on $\Gamma$, the tangential gradient and the Laplace-Beltrami operator on $\Gamma$ are denoted by ${\bf n}$, $\nabla_\Gamma$ and $\Delta_\Gamma$, respectively. We shall also use the symbol $\Delta_{\bf n}$ to denote the Laplace operator with homogeneous Neumann conditions. Moreover, $\eps$ is a positive fixed number.
We introduce the spaces $$\begin{gathered}
H:=L^2(\Omega)\,, \qquad H_\Gamma:=L^2(\Gamma)\,, \qquad \H:=H \times H_\Gamma\,,\\
V:=H^1(\Omega)\,, \qquad
V_{\Gamma}:=H^1(\Gamma)\,,
\qquad \V:=\{(x,y)\in V\times V_{\Gamma}: x=y \text{ on } \Gamma\}\,,\\
W:=H^2(\Omega)\,, \qquad
W_\Gamma:=H^2(\Gamma)\,, \qquad \W:=\{(x,y)\in W\times W_{\Gamma}: x=y \text{ on } \Gamma\}\,,\\
W_{\bf n}:=\{x \in W: \partial_{\bf n}x=0 \text{ on } \Gamma\}\,.\end{gathered}$$ As usual, we identify $H$ and $H_\Gamma$ with their own duals $H^*$ and $H_\Gamma^*$, so that $H\embed V^*$ and $H_\Gamma\embed V_{\Gamma}^*$ with the inclusions given by the inner products of $H$ and $H_\Gamma$, respectively. Moreover, we denote all norms and duality pairings by the symbols $\norm{\cdot}$ and $\ip{\cdot}{\cdot}$, respectively, with a subscript specifying the spaces in consideration.
For any element $y\in V^*$ we define the mean $$y_\Omega:=\frac1{|\Omega|}\ip{y}{1}\,.$$ Moreover, recall that a norm on $V$, equivalent to the usual one, is given by \[V\_eq\] $$\norm{x}^2_V:=\norm{\nabla x}_H^2 + |x_\Omega|^2\,, \qquad x\in V\,,$$ and that the Laplace operator with Neumann conditions is an isomorphism between the null-mean elements in $V$ and the null-mean elements in $V^*$, so that its inverse $$\mathcal N:\{y\in V^*: \; y_\Omega=0\}\to\{y\in V: \;y_\Omega=0\}$$ is well defined, where, for any $y\in V^*$ with $y_\Omega=0$, $\mathcal Ny$ is the unique element in $V$ with null mean such that $$\int_\Omega\nabla\mathcal Ny\cdot\nabla\varphi = \ip{y}{\varphi} \quad\forall\,\varphi\in V\,.$$
Let us specify the main hypotheses on the data: these will be in order in the whole work and will not be recalled explicitly.
We assume that $$\widehat{\alpha}, \widehat\alpha_\Gamma, \widehat{\beta}, \widehat\beta_\Gamma:\erre\to[0,+\infty]$$ are proper, convex and lower semicontinuous functions such that $$0 = \widehat\alpha(0)=\widehat\alpha_\Gamma(0)=\widehat\beta(0)=\widehat\beta_\Gamma(0)\,,$$ and we set $$\alpha:=\partial\widehat\alpha\,, \quad \alpha_\Gamma:=\partial\widehat\alpha_\Gamma\,, \quad
\beta:=\partial\widehat\beta\,, \quad \beta_\Gamma:=\partial\widehat\beta_\Gamma\,.$$ Moreover, let $$\pi,\pi_\Gamma:\erre\to\erre \quad\text{Lipschitz continuous}\,, \qquad \pi(0)=\pi_\Gamma(0)=0\,,$$ and denote by $C_\pi$ and $C_{\pi_\Gamma}$ their respective Lipchitz constants. We shall always assume that $\alpha_\Gamma$ is coercive and that $\beta$ is controlled by $\beta_\Gamma$, i.e. $$\begin{gathered}
\label{coerc2}
\exists\, b_1, b_2>0:\quad rs\geq b_1|s|^2-b_2 \quad\forall\,s\in D(\alpha_\Gamma)\,,\quad\forall\,r\in\alpha_\Gamma(s)\,,\\
\label{dom1}
D(\beta_\Gamma)\subseteq D(\beta) \quad\text{and}\quad
\exists\,c>0:\quad\abs{\beta^0(s)}\leq c\left(1+\abs{\beta_\Gamma^0(s)}\right) \quad\forall\,s\in D(\beta_\Gamma)\,.\end{gathered}$$ These hypotheses will always be in order and will not be recalled explicitly throughout the paper. Note that the compatibility condition \[dom1\] is not new in the literature dealing with Allen-Cahn and Cahn-Hilliard equations with dynamic boundary conditions: see for example [@cal-colli; @col-gil-spr]. Moreover, the coercivity condition \[coerc2\] appears also very natural if we recall that the evolution on the boundary is of order $2$ in space, hence of Allen-Cahn type.
The first existence result that we prove requires additional assumptions on the graphs $\alpha$ and $\alpha_\Gamma$: in particular, their growth at infinity has to be at most linear, and $\alpha$ has to be coercive as well. On the other hand, no further hypothesis is made on $\beta$ and $\beta_\Gamma$.
\[thm1\] Suppose that $$\begin{gathered}
\label{g}
g \in L^2(0,T; H)\,, \qquad
g_\Gamma \in L^2(0,T; H_\Gamma)\,,\\
\label{u0}
u_0 \in V\,,\quad u_{0|\Gamma}\in V_{\Gamma}\,,
\qquad \widehat\beta(u_0) \in L^1(\Omega)\,, \qquad \widehat\beta_\Gamma({u_0}_{|\Gamma})\in L^1(\Gamma)\,,\\
\label{u0_mean}
(u_0)_\Omega \in \operatorname{Int} D(\beta_\Gamma)\,,\\
\label{alpha_sub}
D(\alpha)=D(\alpha_\Gamma)=\erre
\quad\text{and}\quad\exists\,L>0: \max\{\abs{\alpha^0(s)}, \abs{\alpha_\Gamma^0(s)}\}\leq L\left(1+|s|\right) \quad\forall\,s\in \erre\,,\\
\label{coerc1}
\exists\, a_1, a_2>0:\quad rs\geq a_1|s|^2-a_2 \quad\forall\,s\in D(\alpha)\,,\quad\forall\,r\in\alpha(s)\,.
\end{gathered}$$ Then, there exists a septuple $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ such that $$\begin{gathered}
\label{u}
u \in L^\infty(0,T; V)\cap H^1(0,T; H)\cap L^2(0,T; W)\,,\\
\label{v}
v \in L^\infty(0,T; V_{\Gamma})\cap H^1(0,T; H_\Gamma)\cap L^2(0,T; W_\Gamma)\,,\\
\label{mu}
\mu \in L^2(0,T; W_{\bf n})\,,\\
\label{xi_eta}
\eta,\xi \in L^2(0,T; H)\,, \qquad
\eta_\Gamma,\xi_\Gamma \in L^2(0,T; H_\Gamma)\,,\\
\label{cond}
v=u_{|\Gamma} \quad\text{a.e.~in } \Sigma\,, \qquad u(0)=u_0\,,\\
\label{incl}
\eta\in\alpha(\partial_t u)\,, \;\; \xi \in \beta(u)\quad\text{a.e.~in } Q\,, \qquad
\eta_\Gamma \in \alpha_\Gamma(\partial_t v)\,, \;\;
\xi_\Gamma \in\beta_\Gamma(v)\quad\text{a.e.~in } \Sigma
\end{gathered}$$ and satisfying $$\begin{gathered}
\label{1}
\partial_t u - \Delta \mu = 0\,, \\
\label{2}
\mu=\eta - \Delta u + \xi + \pi(u) - g\,,\\
\label{3}
\eta_\Gamma + \partial_{\bf n} u - \eps\Delta_\Gamma v + \xi_\Gamma + \pi_\Gamma(v) = g_\Gamma\,.
\end{gathered}$$
Note that the setting of Theorem \[thm1\] includes the classical linear viscosity case, where $\alpha=a I$ and $\alpha_\Gamma=bI$, for $a,b>0$, and allows for any choice of the potentials acting on $u$ and $v$, provided that the compatibility condition \[dom1\] holds. In particular, in the choice of $\widehat\beta+\widehat\pi$ and $\widehat\beta_\Gamma+\widehat\pi_\Gamma$ we are allowed to consider also logarithmic-type potentials, which are the most relevant in terms of thermodynamical consistency of the model, i.e. $$r\mapsto \left((1+r)\ln(1+r)+(1-r)\ln(1-r)\right) - cr^2\,, \qquad r\in(-1,1)\,, \qquad c>0\,.$$
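For this potential the splitting into a maximal monotone graph plus a Lipschitz perturbation is explicit (a routine computation, recorded here for the reader's convenience): one takes $$\widehat\beta(r)=(1+r)\ln(1+r)+(1-r)\ln(1-r)\,, \qquad \widehat\pi(r)=-cr^2\,,$$ whence $$\beta(r)=\ln\frac{1+r}{1-r}\,, \qquad \pi(r)=-2cr\,, \qquad r\in(-1,1)\,,$$ with $\beta$ maximal monotone, $\widehat\beta(0)=\beta(0)=0$ and $\pi$ Lipschitz continuous with constant $C_\pi=2c$, in accordance with the assumptions above.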
In the next existence result, we show how to remove the coercivity hypothesis on $\alpha$ by requiring stronger assumptions on the data. Again, no further restrictions are assumed on $\beta$ and $\beta_\Gamma$. Moreover, we also stress that if $\alpha$ is coercive, then the further hypotheses on the data ensure additional regularity of the solutions.
\[thm2\] Assume conditions – and $$\begin{gathered}
\label{g'}
g \in L^2(0,T; H)\cap H^1(0,T; V^*)\,, \qquad
g_\Gamma \in L^2(0,T; H_{\Gamma}) \cap H^1(0,T; V_{\Gamma}^*)\,,\\
\label{g'_bis}
g(0) \in H\,, \qquad g_\Gamma(0) \in H_\Gamma\,,\\
\label{u0'}
u_0 \in W\,, \qquad u_{0|\Gamma} \in W_\Gamma\,,\\
\label{u0'_bis}
\exists\,y_0\in H:\;y_0\in\beta(u_0)\quad\text{a.e.~in } \Omega\,, \qquad
\exists\,y_{0\Gamma}\in H_\Gamma:\;y_{0\Gamma}\in\beta_\Gamma(u_{0|\Gamma})\quad\text{a.e.~in } \Gamma\,,\\
\label{u0'_ter}
\exists\,\delta_0>0:\quad
\{-\Delta u_0+\beta_\delta(u_0)+(I-\delta\Delta_{\bf n})^{-1}g(0)\}_{\delta\in(0,\delta_0)} \quad\text{is bounded in } V\,.
\end{gathered}$$ Then there exists a septuple $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ such that $$\begin{gathered}
\label{u'}
u \in W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)\cap L^\infty(0,T; W)\,,\\
\label{v'}
v \in W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; V_\Gamma)\cap L^\infty(0,T; W_\Gamma)\,,\\
\label{mu'}
\mu \in L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\label{xi_eta'}
\eta\,,\xi \in L^\infty(0,T; H)\,, \qquad \eta_\Gamma\,,\xi_\Gamma \in L^\infty(0,T; H_\Gamma)
\end{gathered}$$ and satisfying conditions –. Moreover, if holds, then the same conclusion is true without the assumption , and additionally $u \in W^{1,\infty}(0,T; H)$ and $\mu \in L^\infty(0,T; W_{\bf n})$.
Note that hypothesis clearly holds if $u_0\in H^3(\Omega)$, $g(0)\in V$ and the family $\{\beta_\delta(u_0)\}_{\delta \in(0,\delta_0)}$ is bounded in $V$. This condition is not new in the literature: see for example [@col-gil-spr pp. 977–978] for sufficient conditions.
The setting of Theorem \[thm2\] allows us to include in our analysis also the cases where $\alpha=\operatorname{sign}$, for example, or $\alpha(r)=r_+$, $r\in\erre$. Again, no further assumptions are made on $\beta$ or $\beta_\Gamma$, so that logarithmic-type potentials are included.
The third existence result that we present allows us to remove the linear growth condition on $\alpha$ and $\alpha_\Gamma$, but requires in turn a polynomial control on the growth of $\beta$ and $\beta_\Gamma$. Here, the inclusions with respect to the operators $\alpha$ and $\alpha_\Gamma$ are satisfied in a weak sense. To this end, we shall introduce the operators $\alpha_w:V\to2^{V^*}$ and $\alpha_{\Gamma w}:V_\Gamma\to 2^{V_\Gamma^*}$ as $$\begin{aligned}
\alpha_w(x)&:=\left\{y\in V^*:\;\int_\Omega\widehat\alpha(x) + \ip{y}{\varphi-x}_V\leq\int_\Omega\widehat{\alpha}(\varphi)
\quad\forall\,\varphi\in V\right\}\,, \qquad x\in V\,,\\
\alpha_{\Gamma w}(x_\Gamma)&:=\left\{y_\Gamma\in V_\Gamma^*:
\;\int_\Gamma\widehat\alpha_\Gamma(x_\Gamma) + \ip{y_\Gamma}{\psi-x_\Gamma}_{V_\Gamma}\leq\int_\Gamma\widehat\alpha_\Gamma(\psi)
\quad\forall\,\psi\in V_\Gamma\right\}\,, \qquad x_\Gamma\in V_\Gamma\,,\end{aligned}$$ which are clearly the subdifferentials of the proper, convex and l.s.c. functions induced by $\widehat\alpha$ and $\widehat\alpha_\Gamma$ on $V$ and $V_\Gamma$, respectively. Similarly, we shall introduce the (maximal monotone) operator $\widetilde\alpha_w:\V\to2^{\V^*}$ as $$\begin{split}
\widetilde\alpha_w(x,x_\Gamma):=&\left\{y\in \V^*:\;\int_\Omega\widehat\alpha(x)
+\int_\Gamma\widehat\alpha_\Gamma(x_\Gamma)
+ \ip{y}{(\varphi,\psi)-(x,x_\Gamma)}_\V\right.\\
&\qquad\qquad\quad\left.\leq\int_\Omega\widehat{\alpha}(\varphi)
+\int_\Gamma\widehat\alpha_\Gamma(\psi)
\quad\forall\,(\varphi,\psi)\in \V\right\}\,, \qquad (x,x_\Gamma)\in \V\,.
\end{split}$$ Note that $\alpha_w(x)+\alpha_{\Gamma w}(x_\Gamma)\subseteq\widetilde\alpha_w(x,x_\Gamma)$ for every $(x,x_\Gamma)\in \V$, but equality may not hold in general, as $\V^*$ is strictly larger than $(V\times V_\Gamma)^*$. This will result in a weaker variational formulation, both for the evolution equation itself and for the inclusions with respect to the nonlinear operators acting in the viscosity terms.
\[thm3\] Assume conditions and –. If \[ip\_0dom\] $$0\in\operatorname{Int}\left(D(\alpha)\cap D(\alpha_\Gamma)\right)$$ and $$\begin{aligned}
\label{ip_beta}
\exists\,c_1,c_2>0&: \quad |r|\leq c_1|s|^5 + c_2 \quad\forall\,s\in D(\beta)\,,\quad\forall\,r\in\beta(s)\\
\label{ip_beta_g}
\exists\,p\geq 5,\;d_1,d_2>0&:\quad |r| \leq d_1 |s|^{p} + d_2 \quad\forall\,s\in D(\beta_\Gamma)\,, \quad
\forall\,r\in \beta_\Gamma(s)\,,
\end{aligned}$$ then there exists a septuple $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ such that $$\begin{gathered}
\label{uv_w}
u \in W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)\,,\qquad
v \in W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; V_\Gamma)\,,\\
\label{mu_w}
\mu \in L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\label{eta_w}
\eta_w \in L^\infty(0,T; \V^*)\,, \qquad
\eta_w\in\widetilde\alpha_w(\partial_t u, \partial_t v) \quad\text{a.e.~in } (0,T)\,,\\
\label{xi_w}
\xi \in L^\infty(0,T; L^{6/5}(\Omega))\,, \qquad
\xi_\Gamma \in L^\infty(0,T; L^q(\Gamma))\quad\forall\,q\in[1,+\infty)\,,\\
\label{incl_xi_w}
\xi\in\beta(u) \quad\text{a.e.~in } Q\,,
\qquad \xi_\Gamma \in \beta_\Gamma(v) \quad\text{a.e.~in } \Sigma\,,
\end{gathered}$$ satisfying conditions , and \[eq\_var\]
$$\begin{split}
\int_\Omega\mu(t)\varphi=\ip{\eta_w(t)}{(\varphi,\psi)}_\V&+ \int_\Omega\nabla u(t)\cdot\nabla\varphi+ \int_\Omega\left(\xi(t)+\pi(u(t))-g(t)\right)\varphi\\
&+ \eps\int_\Gamma\nabla_\Gamma v(t)\cdot\nabla_\Gamma\psi +\int_\Gamma\left(\xi_\Gamma(t)+\pi_\Gamma(v(t))-g_\Gamma(t)\right)\psi
\end{split}$$
for every $(\varphi,\psi)\in \V$ and a.e. $t\in(0,T)$. Moreover, if \[ip\_beta’\] $$\exists\,c_1,c_2>0:\quad |r|\leq c_1|s|^3 + c_2 \quad\forall\,s\in D(\beta)\,,\quad\forall\,r\in\beta(s)\,,$$ then $\xi \in L^\infty(0,T; H)$. Furthermore, if holds, then the same conclusions are true also without the assumption , and additionally $u \in W^{1,\infty}(0,T; H)$ and $\mu \in L^\infty(0,T; W_{\bf n})$.
The setting of Theorem \[thm3\] allows $\alpha$ and $\alpha_\Gamma$ to be superlinear at infinity, but in turn requires polynomial growth for $\beta$ and $\beta_\Gamma$. In this setting, note that we can include the classical choice $$r\mapsto \frac14(r^2-1)^2\,, \qquad r\in\erre\,,$$ for $\widehat\beta+\widehat\pi$, and any generic polynomial double-well potential for $\widehat\beta_\Gamma+\widehat\pi_\Gamma$. These may be seen, as usual, as suitable approximations of the more relevant logarithmic potentials.
Let us stress that the hypothesis is the direct generalization of . Indeed, it is readily seen from that $(u)_\Omega$ is constantly equal to $(u_0)_\Omega$, as well as $(\partial_t u)_\Omega=0$ at any time. Consequently, taking into account that $\alpha$ and $\alpha_\Gamma$ are acting on the time derivatives of the solutions, the hypotheses and clearly possess the same structure.
Let us comment on \[eq\_var\], which is the natural variational formulation in the dual space $\V^*$ of the couple of equations \[2\] and \[3\]. Note that since $N\in\{2,3\}$, we have the continuous inclusions $V\embed L^6(\Omega)$ and $V_\Gamma\embed L^q(\Gamma)$ for every $q\in[1,+\infty)$. Hence, it is clear that $L^{6/5}(\Omega)\embed V^*$ and $L^{q'}(\Gamma)\embed V_\Gamma^*$ for every $q'\in(1,+\infty]$. For these reasons, we have in particular that $\xi \in L^\infty(0,T; V^*)$ and $\xi_\Gamma \in L^\infty(0,T;V_\Gamma^*)$, so that the dualities $$\int_\Omega \xi\varphi \qquad\text{and}\qquad \int_\Gamma\xi_\Gamma\psi$$ in the variational formulation make sense by the classical Hölder inequality, and must be read as $\ip{\xi}{\varphi}_V$ and $\ip{\xi_\Gamma}{\psi}_{V_\Gamma}$, respectively.
We now turn to uniqueness of solutions. According to different smoothness or growth assumptions on the potentials, uniqueness is proved both in the class of solutions given by Theorem \[thm2\] and in the larger class of Theorem \[thm1\].
\[thm4\] Assume that $\beta$ and $\beta_\Gamma$ are single-valued, and that $$\begin{gathered}
\label{ip_uniq}
F:=\widehat\beta + \int_0^\cdot\pi(s)\,ds \in C^{2,1}_{loc}(\erre)\,, \qquad
F_\Gamma:=\widehat\beta_\Gamma + \int_0^\cdot\pi_\Gamma(s)\,ds \in C^{2,1}_{loc}(\erre)\,,\\
\label{ip_uniq'}
\exists\,\widetilde{b_1}>0:\quad (s_1-s_2)(r_1-r_2)\geq\widetilde{b_1}|r_1-r_2|^2 \quad\forall\,(r_i,s_i)\in\alpha_\Gamma\,, \;i=1,2\,.
\end{gathered}$$ Then, there is a unique septuple $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ satisfying – and –. Furthermore, if additionally $$\begin{aligned}
\label{ip_uniq''}
\exists\,M>0:&\quad F'''(r)\leq M\left(1+|r|^3\right) \quad\text{for a.e.~}r\in\erre\,,\\
\label{ip_uniq'''}
\exists\,M,q>0:&\quad F_\Gamma'''(r)\leq M\left(1+|r|^q\right) \quad\text{for a.e.~}r\in\erre\,,
\end{aligned}$$ there exists a unique septuple $(u,v,\mu,\eta,\xi, \eta_\Gamma, \xi_\Gamma)$ satisfying –, and .
Finally, the last result that we present investigates the asymptotic behaviour of the solutions with respect to $\eps$, and provides a further existence result for the problem with $\eps=0$. For the sake of brevity, we only consider the case of the linear growth assumption on $\alpha$ and $\alpha_\Gamma$, and provide different asymptotic convergences of the solutions depending on whether the coercivity assumption is in order. In this direction, we need to introduce a weak formulation of the operator $\beta_\Gamma$ induced on the space $H^{1/2}(\Gamma)$. Namely, we define $\beta_{\Gamma w}:H^{1/2}(\Gamma)\to 2^{H^{-1/2}(\Gamma)}$ as the maximal monotone operator $$\beta_{\Gamma w}(x):=\left\{y \in H^{-1/2}(\Gamma):
\int_\Gamma\widehat\beta_\Gamma(x) + \ip{y}{w-x}_{H^{1/2}(\Gamma)} \leq \int_\Gamma\widehat\beta_\Gamma(w)
\quad\forall\,w\in H^{1/2}(\Gamma)\right\}\,.$$
\[thm5\] Assume conditions , – and let $$u_0\in V\,, \qquad \widehat\beta(u_0)\in L^1(\Omega)\,, \qquad \widehat\beta_\Gamma(u_{0|\Gamma})\in L^1(\Gamma)\,.$$ Let $(u_0^\eps)_\eps\subseteq V$ be any family such that $u_0^\eps$ satisfies for every $\eps>0$, $$\begin{gathered}
u_0^\eps\to u_0 \quad\text{in V}\,, \qquad
\eps^{1/2}u_{0|\Gamma}^\eps \to 0 \quad\text{in } V_\Gamma \qquad\text{as } \eps\searrow0\,,\\
\label{est_init}
\eps\norm{u_{0|\Gamma}^\eps}^2_{V_\Gamma}+
\norm{\widehat\beta(u_0^\eps)}_{L^1(\Omega)}+
\norm{\widehat\beta_\Gamma(u_{0|\Gamma}^\eps)}_{L^1(\Gamma)}\leq c \qquad\forall\,\eps>0
\end{gathered}$$ for a positive constant $c$, and let $(u_\eps,v_\eps,\mu_\eps,\eta_\eps,\xi_\eps,\eta_{\Gamma\eps},\xi_{\Gamma\eps})$ be the solutions given by Theorem \[thm1\] satisfying conditions – with initial datum $u_0^\eps$. Then, there exists a sequence $(\eps_n)_n$ with $\eps_n\to0$ as $n\to\infty$ and a septuple $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ with $$\begin{gathered}
\label{u_eps0}
u \in L^\infty(0,T; V)\cap H^1(0,T; H)\,, \quad \Delta u \in L^2(0,T; H)\,,\\
v \in L^\infty(0,T; H^{1/2}(\Gamma))\cap H^1(0,T; H_\Gamma)\,,\qquad
\mu \in L^2(0,T; W_{\bf n})\,,\\
\label{xi_eta_eps0}
\eta, \xi \in L^2(0,T; H)\,, \quad \eta_\Gamma \in L^2(0,T; H_\Gamma)\,, \quad
\xi_\Gamma \in L^2(0,T; H^{-1/2}(\Gamma))\,,\\
\eta\in\alpha(\partial_t u)\,, \quad \xi\in\beta(u) \quad\text{a.e.~in } Q\,, \qquad
\eta_\Gamma\in\alpha_\Gamma(\partial_t v)\quad\text{a.e.~in } \Sigma\,,\\
\xi_\Gamma \in \beta_{\Gamma w}(v)\quad\text{a.e.~in } (0,T)\,,
\end{gathered}$$ satisfying , – and $$\eta_\Gamma + \partial_{\bf n} u + \xi_\Gamma + \pi_\Gamma(v) = g_\Gamma\,,$$ and such that, as $n\to\infty$, $$\begin{gathered}
u_{\eps_n} \to u \quad\text{in } C^0([0,T]; H)\,, \quad
u_{\eps_n} \wto u \quad\text{in } H^1(0,T; H)\,, \quad
u_{\eps_n} \wstarto u \quad\text{in } L^\infty(0,T; V)\,,\\
v_{\eps_n} \to v \quad\text{in } C^0([0,T]; H_\Gamma)\,, \quad
v_{\eps_n} \wto v \quad\text{in } H^1(0,T; H_\Gamma)\,, \quad
v_{\eps_n} \wstarto v \quad\text{in } L^\infty(0,T; H^{1/2}(\Gamma))\,,\\
\mu_{\eps_n}\wto\mu \quad\text{in } L^2(0,T; W_{\bf n})\,,\\
\eta_{\eps_n}\wto\eta \quad\text{in } L^2(0,T; H)\,,\qquad
\xi_{\eps_n}\wto\xi \quad\text{in } L^2(0,T; H)\,,\\
\eta_{\Gamma\eps_n}\wto\eta_\Gamma \quad\text{in } L^2(0,T; H_\Gamma)\,,\qquad
\xi_{\Gamma\eps_n}\wto\xi_\Gamma \quad\text{in } L^2(0,T; H^{-1/2}(\Gamma))\,,\\
\eps v_{\eps_n}\to 0 \quad\text{in } L^\infty(0,T; V_\Gamma)\,.
\end{gathered}$$ Furthermore, if also hypotheses – hold and $(\eps u_{0|\Gamma}^\eps)_\eps$ is bounded in $W_\Gamma$, then the same conclusion is true without the coercivity assumption , and we also have $$\begin{gathered}
u \in W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)\,, \quad \Delta u \in L^\infty(0,T; H)\,,\\
v \in W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; H^{1/2}(\Gamma))\,,\qquad
\mu \in L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\eta, \xi \in L^\infty(0,T; H)\,, \quad \eta_\Gamma \in L^\infty(0,T; H_\Gamma)\,, \quad
\xi_\Gamma \in L^\infty(0,T; H^{-1/2}(\Gamma))\,,
\end{gathered}$$ and $$\begin{gathered}
u_{\eps_n} \wstarto u \quad\text{in } W^{1,\infty}(0,T; V^*)\,, \qquad
u_{\eps_n} \wto u \quad\text{in } H^1(0,T; V)\,,\\
v_{\eps_n} \wstarto v \quad\text{in } W^{1,\infty}(0,T; H_\Gamma)\,, \qquad
v_{\eps_n} \wto v \quad\text{in } H^1(0,T; H^{1/2}(\Gamma))\,,\\
\mu_{\eps_n}\wstarto\mu \quad\text{in } L^\infty(0,T; V)\,,\qquad
\mu_{\eps_n}\wto \mu \quad\text{in } L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\eta_{\eps_n}\wstarto\eta \quad\text{in } L^\infty(0,T; H)\,,\qquad
\xi_{\eps_n}\wstarto\xi \quad\text{in } L^\infty(0,T; H)\,,\\
\eta_{\Gamma\eps_n}\wstarto\eta_\Gamma \quad\text{in } L^\infty(0,T; H_\Gamma)\,,\qquad
\xi_{\Gamma\eps_n}\wstarto\xi_\Gamma \quad\text{in } L^\infty(0,T; H^{-1/2}(\Gamma))\,,\\
\eps v_{\eps_n}\to 0 \quad\text{in } H^1(0,T; V_\Gamma)\,.
\end{gathered}$$
Let us comment on the existence of an approximating family $(u_0^\eps)_\eps$. If the initial datum $u_0\in V$ satisfies $a\leq u_0 \leq b$ almost everywhere in $\Omega$ for certain $a,b\in\erre$ such that $[a,b]\subseteq\operatorname{Int}D(\beta_\Gamma)$, then a possible approximating sequence $(u_0^\eps)_\eps$ always exists. Indeed, we can set, for every $\eps>0$, $u_0^\eps$ as the unique solution to the elliptic problem $$\begin{cases}
u_0^\eps - \eps^{1/2}\Delta u_0^\eps = u_0 \quad&\text{in } \Omega\,,\\
\partial_{\bf n} u_0^\eps =0 \quad&\text{in } \Gamma\,.
\end{cases}$$ Such a problem is well-posed by the classical theory of bilinear forms and admits a unique solution $u_0^\eps \in W_{\bf n}\cap H^3(\Omega)$. Testing by $u_0^\eps$ and using the Young inequality one has $$\frac12\norm{u_0^\eps}_H^2 + \eps^{1/2}\norm{\nabla u_0^\eps}_H^2
\leq \frac12\norm{u_0}_H^2 \,,$$ while testing the first equation by $-\Delta u_0^\eps$ and integrating by parts yields $$\frac12\norm{\nabla u_0^\eps}^2_H + \eps^{1/2}\norm{\Delta u_0^\eps}_H^2
\leq \frac12\norm{\nabla u_0}^2_H\,.$$ We infer that (along a subsequence) $$u_0^\eps \wto u_0 \quad\text{in } V\,, \qquad \norm{u_0^\eps}_V\leq\norm{u_0}_V\quad\forall\,\eps>0\,,$$ so that $u_0^\eps\to u_0$ in $V$, hence also $\eps^{1/2}\Delta u_0^\eps\to 0$ in $V$. We then deduce that $\eps^{1/2}u_0^\eps \to 0$ in $H^3(\Omega)$, which implies in particular that $\eps^{1/2}u_{0|\Gamma}^\eps\to 0$ in $V_\Gamma$. Furthermore, by the maximum principle we have $a\leq u_0^\eps\leq b$ a.e. in $\Omega$, hence also $a\leq u_{0|\Gamma}^\eps\leq b$ a.e. in $\Gamma$, and we can conclude recalling that $D(\beta_\Gamma)\subseteq D(\beta)$ and the fact that every proper, convex and lower semicontinuous function is continuous in the interior of its domain.
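The contraction and maximum-principle properties just used can be observed numerically. The following Python sketch (a hypothetical one-dimensional cell-centred discretisation, for illustration only) solves $u_0^\eps-\eps^{1/2}(u_0^\eps)''=u_0$ on $\Omega=(0,1)$ with homogeneous Neumann conditions.

```python
import numpy as np

# The matrix I + sqrt(eps)*L (with L the Neumann Laplacian) is an
# M-matrix whose rows sum to one, so its inverse is an averaging
# operator: this is the discrete counterpart of the maximum principle
# invoked above, while symmetry with eigenvalues >= 1 gives contraction.
n = 100
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0   # zero-flux boundary rows
L /= h**2

def smooth(u0, eps):
    """Elliptic regularisation u0e = (I + sqrt(eps)*L)^{-1} u0."""
    return np.linalg.solve(np.eye(n) + np.sqrt(eps) * L, u0)

u0 = np.tanh(5.0 * (x - 0.5))   # datum with values in (-1, 1)
u0e = smooth(u0, 1e-4)
```

As $\eps\searrow 0$ the regularised datum returns to $u_0$, while for every $\eps>0$ its values stay within $[\min u_0,\max u_0]$ and its norm does not increase.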
The approximated problem
========================
\[approx\]
In this section we approximate the problem – and we make precise the exact regularity of the approximated solutions, depending on the assumptions on the data. Note that throughout this section $\eps>0$ is fixed, so that we shall omit any specific notation for the dependence on $\eps$.
For any $\lambda>0$, let $\beta_\lambda$ and $\beta_{\Gamma\lambda}$ be the Yosida approximations of the graphs $\beta$ and $\beta_\Gamma$ with approximating parameters $\lambda$ and $c\lambda$, respectively, where $c$ is the same as in : the reason why we choose this specific approximation will be clarified in Section \[third\] below. Similarly, let $\alpha_\lambda$ and $\alpha_{\Gamma\lambda}$ denote the Yosida approximations of $\alpha$ and $\alpha_\Gamma$, respectively, with parameter $\lambda$. Furthermore, let $(g_\lambda)_\lambda$ and $(g_{\Gamma\lambda})_\lambda$ be two approximating sequences of $g$ and $g_\Gamma$, respectively, such that $$\begin{gathered}
(g_\lambda)_\lambda \subseteq L^2(0,T; V)\cap H^1(0,T; V^*)\,, \qquad
(g_{\Gamma\lambda})_\lambda \subseteq L^2(0,T; V_\Gamma)\cap H^1(0,T; V_\Gamma^*)\,,\\
g_\lambda\to g \quad\text{in } L^2(0,T; H)\,, \qquad
g_{\Gamma\lambda}\to g_\Gamma \quad\text{in } L^2(0,T; H_\Gamma)\,.\end{gathered}$$ It will be implicitly intended that the convergences hold also in the spaces $H^1(0,T; V^*)$ and $H^1(0,T; V_\Gamma^*)$ whenever is in order. For example, we can define $g_\lambda:=(I-\lambda\Delta_{\bf n})^{-1}g$ and $g_{\Gamma\lambda}:=(I-\lambda\Delta_\Gamma)^{-1}g_\Gamma$, i.e. as the solutions to the following elliptic problems: $$\begin{cases}
g_\lambda - \lambda\Delta g_\lambda = g \quad&\text{in } \Omega\,,\\
\partial_{\bf n} g_\lambda = 0 \quad&\text{in } \Gamma\,,
\end{cases}
\qquad
g_{\Gamma\lambda} - \lambda\Delta_\Gamma g_{\Gamma\lambda} = g_\Gamma \quad\text{in } \Gamma\,.$$
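The smoothing effect of the resolvent $(I-\lambda\Delta_{\bf n})^{-1}$ can also be illustrated numerically. The following sketch is our illustration only, not part of the analysis: it assumes a uniform 1-D grid and imposes the homogeneous Neumann condition by ghost-node reflection, then solves the discrete analogue of $g_\lambda-\lambda\Delta g_\lambda=g$.

```python
import numpy as np

def regularize(g, lam, h=1.0):
    """Solve g_lam - lam * g_lam'' = g on a uniform 1-D grid with
    homogeneous Neumann boundary conditions (ghost-node reflection)."""
    n = g.size
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * lam / h**2
        if i > 0:
            A[i, i - 1] = -lam / h**2
        if i < n - 1:
            A[i, i + 1] = -lam / h**2
    # Neumann condition: the reflected ghost node doubles the inner neighbour.
    A[0, 1] -= lam / h**2
    A[-1, -2] -= lam / h**2
    return np.linalg.solve(A, g)
```

As $\lambda\searrow0$ the regularized datum converges back to $g$, mirroring the convergence $g_\lambda\to g$ in $L^2(0,T; H)$ required above; constants are left unchanged, since the resolvent fixes the kernel of the Neumann Laplacian.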
The idea is to consider the regularized system given by $$\begin{aligned}
\partial_t u_\lambda + \lambda\mu_\lambda - \Delta\mu_\lambda = 0 \qquad&\text{in } Q\,,\\
\mu_\lambda = \lambda \partial_t u_\lambda + \alpha_\lambda(\partial_t u_\lambda) +\lambda u_\lambda - \Delta u_\lambda
+\beta_\lambda(u_\lambda) + T_\lambda\pi(u_\lambda) - g_\lambda \qquad&\text{in } Q\,,\\
u_\lambda=v_\lambda\,, \quad \partial_{\bf n}\mu_\lambda=0 \qquad&\text{in } \Sigma\,,\\
\lambda\partial_t v_\lambda + \alpha_{\Gamma\lambda}(\partial_tv_\lambda) + \partial_{\bf n}u_\lambda
- \eps\Delta_\Gamma v_\lambda
+\beta_{\Gamma\lambda}(v_\lambda) + T_\lambda\pi_\Gamma(v_\lambda) = g_{\Gamma\lambda} \qquad&\text{in } \Sigma\,,\\
u_\lambda(0)=u_0 \qquad&\text{in } \Omega\,,\end{aligned}$$ where $T_\lambda:\erre\to\erre$ is the usual truncation operator at level $\frac1\lambda$ defined by $$T_\lambda(r):=\max\left\{-\frac1\lambda, \min\left\{\frac1\lambda, r\right\}\right\}\,, \quad r\in\erre\,.$$
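In concrete terms, $T_\lambda$ just clips its argument to the interval $[-\frac1\lambda,\frac1\lambda]$. A minimal sketch (our illustration, not part of the analysis) of the operator and of the two properties used repeatedly below, namely that it is $1$-Lipschitz and bounded by $\frac1\lambda$:

```python
def T(lam: float, r: float) -> float:
    """Truncation at level 1/lam: clips r to the interval [-1/lam, 1/lam].

    T is 1-Lipschitz and |T(lam, r)| <= 1/lam, which is what makes the
    terms involving T_lambda uniformly bounded for each fixed lambda.
    """
    return max(-1.0 / lam, min(1.0 / lam, r))
```

For fixed $\lambda$ the truncated nonlinearities $T_\lambda\pi(u_\lambda)$ and $T_\lambda\pi_\Gamma(v_\lambda)$ are therefore bounded in $L^\infty$, which is exploited in the a priori estimates of the next sections.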
In order to show that such regularized problem is well-posed, we use an abstract result on doubly nonlinear evolution equations on the product space $\H$. To this end, we introduce the operator $$G_\lambda:H\to H\,, \qquad G_\lambda x:=\lambda x-\Delta x\,, \quad x\in D(G_\lambda):=W_{\bf n}\,,$$ which is maximal monotone and invertible on $H$ with $G_\lambda^{-1}:H\to W_{\bf n}$; in particular, the first equation together with the boundary condition for $\mu_\lambda$ can be written as $\mu_\lambda=-G_\lambda^{-1}(\partial_t u_\lambda)$. Hence, it is natural to define $$\begin{aligned}
A_\lambda:\H\to\H\,, \qquad &A_\lambda(x,y):=
(\lambda x + \alpha_\lambda(x) + G_\lambda^{-1}(x), \lambda y + \alpha_{\Gamma\lambda}(y))\,,\\
B_\lambda:\H\to\H\,, \qquad
&B_\lambda(x,y):=(\lambda x-\Delta x + \beta_\lambda(x), \partial_{\bf n}x -\eps\Delta_\Gamma y +
\beta_{\Gamma\lambda}(y))\,,\end{aligned}$$ where $$D(A_\lambda):=\H\,, \qquad
D(B_\lambda):=\W\,.$$ Taking into account the definition of $A_\lambda$ and $B_\lambda$, the entire approximated system can be formulated as a doubly nonlinear evolution equation in the variable $(u_\lambda, v_\lambda)$ on the product space $\H$ in the following compact form: $$A_\lambda\partial_t(u_\lambda,v_\lambda) + B_\lambda(u_\lambda,v_\lambda)=
(g_\lambda,g_{\Gamma\lambda}) - (T_\lambda\pi(u_\lambda), T_\lambda\pi_\Gamma(v_\lambda))\,, \qquad
(u_\lambda, v_\lambda)(0) = (u_0, u_{0|\Gamma})\,.$$ We collect some useful properties of the operators $A_\lambda$ and $B_\lambda$ in the following lemma.
\[prop\] The operators $A_\lambda$ and $B_\lambda$ are maximal monotone on $\H$ and $D(B_\lambda)\subseteq \V$. Moreover, the following conditions hold: $$\begin{aligned}
(i)&\qquad \forall\,(x,y)\in\H \quad \left(A_\lambda(x,y), (x,y)\right)_\H\geq\lambda\norm{(x,y)}_\H^2\,,\\
(ii)&\qquad \exists\,k_\lambda>0:\quad\forall\,(x,y)\in\H
\quad \norm{A_\lambda(x,y)}_\H\leq k_\lambda\norm{(x,y)}_\H\,,\\
(iii)&\qquad B_\lambda=\partial\psi_\lambda\,, \quad \psi_\lambda:\H\to(-\infty,+\infty] \text{ proper, convex and l.s.c.}\,,
\quad D(\psi_\lambda)\subseteq\V\\
(iv)&\qquad \exists\,\ell_1, \ell_2>0:\quad \psi_\lambda(x,y)\geq\ell_1\norm{(x,y)}_{\V}^2-\ell_2\norm{(x,y)}_\H^2
\quad\forall\,(x,y)\in D(\psi_\lambda)\,,\\
(v)&\qquad A_\lambda=\partial\phi_\lambda\,, \quad \phi_\lambda:\H\to(-\infty,+\infty] \text{ proper, convex and l.s.c.}\,,
\quad D(\phi_\lambda)=\H\,,\\
(vi)&\qquad A_\lambda \quad\text{is bounded in } \H\,,\\
(vii)&\qquad B_\lambda:\V\to\V^* \quad\text{is Lipschitz continuous and strongly monotone}\,.
\end{aligned}$$
It is clear that $A_\lambda$ and $B_\lambda$ are maximal monotone. By monotonicity of $\alpha_\lambda$ and $\alpha_{\Gamma\lambda}$ and the definition of $G_\lambda$, we have that $$\begin{split}
\left(A_\lambda(x,y), (x,y)\right)_\H&=\lambda\int_\Omega|x|^2 + \lambda\int_\Gamma|y|^2 + \int_\Omega\alpha_\lambda(x)x +
\int_\Gamma\alpha_{\Gamma\lambda}(y)y + \int_\Omega G_\lambda^{-1}(x)x\\
&\geq\lambda\norm{(x,y)}_\H^2 + \lambda\int_\Omega|G_\lambda^{-1}(x)|^2 + \int_\Omega|\nabla G_\lambda^{-1}(x)|^2
\geq\lambda\norm{(x,y)}_\H^2
\end{split}$$ for every $(x,y)\in\H$, from which the first condition. Secondly, for every $(x,y)\in\H$, by the Lipschitz continuity of $\alpha_\lambda$, $\alpha_{\Gamma\lambda}$ and the continuity of $G_\lambda^{-1}:H\to W_{\bf n}$, we have $$\norm{A_\lambda(x,y)}_\H\leq\left(\lambda+\frac1\lambda\right)\norm{(x,y)}_\H + \norm{G_\lambda^{-1}(x)}_H
\leq\left(\lambda+\frac1\lambda+\frac1{\sqrt\lambda}\right)\norm{(x,y)}_\H\,,$$ from which the second condition. Furthermore, it is a standard matter to check that $(iii)$ holds with the choice $\psi_\lambda:\H\to[0,+\infty]$ $$\psi_\lambda(x,y):=
\begin{cases}
\frac\lambda2\int_\Omega|x|^2 + \frac12\int_\Omega|\nabla x|^2 +
\frac\eps2\int_\Gamma|\nabla_\Gamma y|^2 + \int_\Omega\widehat{\beta}_\lambda(x)
+\int_\Gamma\widehat{\beta}_{\Gamma\lambda}(y) &\text{ if } (x,y)\in \V\,,\\
+\infty &\text{ otherwise}\,.
\end{cases}$$ It is clear that $D(\psi_\lambda)\subseteq\V$ and that, for every $(x,y)\in \V$, $$\psi_\lambda(x,y)\geq\frac\lambda2\int_\Omega|x|^2+\frac12\int_\Omega|\nabla x|^2+\frac\eps2\int_\Gamma|\nabla_\Gamma y|^2
\geq\frac12\min\{1,\lambda,\eps\}\norm{(x,y)}_{\V}^2-\frac\eps2\norm{(x,y)}_\H^2$$ and also condition $(iv)$ is proved. Moreover, it is readily seen that $(v)$ holds with $$\phi_\lambda(x,y):=\frac\lambda2\int_\Omega|x|^2 + \int_\Omega\widehat\alpha_\lambda(x)+
F_\lambda^*(x) + \frac\lambda2\int_\Gamma|y|^2 + \int_\Gamma\widehat\alpha_{\Gamma\lambda}(y)\,,
\qquad (x,y)\in\H\,,$$ where $F_\lambda^*$ is the convex conjugate of the proper, convex, l.s.c. function $$F_\lambda(x):=\begin{cases}
\frac\lambda2\int_\Omega|x|^2 + \frac12\int_\Omega|\nabla x|^2 \quad&\text{if } x\in V\,,\\
+\infty \quad&\text{if } x\in H\setminus V\,.
\end{cases}$$ Since $\partial F_\lambda^*=(\partial F_\lambda)^{-1}=G_\lambda^{-1}$ is Lipschitz continuous on $H$, it is also clear that $D(\phi_\lambda)=\H$, and $(v)$ is proved. Moreover, $(vi)$ is an easy consequence of the Lipschitz continuity of $\alpha_\lambda$, $\alpha_{\Gamma\lambda}$ and $G_\lambda^{-1}$ on $H$. Finally, let us focus on $(vii)$. In this case, we are looking at $B_\lambda$ as its weak formulation $B_\lambda:\V\to\V^*$ given by $$\ip{B_\lambda(x,y)}{(z,w)}_{\V}=\lambda\int_\Omega xz + \int_\Omega\nabla x\cdot\nabla z
+\int_\Omega \beta_\lambda(x)z + \eps\int_\Gamma\nabla_\Gamma y\cdot\nabla_\Gamma w
+\int_\Gamma\beta_{\Gamma\lambda}(y)w\,.$$ Hence, it follows by the Lipschitz continuity of $\beta_\lambda$ and $\beta_{\Gamma\lambda}$ that, for every $(x_1,y_1),(x_2,y_2)\in\V$ $$\begin{split}
&\norm{B_\lambda(x_1,y_1)-B_\lambda(x_2,y_2)}_{\V^*}\\
&\qquad\leq\left(\lambda+\frac1\lambda\right)\norm{x_1-x_2}_H + \norm{\nabla(x_1-x_2)}_H
+ \eps\norm{\nabla_\Gamma(y_1-y_2)}_{H_\Gamma} +\frac1{c\lambda}\norm{y_1-y_2}_{H_\Gamma}
\end{split}$$ from which the Lipschitz continuity of $B_\lambda$. Similarly, by the monotonicity of $\beta_\lambda$ and $\beta_{\Gamma\lambda}$, $$\begin{split}
&\ip{B_\lambda(x_1,y_1)-B_\lambda(x_2,y_2)}{(x_1,y_1)-(x_2,y_2)}_{\V}\\
&\qquad\geq\lambda\norm{x_1-x_2}_H^2 + \norm{\nabla(x_1-x_2)}_H^2 + \eps\norm{\nabla_\Gamma(y_1-y_2)}_H^2
\geq C_{\lambda\eps}\norm{(x_1,y_1)-(x_2,y_2)}_{\V}^2
\end{split}$$ for a certain positive constant $C_{\lambda\eps}$, from which the strong monotonicity of $B_\lambda$.
Now, we fix $\lambda>0$ and we show that the approximated problem is well-posed. Given $(f,f_\Gamma)\in L^2(0,T; \H)$, Lemma \[prop\] and the hypotheses – ensure that we can apply the existence result contained in [@colli-visin Thm. 2.1] and infer that there exists $$(u_\lambda, v_\lambda) \in H^1(0,T; \H)\cap L^\infty(0,T; \V)\,, \qquad
A_\lambda(\partial_tu_\lambda, \partial_tv_\lambda)\,,\; B_\lambda(u_\lambda,v_\lambda) \in L^2(0,T; \H)$$ such that $$\begin{gathered}
\label{app1}
A_\lambda\partial_t(u_\lambda,v_\lambda) + B_\lambda(u_\lambda,v_\lambda)=
(g_\lambda,g_{\Gamma\lambda}) - (T_\lambda\pi(f), T_\lambda\pi_\Gamma(f_\Gamma))\quad\text{a.e.~in } (0,T)\,,\\
\label{app2}
(u_\lambda, v_\lambda)(0) = (u_0, u_{0|\Gamma})\,.\end{gathered}$$ Let us show that such solution $(u_\lambda, v_\lambda)$ is indeed unique and satisfies useful estimates.
\[lem\_app\] For every $\lambda>0$, there exists $c_\lambda>0$ such that $$\norm{\partial_tu_\lambda}_{L^2(0,T;H)} + \norm{\partial_tv_\lambda}_{L^2(0,T; H_\Gamma)}+
\norm{u_\lambda}_{L^\infty(0,T; V)} + \norm{v_\lambda}_{L^\infty(0,T; V_{\Gamma})}\leq c_\lambda\,.$$ Moreover, there is $c_\lambda'>0$ such that, for every $(f^i, f_\Gamma^i)\in L^2(0,T; \H)$, if $(u_\lambda^i, v_\lambda^i)$ are any respective solutions to –, $i=1,2$, we have $$\begin{split}
\norm{\partial_t(u^1_\lambda-u_\lambda^2)}_{L^2(0,T; H)} &+ \norm{\partial_t(v^1_\lambda-v_\lambda^2)}_{L^2(0,T; H_\Gamma)}+
\norm{u_\lambda^1-u_\lambda^2}_{L^\infty(0,T; V)} + \norm{v^1_\lambda-v_\lambda^2}_{L^\infty(0,T; V_{\Gamma})}\\
&\leq c'_\lambda\left(\norm{f^1-f^2}_{L^2(0,T; H)} + \norm{f_\Gamma^1-f_\Gamma^2}_{L^2(0,T; H_\Gamma)}\right)\,.
\end{split}$$
Testing by $\partial_t(u_\lambda, v_\lambda)$ and integrating on $(0,t)$, thanks to the monotonicity of the operators $\alpha_\lambda$, $\alpha_{\Gamma\lambda}$ and $G_\lambda^{-1}$, using the Young inequality and the fact that $|T_\lambda|\leq\frac1\lambda$ we have $$\begin{split}
&\lambda\int_0^t\norm{\partial_tu_\lambda(s)}_H^2\,ds
+\frac\lambda2\int_\Omega|u_\lambda(t)|^2
+\frac12\int_\Omega|\nabla u_\lambda(t)|^2
+\int_\Omega \widehat\beta_\lambda(u_\lambda(t))\\
&\qquad+\lambda\int_0^t\norm{\partial_tv_\lambda(s)}_{H_\Gamma}^2\,ds
+ \frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2
+ \int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(t))\\
&\leq \frac\lambda2\norm{u_0}_H^2+ \frac12\norm{\nabla u_0}_H^2 + \frac\eps2\norm{\nabla_\Gamma u_{0|\Gamma}}_{H_\Gamma}^2
+\int_\Omega\widehat\beta_\lambda(u_0) + \int_\Gamma\widehat\beta_{\Gamma\lambda}(u_{0|\Gamma})\\
&\qquad+\int_0^t\!\!\int_\Omega \left(g_\lambda(s)-T_\lambda\pi(f(s))\right)\partial_tu_\lambda(s)\,ds +
\int_0^t\!\!\int_\Gamma \left(g_{\Gamma\lambda}(s)-T_\lambda\pi_\Gamma(f_\Gamma(s))\right)\partial_t v_\lambda(s)\,ds\\
&\leq c_{\lambda}\norm{(u_0, u_{0|\Gamma})}_{\V}^2
+\frac\lambda2\int_0^t\norm{\partial_tu_\lambda(s)}_H^2\,ds +
\frac\lambda2\int_0^t\norm{\partial_tv_\lambda(s)}_{H_\Gamma}^2\,ds\\
&\qquad+\frac1\lambda\norm{(g,g_{\Gamma})}^2_{L^2(0,T; \H)} + \frac1{\lambda^2}(|Q|+|\Sigma|)
\end{split}$$ for a certain $c_{\lambda}>0$, so that rearranging the terms we obtain the first estimate. Similarly, given $(f^i,f_\Gamma^i)$ and any respective solutions $(u_\lambda^i, v_\lambda^i)$ to –, for $i=1,2$, taking the difference of and testing by $\partial_t(u_\lambda^1-u_\lambda^2, v_\lambda^1-v_\lambda^2)$, using the monotonicity of $\alpha_\lambda$, $\alpha_{\Gamma\lambda}$ and $G_\lambda^{-1}$, the Lipschitz continuity of $\beta_\lambda$, $\beta_{\Gamma\lambda}$, $\pi$, $\pi_\Gamma$ and $T_\lambda$, an easy computation shows that $$\begin{split}
\lambda&\int_0^t\norm{\partial_t(u_\lambda^1-u_\lambda^2)(s)}_H^2\,ds +
\lambda\int_0^t\norm{\partial_t(v_\lambda^1-v_\lambda^2)(s)}_{H_\Gamma}^2\,ds\\
&\quad+\frac\lambda2\int_\Omega|(u_\lambda^1-u_\lambda^2)(t)|^2
+\frac12\int_\Omega|\nabla(u_\lambda^1-u_\lambda^2)(t)|^2
+ \frac\eps2\int_\Gamma|\nabla(v_\lambda^1-v_\lambda^2)(t)|^2\\
&\leq\int_0^t\!\!\int_\Omega\left(|\beta_\lambda(u_\lambda^1(s))-\beta_\lambda(u_\lambda^2(s))|+
|T_\lambda\pi(f^1(s))-T_\lambda\pi(f^2(s))|\right)
|\partial_t(u_\lambda^1-u_\lambda^2)(s)|\,ds\\
&\quad+\int_0^t\!\!\int_\Gamma\left(|\beta_{\Gamma\lambda}(v_\lambda^1(s))-
\beta_{\Gamma\lambda}(v_\lambda^2(s))|+
|T_\lambda\pi_\Gamma(f_\Gamma^1(s))-T_\lambda\pi_\Gamma(f_\Gamma^2(s))|\right)
|\partial_t(v_\lambda^1-v_\lambda^2)(s)|\,ds\\
&\leq\frac1\lambda\int_0^t\!\!\int_\Omega|u^1_\lambda(s)-u_\lambda^2(s)||\partial_t(u_\lambda^1-u_\lambda^2)(s)|\,ds +
\frac1\lambda\int_0^t\!\!\int_\Gamma|v^1_\lambda(s)-v_\lambda^2(s)||\partial_t(v_\lambda^1-v_\lambda^2)(s)|\,ds\\
&\quad+C_\pi\int_0^t\!\!\int_\Omega|f^1(s)-f^2(s)||\partial_t(u_\lambda^1-u_\lambda^2)(s)|\,ds
+C_{\pi_\Gamma}\int_0^t\!\!\int_\Gamma|f^1_\Gamma(s)-f^2_\Gamma(s)||\partial_t(v_\lambda^1-v_\lambda^2)(s)|\,ds\\
&\leq\frac\lambda2\int_0^t\norm{\partial_t(u_\lambda^1-u_\lambda^2)(s)}_H^2\,ds +
\frac\lambda2\int_0^t\norm{\partial_t(v_\lambda^1-v_\lambda^2)(s)}_{H_\Gamma}^2\,ds
+\frac1{\lambda^2}\int_0^t\norm{u_\lambda^1(s)-u_\lambda^2(s)}_H^2\,ds\\
&\quad+\frac1{\lambda^2}\int_0^t\norm{v_\lambda^1(s)-v_\lambda^2(s)}_{H_\Gamma}^2\,ds
+\frac{C_\pi^2}\lambda\norm{f^1-f^2}^2_{L^2(0,T; H)}
+\frac{C_{\pi_\Gamma}^2}\lambda\norm{f_\Gamma^1-f_\Gamma^2}^2_{L^2(0,T; H_\Gamma)}\,,
\end{split}$$ and the second inequality follows from the Gronwall lemma.
Lemma \[lem\_app\] ensures that, for any $\lambda>0$, the map $$\Theta_\lambda: E_\lambda\to E_\lambda\,, \qquad (f,f_\Gamma)\mapsto (u_\lambda, v_\lambda)\,,$$ is well defined, where $$\begin{split}
E_\lambda&:=\left\{(x,y)\in H^1(0,T; \H)\cap L^\infty(0,T; \V): \right.\\
&\qquad\left.\norm{\partial_tx}_{L^2(0,T;H)} + \norm{\partial_ty}_{L^2(0,T; H_\Gamma)}+
\norm{x}_{L^\infty(0,T; V)} + \norm{y}_{L^\infty(0,T; V_{\Gamma})}\leq c_\lambda\right\}\,.
\end{split}$$ Since $E_\lambda$ is compact and convex in $L^2(0,T; \H)$ and $\Theta_\lambda$ is continuous on $L^2(0,T; \H)$ by Lemma \[lem\_app\], Schauder’s fixed point theorem ensures that there is a fixed point $(u_\lambda, v_\lambda)\in E_\lambda$ for $\Theta_\lambda$. It is also clear by the second inequality in the previous lemma and the Gronwall lemma that $(u_\lambda, v_\lambda)$ is also unique. As is natural, we set $\mu_\lambda:=-G_\lambda^{-1}\partial_tu_\lambda$.
Let us collect the properties of $(u_\lambda, v_\lambda, \mu_\lambda)$ in the following lemmata. The first result states precisely the regularities of the approximated solutions under the weakest assumptions of Theorem \[thm1\] on the data, while the second specifies some additional regularity provided by the strongest hypotheses of Theorems \[thm2\]–\[thm3\].
\[prop\_app\] Under the assumptions – we have $$\begin{gathered}
u_\lambda \in H^1(0,T; H)\cap L^\infty(0,T; V)\cap L^2(0,T; W)\,,\\
v_\lambda \in H^1(0,T; H_\Gamma)\cap L^\infty(0,T; V_{\Gamma})
\cap L^2(0,T; W_\Gamma)\,,\\
\mu_\lambda \in L^2(0,T; W_{\bf n})
\end{gathered}$$ and $$\begin{aligned}
\label{eq1_app}
\partial_t u_\lambda + \lambda\mu_\lambda - \Delta\mu_\lambda = 0 \qquad&\text{in } Q\,,\\
\label{eq2_app}
\mu_\lambda = \lambda \partial_t u_\lambda + \alpha_\lambda(\partial_t u_\lambda) +\lambda u_\lambda - \Delta u_\lambda
+\beta_\lambda(u_\lambda) + T_\lambda\pi(u_\lambda) - g_\lambda \qquad&\text{in } Q\,,\\
u_\lambda=v_\lambda\,, \quad \partial_{\bf n}\mu_\lambda=0 \qquad&\text{in } \Sigma\,,\\
\label{eq3_app}
\lambda\partial_t v_\lambda + \alpha_{\Gamma\lambda}(\partial_tv_\lambda) + \partial_{\bf n}u_\lambda
- \eps\Delta_\Gamma v_\lambda
+\beta_{\Gamma\lambda}(v_\lambda) + T_\lambda\pi_\Gamma(v_\lambda) = g_{\Gamma\lambda} \qquad&\text{in } \Sigma\,,\\
\label{init_app}
u_\lambda(0)=u_0 \qquad&\text{in } \Omega\,.\end{aligned}$$
Thanks to classical elliptic regularity results (see [@brezzi-gilardi Thm. 3.2]), the regularities of the approximated solutions $u_\lambda$ and $v_\lambda$ easily follow from the fact that $(u_\lambda,v_\lambda) \in E_\lambda$ and $B_\lambda(u_\lambda, v_\lambda)\in L^2(0,T; \H)$. Indeed, from this last condition it follows that $\Delta u_\lambda \in L^2(0,T; H)$ and $\partial_{\bf n}u_\lambda -\eps\Delta_\Gamma v_\lambda \in L^2(0,T; H_\Gamma)$. The conditions $u_\lambda \in L^\infty(0,T; V)$, $\Delta u_\lambda \in L^2(0,T; H)$ and $v_\lambda \in L^\infty(0,T; V_\Gamma)$ imply that $u_\lambda \in L^2(0,T; H^{3/2}(\Omega))$, hence also $\partial_{\bf n}u_\lambda \in L^2(0,T; H_\Gamma)$. It follows then by comparison that $\Delta_\Gamma v_\lambda \in L^2(0,T; H_\Gamma)$, from which $v_\lambda\in
L^2(0,T; W_\Gamma)$ and also $u_\lambda \in L^2(0,T; W)$. Finally, the regularity of $\mu_\lambda$ is straightforward from the definition of $G_\lambda$, and – follow from the definition of $\Theta_\lambda$ itself.
\[prop\_app2\] Under the further assumptions – we also have $$\begin{gathered}
u_\lambda \in H^1(0,T; V)\cap L^2(0,T; H^3(\Omega))\cap C^0([0,T]; W)\cap C^1([0,T]; H)\,,\\
v_\lambda \in H^1(0,T; V_{\Gamma})\cap L^2(0,T; H^{3}(\Gamma))\cap C^0([0,T]; W_\Gamma)\cap C^1([0,T]; H_\Gamma)\,,\\
\mu_\lambda \in L^2(0,T; H^3(\Omega))\cap C^0([0,T]; W_{\bf n})\,.
\end{gathered}$$
Thanks to conditions $(v)$–$(vii)$ in Lemma \[prop\] and the hypotheses –, the result [@colli-visin Thm. 2.2] ensures that the range of the function $\Theta_\lambda$ is contained in $H^1(0,T; \V)$, hence $u_\lambda \in H^1(0,T;V)$ and $v_\lambda \in H^1(0,T; V_{\Gamma})$. Consequently, by comparison in , we have $\mu_\lambda \in L^2(0,T; V)$, so that $\mu_\lambda \in L^2(0,T; H^3(\Omega))$ by elliptic regularity. Moreover, by comparison in –, thanks to and the fact that $\partial_t u_\lambda \in L^2(0,T; V)$ and $\partial_t v_\lambda \in L^2(0,T; V_{\Gamma})$, we deduce that $-\Delta u_\lambda\in L^2(0,T; V)$ and $\partial_{\bf n}u_\lambda - \eps\Delta_\Gamma v_\lambda \in
L^2(0,T; V_{\Gamma})$. Since we have $\Delta u_\lambda \in L^2(0,T; V)$ and (by Lemma \[prop\_app\]) $v_\lambda \in L^2(0,T; W_\Gamma)$, then $u_\lambda \in L^2(0,T; H^{5/2}(\Omega))$ and $\partial_{\bf n}u_\lambda\in L^2(0,T; V_\Gamma)$. By difference then we deduce that $\Delta_\Gamma v_\lambda \in L^2(0,T; V_\Gamma)$, so that $v_\lambda \in L^2(0,T; H^3(\Gamma))$ by elliptic regularity on the boundary, and consequently also $u_\lambda \in L^2(0,T; H^3(\Omega))$. Furthermore, we have $u_\lambda \in L^2(0,T; H^3(\Omega))\cap H^1(0,T; V)\embed C^0([0,T]; W)$ and $v_\lambda \in L^2(0,T; H^3(\Gamma))\cap H^1(0,T; V_{\Gamma})\embed C^0([0,T]; W_\Gamma)$; in particular, we deduce that $\partial_{\bf n}u_\lambda \in C^0([0,T]; H^{1/2}(\Gamma))$. Hence, setting $z_\lambda:=g_\lambda-\lambda u_\lambda + \Delta u_\lambda - \beta_\lambda(u_\lambda) - T_\lambda\pi(u_\lambda)$ and $w_\lambda:=g_{\Gamma\lambda}
-\partial_{\bf n}u_\lambda - \beta_{\Gamma\lambda}(v_\lambda)-T_\lambda\pi_\Gamma(v_\lambda)$, from – we have that $A_\lambda(\partial_t u_\lambda, \partial_t v_\lambda)=(z_\lambda,w_\lambda)\in C^0([0,T];\H)$: since $A_\lambda^{-1}:\H\to\H$ is Lipschitz continuous, we infer that $u_\lambda \in C^1([0,T]; H)$ and $v_\lambda \in C^1([0,T]; H_\Gamma)$, hence also $\mu_\lambda \in C^0([0,T]; W_{\bf n})$ from .
The first existence result
==========================
\[proof1\]
We present here the proof of the first main result. Recall that here we are working under the assumptions –, so that the regularity of the approximated solutions is the one specified in Lemma \[prop\_app\]. Since the passage to the limit will consist in letting $\lambda\searrow0$, it is not restrictive to consider $\lambda\in(0,1)$ for example.
The first estimate {#first}
------------------
Testing by $\mu_\lambda$ and by $\partial_t u_\lambda$, taking the difference and integrating by parts, we have that, for every $t\in(0,T)$, $$\begin{split}
&\lambda\int_{Q_t}|\mu_\lambda|^2 + \int_{Q_t}|\nabla\mu_\lambda|^2
+\lambda\int_{Q_t}|\partial_t u_\lambda|^2 + \int_{Q_t}\alpha_\lambda(\partial_t u_\lambda)\partial_t u_\lambda
+\frac\lambda2\int_\Omega|u_\lambda(t)|^2 + \frac12\int_\Omega|\nabla u_\lambda(t)|^2\\
&\qquad+\lambda\int_{\Sigma_t}|\partial_t v_\lambda|^2
+ \int_{\Sigma_t}\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda
+\frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2 + \int_\Omega\widehat\beta_\lambda(u_\lambda(t))
+\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(t))\\
&=\frac\lambda2\int_\Omega|u_0|^2 + \frac12\int_\Omega|\nabla u_0|^2 +
\frac\eps2\int_\Gamma|\nabla_\Gamma u_{0|\Gamma}|^2
+\int_\Omega\widehat\beta_\lambda(u_0) + \int_\Gamma\widehat\beta_{\Gamma\lambda}(u_{0|\Gamma})\\
&\qquad+\int_{Q_t}\left(g_\lambda-T_\lambda\pi(u_\lambda)\right)\partial_t u_\lambda
+\int_{\Sigma_t}\left(g_{\Gamma\lambda}-T_\lambda\pi_\Gamma(v_\lambda)\right)\partial_t v_\lambda\,.
\end{split}$$ Now, let $J_\lambda:=(I+\lambda\alpha)^{-1}:\erre\to\erre$ and $J_{\Gamma\lambda}:=(I+\lambda\alpha_\Gamma)^{-1}:\erre\to\erre$ denote the resolvents of $\alpha$ and $\alpha_\Gamma$, respectively. By elementary properties of maximal monotone graphs it is well known that $J_\lambda$ and $J_{\Gamma\lambda}$ are contractions on $\erre$, and that $\alpha_\lambda(\cdot) \in \alpha(J_\lambda(\cdot))$ and $\alpha_{\Gamma\lambda}(\cdot)\in\alpha_\Gamma(J_{\Gamma\lambda}(\cdot))$: consequently, by the coercivity assumptions and we deduce that $$\begin{gathered}
\alpha_{\lambda}(\partial_t u_\lambda)\partial_tu_\lambda =
\alpha_{\lambda}(\partial_t u_\lambda)J_{\lambda}\partial_tu_\lambda
+\lambda|\alpha_{\lambda}(\partial_t u_\lambda)|^2\geq
a_1|J_{\lambda}\partial_t u_\lambda|^2 - a_2 +
\lambda|\alpha_{\lambda}(\partial_t u_\lambda)|^2\,,\\
\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_tv_\lambda =
\alpha_{\Gamma\lambda}(\partial_t v_\lambda)J_{\Gamma\lambda}\partial_tv_\lambda
+\lambda|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\geq
b_1|J_{\Gamma\lambda}\partial_t v_\lambda|^2 - b_2 +
\lambda|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\,.\end{gathered}$$ Taking into account these relations, the left-hand side of the last inequality is bounded from below by $$\begin{split}
&\lambda\int_{Q_t}|\mu_\lambda|^2 + \int_{Q_t}|\nabla\mu_\lambda|^2
+\lambda\int_{Q_t}|\partial_t u_\lambda|^2 + a_1\int_{Q_t}|J_{\lambda}\partial_t u_\lambda|^2
+\lambda\int_{Q_t}|\alpha_{\lambda}(\partial_t u_\lambda)|^2\\
&\qquad+\frac\lambda2\int_\Omega|u_\lambda(t)|^2 + \frac12\int_\Omega|\nabla u_\lambda(t)|^2
+\lambda\int_{\Sigma_t}|\partial_t v_\lambda|^2
+ b_1\int_{\Sigma_t}|J_{\Gamma\lambda}\partial_t v_\lambda|^2
+\lambda\int_{\Sigma_t}|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\\
&\qquad+\frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2 + \int_\Omega\widehat\beta_\lambda(u_\lambda(t))
+\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(t))
\end{split}$$ while the right-hand side can be handled using the Young inequality by $$\begin{split}
&a_2|Q| + b_2|\Sigma| +
\frac12\norm{u_0}_V^2 + \frac\eps2\norm{u_{0|\Gamma}}_{V_{\Gamma}}^2 + \norm{\widehat\beta(u_0)}_{L^1(\Omega)}
+\norm{\widehat\beta_{\Gamma}(u_{0|\Gamma})}_{L^1(\Gamma)}\\
&\quad+\int_{Q_t}\left(g_\lambda-T_\lambda\pi(u_\lambda)\right)\partial_t u_\lambda
+\int_{\Sigma_t}\left(g_{\Gamma\lambda}-T_\lambda\pi_\Gamma(v_\lambda)\right)\partial_t v_\lambda\\
&\leq a_2|Q| + b_2|\Sigma| +
\frac12\norm{u_0}_V^2 + \frac\eps2\norm{u_{0|\Gamma}}_{V_{\Gamma}}^2 + \norm{\widehat\beta(u_0)}_{L^1(\Omega)}
+\norm{\widehat\beta_{\Gamma}(u_{0|\Gamma})}_{L^1(\Gamma)}
+\frac\delta2\int_{Q_t}|\partial_t u_\lambda|^2\\
&\quad+\frac\delta2\int_{\Sigma_t}|\partial_t v_\lambda|^2
+\frac1{\delta}\norm{g}^2_{L^2(0,T; H)} + \frac1{\delta}\norm{g_\Gamma}^2_{L^2(0,T; H_\Gamma)}
+\frac{C_\pi^2}{\delta}\int_{Q_t}|u_\lambda|^2+\frac{C_{\pi_\Gamma}^2}{\delta}\int_{\Sigma_t}|v_\lambda|^2
\end{split}$$ for every $\delta>0$. Now, by definition of $\alpha_\lambda$ and $\alpha_{\Gamma\lambda}$, $$\frac\delta2\int_{Q_t}|\partial_t u_\lambda|^2 \leq \delta\int_{Q_t}|J_\lambda\partial_t u_\lambda|^2
+\delta\lambda^2\int_{Q_t}|\alpha_\lambda(\partial_t u_\lambda)|^2$$ and $$\frac\delta2\int_{\Sigma_t}|\partial_t v_\lambda|^2 \leq \delta\int_{\Sigma_t}|J_{\Gamma\lambda}\partial_t v_\lambda|^2
+\delta\lambda^2\int_{\Sigma_t}|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\,.$$ Let us handle the last two terms on the right hand side. Testing by $\frac1{|\Omega|}$ we easily have $$\partial_t(u_\lambda)_\Omega + \lambda(\mu_\lambda)_\Omega=0\,,$$ which yields $$\label{mean}
(u_\lambda(t))_\Omega=(u_0)_\Omega-\lambda\int_0^t(\mu_\lambda(s))_\Omega\,ds
\qquad\forall\,t\in[0,T]\,,\;\forall\,\lambda>0\,.$$ As a consequence, by the Poincaré inequality, an easy computation yields $$\label{normH}
\begin{split}
\norm{u_\lambda(t)}_H&\leq\norm{u_\lambda(t)-(u_\lambda(t))_\Omega}_H+\norm{(u_0)_\Omega}_H
+\lambda\int_0^t\norm{(\mu_\lambda(s))_\Omega}_H\,ds\\
&\leq C\left(\norm{\nabla u_\lambda(t)}_H + \norm{u_0}_H + \lambda\int_0^t\norm{\mu_\lambda(s)}_H\,ds\right)
\end{split}$$
for a positive constant $C$ independent of $\lambda$, from which (updating $C$) $$\int_{Q_t}|u_\lambda|^2\leq C\left(\int_{Q_t}|\nabla u_\lambda|^2 + \norm{u_0}_H^2 + \lambda^2\int_{Q_t}|\mu_\lambda|^2\right)\,.$$ Moreover, by the Poincaré inequality on the boundary we also have $$\int_{\Sigma_t}|v_\lambda|^2 \leq C\int_{\Sigma_t}|\nabla_\Gamma v_\lambda|^2\,.$$ Taking these considerations into account on the right hand side of the estimate we obtain $$\begin{split}
&\lambda\int_{Q_t}|\mu_\lambda|^2 + \int_{Q_t}|\nabla\mu_\lambda|^2
+\lambda\int_{Q_t}|\partial_t u_\lambda|^2 + a_1\int_{Q_t}|J_{\lambda}\partial_t u_\lambda|^2
+\lambda\int_{Q_t}|\alpha_{\lambda}(\partial_t u_\lambda)|^2\\
&\qquad+\frac\lambda2\int_\Omega|u_\lambda(t)|^2 + \frac12\int_\Omega|\nabla u_\lambda(t)|^2
+\lambda\int_{\Sigma_t}|\partial_t v_\lambda|^2
+ b_1\int_{\Sigma_t}|J_{\Gamma\lambda}\partial_t v_\lambda|^2
+\lambda\int_{\Sigma_t}|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\\
&\qquad+\frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2 + \int_\Omega\widehat\beta_\lambda(u_\lambda(t))
+\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(t))\\
&\leq a_2|Q| + b_2|\Sigma| +
C\norm{u_0}_V^2 + \frac\eps2\norm{u_{0|\Gamma}}_{V_{\Gamma}}^2 + \norm{\widehat\beta(u_0)}_{L^1(\Omega)}
+\norm{\widehat\beta_{\Gamma}(u_{0|\Gamma})}_{L^1(\Gamma)}
+\delta\int_{Q_t}|J_\lambda\partial_t u_\lambda|^2\\
&\qquad+\delta\lambda^2\int_{Q_t}|\alpha_\lambda(\partial_t u_\lambda)|^2
+\delta\int_{\Sigma_t}|J_{\Gamma\lambda}\partial_t v_\lambda|^2
+\delta\lambda^2\int_{\Sigma_t}|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\\
&\qquad+\frac1{\delta}\norm{g}^2_{L^2(0,T; H)} + \frac1{\delta}\norm{g_\Gamma}^2_{L^2(0,T; H_\Gamma)}
+ C_\delta\int_{Q_t}|\nabla u_\lambda|^2 + C_\delta\lambda^2\int_{Q_t}|\mu_\lambda|^2
\end{split}$$ where we have updated step by step the constant $C$ independent of $\lambda$ and $C_\delta>0$ depends only on $\delta$. Fix now $\delta:=\min\{\frac{a_1}2, \frac{b_1}2, \frac12\}$: since it is not restrictive to consider $\lambda\in(0,\frac1{2C_\delta}]$, rearranging the terms and using the Gronwall lemma yields $$\begin{gathered}
\label{est1}
\norm{\nabla u_\lambda}_{L^\infty(0,T; H)} + \lambda^{1/2}\norm{u_\lambda}_{H^1(0,T; H)\cap L^\infty(0,T; H)} \leq C\,,\\
\label{est2}
\eps^{1/2}\norm{v_\lambda}_{L^\infty(0,T; V_{\Gamma})} + \lambda^{1/2}\norm{v_\lambda}_{H^1(0,T; H_\Gamma)} \leq C\,,\\
\label{est3}
\norm{J_{\lambda}\partial_tu_\lambda}_{L^2(0,T;H)} + \lambda^{1/2}\norm{\alpha_\lambda(\partial_t u_\lambda)}_{L^2(0,T; H)}
\leq C\,,\\
\label{est4}
\norm{J_{\Gamma\lambda}\partial_tv_\lambda}_{L^2(0,T; H_\Gamma)}
+ \lambda^{1/2}\norm{\alpha_{\Gamma\lambda}(\partial_t v_\lambda)}_{L^2(0,T; H_\Gamma)}\leq C\,,\\
\label{est5}
\lambda^{1/2}\norm{\mu_\lambda}_{L^2(0,T; H)} + \norm{\nabla\mu_\lambda}_{L^2(0,T; H)} \leq C\,,\\
\label{est6}
\norm{\widehat\beta_\lambda(u_\lambda)}_{L^\infty(0,T; L^1(\Omega))} +
\norm{\widehat\beta_{\Gamma\lambda}(v_\lambda)}_{L^\infty(0,T; L^1(\Gamma))} \leq C\,.\end{gathered}$$ From estimates , , condition and equation , it follows that $$\label{est1'}
\norm{u_\lambda}_{L^\infty(0,T; V)} + \norm{u_\lambda}_{H^1(0,T; V^*)}\leq C\,.$$ Moreover, from , and the fact that $\partial_t u_\lambda=
\lambda\alpha_\lambda(\partial_t u_\lambda)+J_\lambda\partial_t u_\lambda$ (by definition of Yosida approximation), by comparison in we have $$\label{est5'}
\norm{\partial_tu_\lambda}_{L^2(0,T; H)}\leq C\,.$$ Finally, and – ensure that $$\label{est7}
\norm{\alpha_\lambda(\partial_tu_\lambda)}_{L^2(0,T; H)} + \norm{\alpha_{\Gamma\lambda}(\partial_tv_\lambda)}_{L^2(0,T; H_\Gamma)}\leq C\,.$$
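The identity $\partial_t u_\lambda=\lambda\alpha_\lambda(\partial_t u_\lambda)+J_\lambda\partial_t u_\lambda$ used above is the defining decomposition of the Yosida approximation, and it can be checked on any concrete monotone graph. The sketch below is our illustration only; the model graph $\alpha(r)=r^3$ is an assumption for the example, not one of the graphs of the paper. It computes the resolvent by bisection and verifies the decomposition together with $\alpha_\lambda(r)=\alpha(J_\lambda(r))$.

```python
def resolvent(lam, r, alpha=lambda y: y**3, tol=1e-12):
    """J_lam(r): the unique y with y + lam*alpha(y) = r,
    found by bisection (alpha is monotone increasing)."""
    lo, hi = -abs(r) - 1.0, abs(r) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + lam * alpha(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def yosida(lam, r, alpha=lambda y: y**3):
    """alpha_lam(r) = (r - J_lam(r)) / lam, which equals alpha(J_lam(r))."""
    return (r - resolvent(lam, r, alpha)) / lam
```

In particular, $r=\lambda\alpha_\lambda(r)+J_\lambda(r)$ holds exactly by construction, which is the scalar analogue of the splitting of $\partial_t u_\lambda$ exploited in the derivation of \[est5'\] above.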
The second estimate {#second}
-------------------
We show here an additional estimate for $\mu_\lambda$ in the space $L^2(0,T; W_{\bf n})$. By , and , it is enough to show that $(\mu_\lambda)_\Omega$ is bounded in $L^2(0,T)$ uniformly in $\lambda$. To this end, we are inspired by the computations in [@col-gil-spr].
We test by $G_\lambda^{-1}(u_\lambda-(u_\lambda(t))_\Omega)$, by $u_\lambda-(u_\lambda(t))_\Omega$, and take the difference, but we do not integrate in time: we deduce that, for almost every $t\in(0,T)$, $$\begin{split}
&\int_\Omega|\nabla u_\lambda(t)|^2 +
\int_\Omega\beta_\lambda(u_\lambda(t))(u_\lambda(t)-(u_\lambda(t))_\Omega)
+\eps\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2\\ &\qquad+
\int_\Gamma\beta_{\Gamma\lambda}(v_\lambda(t))(v_\lambda(t)-(u_\lambda(t))_\Omega)
=-\int_\Omega\partial_t u_\lambda(t)G_\lambda^{-1}(u_\lambda(t)-(u_\lambda(t))_\Omega)\\
&\qquad+\int_\Omega\left(g_\lambda(t)-T_\lambda\pi(u_\lambda(t))-\lambda\partial_tu_\lambda(t)-\lambda u_\lambda(t)-\alpha_\lambda(\partial_tu_\lambda(t))\right)
(u_\lambda(t)-(u_\lambda(t))_\Omega)\\
&\qquad+\int_\Gamma\left(g_{\Gamma\lambda}(t)-T_\lambda\pi_\Gamma(v_\lambda(t))-\lambda\partial_tv_\lambda(t)
-\alpha_{\Gamma\lambda}(\partial_tv_\lambda(t))\right)(v_\lambda(t)-(u_\lambda(t))_\Omega)\,.
\end{split}$$ Let us show that the right hand side is bounded in $L^2(0,T)$ uniformly in $\lambda$. It is clear that the last two terms are bounded in $L^2(0,T)$ by the Hölder inequality and the estimates , and . Moreover, by definition of $G_\lambda^{-1}$ it is immediate to check that $(G_\lambda^{-1}(y))_\Omega=\frac1\lambda y_\Omega$ for every $y\in H$: hence, we deduce that $(G_\lambda^{-1}(u_\lambda(t)- (u_\lambda(t))_\Omega))_\Omega=0$ and by the Poincaré inequality we have $$\begin{split}
-\int_\Omega\partial_t u_\lambda(t)G_\lambda^{-1}(u_\lambda(t)-(u_\lambda(t))_\Omega)
& \leq\norm{\partial_t u_\lambda(t)}_{V^*}\norm{G_\lambda^{-1}(u_\lambda(t)-(u_\lambda(t))_\Omega)}_V\\
&\leq C\norm{\partial_t u_\lambda(t)}_{V^*}\norm{\nabla G_\lambda^{-1}(u_\lambda(t)-(u_\lambda(t))_\Omega)}_H
\end{split}$$ for a positive constant $C$. Now, for any $y\in H$ with $y_\Omega=0$, setting $y_\lambda:=G_\lambda^{-1}(y)\in W_{\bf n}$, we have $\lambda y_\lambda - \Delta y_\lambda = y$, so that testing by $y_\lambda$ we infer that $$\lambda\int_\Omega|y_\lambda|^2 + \int_\Omega|\nabla y_\lambda|^2=\int_\Omega yy_\lambda
\leq \frac{1}{4\delta}\norm{y}_{V^*}^2 + \delta\norm{y_\lambda}_V^2\,,$$ for every $\delta>0$, where $\norm{y_\lambda}^2_V\leq C\norm{\nabla y_\lambda}^2_H$ for a positive constant $C$. Choosing $\delta=\frac1{2C}$ yields $$\lambda\norm{G_\lambda^{-1}(y)}_H^2 + \norm{\nabla G_\lambda^{-1}(y)}_H^2\leq C\norm{y}_{V^*}^2
\qquad\forall\,y\in H:\; y_\Omega=0\,,$$ so that going back to the last inequality we have $$-\int_\Omega\partial_t u_\lambda(t)G_\lambda^{-1}(u_\lambda(t)-(u_\lambda(t))_\Omega)\leq
C\norm{\partial_t u_\lambda(t)}_{V^*}\norm{u_\lambda(t)}_{V^*}\,.$$ By we deduce that also this last term is bounded in $L^2(0,T)$.
Now, by assumption we know that $(u_0)_\Omega$ belongs to the interior of $D(\beta_\Gamma)$ (hence, also of $D(\beta)$ by ). This implies that there are two constants $k'_0, k_0''>0$ (depending only on $(u_0)_\Omega$) such that $$\beta_\lambda(r)(r-(u_0)_\Omega)\geq k_0'|\beta_\lambda(r)| - k_0''\,, \quad
\beta_{\Gamma\lambda}(r)(r-(u_0)_\Omega)\geq k_0'|\beta_{\Gamma\lambda}(r)| - k_0'' \qquad
\forall\,r\in\erre$$ (see for example [@col-gil-spr p. 984], [@gil-mir-sch p. 908] and [@mir-zel Prop. A.1]). Moreover, note that by and we have $$|(u_\lambda(t))_\Omega - (u_0)_\Omega| \leq \lambda \int_0^t|(\mu_\lambda(s))_\Omega|\,ds
\leq C\lambda^{1/2} \qquad\forall\,t\in[0,T]\,.$$ Consequently, we have $$\begin{split}
\int_\Omega\beta_\lambda(u_\lambda(t))(u_\lambda(t)-(u_\lambda(t))_\Omega)&=
\int_\Omega\beta_\lambda(u_\lambda(t))(u_\lambda(t)-(u_0)_\Omega)
+\int_\Omega\beta_\lambda(u_\lambda(t))((u_0)_\Omega - (u_\lambda(t))_\Omega)\\&\geq
k_0'\int_\Omega|\beta_\lambda(u_\lambda(t))| - k_0''|\Omega| - C\lambda^{1/2}\int_\Omega|\beta_\lambda(u_\lambda(t))|
\end{split}$$ and similarly $$\int_\Gamma\beta_{\Gamma\lambda}(v_\lambda(t))(v_\lambda(t)-(u_\lambda(t))_\Omega)
\geq
k_0'\int_\Gamma|\beta_{\Gamma\lambda}(v_\lambda(t))| - k_0''|\Gamma| - C\lambda^{1/2}\int_\Gamma|\beta_{\Gamma\lambda}(v_\lambda(t))|\,.$$ Putting this information together, we deduce that $$\norm{\beta_\lambda(u_\lambda)}_{L^2(0,T; L^1(\Omega))} +
\norm{\beta_{\Gamma\lambda}(v_\lambda)}_{L^2(0,T; L^1(\Gamma))} \leq C\,.$$ Hence, testing by $\pm1$ we have $$\begin{split}
\pm |\Omega|(\mu_\lambda)_\Omega&\leq \int_\Omega|\beta_\lambda(u_\lambda)| + \int_\Gamma|\beta_{\Gamma\lambda}(v_\lambda)|
+\int_\Omega\left|\lambda\partial_t u_\lambda
+\alpha_\lambda(\partial_t u_\lambda) + \lambda u_\lambda + T_\lambda\pi(u_\lambda)-g_\lambda\right|\\
&+\int_\Gamma\left|\lambda\partial_t v_\lambda
+\alpha_{\Gamma\lambda}(\partial_t v_\lambda) + \lambda v_\lambda + T_\lambda\pi_\Gamma(v_\lambda)-
g_{\Gamma\lambda}\right|\,,
\end{split}$$ where the right hand side is bounded in $L^2(0,T)$ by the estimates already computed and by – and –. Hence, we have that $$\label{est9}
\norm{\mu_\lambda}_{L^2(0,T; W_{\bf n})}\leq C\,.$$
The third estimate {#third}
------------------
We test by $\beta_\lambda(u_\lambda)$: integrating by parts yields $$\begin{split}
&\lambda\int_\Omega\widehat\beta_\lambda(u_\lambda(t)) +
\lambda\int_{Q_t}\beta_\lambda(u_\lambda)u_\lambda
+\int_{Q_t}\beta_\lambda'(u_\lambda)|\nabla u_\lambda|^2 + \int_{Q_t}|\beta_\lambda(u_\lambda)|^2\\
&\qquad+\lambda\int_\Gamma\widehat\beta_\lambda(v_\lambda(t)) +
\eps\int_{\Sigma_t}\beta_\lambda'(v_\lambda)|\nabla_\Gamma v_\lambda|^2
+\int_{\Sigma_t}\beta_{\Gamma\lambda}(v_\lambda)\beta_\lambda(v_\lambda)
=\lambda\int_\Omega\widehat\beta_\lambda(u_0) + \lambda\int_\Gamma\widehat\beta_\lambda(u_{0|\Gamma})\\
&\qquad+\int_{Q_t}\left(g_\lambda-T_\lambda\pi(u_\lambda)-\alpha_\lambda(\partial_t u_\lambda)\right)\beta_\lambda(u_\lambda)
+\int_{\Sigma_t}\left(g_{\Gamma\lambda}-T_\lambda\pi_\Gamma(v_\lambda)-\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\right)
\beta_\lambda(v_\lambda)
\end{split}$$ By the Young inequality, the estimates – and –, the hypotheses – and the monotonicity of $\beta$ and $\beta_\Gamma$, we infer that for every $\delta>0$, we have $$\frac12\int_{Q_t}|\beta_\lambda(u_\lambda)|^2 + \int_{\Sigma_t}\beta_{\Gamma\lambda}(v_\lambda)\beta_\lambda(v_\lambda)
\leq C_\delta + \norm{\widehat\beta(u_0)}_{L^1(\Omega)} + \norm{\widehat\beta_\Gamma(u_{0|\Gamma})}_{L^1(\Gamma)}
+ \delta\int_{\Sigma_t}|\beta_\lambda(v_\lambda)|^2$$ for a positive constant $C_\delta$, independent of $\lambda$. Now, by the assumption and [@cal-colli Lemma 4.4], recalling the definition of $\beta_\lambda$ and $\beta_{\Gamma\lambda}$, it follows that $$|\beta_\lambda(r)|\leq c\left(|\beta_{\Gamma\lambda}(r)|+1\right) \quad\forall\,r\in\erre\,.$$ Hence, substituting in the last inequality and using the Young inequality we get (updating the constant $C_\delta$ at each step) $$\frac12\int_{Q}|\beta_\lambda(u_\lambda)|^2 + \frac1c\int_{\Sigma}|\beta_\lambda(v_\lambda)|^2\leq
C_\delta+\delta\int_\Sigma|\beta_\lambda(v_\lambda)|^2 + \int_\Sigma|\beta_\lambda(v_\lambda)|\leq
C_\delta + 2\delta\int_\Sigma|\beta_\lambda(v_\lambda)|^2\,.$$ Choosing $\delta:=\frac1{4c}$, we infer that $$\label{est10}
\norm{\beta_\lambda(u_\lambda)}_{L^2(0,T; H)} + \norm{\beta_\lambda(v_\lambda)}_{L^2(0,T; H_\Gamma)}\leq C\,.$$ By comparison in , recalling also , and , we deduce that $$\label{est11}
\norm{\Delta u_\lambda}_{L^2(0,T; H)}\leq C\,.$$ Hence, thanks to the classical results on elliptic regularity (see [@brezzi-gilardi Thm. 3.2]), , and yield $$\label{est12}
\eps^{1/2}\norm{u_\lambda}_{L^2(0,T; H^{3/2}(\Omega))} + \eps^{1/2}\norm{\partial_{\bf n}u_\lambda}_{L^2(0,T; H_\Gamma)}\leq C\,,$$ and by comparison in also $$\norm{-\eps^{3/2}\Delta_\Gamma v_\lambda + \eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)}_{L^2(0,T; H_\Gamma)}\leq C\,.$$ Now, since the operators $-\Delta_\Gamma$ and $\beta_{\Gamma\lambda}$ are monotone on $H_\Gamma$, testing $-\eps^{3/2}\Delta_\Gamma v_\lambda + \eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)$ by either $-\eps^{3/2}\Delta_\Gamma v_\lambda$ or $\eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)$, integrating by parts on $\Gamma$ and using monotonicity, the last estimate and the Young inequality, a classical argument yields $$\label{est13}
\eps^{3/2}\norm{\Delta_\Gamma v_\lambda}_{L^2(0,T; H_\Gamma)} + \eps^{1/2}\norm{\beta_{\Gamma\lambda}(v_\lambda)}_{L^2(0,T; H_\Gamma)}\leq C\,.$$
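For the reader's convenience, the classical monotonicity argument used in the last step can be sketched as follows (at a.e. fixed time, then integrating in time): writing $f_\lambda:=-\eps^{3/2}\Delta_\Gamma v_\lambda + \eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)$ and testing by $-\eps^{3/2}\Delta_\Gamma v_\lambda$, an integration by parts on $\Gamma$ and the Young inequality give $$\eps^{3}\int_\Gamma|\Delta_\Gamma v_\lambda|^2 + \eps^{2}\int_\Gamma\beta_{\Gamma\lambda}'(v_\lambda)|\nabla_\Gamma v_\lambda|^2
=-\eps^{3/2}\int_\Gamma f_\lambda\Delta_\Gamma v_\lambda
\leq \frac12\int_\Gamma|f_\lambda|^2 + \frac{\eps^3}2\int_\Gamma|\Delta_\Gamma v_\lambda|^2\,,$$ where the second term on the left-hand side is nonnegative by the monotonicity of $\beta_\Gamma$; hence the Laplace–Beltrami term is absorbed, and the bound on $\eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)$ follows by comparison.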
The passage to the limit {#limit}
------------------------
In this section, we pass to the limit in the approximated problem – and we prove the existence of a solution for the original problem.
First of all, thanks to the estimates –, there are $$\begin{gathered}
u \in L^\infty(0,T; V)\cap L^2(0,T; W)\,, \qquad
v \in L^\infty(0,T; V_{\Gamma})\cap L^2(0,T; W_\Gamma)\,,\qquad
\mu \in L^2(0,T; W_{\bf n})\,,\\
\eta, \xi \in L^2(0,T; H)\,, \qquad \eta_\Gamma, \xi_\Gamma \in L^2(0,T; H_\Gamma)\,,\end{gathered}$$ such that, along a subsequence that we still denote by $\lambda$ for simplicity, $$\begin{gathered}
\label{conv1}
u_\lambda \wstarto u \quad\text{in } L^\infty(0,T; V)\,, \qquad
u_\lambda \wto u \quad\text{in } L^2(0,T; W)\,,\\
\label{conv2}
v_\lambda \wstarto v \quad\text{in } L^\infty(0,T; V_{\Gamma})\,,
\qquad v_\lambda \wto v \quad\text{in } L^2(0,T; W_\Gamma)\,,\\
\label{conv4}
\mu_\lambda \wto \mu \quad\text{in } L^2(0,T; W_{\bf n})\,,\\
\label{conv5}
\alpha_\lambda(\partial_t u_\lambda) \wto \eta \quad\text{in } L^2(0,T; H)\,, \qquad
\alpha_{\Gamma\lambda}(\partial_t v_\lambda) \wto \eta_\Gamma \quad\text{in } L^2(0,T; H_\Gamma)\,,\\
\label{conv6}
\beta_\lambda(u_\lambda) \wto \xi \quad\text{in } L^2(0,T; H)\,, \qquad
\beta_{\Gamma\lambda}(v_\lambda) \wto \xi_\Gamma \quad\text{in } L^2(0,T; H_\Gamma)\end{gathered}$$ and $$\label{conv7}
\lambda u_\lambda \to 0 \quad\text{in } H^1(0,T; H)\,, \qquad
\lambda v_\lambda \to 0 \quad\text{in } H^1(0,T; H_\Gamma)\,, \qquad
\lambda\mu_\lambda \to 0 \quad\text{in } L^2(0,T; H)\,.$$ Moreover, noting that, by definition of Yosida approximation, $$|\partial_t u_\lambda - J_\lambda\partial_t u_\lambda|=\lambda|\alpha_\lambda(\partial_t u_\lambda)|\,, \qquad
|\partial_t v_\lambda - J_{\Gamma\lambda}\partial_t v_\lambda|=\lambda|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|\,,$$ it is readily seen that – imply that $u \in H^1(0,T; H)$, $v \in H^1(0,T; H_\Gamma)$ and $$\label{conv8}
J_\lambda\partial_t u_\lambda \wto \partial_t u \quad\text{in } L^2(0,T; H)\,, \qquad
J_{\Gamma\lambda}\partial_t v_\lambda \wto \partial_t v \quad\text{in } L^2(0,T; H_\Gamma)\,.$$ It is clear that $u_{|\Gamma}=v$. Moreover, since the inclusion $\V\embed \H$ is compact, by the classical compactness results for functions with values in Banach spaces (see [@simon Cor. 4, p. 85]), we have $$\label{conv9}
u_\lambda \to u \quad\text{in } C^0([0,T]; H)\,, \qquad
v_\lambda \to v \quad\text{in } C^0([0,T]; H_\Gamma)\,,$$ which together with and the strong-weak closure of the maximal monotone operators $\beta$ and $\beta_\Gamma$ ensure that $$\xi \in \beta(u) \quad\text{a.e.~in } Q\,, \qquad \xi_\Gamma\in \beta_\Gamma(v) \quad\text{a.e.~in } \Sigma\,.$$ Furthermore, by the Lipschitz continuity of $T_\lambda$, $\pi$ and $\pi_\Gamma$, using the strong convergences of $u_\lambda$ and $v_\lambda$ it is a standard matter to check that $$T_\lambda\pi(u_\lambda)\to \pi(u) \quad\text{in } L^2(0,T; H)\,, \qquad
T_\lambda\pi_\Gamma(v_\lambda)\to \pi_\Gamma(v) \quad\text{in } L^2(0,T; H_\Gamma)\,.$$
Taking this information into account and letting $\lambda\searrow0$ in –, we get $$\begin{gathered}
\label{lim1}
\partial_t u - \Delta \mu = 0\,, \\
\label{lim2}
\mu=\eta - \Delta u + \xi + \pi(u) - g\,,\qquad
\eta_\Gamma + \partial_{\bf n} u - \eps\Delta_\Gamma v + \xi_\Gamma + \pi_\Gamma(v) = g_\Gamma\,.\end{gathered}$$
The last thing that we have to prove is that $\eta\in\alpha(\partial_t u)$ a.e. in $Q$ and $\eta_\Gamma\in\alpha_\Gamma(\partial_t v)$ a.e. in $\Sigma$. To this end, performing the same test as in Section \[first\], one can easily infer that $$\begin{split}
&\int_Q|\nabla \mu_\lambda|^2
+ \int_Q\alpha_\lambda(\partial_t u_\lambda)\partial_t u_\lambda
+\frac12\int_\Omega|\nabla u_\lambda(T)|^2
+\int_\Omega\widehat\beta_\lambda(u_\lambda(T))\\
&\qquad+ \int_\Sigma\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda
+\frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(T)|^2
+\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(T))\\
&\leq\frac\lambda2\int_\Omega|u_0|^2 + \frac12\int_\Omega|\nabla u_0|^2 + \int_\Omega\widehat\beta(u_0)
+\frac\eps2\int_\Gamma|\nabla_\Gamma u_{0|\Gamma}|^2 + \int_\Gamma\widehat\beta_\Gamma(u_{0|\Gamma})\\
&\qquad+\int_Q\left(g_\lambda-T_\lambda\pi(u_\lambda)\right)\partial_t u_\lambda
+\int_\Sigma\left(g_{\Gamma\lambda}-T_\lambda\pi_\Gamma(v_\lambda)\right)\partial_t v_\lambda\,.
\end{split}$$ Now, since $u\in H^1(0,T; H)$, $v\in H^1(0,T; H_\Gamma)$, $\xi \in \beta(u)$ a.e. in $Q$ and $\xi_\Gamma \in \beta_\Gamma(v)$ a.e. in $\Sigma$, by [@brezis Lemma 3.3] the functions $$t \mapsto \int_\Omega \widehat\beta(u(t))\,, \qquad t\mapsto \int_\Gamma \widehat\beta_\Gamma(v(t))\,,$$ are absolutely continuous on $[0,T]$ with derivatives given by $(\xi,\partial_t u)_H$ and $(\xi_\Gamma, \partial_t v)_{H_\Gamma}$, respectively. Moreover, the strong convergence of $u_\lambda$ and $v_\lambda$ together with [@brezis Prop. 2.11] ensure that $$\int_\Omega\widehat\beta_\lambda(u_\lambda(T))\to \int_\Omega\widehat\beta(u(T))\,, \qquad
\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(T))\to \int_\Gamma\widehat\beta_\Gamma(v(T))\,.$$ Hence, by – and the weak lower semicontinuity of the convex integrands, we infer $$\begin{split}
&\limsup_{\lambda\searrow0}\left[\int_Q\alpha_\lambda(\partial_t u_\lambda)\partial_t u_\lambda
+\int_\Sigma\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda\right]\\
&\leq\frac12\int_\Omega|\nabla u_0|^2
+\frac\eps2\int_\Gamma|\nabla_\Gamma u_{0|\Gamma}|^2
+ \int_\Omega\widehat\beta(u_0)
+ \int_\Gamma\widehat\beta_\Gamma(u_{0|\Gamma})
+\int_Q\left(g-\pi(u)\right)\partial_t u\\
&\quad+\int_\Sigma\left(g_\Gamma-\pi_\Gamma(v)\right)\partial_t v
-\liminf_{\lambda\searrow0}\left[\int_Q|\nabla\mu_\lambda|^2 + \frac12\int_\Omega|\nabla u_\lambda(T)|^2
+\int_\Omega\widehat\beta_\lambda(u_\lambda(T)) + \int_\Gamma \widehat\beta_{\Gamma\lambda}(v_\lambda(T))\right]\\
&\leq\frac12\int_\Omega|\nabla u_0|^2
+\frac\eps2\int_\Gamma|\nabla_\Gamma u_{0|\Gamma}|^2
+ \int_\Omega\widehat\beta(u_0)
+ \int_\Gamma\widehat\beta_\Gamma(u_{0|\Gamma})
+\int_Q\left(g-\pi(u)\right)\partial_t u\\
&\quad+\int_\Sigma\left(g_\Gamma-\pi_\Gamma(v)\right)\partial_t v
-\int_Q|\nabla\mu|^2 - \frac12\int_\Omega|\nabla u(T)|^2
-\int_\Omega\widehat\beta(u(T)) - \int_\Gamma \widehat\beta_\Gamma(v(T))
\end{split}$$ Now, testing equation by $\mu$, the first equation in by $\partial_t u$ and taking the difference, it is a standard matter to check that the right hand side of the last inequality coincides with $$\int_Q\eta\partial_t u + \int_\Sigma \eta_\Gamma \partial_t v\,,$$ so that $$\limsup_{\lambda\searrow0}\left[\int_Q\alpha_\lambda(\partial_t u_\lambda)\partial_t u_\lambda
+\int_\Sigma\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda\right]\leq
\int_Q\eta\partial_t u + \int_\Sigma \eta_\Gamma \partial_t v\,.$$ This implies by a classical argument on maximal monotone operators that $\eta \in \alpha(\partial_tu)$ a.e. in $Q$ and $\eta_\Gamma \in \alpha_\Gamma(\partial_t v)$ a.e. in $\Sigma$. This concludes the proof of Theorem \[thm1\].
The second existence result
===========================
\[proof2\]
We present here the proof of the second main result of the paper. Recall that we are working now under the stronger conditions –, so that the regularity of the approximated solutions is the one given by Lemma \[prop\_app2\].
The first estimate {#first'}
------------------
We proceed as in Section \[first\], using the monotonicity of $\alpha_\lambda$ on the left hand side. For every $t\in[0,T]$ we obtain $$\begin{split}
&\lambda\int_{Q_t}|\mu_\lambda|^2 + \int_{Q_t}|\nabla\mu_\lambda|^2
+\lambda\int_{Q_t}|\partial_t u_\lambda|^2
+\frac\lambda2\int_\Omega|u_\lambda(t)|^2 + \frac12\int_\Omega|\nabla u_\lambda(t)|^2
+ \int_\Omega\widehat\beta_\lambda(u_\lambda(t))\\
&\qquad+\lambda\int_{\Sigma_t}|\partial_t v_\lambda|^2
+\int_{\Sigma_t}\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda
+\frac\eps2\int_\Gamma|\nabla_\Gamma v_\lambda(t)|^2
+\int_\Gamma\widehat\beta_{\Gamma\lambda}(v_\lambda(t))\\
&\leq\frac\lambda2\int_\Omega|u_0|^2 + \frac12\int_\Omega|\nabla u_0|^2 +
\frac\eps2\int_\Gamma|\nabla_\Gamma u_{0|\Gamma}|^2
+\int_\Omega\widehat\beta_\lambda(u_0) + \int_\Gamma\widehat\beta_{\Gamma\lambda}(u_{0|\Gamma})\\
&\qquad+\int_{Q_t}\left(g_\lambda-T_\lambda\pi(u_\lambda)\right)\partial_t u_\lambda
+\int_{\Sigma_t}\left(g_{\Gamma\lambda}-T_\lambda\pi_\Gamma(v_\lambda)\right)\partial_t v_\lambda\,.
\end{split}$$ Now, in order to handle the terms on the boundary, we proceed exactly as in Section \[first\] using the coercivity of $\alpha_\Gamma$ on the left hand side combined with the weighted Young inequality on the last term in right-hand side. Furthermore, thanks to hypothesis and , integrating by parts and taking into account that $\lambda\in(0,1)$ and the Lipschitz continuity of $T_\lambda$ and $\pi$, we have $$\begin{split}
&\int_{Q_t}\left(g_\lambda-T_\lambda\pi(u_\lambda)\right)\partial_t u_\lambda=
\int_{Q_t}g_\lambda\partial_t u_\lambda + \int_{Q_t}T_\lambda\pi(u_\lambda)\left(\lambda\mu_\lambda-\Delta\mu_\lambda\right)\\
&=-\int_0^t\ip{\partial_t g(s)}{u_\lambda(s)}_V\,ds + \int_\Omega g_\lambda(t)u_\lambda(t) - \int_\Omega g_\lambda(0)u_0\\
&\qquad+\lambda\int_{Q_t}T_\lambda\pi(u_\lambda)\mu_\lambda
+\int_{Q_t}\nabla T_\lambda\pi(u_\lambda)\cdot \nabla \mu_\lambda\\
&\leq \frac12\norm{g}^2_{H^1(0,T; V^*)} + \frac12\norm{u_\lambda}^2_{L^2(0,t; V)}
+\frac1{4\delta}\norm{g}_{L^\infty(0,T; V^*)}^2 + \delta\norm{u_\lambda(t)}_V^2
+\norm{g}_{L^\infty(0,T; V^*)}\norm{u_0}_V\\
&\qquad+\frac\lambda2\int_{Q_t}|\mu_\lambda|^2+ \frac12\int_{Q_t}|\nabla\mu_\lambda|^2
+ \frac{C_\pi^2+1}2\norm{u_\lambda}^2_{L^2(0,t; V)}
\end{split}$$ for every $\delta>0$. Now, we write $$\norm{u_\lambda}_V^2=\norm{\nabla u_\lambda}^2_H + \norm{u_\lambda}^2_H\,,$$ where the first term can be handled using Gronwall’s lemma and the second by . Hence, choosing $\delta$ small enough and rearranging the terms, thanks to the Gronwall lemma we still obtain the estimates – and –.
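For clarity, the form of the Gronwall lemma used here is the standard integral one: if $\phi\in C^0([0,T])$ is nonnegative, $a\geq0$ and $b\in L^1(0,T)$ is nonnegative, then $$\phi(t)\leq a + \int_0^t b(s)\phi(s)\,ds \quad\forall\,t\in[0,T]
\qquad\Longrightarrow\qquad
\phi(t)\leq a\exp\left(\int_0^t b(s)\,ds\right) \quad\forall\,t\in[0,T]\,,$$ applied, after absorbing the $\delta$-terms, to the quantity controlled on the left-hand side.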
The second estimate {#second'}
-------------------
First of all, in order to perform this estimate, we need to identify the initial values at $t=0$ of $\partial_t u_\lambda$, $\partial_t v_\lambda$ and $\mu_\lambda$: to this end, it is natural to require that these satisfy the system – at $t=0$. We have the following result.
\[init\_reg\] There is a unique triplet $(u_{0\lambda}', v_{0\lambda}', \mu_{0\lambda})\in H\times H_{\Gamma}\times W_{\bf n}$ such that $$\begin{cases}
u_{0\lambda}' + \lambda\mu_{0\lambda} - \Delta\mu_{0\lambda}=0 \quad&\text{in } \Omega\,,\\
\mu_{0\lambda} = \lambda u_{0\lambda}' + \alpha_\lambda(u_{0\lambda}') + \lambda u_0
-\Delta u_0 + \beta_\lambda(u_0) + T_\lambda\pi(u_0) - g_\lambda(0)\quad&\text{in } \Omega\,,\\
\lambda v_{0\lambda}' + \alpha_{\Gamma\lambda}(v_{0\lambda}') + \partial_{\bf n}u_0
-\eps\Delta_\Gamma u_{0|\Gamma} + \beta_{\Gamma\lambda}(u_{0|\Gamma})
+T_\lambda\pi_\Gamma(u_{0|\Gamma})=g_{\Gamma\lambda}(0) \quad&\text{in } \Gamma\,.
\end{cases}$$ Furthermore, there exists $C>0$, independent of $\lambda$, such that $$\lambda\norm{\mu_{0\lambda}}_H^2 + \norm{\nabla\mu_{0\lambda}}_H^2
+\lambda\norm{u_{0\lambda}'}_H^2
+ \norm{\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(u_{0\lambda}'))}_{L^1(\Omega)}
+\lambda\norm{v_{0\lambda}'}_H^2
+\norm{\widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(v_{0\lambda}'))}_{L^1(\Gamma)} \leq C\,.$$
Setting $z_{0\lambda}:=g_\lambda(0)-T_\lambda\pi(u_0)-\beta_\lambda(u_0)+\Delta u_0 - \lambda u_0$ and $w_{0\lambda}:=g_{\Gamma\lambda}(0)-T_\lambda\pi_\Gamma(u_{0|\Gamma})-\beta_{\Gamma\lambda}(u_{0|\Gamma})
+\eps\Delta_\Gamma u_{0|\Gamma}-\partial_{\bf n}u_0$, by the hypothesis we have $(z_{0\lambda}, w_{0\lambda})\in \H$. Moreover, the system which we are interested in reduces to $A_\lambda(u_{0\lambda}', v_{0\lambda}')=(z_{0\lambda}, w_{0\lambda})$, with $\mu_{0\lambda}=-G_\lambda^{-1}(u_{0\lambda}')$. Since $A_\lambda$ is bi-Lipschitz continuous on $\H$, there is a unique pair $(u_{0\lambda}', v_{0\lambda}')\in\H$ solving the system with $\mu_{0\lambda}=-G_\lambda^{-1}(u_{0\lambda}')\in W_{\bf n}$ by definition of $G_\lambda$. Furthermore, testing the first equation by $\mu_{0\lambda}$, the second by $u_{0\lambda}'$, taking the difference and recalling the hypotheses –, we have $$\begin{split}
\lambda\int_\Omega|\mu_{0\lambda}|^2&+\int_\Omega|\nabla\mu_{0\lambda}|^2
+\lambda\int_\Omega|u_{0\lambda}'|^2 + \int_\Omega\alpha_\lambda(u_{0\lambda}')u_{0\lambda}'
\lambda\int_\Gamma|v_{0\lambda}'|^2 + \int_\Gamma\alpha_{\Gamma\lambda}(v_{0\lambda}')v_{0\lambda}'\\
&\leq\int_\Omega z_{0\lambda}u_{0\lambda}' + \int_\Gamma w_{0\lambda}v_{0\lambda}'\,.
\end{split}$$ On the left hand side, we use and the fact that $\alpha_{\Gamma\lambda}\in\alpha_\Gamma(J_{\Gamma\lambda})$ to infer that $$\int_\Gamma\alpha_{\Gamma\lambda}(v_{0\lambda}')v_{0\lambda}'\geq
b_1\int_{\Gamma}|J_{\Gamma\lambda} v_{0\lambda}'|^2 +
\lambda\int_\Gamma|\alpha_{\Gamma\lambda}(v_{0\lambda}')|^2 - b_2|\Gamma|\,,$$ while on the right hand side, since $v_{0\lambda}'-J_{\Gamma\lambda} v_{0\lambda}'=
\lambda\alpha_{\Gamma\lambda}(v_{0\lambda}')$, for every $\delta>0$ we have $$\int_\Gamma w_{0\lambda}v_{0\lambda}' \leq \frac1{2\delta}\int_\Gamma|w_{0\lambda}|^2
+\frac\delta2\int_\Gamma|v_{0\lambda}'|^2\leq
\frac1{2\delta}\int_\Gamma|w_{0\lambda}|^2
+\delta\lambda^2\int_\Gamma|\alpha_{\Gamma\lambda}(v_{0\lambda}')|^2 +
\delta\int_\Gamma|J_{\Gamma\lambda} v_{0\lambda}'|^2\,,$$ where $w_{0\lambda}$ is bounded in $H_\Gamma$ uniformly in $\lambda$ by –. Now, recall that either or is in order: we distinguish the two cases. Under hypothesis , we have, on the left hand side, $$\int_\Omega\alpha_\lambda(u_{0\lambda}')u_{0\lambda}'\geq
a_1\int_{\Omega}|J_\lambda u_{0\lambda}'|^2 + \lambda\int_\Omega|\alpha_\lambda(u_{0\lambda}')|^2 - a_2|\Omega|$$ while on the right hand side, since $u_{0\lambda}'-J_\lambda u_{0\lambda}'=\lambda\alpha_\lambda(u_{0\lambda}')$, $$\int_\Omega z_{0\lambda}u_{0\lambda}' \leq \frac1{2\delta}\int_\Omega|z_{0\lambda}|^2
+\frac\delta2\int_\Omega|u_{0\lambda}'|^2\leq
\frac1{2\delta}\int_\Omega|z_{0\lambda}|^2
+\delta\lambda^2\int_\Omega|\alpha_\lambda(u_{0\lambda}')|^2 + \delta\int_\Omega|J_\lambda u_{0\lambda}'|^2\,.$$ Since $z_{0\lambda}$ is uniformly bounded in $H$ by –, choosing $\delta>0$ sufficiently small and rearranging the terms yields the desired estimate. Otherwise, if is in order, then $z_{0\lambda}$ is uniformly bounded also in $V$ by and we can estimate the term on the right hand side in the duality $V$–$V^*$: $$\int_\Omega z_{0\lambda}u_{0\lambda}'=
-\lambda\int_\Omega z_{0\lambda}\mu_{0\lambda} - \int_\Omega \nabla z_{0\lambda}\cdot\nabla\mu_{0\lambda}\leq
\frac\lambda2\int_\Omega|\mu_{0\lambda}|^2 + \frac12\int_\Omega|\nabla\mu_{0\lambda}|^2 + \norm{z_{0\lambda}}_V^2\,,$$ from which the desired estimate follows rearranging the terms. Note that we have used the fact that $$\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(u_{0\lambda}'))\leq
\widehat\alpha_\lambda(u_{0\lambda}')+
\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(u_{0\lambda}'))=
\alpha_\lambda(u_{0\lambda}')u_{0\lambda}'$$ and the equivalent statement for $\alpha_{\Gamma}$.
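Let us point out that the equality in the last display is the Fenchel identity for the conjugate pair $(\widehat\alpha_\lambda, \widehat{\alpha_\lambda^{-1}})$: since $\widehat{\alpha_\lambda^{-1}}=(\widehat\alpha_\lambda)^*$ and $\alpha_\lambda=\partial\widehat\alpha_\lambda$, one has $$\widehat\alpha_\lambda(x) + \widehat{\alpha_\lambda^{-1}}(y) = xy
\qquad\text{if and only if}\qquad y=\alpha_\lambda(x)\,,$$ while the preceding inequality simply uses the fact that $\widehat\alpha_\lambda\geq0$.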
We are now ready to perform the estimate. The intuitive idea is to test equation by $\partial_t \mu_\lambda$, the time-derivative of equation by $\partial_t u_\lambda$, take the difference and integrate. However, the regularity of the approximated solutions does not allow us to do so. Consequently, we prove by hand that the resulting estimate holds anyway. To this end, we proceed through a discrete-time argument as in [@bcst1 Section 5.2], to which we refer for further details; here we avoid detailed computations for the sake of conciseness.
Fix $t\in[0,T]$ and set, for every $n\in\enne$, $\tau_n:=\frac{t}{n}$ and $t_n^i:=i\tau_n$ for $i\in\{0,\ldots,n\}$. Now, by the regularities given by Lemma \[prop\_app2\], we note that – and hold for every $s\in[0,T]$. Hence, it makes sense to test at time $t_n^i$ by $\mu_\lambda(t_n^i)-\mu_\lambda(t_n^{i-1})$, the difference between at $t_n^i$ and at $t_n^{i-1}$ by $\partial_t u_\lambda(t_n^i)$, and take the difference. Moreover, since $\partial \widehat{\alpha_\lambda^{-1}}=\alpha_\lambda^{-1}$, for every $i=1,\ldots,n$, we have that $$\left(\alpha_\lambda(\partial_t u_\lambda(t_n^i))-\alpha_\lambda(\partial_t u_\lambda(t_n^{i-1}))\right)\partial_t u_\lambda(t_n^i)
\geq\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(\partial_t u_\lambda(t_n^i)))-
\widehat{\alpha_\lambda^{-1}}(\alpha_{\lambda}(\partial_t u_\lambda(t_n^{i-1})))$$ and similarly for the terms in $\alpha_{\Gamma\lambda}$. Hence, integrating by parts and summing over $i$ yields (after some technical computations analogous to the ones in [@bcst1 Section 5.2]), $$\begin{split}
&\frac\lambda2\int_\Omega|\mu_\lambda(t)|^2 + \frac12\int_\Omega|\nabla\mu_\lambda(t)|^2
+\frac\lambda2\int_\Omega|\partial_t u_\lambda(t)|^2
+ \int_\Omega\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(\partial_t u_\lambda(t)))\\
&\quad+\lambda\int_{Q_t}|\partial_t u_\lambda|^2+\int_{Q_t}|\nabla\partial_t u_\lambda|^2
+\int_{Q_t}\beta_\lambda'(u_\lambda)|\partial_t u_\lambda|^2
+\frac\lambda2\int_\Gamma|\partial_t v_\lambda(t)|^2\\
&\quad+\int_\Gamma\widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(\partial_t v_\lambda(t)))
+\eps\int_{\Sigma_t}|\nabla_\Gamma\partial_t v_\lambda|^2
+\int_{\Sigma_t}\beta_{\Gamma\lambda}'(v_\lambda)|\partial_t v_\lambda|^2\\
&\leq\frac\lambda2\int_\Omega|\mu_{0\lambda}|^2 + \frac12\int_\Omega|\nabla\mu_{0\lambda}|^2
+\frac\lambda2\int_\Omega|u_{0\lambda}'|^2
+\!\int_\Omega\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(u_{0\lambda}'))
+\frac\lambda2\int_\Gamma|v_{0\lambda}'|^2
+\!\int_\Gamma\widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(v_{0\lambda}'))\\
&\quad+\int_{Q_t}\partial_t g_\lambda \partial_t u_\lambda -\int_{Q_t} T_\lambda'(\pi(u_\lambda))\pi'(u_\lambda)|\partial_t u_\lambda|^2
+\int_{\Sigma_t}\partial_t g_{\Gamma\lambda} \partial_t v_\lambda
-\int_{\Sigma_t} T_\lambda'(\pi_\Gamma(v_\lambda))\pi_\Gamma'(v_\lambda)|\partial_t v_\lambda|^2\,.
\end{split}$$ Now, the first six terms and the last term on the right-hand side are bounded uniformly in $\lambda$ thanks to Lemma \[init\_reg\] and the estimate , respectively (recall that $|T_\lambda'|\leq 1$ and $|\pi_\Gamma'|\leq C_{\pi_\Gamma}$). Moreover, the three remaining terms can be estimated using the duality $V$–$V^*$, the assumption , the Young inequality and by $$C_\delta+ \delta\norm{\partial_t u_\lambda}^2_{L^2(0,t; V)} \leq C_\delta + \delta\norm{\nabla \partial_t u_\lambda}^2_{L^2(0,t; H)}
+\delta \lambda^2\norm{\mu_\lambda}^2_{L^2(0,t; H)}\,,$$ for every $\delta >0$. Hence, choosing $\delta$ sufficiently small, we deduce that there is a positive constant $C$ such that $$\begin{gathered}
\label{est14}
\norm{\nabla\partial_t u_\lambda}_{L^2(0,T; H)} + \lambda^{1/2}\norm{\partial_t u_\lambda}_{L^\infty(0,T; H)}\leq C\,,\\
\label{est15}
\norm{\nabla_\Gamma \partial_t v_\lambda}_{L^2(0,T; H_\Gamma)} +
\lambda^{1/2}\norm{\partial_t v_\lambda}_{L^\infty(0,T; H_\Gamma)}\leq C\,,\\
\label{est16}
\norm{\nabla\mu_\lambda}_{L^\infty(0,T; H)} + \lambda^{1/2}\norm{\mu_\lambda}_{L^\infty(0,T; H)}\leq C\,,\\
\label{est16bis}
\norm{\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(\partial_t u_\lambda))}_{L^\infty(0,T; L^1(\Omega))}
+\norm{\widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(\partial_t v_\lambda))}_{L^\infty(0,T; L^1(\Gamma))}\leq C\,.\end{gathered}$$ Thanks to –, conditions and , it follows that $\partial_t u_\lambda$ and $\partial_t v_\lambda$ are uniformly bounded in $L^2(0,T; V)$ and $L^2(0,T; V_{\Gamma})$, respectively. Moreover, integrating, it easily follows that $\widehat{\alpha}_\lambda$ and $\widehat\alpha_{\Gamma\lambda}$ are uniformly bounded in $\lambda$ from above by a quadratic function: hence, $\widehat{\alpha_\lambda^{-1}}=(\widehat\alpha_\lambda)^*$ and $\widehat{\alpha_{\Gamma\lambda}^{-1}}=(\widehat\alpha_{\Gamma\lambda})^*$ are uniformly bounded from below by a quadratic function. Consequently, from the estimate we infer also that $$\label{est17}
\norm{\alpha_\lambda(\partial_t u_\lambda)}_{L^\infty(0,T; H)} + \norm{\alpha_{\Gamma\lambda}(\partial_t v_\lambda)}_{L^\infty(0,T; H_\Gamma)}\leq C\,.$$ Moreover, from the coercivity of $\alpha_\Gamma$ and the Young inequality, we have $$b_1|J_{\Gamma\lambda}\partial_t v_\lambda|^2 - b_2 \leq
\alpha_{\Gamma\lambda}(\partial_t v_\lambda)J_{\Gamma\lambda}\partial_t v_\lambda\leq
\frac{b_1}{2}|J_{\Gamma\lambda}\partial_t v_\lambda|^2
+\frac{1}{2b_1}|\alpha_{\Gamma\lambda}(\partial_t v_\lambda)|^2\,,$$ so that by we deduce that $$\label{est17_bis}
\norm{J_{\Gamma\lambda}\partial_t v_\lambda}_{L^\infty(0,T; H_\Gamma)}\leq C\,.$$ Finally, arguing exactly as in Section \[second\] but using the stronger estimates –, it is readily seen that $(\mu_\lambda)_\Omega$ is uniformly bounded in $L^\infty(0,T)$, so that by we have $$\norm{\mu_\lambda}_{L^\infty(0,T; V)}\leq C\,.$$ Moreover, thanks to and , by comparison in and elliptic regularity we have $$\label{est18}
\norm{\mu_\lambda}_{L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))} +\norm{\partial_t u_\lambda}_{L^\infty(0,T; V^*)}\leq C\,.$$ It is clear that under the assumption , the same argument ensures that $J_\lambda\partial_t u_\lambda$ is uniformly bounded in $L^{\infty}(0,T; H)$ as well, hence also $\mu_\lambda$ in $L^\infty(0,T; W_{\bf n})$ from , from which the last sentence of Theorem \[thm2\] follows.
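Let us also justify the quadratic lower bound on the conjugate functions used above: if $\widehat\alpha_\lambda(s)\leq C_0(1+s^2)$ for every $s\in\erre$, with $C_0$ denoting a $\lambda$-independent constant as in the discussion above, then choosing $s=\frac{r}{2C_0}$ in the supremum defining the conjugate yields $$\widehat{\alpha_\lambda^{-1}}(r)=(\widehat\alpha_\lambda)^*(r)=\sup_{s\in\erre}\left(rs-\widehat\alpha_\lambda(s)\right)
\geq\frac{r^2}{4C_0}-C_0 \qquad\forall\,r\in\erre\,,$$ so that the $L^\infty(0,T; L^1)$-bound on $\widehat{\alpha_\lambda^{-1}}(\alpha_\lambda(\partial_t u_\lambda))$ turns into an $L^\infty(0,T; H)$-bound on $\alpha_\lambda(\partial_t u_\lambda)$, and similarly on the boundary.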
The third estimate {#third'}
------------------
For every $t\in[0,T]$, we test equation by $-\Delta u_\lambda(t)$ and integrate by parts: $$\begin{split}
\int_\Omega|\Delta u_\lambda(t)|^2 &+
\int_\Omega\beta_\lambda'(u_\lambda(t))|\nabla u_\lambda(t)|^2
+\eps\int_\Gamma\beta_\lambda'(v_\lambda(t))|\nabla_\Gamma v_\lambda(t)|^2
+\int_\Gamma\beta_\lambda(v_\lambda(t))\beta_{\Gamma\lambda}(v_\lambda(t))\\
&=-\int_\Omega\left(g_\lambda(t)-T_\lambda\pi(u_\lambda(t))-\lambda u_\lambda(t)+\mu_\lambda(t)-\lambda\partial_t u_\lambda(t)
-\alpha_\lambda(\partial_t u_\lambda(t))\right)\Delta u_\lambda(t)\\
&\quad-\int_\Gamma\left(g_{\Gamma\lambda}(t)-T_\lambda\pi_\Gamma(v_\lambda(t))
-\lambda\partial_t v_\lambda(t)-\alpha_{\Gamma\lambda}(\partial_t v_\lambda(t))\right)\beta_\lambda(v_\lambda(t))\,.
\end{split}$$ Thanks to the estimates –, the terms in brackets on the right hand side are bounded uniformly in $\lambda$. Hence, using the weighted Young inequality and the hypothesis as in Section \[third\], we infer that $$\label{est19}
\norm{\Delta u_\lambda}_{L^\infty(0,T; H)} + \norm{\beta_\lambda(v_\lambda)}_{L^\infty(0,T; H_\Gamma)}\leq C\,.$$ By comparison in we deduce that $$\label{est20}
\norm{\beta_\lambda(u_\lambda)}_{L^\infty(0,T; H)}\leq C\,.$$ Moreover, by the classical results on elliptic regularity (see [@brezzi-gilardi Thm. 3.2]), estimate implies, together with and , that $$\label{est21}
\eps^{1/2}\norm{u_\lambda}_{L^\infty(0,T; H^{3/2}(\Omega))}+\eps^{1/2}\norm{\partial_{\bf n}u_\lambda}_{L^\infty(0,T; H_\Gamma)}\leq C$$ and, by comparison in , $$\norm{-\eps^{3/2}\Delta_\Gamma v_\lambda + \eps^{1/2}\beta_{\Gamma\lambda}(v_\lambda)}_{L^\infty(0,T; H_\Gamma)}\leq C\,.$$ We deduce, as usual, that $$\label{est22}
\eps^{3/2}\norm{\Delta_\Gamma v_\lambda}_{L^\infty(0,T; H_\Gamma)} +\eps^{1/2}\norm{\beta_{\Gamma\lambda}(v_\lambda)}_{L^\infty(0,T; H_\Gamma)}\leq C\,.$$
The passage to the limit {#the-passage-to-the-limit}
------------------------
Taking into account –, – and –, recalling that $\eps>0$ is fixed, we infer that there are $$\begin{gathered}
u \in W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)\cap L^\infty(0,T; W)\,, \\
v \in W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; V_\Gamma)\cap L^\infty(0,T; W_\Gamma)\,,\\
\mu \in L^\infty(0,T; W_{\bf n})\cap L^2(0,T; H^3(\Omega))\,,\\
\eta\,,\xi \in L^\infty(0,T; H)\,, \qquad \eta_\Gamma\,,\xi_\Gamma \in L^\infty(0,T; H_\Gamma)\,,\end{gathered}$$ such that, along a subsequence that we still denote by $\lambda$ for simplicity, $$\begin{aligned}
u_\lambda \wstarto u \quad\text{in } W^{1,\infty}(0,T; V^*)\cap L^\infty(0,T; W)\,,& \qquad
u_\lambda \wto u \quad\text{in } H^1(0,T; V)\,,\\
v_\lambda \wstarto v \quad\text{in } W^{1,\infty}(0,T; H_\Gamma)\cap L^\infty(0,T; W_{\Gamma})\,,&
\qquad v_\lambda \wto v \quad\text{in } H^1(0,T; V_\Gamma)\,,\\
\mu_\lambda \wstarto \mu \quad\text{in } L^\infty(0,T; V)\,,&
\qquad \mu_\lambda \wto \mu \quad\text{in } L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\alpha_\lambda(\partial_t u_\lambda) \wstarto \eta \quad\text{in } L^\infty(0,T; H)\,,& \qquad
\alpha_{\Gamma\lambda}(\partial_t v_\lambda) \wstarto \eta_\Gamma \quad\text{in } L^\infty(0,T; H_\Gamma)\,,\\
\beta_\lambda(u_\lambda) \wstarto \xi \quad\text{in } L^\infty(0,T; H)\,,& \qquad
\beta_{\Gamma\lambda}(v_\lambda) \wstarto \xi_\Gamma \quad\text{in } L^\infty(0,T; H_\Gamma)\end{aligned}$$ and $$\lambda u_\lambda \to 0 \quad\text{in } W^{1,\infty}(0,T; H)\,, \quad
\lambda v_\lambda \to 0 \quad\text{in } W^{1,\infty}(0,T; H_\Gamma)\,, \quad
\lambda\mu_\lambda\to0 \quad\text{in } L^\infty(0,T; H)\,.$$ At this point, it is straightforward to conclude as in Section \[limit\] and Theorem \[thm2\] is proved.
The third existence result {#proof3}
==========================
First of all, note that all the estimates which do not involve the assumption continue to hold also in this setting. Namely, going back to Sections \[first\] and \[first’\], it is readily seen that –, –, – are satisfied.
Secondly, by , there is $\delta>0$ such that $\pm\delta\in D(\alpha)\cap D(\alpha_\Gamma)$. Hence, by the Young inequality we have $$\begin{gathered}
\pm\delta\alpha_\lambda(\partial_t u_\lambda)\leq \widehat\alpha_\lambda(\pm \delta)
+ \widehat{\alpha_{\lambda}^{-1}}(\alpha_\lambda(\partial_t u_\lambda))
\leq \widehat\alpha(\pm\delta) + \widehat{\alpha_{\lambda}^{-1}}(\alpha_\lambda(\partial_t u_\lambda))\,,\\
\pm\delta\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\leq \widehat\alpha_{\Gamma\lambda}(\pm \delta)
+ \widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(\partial_t v_\lambda))
\leq \widehat\alpha_{\Gamma}(\pm\delta) + \widehat{\alpha_{\Gamma\lambda}^{-1}}(\alpha_{\Gamma\lambda}(\partial_t v_\lambda))\,,\end{gathered}$$ so that by we deduce that $$\norm{\alpha_\lambda(\partial_t u_\lambda)}_{L^\infty(0,T; L^1(\Omega))}+
\norm{\alpha_{\Gamma\lambda}(\partial_t v_\lambda)}_{L^\infty(0,T; L^1(\Gamma))}\leq C\,.$$ Furthermore, thanks to the assumptions –, the estimates and , as well as the continuous inclusions $V\embed L^6(\Omega)$ and $V_\Gamma\embed L^q(\Gamma)$ (for every $q\geq1$), we have that for every $q\in[1,+\infty)$ $$\label{est23'}
\norm{\beta_\lambda(u_\lambda)}_{L^\infty(0,T; L^{6/5}(\Omega))} + \norm{\beta_{\Gamma\lambda}(v_\lambda)}_{L^\infty(0,T; L^{q}(\Gamma))}\leq C\,.$$ Consequently, testing by the constants $\pm1$ we get $$\begin{split}
\pm|\Omega|(\mu_\lambda(t))_\Omega&\leq
\int_\Omega|\lambda\partial_t u_\lambda + \alpha_\lambda(\partial_t u_\lambda) + \lambda u_\lambda
+\beta_\lambda(u_\lambda)+T_\lambda\pi(u_\lambda)|(t) + |\Omega||(g_\lambda(t))_\Omega|\\
&+ \int_\Gamma|\lambda\partial_t v_\lambda + \alpha_{\Gamma\lambda}(\partial_t v_\lambda)
+\beta_{\Gamma\lambda}(v_\lambda)+T_\lambda\pi_\Gamma(v_\lambda)|(t) + |\Gamma||(g_{\Gamma\lambda}(t))_\Gamma|\,,
\end{split}$$ where the right-hand side is bounded in $L^\infty(0,T)$ thanks to the estimates already shown, – and assumption . Together with , we infer that $$\norm{\mu_\lambda}_{L^\infty(0,T; V)}\leq C\,.$$ Furthermore, by comparison in and the estimates –, we have $$\label{est23}
\norm{\mu_\lambda}_{L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))} + \norm{\partial_t u_\lambda}_{L^\infty(0,T; V^*)}\leq C\,.$$ Again, if also holds, the same argument ensures that $J_\lambda\partial_t u_\lambda$ is uniformly bounded in $L^{\infty}(0,T; H)$, hence also $\mu_\lambda$ in $L^\infty(0,T; W_{\bf n})$ from , from which the last sentence of Theorem \[thm3\] follows.
Let us focus now on the main estimate. We know that the approximated problem can be written as $$A_\lambda(\partial_t u_\lambda, \partial_t v_\lambda) + B_\lambda(u_\lambda,v_\lambda)=
(g_\lambda,g_{\Gamma\lambda})-(T_\lambda\pi(u_\lambda), T_\lambda\pi_\Gamma(v_\lambda))\,,$$ where the operators $A_\lambda$ and $B_\lambda$ have been introduced in Section \[approx\]. Note that by and , we have that $(u_\lambda, v_\lambda)_\lambda$ is bounded in $L^\infty(0,T; \V)$: hence, by linearity and boundedness of the operator $$(-\Delta, \partial_{\bf n} - \eps\Delta_\Gamma): \V\to \V^*\,,$$ we deduce that $(-\Delta u_\lambda, \partial_{\bf n}u_\lambda - \eps\Delta_\Gamma v_\lambda)_\lambda$ is bounded uniformly in $L^\infty(0,T; \V^*)$. Moreover, since $L^{6/5}(\Omega)\embed V^*$ and $L^{q'}(\Gamma)\embed V_\Gamma^*$ for every $q'\in(1,+\infty]$, by we deduce that $(\beta_\lambda(u_\lambda), \beta_{\Gamma\lambda}(v_\lambda))_\lambda$ is bounded in $L^\infty(0,T; \V^*)$ as well. Hence, we infer that $$\norm{B_\lambda(u_\lambda, v_\lambda)}_{L^\infty(0,T; \V^*)}\leq C\,.$$ By comparison in the equation written above we then have $$\norm{(\alpha_\lambda(\partial_t u_\lambda), \alpha_{\Gamma\lambda}(\partial_t v_\lambda))_\lambda}_{L^\infty(0,T; \V^*)}\leq C\,.$$
Let us pass to the limit. The estimates that we have collected ensure that there are $$\begin{gathered}
u \in W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)\cap L^\infty(0,T; W)\,, \\
v \in W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; V_\Gamma)\cap L^\infty(0,T; W_\Gamma)\,,\\
\mu \in L^\infty(0,T; W_{\bf n})\cap L^2(0,T; H^3(\Omega))\,,\\
\xi \in L^\infty(0,T; L^{6/5}(\Omega))\,, \qquad \xi_\Gamma \in L^\infty(0,T; L^q(\Gamma)) \quad\forall\,q\in[1,+\infty)\,,\\
\eta_w \in L^\infty(0,T; \V^*)\,,\end{gathered}$$ such that, along a subsequence that we still denote by $\lambda$ for simplicity, $$\begin{aligned}
u_\lambda \wstarto u \quad\text{in } W^{1,\infty}(0,T; V^*)\cap L^\infty(0,T; W)\,,& \qquad
u_\lambda \wto u \quad\text{in } H^1(0,T; V)\,,\\
v_\lambda \wstarto v \quad\text{in } W^{1,\infty}(0,T; H_\Gamma)\cap L^\infty(0,T; W_{\Gamma})\,,&
\qquad v_\lambda \wto v \quad\text{in } H^1(0,T; V_\Gamma)\,,\\
\mu_\lambda \wstarto \mu \quad\text{in } L^\infty(0,T; V)\,,&
\qquad \mu_\lambda \wto \mu \quad\text{in } L^2(0,T; W_{\bf n}\cap H^3(\Omega))\,,\\
\beta_\lambda(u_\lambda) \wstarto \xi \quad\text{in } L^\infty(0,T; L^{6/5}(\Omega))\,,& \qquad
\beta_{\Gamma\lambda}(v_\lambda) \wstarto \xi_\Gamma \quad\text{in } L^\infty(0,T; H_\Gamma)\,,\\
(\alpha_\lambda(\partial_t u_\lambda), \alpha_{\Gamma\lambda}(\partial_t v_\lambda))\wstarto\eta_w&
\qquad\text{in } L^\infty(0,T; \V^*)\end{aligned}$$ and $$\lambda u_\lambda \to 0 \quad\text{in } W^{1,\infty}(0,T; H)\,, \quad
\lambda v_\lambda \to 0 \quad\text{in } W^{1,\infty}(0,T; H_\Gamma)\,, \quad
\lambda\mu_\lambda\to0 \quad\text{in } L^\infty(0,T; H)\,.$$ If the stronger condition holds, then the continuous embedding $V\embed L^6(\Omega)$ and imply that $(\beta_\lambda(u_\lambda))_\lambda$ is bounded in $L^\infty(0,T; H)$, from which $\xi \in L^\infty(0,T; H)$ as well. Testing the approximated equations – by a generic element $(\varphi, \psi)\in \V$, integrating by parts and letting $\lambda\to0^+$, it is a standard matter to check that $$\begin{split}
\int_\Omega\mu(t)\varphi&=\ip{\eta_w(t)}{(\varphi,\psi)}_\V + \int_\Omega\nabla u(t)\cdot\nabla\varphi +
\int_\Omega\left(\xi(t)+\pi(u(t))-g(t)\right)\varphi\\
&+ \eps\int_\Gamma\nabla_\Gamma v(t)\cdot\nabla_\Gamma\psi
+\int_\Gamma(\xi_\Gamma(t)+\pi_\Gamma(v(t))-g_\Gamma(t))\psi\,.
\end{split}$$ Moreover, proceeding as in the previous sections, we also have $\xi\in\beta(u)$ a.e. in $Q$ and $\xi_\Gamma\in \beta_\Gamma(v)$ a.e. in $\Sigma$. Finally, as in Section \[limit\], comparing the approximated equations – and the corresponding limit ones, we can infer that $$\limsup_{\lambda\searrow0}\left[\int_Q\alpha_\lambda(\partial_t u_\lambda)\partial_t u_\lambda
+\int_\Sigma\alpha_{\Gamma\lambda}(\partial_t v_\lambda)\partial_t v_\lambda\right]\leq
\int_0^T\ip{\eta_w(t)}{(\partial_t u(t), \partial_t v(t))}_\V\,dt\,,$$ which implies by a well-known criterion on maximal monotonicity that $\eta_w \in \widetilde\alpha_w(\partial_t u, \partial_t v)$.
The uniqueness result
=====================
\[proof4\]
In the hypotheses – of Theorem \[thm4\], we clearly have $\xi_i=F'(u_i)-\pi(u_i)$ and $\xi_{\Gamma i}=F_\Gamma'(v_i)-\pi_\Gamma(v_i)$ for $i=1,2$. Now, we write the difference of the equations – at $i=1$ and $i=2$, test by $\mu_1-\mu_2$, by $-\partial_t(u_1-u_2)$ and sum: by standard computations and the monotonicity of $\alpha$, we obtain $$\begin{split}
\int_{Q_t}&|\nabla(\mu_1-\mu_2)|^2
+\frac12\int_\Omega|\nabla (u_1-u_2)(t)|^2 +\frac\eps2\int_\Gamma|\nabla_\Gamma(v_1-v_2)(t)|^2
+\widetilde{b_1}\int_{\Sigma_t}|\partial_t(v_1-v_2)|^2\\
&\qquad+ \int_{Q_t}\left(F'(u_1)-F'(u_2)\right)\partial_t(u_1-u_2)
+ \int_{\Sigma_t}(F_\Gamma'(v_1)-F'_\Gamma(v_2))\partial_t(v_1-v_2)\leq0
\end{split}$$ for every $t\in[0,T]$. We now adapt an argument from the works [@ef-zel Thm. 2.2] and [@mir-sch p. 689]: note that $$\begin{split}
&\left(F'(u_1)-F'(u_2)\right)\partial_t(u_1-u_2) \\
&\qquad=
\partial_t\left[F(u_1)-F(u_2)-F'(u_2)(u_1-u_2)\right]
-\left[F'(u_1)-F'(u_2)-F''(u_2)(u_1-u_2)\right]\partial_t u_2
\end{split}$$ and similarly $$\begin{split}
&\left(F'_\Gamma(v_1)-F_\Gamma'(v_2)\right)\partial_t(v_1-v_2) \\
&\qquad= \partial_t\left[F_\Gamma(v_1)-F_\Gamma(v_2)-F'_\Gamma(v_2)(v_1-v_2)\right]
-\left[F_\Gamma'(v_1)-F_\Gamma'(v_2)-F''_\Gamma(v_2)(v_1-v_2)\right]\partial_t v_2\,,
\end{split}$$ so that $$\begin{split}
&\int_{Q_t}|\nabla(\mu_1-\mu_2)|^2
+\frac12\int_\Omega|\nabla (u_1-u_2)(t)|^2 +\frac\eps2\int_\Gamma|\nabla_\Gamma(v_1-v_2)(t)|^2
+\widetilde{b_1}\int_{\Sigma_t}|\partial_t(v_1-v_2)|^2\\
&\qquad+ \int_\Omega\left[F(u_1)-F(u_2)-F'(u_2)(u_1-u_2)\right](t)
+ \int_\Gamma\left[F_\Gamma(v_1)-F_\Gamma(v_2)-F'_\Gamma(v_2)(v_1-v_2)\right](t) \\
&\leq\int_{Q_t}\left[F'(u_1)-F'(u_2)-F''(u_2)(u_1-u_2)\right]\partial_t u_2
+\int_{\Sigma_t}\left[F_\Gamma'(v_1)-F_\Gamma'(v_2)-F''_\Gamma(v_2)(v_1-v_2)\right]\partial_t v_2
\end{split}$$ for every $t\in[0,T]$. Now, by the mean value theorem it is readily seen that $$\begin{aligned}
F(u_1)-F(u_2)-F'(u_2)(u_1-u_2)&\geq -C_\pi|u_1-u_2|^2\,, \\
F_\Gamma(v_1)-F_\Gamma(v_2)-F'_\Gamma(v_2)(v_1-v_2)&\geq
-C_{\pi_\Gamma}|v_1-v_2|^2\,,\end{aligned}$$ while the usual Taylor expansion for $F'$ yields $$\left[F'(u_1)-F'(u_2)-F''(u_2)(u_1-u_2)\right]\partial_t u_2=
\frac12F'''(\tilde{u}_{12})|u_1-u_2|^2\partial_t u_2$$ for a certain $\tilde{u}_{12}$ between $u_1$ and $u_2$. Now, recall that $\partial_t u_2 \in L^2(0,T; V)\embed L^2(0,T; L^6(\Omega))$ and $u_i \in L^\infty(0,T; W)\embed L^\infty(Q)$ for $i=1,2$: this implies in particular that $F'''(\tilde u_{12})\in L^\infty(Q)$, because $F''' \in L^\infty_{loc}(\erre)$ by . Hence, recalling also that $u_1-u_2$ has null mean, we have that $$\begin{split}
\int_{Q_t}F'''(\tilde{u}_{12})|u_1-u_2|^2\partial_t u_2&\leq
\norm{F'''(\tilde{u}_{12})}_{L^\infty(Q)}\int_0^t\norm{\partial_t u_2(s)}_{L^6(\Omega)}\norm{|u_1-u_2|^2(s)}_{L^{6/5}(\Omega)}\,ds\\
&\leq C\int_0^t\norm{\partial_t u_2(s)}_V\norm{\nabla(u_1-u_2)(s)}^2_H\,ds
\end{split}$$ for a certain constant $C>0$. Similarly, we obtain $$\int_{\Sigma_t}\left[F_\Gamma'(v_1)-F_\Gamma'(v_2)-F''_\Gamma(v_2)(v_1-v_2)\right]\partial_t v_2
\leq C\int_0^t\norm{\partial_t v_2(s)}_{V_\Gamma}\norm{\nabla_\Gamma(v_1-v_2)(s)}^2_{H_\Gamma}\,ds\,.$$ Furthermore, by the Young inequality we can write (updating the constant $C$ at each step) $$\begin{split}
C_\pi\int_\Omega|u_1-u_2|^2(t)&=2C_\pi\int_{Q_t}\partial_t(u_1-u_2)(u_1-u_2)\\
&\leq
\frac12\norm{\partial_t(u_1-u_2)}^2_{L^2(0,t; V^*)} +
C\norm{u_1-u_2}^2_{L^2(0,t; V)}\\
&\leq\frac12\int_{Q_t}|\nabla(\mu_1-\mu_2)|^2 + C\norm{\nabla(u_1-u_2)}^2_{L^2(0,t; H)}
\end{split}$$ and similarly $$C_{\pi_\Gamma} \int_\Gamma|v_1-v_2|^2(t)\leq
\frac{\widetilde{b_1}}2\int_{\Sigma_t}|\partial_t(v_1-v_2)|^2+
C\norm{\nabla_\Gamma(v_1-v_2)}^2_{L^2(0,t; H_\Gamma)}\,.$$ Taking into account this information and rearranging the terms yields $$\begin{split}
&\frac12\int_{Q_t}|\nabla(\mu_1-\mu_2)|^2
+\frac12\int_\Omega|\nabla (u_1-u_2)(t)|^2 +\frac\eps2\int_\Gamma|\nabla_\Gamma(v_1-v_2)(t)|^2
+\frac{\widetilde{b_1}}2\int_{\Sigma_t}|\partial_t(v_1-v_2)|^2\\
&\qquad\leq C\int_0^t(1+\norm{\partial_t u_2(s)}_{V})\norm{\nabla(u_1-u_2)(s)}^2_{H}\,ds\\
&\qquad\quad+
C\int_0^t(1+\norm{\partial_t v_2(s)}_{V_\Gamma})\norm{\nabla_\Gamma(v_1-v_2)(s)}^2_{H_\Gamma}\,ds
\qquad\forall\,t\in[0,T]\,,
\end{split}$$ and the thesis follows from the Gronwall lemma.
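To make this final step fully explicit (a sketch, with constants renamed; $C_\eps$ also depends on $\eps$, and we recall that $v_i$ is the trace of $u_i$), set

```latex
y(t) := \frac12\int_\Omega|\nabla(u_1-u_2)(t)|^2
      + \frac\eps2\int_\Gamma|\nabla_\Gamma(v_1-v_2)(t)|^2\,, \qquad
\phi(s) := C_\eps\bigl(1+\norm{\partial_t u_2(s)}_{V}+\norm{\partial_t v_2(s)}_{V_\Gamma}\bigr)\,.
```

Dropping the nonnegative terms on the left-hand side, the inequality above becomes $y(t)\leq\int_0^t\phi(s)\,y(s)\,ds$ with $\phi\in L^1(0,T)$ (as $\partial_t u_2\in L^2(0,T;V)$ and $\partial_t v_2\in L^2(0,T;V_\Gamma)$), so the Gronwall lemma yields $y\equiv0$; since $u_1-u_2$ has null mean, $\nabla(u_1-u_2)=0$ gives $u_1=u_2$, hence also $v_1=v_2$.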
In order to prove the second part of the theorem, we proceed in exactly the same way: we test by $\mu_1-\mu_2$, by $-(\partial_t (u_1-u_2), \partial_t(v_1-v_2))\in\V$ and sum. The only difference here is that the estimate on the term involving $F'''$ has to be performed using the weaker regularity of the solutions and the hypothesis , together with the fact that $V\embed L^6(\Omega)$, as follows: $$\begin{split}
&\int_{Q_t}F'''(\tilde{u}_{12})|u_1-u_2|^2\partial_t u_2\\
&\qquad\leq
M\norm{1+|u_1|^3+|u_2|^3}_{L^\infty(0,T; H)}\int_0^t\norm{\partial_t u_2(s)}_{L^6(\Omega)}\norm{|u_1-u_2|^2(s)}_{L^{3}(\Omega)}\,ds\\
&\qquad\leq C\left(1+\norm{u_1}^2_{L^\infty(0,T; V)}+\norm{u_2}^2_{L^\infty(0,T; V)}\right)
\int_0^t\norm{\partial_t u_2(s)}_{V}\norm{\nabla(u_1-u_2)(s)}^2_{H}\,ds
\end{split}$$ Similarly, the term involving $F_\Gamma'''$ is handled using and the inclusion $V_\Gamma\embed L^q(\Gamma)$ for every $q\in[1,+\infty)$.
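For completeness, the Hölder–Sobolev chain behind this last display can be spelled out (exponents $\frac12+\frac16+\frac13=1$; the constant $C$ changes from line to line):

```latex
\int_\Omega F'''(\tilde u_{12})\,|u_1-u_2|^2\,\partial_t u_2
 \leq \norm{F'''(\tilde u_{12})}_{H}\,
      \norm{\partial_t u_2}_{L^6(\Omega)}\,
      \norm{|u_1-u_2|^2}_{L^3(\Omega)}\,, \\
\norm{|u_1-u_2|^2}_{L^3(\Omega)} = \norm{u_1-u_2}^2_{L^6(\Omega)}
 \leq C\norm{u_1-u_2}^2_{V}
 \leq C\norm{\nabla(u_1-u_2)}^2_{H}\,,
```

the last inequality by the Poincaré–Wirtinger inequality, since $u_1-u_2$ has null mean; the first part of the proof used the analogous chain with exponents $\frac16+\frac56=1$ and $\norm{|u_1-u_2|^2}_{L^{6/5}(\Omega)}=\norm{u_1-u_2}^2_{L^{12/5}(\Omega)}$.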
The asymptotic as $\eps\searrow0$
=================================
\[proof5\]
For every $\eps>0$, the septuple $(u_\eps,v_\eps,\mu_\eps,\eta_\eps,\xi_\eps,\eta_{\Gamma\eps},\xi_{\Gamma\eps})$ is the solution satisfying – given by Theorem \[thm1\]. Hence, recalling how such solutions were built from the approximated ones, all the estimates that we performed in Section \[proof1\] (and that are $\eps$-independent) are preserved. In particular, going back to Section \[proof1\] and taking into account, it is readily seen that $$\begin{gathered}
\norm{u_\eps}_{L^\infty(0,T; V)\cap H^1(0,T; H)}
+ \norm{v_\eps}_{L^\infty(0,T; H^{1/2}(\Gamma))\cap H^1(0,T; H_\Gamma)} +
\eps^{1/2}\norm{v_\eps}_{L^\infty(0,T; V_\Gamma)}\leq C\,,\\
\norm{\mu_\eps}_{L^2(0,T; W_{\bf n})}\leq C\,,\\
\norm{\eta_\eps}_{L^2(0,T; H)} + \norm{\eta_{\Gamma\eps}}_{L^2(0,T; H_\Gamma)}+
\norm{\xi_\eps}_{L^2(0,T; H)} \leq C\,,\\
\norm{\Delta u_\eps}_{L^2(0,T; H)} +
\norm{\partial_{\bf n}u_\eps - \eps\Delta_\Gamma v_\eps + \xi_{\Gamma\eps}}_{L^2(0,T; H_\Gamma)}\leq C\,.\end{gathered}$$ By the classical results on elliptic regularity, we can only infer that $$\norm{\partial_{\bf n}u_\eps}_{L^2(0,T; H^{-1/2}(\Gamma))} + \eps^{1/2}\norm{\partial_{\bf n}u_\eps}_{L^2(0,T; H_\Gamma)}\leq c\,.$$ Taking into account that $-\Delta_\Gamma:V_\Gamma\to V_\Gamma^*$ is continuous and monotone, we also have that $$\eps^{1/2}\norm{\Delta_\Gamma v_\eps}_{L^\infty(0,T; V_\Gamma^*)} + \eps^{3/2}\norm{\Delta_\Gamma v_\eps}_{L^2(0,T; H_\Gamma)}\leq c\,,$$ which yields by interpolation $$\eps\norm{\Delta v_\eps}_{L^2(0,T; H^{-1/2}(\Gamma))}\leq c\,,$$ hence also, by comparison, $$\norm{\xi_{\Gamma\eps}}_{L^2(0,T; H^{-1/2}(\Gamma))}\leq c\,.$$ It readily seen that, along a subsequence $(\eps_n)_n$, the weak convergences of Theorem \[thm5\] hold. Furthermore, by the classical compactness results [@simon Cor. 4, p. 85] we also have $$u_{\eps_n}\to u \quad\text{in } C^0([0,T]; H)\,, \qquad
v_{\eps_n}\to v \quad\text{in } C^0([0,T]; H_\Gamma)\,,$$ which yields $\xi \in \beta(u)$ a.e. in $Q$ by the strong-weak closure of $\beta$. Passing to the weak limit as $n\to\infty$ in – we deduce that $(u,v,\mu,\eta,\xi,\eta_\Gamma,\xi_\Gamma)$ satisfies the limit equations stated in Theorem \[thm5\]. Moreover, testing by $\mu_\eps$, by $-\partial_t u_\eps$ and summing we get $$\begin{split}
&\int_Q|\nabla\mu_\eps|^2+\int_{Q}\eta_\eps\partial_t u_\eps +
\frac12\int_\Omega|\nabla u_\eps(T)|^2+\int_\Omega\widehat\beta(u_\eps(T))\\
&\qquad+\int_\Sigma\eta_{\Gamma\eps}\partial_t v_\eps + \frac\eps2\int_\Gamma|\nabla_\Gamma v_\eps(T)|^2
+\int_\Gamma\widehat\beta_\Gamma(v_\eps(T))
=\frac12\int_\Omega|\nabla u_0^\eps|^2+\frac\eps2\int_\Gamma|\nabla_\Gamma u_0^\eps|^2\\
&\qquad+ \int_\Omega\widehat\beta(u_0^\eps) + \int_\Gamma\widehat\beta_\Gamma(u_0^\eps)
+\int_Q(g -\pi(u_\eps))\partial_t u_\eps + \int_\Sigma(g_\Gamma-\pi_\Gamma(v_\eps))\partial_t v_\eps\,,
\end{split}$$ from which, by standard weak lower semicontinuity results, the convergence $u_0^\eps\to u_0$ in $V$ and the estimate , $$\begin{split}
\limsup_{n\to\infty}\left(\int_{Q}\eta_\eps\partial_t u_\eps+\int_\Sigma\eta_{\Gamma\eps}\partial_t v_\eps\right)&\leq
\frac12\int_\Omega|\nabla u_0|^2+ \int_\Omega\widehat\beta(u_0) + \int_\Gamma\widehat\beta_\Gamma(u_0)\\
&-\int_Q|\nabla\mu|^2-\frac12\int_\Omega|\nabla u(T)|^2-\int_\Omega\widehat\beta(u(T)) - \int_\Gamma\widehat\beta_\Gamma(v(T))\\
&+\int_Q(g -\pi(u))\partial_t u + \int_\Sigma(g_\Gamma-\pi_\Gamma(v))\partial_t v\,.
\end{split}$$ Now, performing the analogue estimate on the limiting equations, we easily deduce that the right-hand side coincides with $$\int_Q\eta\partial_t u + \int_\Sigma\eta_\Gamma\partial_t v\,.$$ Hence, we also have that $\eta\in\alpha(\partial_t u)$ a.e. in $Q$ and $\eta_\Gamma\in\alpha_\Gamma(\partial_t v)$ a.e. in $\Sigma$. It remains to prove that $\xi_\Gamma\in\beta_{\Gamma w}(v)$ a.e. in $(0,T)$. To this end, we test by $\mathcal N(u_\eps-(u_0^\eps)_\Omega)$, $\eqref{2}$ by $-(u_\eps-(u_0^\eps)_\Omega)$, and sum: $$\begin{split}
&\norm{\nabla\mathcal{N}(u_\eps(T)-(u_0^\eps)_\Omega)}_H^2 +
\int_{Q}\eta_\eps(u_\eps-(u^\eps_0)_\Omega) + \int_{Q}|\nabla u_\eps|^2 + \int_Q\xi_\eps(u_\eps-(u^\eps_0)_\Omega)\\
&\qquad+\int_{\Sigma}\eta_{\Gamma\eps}(v_\eps-(u^\eps_0)_\Omega) + \eps\int_\Sigma|\nabla_\Gamma v_\eps|^2
+\int_\Sigma \xi_{\Gamma\eps}(v_\eps-(u^\eps_0)_\Omega)\\
&\qquad=\int_Q(g-\pi(u_\eps))(u_\eps-(u^\eps_0)_\Omega) + \int_\Sigma(g_\Gamma-\pi_\Gamma(v_\eps))(v_\eps-(u^\eps_0)_\Omega)\,.
\end{split}$$ Now, recalling that $u_\eps-(u_0^\eps)_\Omega$ has null mean and that $u_\eps\to u$ in $C^0([0,T]; H)$, we have in particular that $u_\eps(T)-(u_0^\eps)_\Omega \to u(T)-(u_0)_\Omega$ in $V^*$, hence also, by the properties of $\mathcal N$, $\mathcal N(u_\eps(T)-(u_0^\eps)_\Omega)\to\mathcal N(u(T)-(u_0)_\Omega)$ in $V$. Furthermore, using the convergences already proved and the weak lower semicontinuity of the norms, we infer that $$\begin{split}
&\limsup_{\eps\searrow0}\int_\Sigma\xi_{\Gamma\eps} v_\eps \leq
\int_Q(g-\pi(u))(u-(u_0)_\Omega) + \int_\Sigma(g_\Gamma-\pi_\Gamma(v))(v-(u_0)_\Omega)
-\int_Q\eta(u-(u_0)_\Omega)\\
&+\int_\Sigma\xi_{\Gamma}(u_0)_\Omega-\norm{\nabla\mathcal{N}(u(T)-(u_0)_\Omega)}_H^2 - \int_Q|\nabla u|^2
-\int_Q\xi(u-(u_0)_\Omega) - \int_\Sigma\eta_{\Gamma}(v-(u_0)_\Omega)\,.
\end{split}$$ As before, performing the same estimate on the limiting equations, we see that the right-hand side coincides with $$\int_0^T\ip{\xi_\Gamma(t)}{v(t)}_{H^{1/2}(\Gamma)}\,dt\,,$$ and we can conclude by the maximal monotonicity of $\beta_{\Gamma w}$.
Finally, if the further assumptions – hold and $(\eps u_{0|\Gamma}^\eps)_\eps$ is bounded in $W_\Gamma$, we can proceed similarly performing the estimates in Section \[proof2\] instead. In particular, note that with these hypotheses the constant $C$ appearing in Lemma \[init\_reg\] is independent of $\eps$. Hence, we infer $$\begin{gathered}
\norm{u_\eps}_{W^{1,\infty}(0,T; V^*)\cap H^1(0,T; V)}
+\norm{v_\eps}_{W^{1,\infty}(0,T; H_\Gamma)\cap H^1(0,T; H^{1/2}(\Gamma))}
+\eps^{1/2}\norm{v_\eps}_{H^1(0,T; V_\Gamma)}\leq c\,,\\
\norm{\mu_\eps}_{L^\infty(0,T; V)\cap L^2(0,T; W_{\bf n}\cap H^3(\Omega))}\leq c\,,\\
\norm{\eta_\eps}_{L^\infty(0,T; H)} + \norm{\eta_{\Gamma\eps}}_{L^\infty(0,T; H_\Gamma)}
+\norm{\xi_\eps}_{L^\infty(0,T; H)}\leq c\,,\\
\norm{\Delta u_\eps}_{L^\infty(0,T; H)} + \norm{\partial_{\bf n}u_\eps-\eps\Delta_\Gamma v_\eps + \xi_{\Gamma\eps}}_{L^\infty(0,T; H_\Gamma)}\leq c\,.\end{gathered}$$ Now, arguing as before by elliptic regularity and interpolation arguments, we deduce that $$\norm{\partial_{\bf n}u_\eps}_{L^\infty(0,T; H^{-1/2}(\Gamma))} +
\eps\norm{\Delta_\Gamma v_\eps}_{L^\infty(0,T; H^{-1/2}(\Gamma))}+\norm{\xi_{\Gamma\eps}}_{L^\infty(0,T; H^{-1/2}(\Gamma))}\leq c\,.$$ Hence, the conclusion of the proof follows easily by a completely similar argument.
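The interpolation step used twice in this section can be made explicit: since $H^{-1/2}(\Gamma)$ is the interpolation space of exponent $\frac12$ between $V_\Gamma^*$ and $H_\Gamma$,

```latex
\norm{x}_{H^{-1/2}(\Gamma)} \leq C\,\norm{x}^{1/2}_{V_\Gamma^*}\,\norm{x}^{1/2}_{H_\Gamma}\,, \\
\eps^2\int_0^T\norm{\Delta_\Gamma v_\eps(t)}^2_{H^{-1/2}(\Gamma)}\,dt
 \leq C\,T^{1/2}
 \Bigl(\eps^{1/2}\norm{\Delta_\Gamma v_\eps}_{L^\infty(0,T;V_\Gamma^*)}\Bigr)
 \Bigl(\eps^{3/2}\norm{\Delta_\Gamma v_\eps}_{L^2(0,T;H_\Gamma)}\Bigr)
 \leq c\,.
```

The exponents match because $\eps^{1/4}\cdot\eps^{3/4}=\eps$, and the time integral is handled by the Hölder inequality in $t$.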
[10]{}
E. Bonetti, P. Colli, L. Scarpa, and G. Tomassetti. A doubly nonlinear [C]{}ahn-[H]{}illiard system with nonlinear viscosity. , 17(3):1001–1022, 2018.
H. Brézis. . North-Holland Publishing Co., Amsterdam-London; American Elsevier Publishing Co., Inc., New York, 1973. North-Holland Mathematics Studies, No. 5. Notas de Matemática (50).
J. W. Cahn and J. E. Hilliard. Free energy of a nonuniform system. i. interfacial free energy. , 28(2):258–267, 1958.
L. Calatroni and P. Colli. Global solution to the [A]{}llen-[C]{}ahn equation with singular potentials and dynamic boundary conditions. , 79:12–27, 2013.
P. Colli, M. H. Farshbaf-Shaker, G. Gilardi, and J. Sprekels. Optimal boundary control of a viscous [C]{}ahn-[H]{}illiard system with dynamic boundary condition and double obstacle potentials. , 53(4):2696–2721, 2015.
P. Colli and T. Fukao. Cahn-[H]{}illiard equation with dynamic boundary conditions and mass constraint on the boundary. , 429(2):1190–1213, 2015.
P. Colli and T. Fukao. Equation and dynamic boundary condition of [C]{}ahn-[H]{}illiard type with singular potentials. , 127:413–433, 2015.
P. Colli, G. Gilardi, and J. Sprekels. On the [C]{}ahn-[H]{}illiard equation with dynamic boundary conditions and a dominating boundary potential. , 419(2):972–994, 2014.
P. Colli, G. Gilardi, and J. Sprekels. A boundary control problem for the pure [C]{}ahn-[H]{}illiard equation with dynamic boundary conditions. , 4(4):311–325, 2015.
P. Colli, G. Gilardi, and J. Sprekels. A boundary control problem for the viscous [C]{}ahn-[H]{}illiard equation with dynamic boundary conditions. , 73(2):195–225, 2016.
P. Colli and L. Scarpa. From the viscous [C]{}ahn-[H]{}illiard equation to a regularized forward-backward parabolic equation. , 99(3-4):183–205, 2016.
P. Colli and J. Sprekels. Optimal control of an [A]{}llen-[C]{}ahn equation with singular potentials and dynamic boundary condition. , 53(1):213–234, 2015.
P. Colli and A. Visintin. On a class of doubly nonlinear evolution equations. , 15(5):737–756, 1990.
M. Efendiev and S. Zelik. Finite-dimensional attractors and exponential attractors for degenerate doubly nonlinear equations. , 32(13):1638–1668, 2009.
H. P. Fischer, P. Maass, and W. Dieterich. Novel surface modes in spinodal decomposition. , 79:893–896, Aug 1997.
C. G. Gal. On a class of degenerate parabolic equations with dynamic boundary conditions. , 253(1):126–166, 2012.
C. G. Gal. The role of surface diffusion in dynamic boundary conditions: [W]{}here do we stand? , 83(2):237–278, 2015.
C. G. Gal and M. Grasselli. The non-isothermal [A]{}llen-[C]{}ahn equation with dynamic boundary conditions. , 22(4):1009–1040, 2008.
G. Gilardi, A. Miranville, and G. Schimperna. On the [C]{}ahn-[H]{}illiard equation with irregular potentials and dynamic boundary conditions. , 8(3):881–912, 2009.
G. Gilardi, A. Miranville, and G. Schimperna. Long time behavior of the [C]{}ahn-[H]{}illiard equation with irregular potentials and dynamic boundary conditions. , 31(5):679–712, 2010.
M. E. Gurtin. Generalized [G]{}inzburg-[L]{}andau and [C]{}ahn-[H]{}illiard equations based on a microforce balance. , 92(3-4):178–192, 1996.
H. Kardestuncer and D. H. Norrie, editors. . McGraw-Hill Book Co., New York, 1987.
R. Kenzler, F. Eurich, P. Maass, B. Rinn, J. Schropp, E. Bohl, and W. Dieterich. Phase separation in confined geometries: [S]{}olving the [C]{}ahn-[H]{}illiard equation with generic boundary conditions. , 133(2):139 – 157, 2001.
S. Maier-Paape and T. Wanner. Spinodal decomposition for the [C]{}ahn-[H]{}illiard equation in higher dimensions. [I]{}. [P]{}robability and wavelength estimate. , 195(2):435–464, 1998.
S. Maier-Paape and T. Wanner. Spinodal decomposition for the [C]{}ahn-[H]{}illiard equation in higher dimensions: nonlinear dynamics. , 151(3):187–219, 2000.
A. Miranville and G. Schimperna. On a doubly nonlinear [C]{}ahn-[H]{}illiard-[G]{}urtin system. , 14(2):675–697, 2010.
A. Miranville and S. Zelik. Robust exponential attractors for [C]{}ahn-[H]{}illiard type equations with singular potentials. , 27(5):545–582, 2004.
A. Miranville and S. Zelik. Doubly nonlinear [C]{}ahn-[H]{}illiard-[G]{}urtin equations. , 38(2):315–360, 2009.
A. Novick-Cohen. On the viscous [C]{}ahn-[H]{}illiard equation. In [*Material instabilities in continuum mechanics ([E]{}dinburgh, 1985–1986)*]{}, Oxford Sci. Publ., pages 329–342. Oxford Univ. Press, New York, 1988.
G. Schimperna, A. Segatti, and U. Stefanelli. Well-posedness and long-time behavior for a class of doubly nonlinear equations. , 18(1):15–38, 2007.
J. Simon. Compact sets in the space [$L^p(0,T;B)$]{}. , 146:65–96, 1987.
[^1]: [**Acknowledgments.**]{} The author is very grateful to Pierluigi Colli for his expert support and fundamental advice. The author is also thankful for the warm hospitality and excellent working conditions at the Dipartimento di Matematica “F. Casorati”, Università di Pavia (Italy), where a part of this work was written.
ABSTRACT
We reformulate the method recently proposed for constructing quasitriangular Hopf algebras of the quantum-double type from $R$-matrices obeying the Yang-Baxter equations. The underlying algebraic structures of the method are elucidated, and an illustration of its capabilities is given. The latter produces an example of a new quasitriangular Hopf algebra. The corresponding universal $\cal
R$-matrix is presented as a formal power series.
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
E-mail: [email protected]
[**1**]{}. Following the approaches of refs. [@FRT] and [@Ma] we proposed in [@Vl] a recipe for constructing quantum doubles (quasitriangular Hopf algebras of the special type [@Dr; @Tj]) associated with invertible solutions of the quantum Yang-Baxter equations (QYBE). Let us briefly recall this procedure.
It is known [@FRT] that any invertible solution $R$ of QYBE $$R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12} \label{1}$$ naturally generates a bialgebra $\cal T$ with generators $\{1,\,t^i_j\}$ and relations $$R_{12}T_1T_2=T_2T_1R_{12},\ \ \ \ \Delta (T)=T\otimes T,
\ \ \ \ \varepsilon (T)={\bf 1} \label{2}$$ ($t^i_j$ form a matrix $T$, $\Delta$ is a coproduct and $\varepsilon$ a counit). We can now define an analogous bialgebra ${\cal U}=\{1,\,u^i_j\}$ by $$R_{12}U_1U_2=U_2U_1R_{12},\ \ \ \ \Delta (U)=U\otimes U,
\ \ \ \ \varepsilon (U)={\bf 1}, \label{3}$$ and introduce a pairing between these two, $$<U_1,T_2>=R_{12}\,, \label{4}$$ as a bilinear map $<\cdot,\cdot>:{\cal U}\otimes {\cal T}
\rightarrow {\cal K}$ into the underlying field ${\cal K}$. The pairing (\[4\]) proves to be consistent with the bialgebra structure [@Vl] but is, as a rule, degenerate. Removing the degeneracy by factoring out so-called null bi-ideals [@Ma] allows us to introduce antipodes by the relations $$<S(U_1),\,T_2>=<U_1,\,S^{-1}(T_2)>=R^{-1}_{12}\,, \label{5}$$ and then establish the quantum-double structure on ${\cal T}\otimes
{\cal U}$ using the original Drinfeld recipe [@Dr; @Tj] $$\alpha b=\sum \sum<S(\alpha_{(1)}),b_{(1)}><\alpha_{(3)},b_{(3)}>
b_{(2)}\alpha_{(2)}\,, \label{6}$$ where $$\Delta^2(\alpha)=\sum \alpha_{(1)}\otimes \alpha_{(2)}\otimes
\alpha_{(3)},\ \ \ \Delta^2(b)=\sum b_{(1)}\otimes b_{(2)}
\otimes b_{(3)}\,. \label{7}$$ In the case (\[2\])-(\[4\]) this recipe results in the well known formula $$R_{12}U_1T_2=T_2U_1R_{12}\,. \label{8}$$ However, it is not very well known that (\[8\]) can be interpreted [@Vl] as the quantum-double cross-multiplication condition as well.
In the present paper we develop the method [@Vl] along the following aspects. Firstly, we change the order of certain steps described above: a definition of the antipode will now precede the bracketing procedure. This will immediately produce the ($R$-generated) Hopf algebra, because Reshetikhin’s result [@Re] enables one to introduce invertible antipodes explicitly and so give up implicit definitions (\[5\]), where the invertibility of $S$ was not guaranteed. However, after the removal of degeneracy of $<\cdot,\cdot>$ by the above-mentioned factorization, these two ways lead us to the same quantum double.
Secondly, we now understand why the cross-multiplication relation (\[8\]) appears in its final form actually before (and independently of) any factorization. We show that ${\cal T}\otimes {\cal U}$ can be provided with the bialgebra (or the Hopf algebra) structure merely due to appropriate features of the pairing, though degenerate.
Therefore, in the present version of the method, the quotienting by null bi-ideals does not look so unpredictably dangerous as it does in [@Ma] and [@Vl]. Now it can at most trivialize the whole output. To show that sometimes it does not, we perform the construction of the quantum double for one of the $4\times4\ R$-matrices listed in [@Hi]. The resulting Hopf algebra is by no means trivial and appears to be quasitriangular. We assume the corresponding universal $\cal R$-matrix to be a formal power series and evaluate its terms up to the fourth order.
[**2**]{}. Here we are to explain how an antipode can be introduced [@Re] into the $R$-generated bialgebra ${\cal T}$. For generality, let us consider its inhomogeneous version [@Vl] (cf. [@Be; @SWW; @Lu]) with generators $\{1,t^i_j,E_p\}$ (we prefer to display all the indices): $$R^{ij}_{mn}\,t^m_p\,t^n_q=R^{mn}_{pq}\,t^j_n\,t^i_m\,,\ \ \
E_p\,t^j_q=R^{mn}_{pq}\,t^j_n\,E_m\,, \label{9}$$ $$\Delta(t^i_j)=t^i_k\otimes t^k_j\,,\ \ \varepsilon(t^i_j)=\delta^i_j\,,
\ \ \Delta(E_j)=E_i\otimes t^i_j+1\otimes E_j\,,\ \ \varepsilon(E_j)=0
\,. \label{10}$$ The $R$-matrix is a solution of QYBE (\[1\]): $$R^{ij}_{lm}R^{lk}_{pn}R^{mn}_{qr}=R^{jk}_{lm}R^{im}_{nr}R^{nl}_{pq}\,.
\label{11}$$
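Conventions like (\[1\]) and (\[11\]) are easy to get wrong in matrix form, so a quick numerical check may be useful. The sketch below verifies the matrix QYBE (\[1\]) for the standard $GL_q(2)$ $R$-matrix of [@FRT], used here purely as a stand-in input (the specific $4\times4$ matrix from [@Hi] treated later in the paper is not reproduced here):

```python
import numpy as np

q = 2.0  # generic nonzero value of the deformation parameter
# Stand-in QYBE solution: the standard GL_q(2) R-matrix (an assumption,
# not the R-matrix studied in the paper).
R = np.array([[q, 0.0, 0.0, 0.0],
              [0.0, 1.0, q - 1/q, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, q]])

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]   # flip operator on C^2 (x) C^2
R12 = np.kron(R, I2)          # R acting on factors 1,2 of C^2 (x) C^2 (x) C^2
R23 = np.kron(I2, R)
R13 = np.kron(I2, P) @ R12 @ np.kron(I2, P)  # conjugate by the 2<->3 flip

# Matrix form (1) of the quantum Yang-Baxter equation:
assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
```

Here $R_{13}$ is obtained by conjugating $R_{12}$ with the permutation of the second and third tensor factors, which avoids writing out the index form (\[11\]) by hand.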
Now let us extend this bialgebra by the inverse elements $\overline{t}{}^i_j$ (overlining a quantity will always mean its inverse): $$t^i_k\,\overline{t}{}^k_j=\overline{t}{}^i_k\,t^k_j=\delta ^i_j\,.
\label{12}$$ As a consequence, the following relations are to be added to (\[9\]), (\[10\]): $$R^{ij}_{mn}\,\overline{t}{}^n_p\,\overline{t}{}^m_q
=R^{mn}_{qp}\,\overline{t}{}^i_m\,\overline{t}{}^j_n\,,\ \ \
R^{im}_{nq}\,\overline{t}{}^j_m\,t^n_p
=R^{mj}_{pn}\,t^i_m\,\overline{t}{}^n_q\,,\ \ \
\overline{t}{}^j_q\,E_p=R^{mj}_{pn}\,E_m\,\overline{t}{}^n_q\,,
\label{13}$$ $$\Delta(\overline{t}{}^i_j)=\overline{t}{}^k_j\otimes
\overline{t}{}^i_k\,,
\ \ \ \ \ \ \ \varepsilon (\overline{t}{}^i_j)=\delta ^i_j\,.
\label{14}$$
Further, assume that $R$ admits not only an inverse matrix $\overline{R}$, $$R^{ij}_{mn}\overline{R}{}^{mn}_{pq}=\overline{R}{}^{ij}_{mn}R^{mn}_{pq}
=\delta ^i_p\,\delta ^j_q\,, \label{15}$$ but also the matrices $\widetilde{R}$ and $\widetilde{\overline{R}}$ (‘twisted inverses’ for $R$ and $\overline{R}$, respectively): $$R^{mj}_{pn}\widetilde{R}^{in}_{mq}=\widetilde{R}^{mj}_{pn}R^{in}_{mq}
=\delta ^i_p\,\delta ^j_q\,,\ \ \
\overline{R}{}^{mj}_{pn}\widetilde{\overline{R}}{}^{in}_{mq}
=\widetilde{\overline{R}}{}^{mj}_{pn}\overline{R}{}^{in}_{mq}
=\delta ^i_p\,\delta ^j_q\,. \label{16}$$ Let us define the tensors $$\Omega ^i_j\equiv \widetilde{R}^{mi}_{jm}\,,\ \ \ \ \ \
\overline{\Omega}{}^i_j\equiv \widetilde{\overline{R}}{}^{mi}_{jm}\,,
\label{17}$$ which are inverse to each other, $$\overline{\Omega }{}^i_k\,\Omega^k_j=\delta ^i_j\,. \label{18}$$ This can be easily seen from $$\widetilde{R}^{jk}_{ls}\widetilde{\overline{R}}{}^{il}_{tq}=
\overline{R}{}^{pr}_{ts}R^{ik}_{lm}\widetilde{\overline{R}}{}^{lj}_{pn}
\widetilde{R}^{nm}_{qr}\,,$$ which, in turn, is a direct consequence of QYBE (\[11\]).
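These constructions can also be checked numerically. The sketch below (again taking the $GL_q(2)$ $R$-matrix as a stand-in input, an assumption rather than the matrix used later in the paper) computes the twisted inverses of (\[16\]), builds $\Omega$ and $\overline{\Omega}$ as in (\[17\]), and confirms (\[18\]):

```python
import numpy as np

q = 2.0
# Stand-in QYBE solution: the standard GL_q(2) R-matrix.
R = np.array([[q, 0, 0, 0],
              [0, 1, q - 1/q, 0],
              [0, 0, 1, 0],
              [0, 0, 0, q]], dtype=float)

def twisted_inverse(M):
    """Return Mt solving M^{mj}_{pn} Mt^{in}_{mq} = delta^i_p delta^j_q,
    i.e. the inverse taken after transposing in the first tensor factor."""
    T = M.reshape(2, 2, 2, 2)                    # T[i,j,p,q] = M^{ij}_{pq}
    A = T.transpose(2, 1, 0, 3).reshape(4, 4)    # rows (p,j), cols (m,n)
    B = np.linalg.inv(A).reshape(2, 2, 2, 2)
    return B.transpose(2, 1, 0, 3).reshape(4, 4)

Rt = twisted_inverse(R)                   # \widetilde{R} of (16)
Rbt = twisted_inverse(np.linalg.inv(R))   # \widetilde{\overline{R}} of (16)

# Omega^i_j = \widetilde{R}^{mi}_{jm}, and likewise for \overline{\Omega}, cf. (17):
Omega = np.einsum('mijm->ij', Rt.reshape(2, 2, 2, 2))
Obar = np.einsum('mijm->ij', Rbt.reshape(2, 2, 2, 2))

assert np.allclose(Obar @ Omega, np.eye(2))   # relation (18)
```

For this particular $R$ one finds $\Omega=\mathrm{diag}(q^{-1},q^{-3})$ and $\overline\Omega=\mathrm{diag}(q,q^{3})$, consistent with (\[18\]).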
In terms of $\Omega $ and $\overline{\Omega}$ one can define [@Re] an antipode $$S(t^i_j)=\overline{t}{}^i_j\,,\ \ \ S(\overline{t}{}^i_j)=
\Omega^i_mt^m_n\,
\overline{\Omega}{}^n_j\,,\ \ \ S(E_i)=-E_j\,\overline{t}{}^j_i
\label{19}$$ and its inverse $$\overline{S}(t^i_j)=\overline{\Omega}{}^i_m\,\overline{t}{}^m_n\,
\Omega^n_j\,,\ \ \ \overline{S}(\overline{t}{}^i_j)=t^i_j\,,\ \ \
\overline{S}(E_i)=-\overline{S}(t^j_i)E_j\,. \label{20}$$ To confirm the correctness of this definition one can use the following relations: $$\Omega^n_m\,t^m_j\,\overline{t}{}^i_n=\Omega^i_j\,,\ \ \ \ \
\overline{\Omega}{}^n_m\,\overline{t}{}^m_j\,t^i_n
=\overline{\Omega}{}^i_j\,. \label{21}$$ For example, $$S(t^i_k\,\overline{t}{}^k_j)=S(\overline{t}{}^k_j)S(t^i_k)
=\Omega^k_m\,
t^m_n\,\overline{\Omega}{}^n_j\,\overline{t}{}^i_k
=\overline{\Omega}{}^n_j\,\Omega^i_n=\delta ^i_j\,.$$ In its turn, (\[21\]) is deduced from $$\widetilde{R}^{mi}_{jn}\,\overline{t}{}^n_q\,t^p_m
=\widetilde{R}^{pn}_{mq}\,t^m_j\,\overline{t}{}^i_n\,, \label{22}$$ which is equivalent to the second equality in (\[13\]).
Thus, we have completed the construction of the $R$-generated Hopf algebra ${\cal T}$.
Introduce now a similar Hopf algebra, ${\cal U}$, whose generators $\{1,u^i_j,\overline{u}{}^m_n,F^q\}$ obey the relations $$u^i_k\,\overline{u}{}^k_j=\overline{u}{}^i_k\,u^k_j=\delta ^i_j\,,
\label{23}$$ $$R^{ij}_{mn}\,u^m_p\,u^n_q=R^{mn}_{pq}\,u^j_n\,u^i_m\,,\ \ \
R^{ij}_{mn}\,\overline{u}{}^n_p\,\overline{u}{}^m_q
=R^{mn}_{qp}\,\overline{u}{}^i_m\,\overline{u}{}^j_n\,,\ \ \
R^{im}_{nq}\,\overline{u}{}^j_m\,u^n_p
=R^{mj}_{pn}\,u^i_m\,\overline{u}{}^n_q\,, \label{24}$$ $$F^i\,u^j_p=R^{ji}_{mn}\,u^m_p\,F^n\,,\ \ \ \
F^j\,\overline{u}{}^i_p=\overline{R}{}^{mj}_{pn}\,\overline{u}{}^i_m
\,F^n\,, \label{25}$$ $$\Delta(u^i_j)=u^i_k\otimes u^k_j\,,\ \ \
\Delta(\overline{u}{}^i_j)=\overline{u}{}^k_j\otimes\overline{u}{}^i_k
\,,\ \ \ \Delta(F^i)=F^i\otimes 1+u^i_j\otimes F^j\,, \label{26}$$ $$\varepsilon (u^i_j)=\varepsilon (\overline{u}{}^i_j)=\delta ^i_j\,,
\ \ \ \ \ \ \varepsilon (F^i)=0\,, \label{27}$$ $$S(u^i_j)=\overline{u}{}^i_j\,,\ \ \
S(\overline{u}{}^i_j)=\Omega^i_mu^m_n \,\overline{\Omega}{}^n_j\,,\ \ \
S(F^i)=-\overline{u}{}^i_jF^j\,, \label{28}$$ $$\overline{S}(u^i_j)=\overline{\Omega}{}^i_m\overline{u}{}^m_n
\,\Omega^n_j\,,\ \ \ \overline{S}(\overline{u}{}^i_j)=u^i_j\,,\ \ \
\overline{S}(F^i)=-F^j\overline{S}(u^i_j)\,. \label{29}$$
Now we can define a pairing [@Vl] $<\cdot,\cdot>:{\cal U}\otimes
{\cal T}\rightarrow {\cal K}$ as follows (all nonzero brackets of the generators are listed): $$<u^i_j,t^p_q\,>=<\overline{u}{}^i_j,\overline{t}{}^p_q\,>=R^{ip}_{jq}
\,,\ \ <u^i_j,\overline{t}{}^p_q\,>=\widetilde{R}^{ip}_{jq}\,,\ \
<\overline{u}{}^i_j,t^p_q\,>=\overline{R}{}^{ip}_{jq}, \label{30}$$ $$<u^i_j,1>=<\overline{u}{}^i_j,1>=<1,t^i_j>=
<1,\overline{t}{}^i_j>=<F^i,E_j>=\delta^i_j\,. \label{31}$$ This pairing is of the antidual type, i.e. the conditions $$<\alpha\beta,a>=<\alpha\otimes\beta,\Delta(a)>\,,\ \ \
<\Delta(\alpha),a\otimes b>=<\alpha,ba>\,,$$ $$\varepsilon(a)=<1,a>\,,\ \ \ \ \ \varepsilon(\alpha)=<\alpha,1>\,,
\label{32}$$ $$<S(\alpha),a>=<\alpha,\overline{S}(a)>\,,\ \ \
<\overline{S}(\alpha),a>=<\alpha,S(a)>$$ are fulfilled. The proof is straightforward [@Vl] (cf. [@Ma]).
Note that the relation (\[5\]) is recovered as well: $$<S(u^i_j),t^p_q\,>=<\overline{u}{}^i_j,t^p_q\,>
=\overline{R}{}^{ip}_{jq}\equiv (R^{-1})^{ip}_{jq}\,.$$ However, in the present approach, unlike [@Ma; @Vl], an antipode is defined in an explicit way and is invertible by construction.
[**3**]{}. Now we are in a position to transform ${\cal T}\otimes\cal U$ into a quantum double. To achieve this, one has to remove the degeneracy of the pairing (\[30\]), (\[31\]). This can be done [@Ma] by factoring out null bi-ideals in ${\cal T}$ and ${\cal U}$ (the procedure is of course consistent with their Hopf algebra structure as well). After the factorization, ${\cal T}$ and ${\cal U}$ become the antidual pair of Hopf algebras, so the recipe (\[6\]) can be applied to produce the cross-multiplication rules peculiar to the quantum double. They are: $$R^{ij}_{mn}\,u^m_p\,t^n_q=R^{mn}_{pq}\,t^j_n\,u^i_m \,,\ \
R^{ij}_{mn}\,\overline{t}{}^n_q\,\overline{u}{}^m_p
=R^{mn}_{pq}\,\overline{u}{}^i_m\,\overline{t}{}^j_n\,,\ \
R^{im}_{nq}\,\overline{t}{}^j_m\,u^n_p=
R^{mj}_{pn}\,u^i_m\,\overline{t}{}^n_q\,,$$ $$\overline{R}{}^{mj}_{pn}\,\overline{u}{}^i_m\,t^n_q
=\overline{R}{}^{in}_{mq}\,t^j_n\,\overline{u}{}^m_p \,,\ \
u^i_p\,E_q=R^{mn}_{pq}\,E_n\,u^i_m\,,\ \
\overline{u}{}^i_p\,E_q=\overline{R}{}^{in}_{mq}\,E_n\,
\overline{u}{}^m_p\,,
\label{33}$$ $$t^i_p\,F^j=R^{ji}_{mn}\,F^m\,t^n_p\,,\ \
F^i\overline{t}{}^j_q=R^{in}_{mq}\overline{t}{}^j_nF^m\,,\ \
E_jF^i-F^iE_j=t^i_j-u^i_j\,.$$
An interesting fact here is that the role of the factorization procedure turns out to be rather limited: it only ensures antiduality (a non-degenerate pairing) but does not affect the explicit form of the relations (\[33\]). Indeed, the latter are determined entirely by the recipe (\[6\]) prior to any factorization. So the cross-multiplication relations of the quantum double take their correct form even if there is no quantum double!
An explanation of this puzzle is the following: the cross-multiplication structure (\[6\]) on ${\cal T}\otimes{\cal U}$ is not characteristic of the quantum double only. It occurs quite naturally if certain conditions (weaker than the quantum-double ones) are satisfied. To show this is the aim of the following two Propositions.
[**Proposition 1**]{}. Let ${\cal A}$ and ${\cal B}$ be bialgebras and let there exist two pairings,\
$<\cdot,\cdot>:{\cal B}\otimes {\cal A}
\rightarrow {\cal K}$ and $<\!<\cdot,\cdot>\!>:{\cal B}\otimes {\cal A}
\rightarrow {\cal K}$, with the antidual-type properties $$<\alpha\beta,a>=<\alpha\otimes\beta,\Delta(a)>\,,\ \ \
<\Delta(\alpha),a\otimes b>=<\alpha,ba>\,,$$ $$<\!<\alpha\beta,a>\!>=<\!<\beta\otimes\alpha,\Delta(a)>\!>\,,\ \ \
<\!<\Delta(\alpha),a\otimes b>\!>=<\!<\alpha,ab>\!>\,, \label{34}$$ $$<\alpha ,1>=<\!<\alpha ,1>\!>=\varepsilon (\alpha )\,,\ \ \
<\tilde{1},a>=<\!<\tilde{1},a>\!>=\varepsilon (a)\,,$$ and an additional relation $$<\!<_1<_2\Delta(\alpha ),\Delta(a)>\!>_1>_2=
<_1<\!<_2\Delta(\alpha ),\Delta(a)>_1>\!>_2=
\varepsilon (\alpha )\varepsilon (a)\,. \label{35}$$ Then the cross-multiplication rule (cf. (\[6\])) $$\alpha b=\sum \sum<\!<\alpha_{(1)},b_{(1)}>\!><\alpha_{(3)},b_{(3)}>
b_{(2)}\alpha_{(2)} \label{36}$$ establishes the bialgebra structure on ${\cal A}\otimes {\cal B}$.
In (\[34\]) $\tilde{1}$ is the unit of $\cal B$, and $<\!<_1<_2$ in (\[35\]) indicates that the $<\!<\cdot,\cdot>\!>$-operation acts on the left factors in tensor products, whereas $<\cdot,\cdot>$ acts on the right ones.
[**Proof**]{}. Fix the bases $\{e_i\}$ in $\cal A$ and $\{e^j\}$ in $\cal
B$. Denoting the structure constants and the pairing tensors (which are in general degenerate) as follows, $$e_i\,e_j=c^k_{ij}\,e_k,\ \ \Delta(e_i)=f_i^{jk}(e_j\otimes e_k),\ \
\varepsilon (e_i)=\varepsilon _i\,,\ \ 1=E^ie_i\,,$$ $$e^ie^j=\tilde{f}{}_k^{ij}e^k\,,\ \ \Delta(e^i)=\tilde{c}{}^i_{jk}
(e^k\otimes e^j)\,,\ \ \varepsilon (e^i)=\tilde{E}^i\,,\ \
\tilde{1}=\tilde{\varepsilon}_i\,e^i\,, \label{37}$$ $$<e^i,e_j\,>=\eta ^i_j\,,\ \ \ \ \
<\!<e^i,e_j\,>\!>=\chi ^i_j\,,$$ we may list the relations between them which are caused by the bialgebra structure of $\cal A$ and $\cal B$, $$c^k_{ij}\,c^m_{kn}=c^m_{ik}\,c^k_{jn}\,,\ \
c^k_{ij}E^j=c^k_{ji}E^j=\delta ^k_i\,,\ \
c^k_{ij}\,\varepsilon _k=\varepsilon _i\,\varepsilon _j\,,$$ $$f_n^{ij}f_m^{nk}=f_m^{in}f_n^{jk}\,,\ \
f_i^{jk}\varepsilon _k=f_i^{kj}\varepsilon _k=\delta ^j_i\,,\ \
f_i^{jk}E^i=E^jE^k\,, \label{38}$$ $$c^k_{ij}\,f_k^{rs}=f_i^{mn}f_j^{pq}\,c^r_{mp}\,c^s_{nq}\,,\ \
E^i\varepsilon _i=1$$ (the same for quantities with a tilde), by the properties (\[34\]), $$\eta ^m_k\tilde{f}{}_m^{ij}=\eta ^i_m\eta ^j_nf_k^{mn}\,,\
\eta ^i_mc^m_{jk}=\eta ^m_j\eta ^n_k\tilde{c}{}^i_{mn}\,,\
\chi ^m_k\tilde{f}{}_m^{ij}=\chi ^i_m\chi ^j_nf_k^{nm}\,,\
\chi ^i_mc^m_{jk}=\chi ^m_j\chi ^n_k\tilde{c}{}^i_{nm}\,,\label{39}$$ $$\eta ^i_jE^j=\chi ^i_jE^j=\tilde{E}^i\,,\ \ \ \ \
\eta ^i_j\,\tilde{\varepsilon}_i=\chi ^i_j\,\tilde{\varepsilon}_i=
\varepsilon _j\,, \label{40}$$ and by the relation (\[35\]), $$\tilde{c}{}^i_{mn}\eta ^m_q\chi ^n_pf_j^{pq}=
\tilde{c}{}^i_{mn}\eta ^n_p\chi ^m_qf_j^{pq}=
\tilde{E}^i\varepsilon _j\,. \label{41}$$
Now (\[36\]) reads $$e^ie_j={\cal P}_{jq}^{ip}\,e_pe^q\,, \ \ \ \ \ \ \ \ {\cal P}_{jq}^{ip}
\equiv \eta ^m_n\,\tilde{c}{}^t_{mq}\,\tilde{c}{}^i_{ts}\,\chi_r^s
\,f_j^{rl}f_l^{pn}\,. \label{42}$$ To be convinced that this makes ${\cal A}\otimes {\cal B}$ a bialgebra, we should verify that the transition from, say, $e^ie_je_k$ to $e_te^n$ can be equally well performed in two different ways, which requires $${\cal P}^{ip}_{jq}\,{\cal P}^{qm}_{kn}c^t_{pm}=c^p_{jk}\,{\cal
P}^{it}_{pn} \,. \label{43}$$ Analogously, $e^ie^je_k\rightarrow e_te^n$ implies $${\cal
P}^{jp}_{kq}\,{\cal P}^{it}_{pm}\,\tilde{f}_n^{mq}=
\tilde{f}_p^{ij}\,{\cal P}^{pt}_{kn}\,, \label{44}$$ and, at last, $\Delta(e^ie_j)=\Delta(e^i)\Delta(e_j)$ means $$\tilde{c}{}^i_{mn}\,f_j^{pq}\,{\cal P}^{nk}_{pl}\,{\cal P}^{mr}_{qs}=
{\cal P}^{ip}_{jq}\,\tilde{c}{}^q_{sl}\,f_p^{kr}\,. \label{45}$$ All the conditions (\[43\])-(\[45\]) are verified by direct, though tedious, calculations with repeated use of (\[38\])–(\[41\]). For example, when proving (\[43\]) or (\[44\]), the $cf=ffcc$ relation from (\[38\]) is applied twice and $cc=cc$ (or $ff=ff$) many times, whereas in the case of (\[45\]) the key property is (\[41\]) accompanied by numerous applications of $cc=cc$ and $ff=ff$.
A minor problem is caused by checking the conditions $$E^j{\cal P}^{ip}_{jq}=E^p\delta ^i_q\,,\ \ \
\tilde{\varepsilon}_i{\cal P}^{ip}_{jq}=\tilde{\varepsilon}_q
\,\delta^p_j\,,\ \ \ \varepsilon _p\,\tilde{E}^q\,{\cal P}^{ip}_{jq}=
\tilde{E}^i\varepsilon _j\,, \label{46}$$ which reflect the properties of the unit and counit. Proposition 1 is proved.
It is worth noting an alternative form of (\[42\]), $${\cal E}^{mj}_{in}e^i e_j={\cal F}^{mj}_{in}e_j e^i\,, \label{47}$$ where $${\cal E}^{mj}_{in}=\tilde{c}{}_{ip}^{m}\,\eta ^p_q\,f_n^{qj}\,,\ \ \ \
{\cal F}^{mj}_{in}=\tilde{c}{}_{pi}^{m}\,\eta
^p_q\,f_n^{jq}\,.\label{48}$$ Formula (\[47\]) is related to (\[42\]) through $${\cal
P}_{jq}^{ip}=\overline{\cal E}{}_{mj}^{in}\,{\cal F}_{qn}^{mp}\,,
\label{49}$$ with $$\overline{\cal E}{}^{mj}_{in}=\tilde{c}{}_{ip}^{m}\,\chi^p_q\,
f_n^{qj}\,,\ \ \overline{\cal E}{}_{jn}^{mi}{\cal E}_{ri}^{js}={\cal
E}_{jn}^{mi}\,\overline{\cal E}{}_{ri}^{js}=\delta ^m_r\delta ^s_n\,,
\label{50}$$ and, for completeness, $$\overline{\cal F}{}^{mj}_{in}=\tilde{c}{}_{pi}^{m}\,\chi^p_q\,
f_n^{jq}\,,\ \ \overline{\cal F}{}_{nj}^{im}{\cal F}_{ir}^{sj}={\cal
F}_{nj}^{im}\,\overline{\cal F}{}_{ir}^{sj}=\delta ^m_r\delta ^s_n\,.
\label{51}$$
The principal goal of the proposition just proved was to formulate ‘minimal’ requirements (\[35\]) that still suffice for the cross-multiplication recipe (\[36\]) to be fruitful. The properties of the second pairing, $<\!<\cdot,\cdot>\!>$, as well as (\[35\]) itself, are motivated by the anticipation of an antipode. The following proposition states this explicitly.
[**Proposition 2**]{}. Let $\cal A$ and $\cal B$ be Hopf algebras and let there exist a pairing $<\cdot,\cdot>:{\cal B}\otimes {\cal A}
\rightarrow {\cal K}$ with the properties (\[34\]) and, in addition, $$<S(\alpha),a>=<\alpha,\overline{S}(a)>\,,\ \ \
<\overline{S}(\alpha),a>=<\alpha,S(a)>\,. \label{52}$$ Then the rule (\[6\]) makes ${\cal A}\otimes {\cal B}$ a Hopf algebra.
[**Proof**]{}. Using the notation $$S(e_i)=\xi ^j_i e_j\,,\ \ \overline{S}(e_i)=\sigma ^j_i e_j\,,\ \
S(e^j)=\tilde{\sigma}^j_i e^i\,,\ \ \overline{S}(e^j)=
\tilde{\xi}^j_i e^i\,, \label{53}$$ we write down the Hopf-algebra properties of $\cal A$ and $\cal B$ as $$\sigma ^k_i \xi ^j_k=\xi ^k_i \sigma ^j_k=\delta ^j_i\,,\ \
E^j\xi ^i_j=E^j\sigma ^i_j=E^i\,,\ \
\varepsilon _i \xi ^i_j=\varepsilon _i\sigma ^i_j=\varepsilon _j\,,$$ $$c^k_{ij}\,\xi ^m_k=c^m_{pq}\,\xi ^q_i\xi ^p_j\,,\
c^k_{ij}\,\sigma ^m_k=c^m_{pq}\,\sigma ^q_i\sigma ^p_j\,,\
f_k^{ij}\xi ^k_m=f_m^{pq}\xi ^i_q\,\xi ^j_p\,,\
f_k^{ij}\sigma ^k_m=f_m^{pq}\sigma ^i_q\,\sigma ^j_p\,, \label{54}$$ $$c^j_{nr}\xi ^r_s f_i^{ns}= c^j_{rn}\xi ^r_s f_i^{sn}=
c^j_{nr}\sigma ^r_s f_i^{sn}=c^j_{rn}\sigma^r_sf_i^{ns}=
E^j\varepsilon _i\,,$$ (the same for quantities with a tilde), and the conditions (\[52\]) as $$\eta ^k_i\,\tilde{\sigma}^j_k=\eta ^j_k\,\sigma ^k_i\,,\ \ \ \ \ \
\eta ^k_i\tilde{\xi}^j_k=\eta ^j_k\,\xi^k_i\,. \label{55}$$ The bialgebra part of the proof is already done in the Proposition 1 because of the following identification: $$<\!<\alpha ,a>\!>\equiv <S(\alpha),a>\,, \ \ {\rm i.e.}\ \
\chi ^j_i=\eta ^j_k\sigma ^k_i=\eta ^k_i\tilde{\sigma}^j_k\,.$$ The conditions (\[35\]) are readily checked, $$<\!<_1<_2\Delta(\alpha),\Delta(a)>\!>_1>_2\equiv <(S\otimes id)\circ
\Delta(\alpha),\Delta(a)>$$ $$=<m\circ(S\otimes id)\circ\Delta(\alpha),a>=\varepsilon (\alpha)
<\tilde{1},a>=\varepsilon (\alpha)\,\varepsilon(a)\,.$$ So it remains to prove that $S(e^ie_j)=S(e_j)S(e^i)$, i.e. $${\cal P}_{jq}^{ip}\,\tilde{\sigma}^q_r\,\xi ^s_p\,{\cal P}_{sm}^{rk}
=\xi ^k_j\,\tilde{\sigma}^i_m\,. \label{56}$$ This can be done by applying the relations from the second line of (\[54\]) four times and then (\[41\]) twice.
One easily observes that our bialgebras (Hopf algebras) $\cal T$ and $\cal U$ in Sect.2 fit the above Propositions. This explains the appearance of the cross-multiplication relations (\[8\]),(\[33\]) prior to factorization. The role of the latter is to produce orthonormalized bases, $$<e^i,e_j>\equiv \eta ^i_j=\delta ^i_j\,,$$ which enables one to rewrite (\[47\]) in the form of the quasicocommutativity condition [@Dr] $${\cal R}\Delta(x)=\Delta'(x){\cal R}\,,\ \ \ \ \Delta'\equiv P\circ
\Delta\,,\ \ \ P(a\otimes b)=b\otimes a \label{57}$$ with the universal $\cal R$-matrix $${\cal R}=e_i\otimes e^i\,. \label{58}$$
[**4**]{}. The method described in the present paper creates quantum doubles out of arbitrary invertible Yang-Baxter $R$-matrices taken as an input. However, the output (a quasitriangular Hopf algebra) might sometimes turn out to be almost trivial if the factorization involved were ‘rough’ enough to destroy interesting features of the original bialgebras. Fortunately, this does not necessarily take place. In [@Vl] (cf. [@Ma]) it is shown how $sl_q(2)$ is recovered by this method. Another illustration is given below.
Let us take as an input the $R$-matrix [@EOW; @Hl; @Hi] $$R=\left(
\begin{array}{cccc}1&q&-q&q^2\\0&1&0&q\\0&0&1&-q\\0&0&0&1
\end{array} \right) \label{59}$$ and consider the homogeneous case of the $R$-generated algebras (without $E$- and $F$-generators), assuming the notation $$T=\left( \begin{array}{cc} a&b\\c&d \end{array} \right)\,,\ \ \ \ \
U=\left( \begin{array}{cc} w&x\\y&z \end{array} \right)\,.$$ To remove the degeneracy of the pairing (\[4\]), we should require $$c=y=0\,,\ \ \ ad=da=wz=zw=1\,. \label{60}$$ The procedure of Sect.2 results in the Hopf algebra with generators $\{1,a,\overline{a},b,w,\overline{w},x\}$ whose multiplicative relations are $$[a,b]=q(a^2-1)\,,\ \ [w,x]=q(w^2-1)\,,\ \
[a,x]=qa(w-\overline{w})\,,$$ $$[w,b]=q(a-\overline{a})w\,,\ \ [b,x]=
q(a+\overline{a})x-q(w+\overline{w})b\,,\ \ aw=wa\,, \label{61}$$ and the corresponding ones for inverse generators. We see that $q$ may be absorbed into $b$ and $x$ (so we actually use the $R$-matrix (\[59\]) with $q=1$ [@DMMZ]). Denoting also $$a=e^g\,,\ \ \ \ w=e^h\,,\ \ \ \ x=-v\,, \label{62}$$ we eventually come to $$T=\left( \begin{array}{cc} e^g&b\\0&e^{-g} \end{array} \right)\,,\ \
\ \ \ U=\left( \begin{array}{cc} e^h&-v\\0&e^{-h} \end{array}
\right)\,.$$ The elements of these matrices form the Hopf algebra $$[g,b]=[h,b]=e^g-e^{-g}\,,\ \ \ [g,v]=[h,v]=e^{-h}-e^h\,,$$ $$[b,v]=(e^g+e^{-g})v+(e^h+e^{-h})b\,,\ \ [g,h]=0\,,$$ $$\Delta(b)=e^g\otimes b+b\otimes e^{-g}\,,\ \ \
\Delta(v)=e^h\otimes v+v\otimes e^{-h}\,, \label{63}$$ $$\Delta(g)=g\otimes 1+1\otimes g\,,\
\Delta(h)=h\otimes 1+1\otimes h\,,\ S^{\pm1}(g)=-g\,,\
S^{\pm1}(h)=-h\,,$$ $$S^{\pm1}(b)=-b\pm e^g\mp e^{-g}\,,\ \ \
S^{\pm1}(v)=-v\mp e^h\pm e^{-h}\,.$$ The pairing relations are the following:
$$<1,1>=<h,b>=<v,g>=1\,,\ \ \ <v,b>=-1\,,$$ $$<1,b>=<1,g>=<h,1>=<v,1>=<h,g>=0\,. \label{64}$$
By construction, the Hopf algebra (\[63\]) has to be a quantum double. In particular, it should possess a universal $\cal R$-matrix. Assuming an exponential Ansatz, we can write down several terms of its formal power expansion in $g$ and $h$: $${\cal R}={\rm exp}\{g\otimes v+b\otimes h-\frac{1}{6}(g\otimes hvh
+gbg\otimes h+g^2\otimes (hv+vh)+(gb+bg)\otimes h^2)+\ldots \}\,,
\label{65}$$ where discarded terms are of the fifth order in $g$ and $h$. To check (\[57\]) and the quasitriangularity conditions $$(\Delta\otimes id){\cal R}={\cal R}_{13}{\cal R}_{23}\,,\ \ \
(id\otimes \Delta){\cal R}={\cal R}_{13}{\cal R}_{12} \label{66}$$ for the $\cal R$-matrix (\[65\]), the program FORM [@Ve] has been used in an essential way.
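Although the full quasitriangularity check requires symbolic algebra, the Yang-Baxter property of the input $R$-matrix (\[59\]) itself is easy to verify numerically. The following sketch (our own illustration, not part of the original computation) checks the matrix form $R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12}$ of the Yang-Baxter equation on $({\mathbb{C}}^2)^{\otimes 3}$ for a generic numeric value of $q$:

```python
import numpy as np

q = 0.37  # generic value; the identity below holds for every q
R = np.array([[1, q, -q, q**2],
              [0, 1,  0, q   ],
              [0, 0,  1, -q  ],
              [0, 0,  0, 1   ]])

I2 = np.eye(2)
R12 = np.kron(R, I2)                         # R acting on factors 1,2
R23 = np.kron(I2, R)                         # R acting on factors 2,3
P23 = np.kron(I2, np.eye(4)[[0, 2, 1, 3]])   # swap of tensor factors 2,3
R13 = P23 @ R12 @ P23                        # R acting on factors 1,3

assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)   # Yang-Baxter equation
assert abs(np.linalg.det(R) - 1) < 1e-12               # R is invertible
```

Being upper triangular with unit diagonal, $R$ is invertible for every $q$, as required by the construction.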
A detailed study of this and other $R$-generated quasitriangular Hopf algebras is a subject of further investigations.
I am grateful to L.Avdeev, A.Isaev and P.Pyatov for stimulating discussions.
[99]{} L.D.Faddeev, N.Yu.Reshetikhin, L.A.Takhtajan: Algebra i Analiz 1 vol.1 (1989) 178 (in Russian); English translation: Leningrad Math.J. 1 (1990) 193

S.Majid: Int.J.Mod.Phys.A 5 (1990) 1

A.A.Vladimirov: JINR preprint E2-92-506, Dubna 1992

V.G.Drinfeld: Quantum groups, in Proc. ICM (Berkeley-86) vol.1, p.798, Providence: Amer.Math.Soc. 1987

T.Tjin: Int.J.Mod.Phys.A 7 (1992) 6175

N.Yu.Reshetikhin: Algebra i Analiz 1 vol.2 (1989) 169 (in Russian); English translation: Leningrad Math.J. 1 (1990)

J.Hietarinta: Phys.Lett.A 165 (1992) 245

D.Bernard: Phys.Lett.B 260 (1991) 389

M.Schlieker, W.Weich, R.Weixler: Z.Phys.C – Particles and Fields 53 (1992) 79

M.Lüdde: Bonn Univ. preprint BONN-HE-92-9, Bonn 1992

H.Ewen, O.Ogievetsky, J.Wess: Preprint MPI-PAE/PTh 18/91, München 1991

L.Hlavaty: J.Phys.A – Math.Gen. 25 (1992) L63

E.E.Demidov, Yu.I.Manin, E.E.Mukhin, D.V.Zhdanovich: Progr.Theor.Phys. Suppl. 102 (1990) 203

J.A.M.Vermaseren: Symbolic manipulation with FORM, Amsterdam 1991
---
abstract: 'The problem of computing the class expansion of some symmetric functions evaluated in Jucys-Murphy elements appears in different contexts, for instance in the computation of matrix integrals. Recently, M. Lassalle gave a unified algebraic method to obtain some induction relations on the coefficients in this kind of expansion. In this paper, we give a simple, purely combinatorial proof of his result. Besides, using the same type of argument, we obtain new, simpler formulas. We also prove an analogous formula in the Hecke algebra of $(S_{2n},H_n)$ and use it to prove a conjecture of S. Matsumoto on the subleading term of the orthogonal Weingarten function. Finally, we propose a conjecture for a continuous interpolation between both problems.'
address: 'LaBRI, Université Bordeaux 1, 351 cours de la Libération, 33 400 Talence, France'
author:
- Valentin Féray
bibliography:
- '../courant.bib'
title: 'On complete functions in Jucys-Murphy elements'
---
Introduction
============
Background
----------
The Jucys-Murphy elements $J_{i}$ lie in the symmetric group algebra ${\mathbb{Z}}[S_{n}]$. Despite their beautiful properties, their definition is very elementary: $$J_i = \sum_{j<i} (j\ i)$$ where $(j\ i)$ is the transposition in $S_n$ exchanging $i$ and $j$. They have been introduced separately by A. Jucys [@Jucys1966; @Jucys1974] and G. Murphy [@Murphy1981] and have since played quite an important role in representation theory. Indeed, they act diagonally on the Young basis of any irreducible representation $V_{\lambda}$: the eigenvalue of $J_{i}$ on an element $e_{T}$ of this basis ($T$ is a standard tableau of shape $\lambda$) is simply given by the content (*i.e.* the difference between the column-index and the row-index) of the box of $T$ containing $i$.
In fact, representation theory of symmetric groups $S_n$ can be constructed entirely using this property (see [@OkVe1996]). We also refer to papers of Biane [@BianeAsymptoticsCharacters] and Okounkov [@Okounkov2000] for nice applications of Jucys-Murphy elements to asymptotic representation theory.
A fundamental property, already observed by Jucys and Murphy, is that elementary symmetric functions evaluated in the $J_{i}$’s have a very nice expression (this evaluation is well-defined because Jucys-Murphy elements commute with each other). More precisely, if ${\kappa}(\sigma)$ denotes the number of cycles of a permutation $\sigma \in S_n$, then $$\label{EqElJM}
e_{k}(J_{1},\dots,J_{n}) = \sum_{\sigma \in S_{n} \atop {\kappa}(\sigma)=n-k} \sigma.$$ As this is a central element in the group algebra, all symmetric functions evaluated in Jucys-Murphy elements are also central. Therefore it is natural to wonder what their class expansion is. In other terms, given some symmetric function $F$, can we compute the coefficients $a^F_\lambda$ defined by: $$F(J_{1},\dots,J_{n}) =\sum_{\lambda \vdash n} a^F_\lambda {\mathcal{C}}_{\lambda},$$ where the sum runs over all partitions of $n$ and ${\mathcal{C}}_{\lambda}$ denotes the sum of all permutations of cycle-type $\lambda$? This problem may seem anecdotal at first sight, but it in fact appears in different domains of mathematics:
- When $F$ is a power sum $p_{k}$, it is linked with mathematical physics via vertex operators and Virasoro algebra (see [@LascouxThibon2001]).
- When $F$ is a complete symmetric function $h_{k}$, the coefficients appearing are exactly the coefficients in the asymptotic expansion of the unitary Weingarten function. The latter is the elementary building block for computing polynomial integrals over the unitary group (see [@NovakJMWeingarten; @ZinnJustinJMWeingarten]).
- The inverse problem (how to write a given conjugacy class ${\mathcal{C}}_{\lambda}$ as a symmetric function in Jucys-Murphy elements) is equivalent to expressing character values as a symmetric function of the contents. This question has been studied in some papers [@CorteelGoupilSchaeffer2004; @LassalleCaractereExplicite] but never using the combinatorics of Jucys-Murphy elements.
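Both the identity (\[EqElJM\]) and the centrality of symmetric functions in the $J_i$'s are easy to check by brute force for small $n$. A minimal sketch (our own illustration, with permutations of $\{0,\dots,n-1\}$ represented as tuples):

```python
from itertools import combinations, permutations
from collections import Counter

n, k = 4, 2

def compose(p, q):                  # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n)); t[i], t[j] = t[j], t[i]
    return tuple(t)

def cycles(p):                      # number of cycles kappa(p)
    seen, c = set(), 0
    for i in range(n):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i); i = p[i]
    return c

def mult(x, y):                     # product in the group algebra Z[S_n]
    z = Counter()
    for p, a in x.items():
        for q, b in y.items():
            z[compose(p, q)] += a * b
    return z

# Jucys-Murphy elements as sums of transpositions; J[0] is the empty sum J_1 = 0
J = [Counter({transposition(j, i): 1 for j in range(i)}) for i in range(n)]

# e_k(J_1,...,J_n): sum over k-element subsets of products of the J_i's
e = Counter()
for S in combinations(range(n), k):
    term = Counter({tuple(range(n)): 1})
    for i in S:
        term = mult(term, J[i])
    e += term

# Jucys' identity: e_k(J) is the sum of all permutations with n-k cycles
expected = Counter({p: 1 for p in permutations(range(n)) if cycles(p) == n - k})
assert +e == expected
```

For $n=4$, $k=2$ the right-hand side consists of the $11$ permutations of cycle-type $(3,1)$ or $(2,2)$.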
Previous and new results
------------------------
As mentioned in the paragraph above, the class expansion of elementary functions in Jucys-Murphy elements is very simple and was first established by A. Jucys. The next result of this kind was obtained by A. Lascoux and J.-Y. Thibon via an algebraic method: they gave the coefficients of the class expansion of power sums in Jucys-Murphy elements as some coefficients of an explicit series [@LascouxThibon2001].
Then S. Matsumoto and J. Novak [@MatsumotoNovakMonomialJM] computed the coefficients of the permutations of maximal absolute length in any monomial function in Jucys-Murphy elements. Their proof is purely combinatorial but does not seem to be extendable to all coefficients. As monomial functions form a linear basis of symmetric functions, one can deduce from their result a formula for the top coefficients of any symmetric function, in particular of complete functions (see also [@MurrayGeneratorsCentreSGA; @CollinsMatsumotoOrthogonalWeingarten]). For completeness, let us add that the authors also obtained all coefficients of cycles in complete symmetric functions using character theory (their approach works for all cycles, not only the ones of maximal length).
Recently, M. Lassalle [@LassalleJM] gave a unified method to obtain some induction relations for the coefficients of the class expansion of several families of symmetric functions in Jucys-Murphy elements. These induction relations allow one to compute any coefficient quite quickly. Besides, it is possible to use them to recover the results of A. Jucys, A. Lascoux and J.-Y. Thibon and also the top component of complete symmetric functions. Therefore, the work of M. Lassalle unifies most of the results obtained so far on the subject.
He proves his result with sophisticated algebraic machinery: he begins by translating the problem into the language of shifted symmetric functions and then introduces some relevant differential operators.
In this paper, we give a simple combinatorial proof of his induction formulas. Our method of proof can also be adapted to find another formula (Theorem \[ThmNewInd\]). The latter is new and quite simple. An example of application is the following: using Matsumoto’s and Novak’s result on cycles, we are able to compute more generating series of coefficients.
Generalizations
---------------
An analogous problem can be considered in the Hecke algebra of the Gelfand pair $(S_{2n},H_n)$ (here, $H_n$ is the hyperoctahedral group seen as a subgroup of $S_{2n}$). Definitions are given in section \[SectDoubleClass\]. In this algebra, it is relevant to consider symmetric functions in odd-indexed Jucys-Murphy elements.
It is a remarkable fact that complete symmetric functions evaluated in these elements are also linked with integrals over groups of matrices, but the complex unitary group should be replaced by the real orthogonal group (see [@ZinnJustinJMWeingarten; @MatsumotoOddJM]).
In [@MatsumotoOddJM], S. Matsumoto computed the coefficients of permutations of maximal length in the case of monomial symmetric functions (hence obtaining an analog of his result with J. Novak).
Our new induction formula extends quite easily to this framework. A consequence is a proof of a conjecture of S. Matsumoto (see paragraph \[SubsubsectProofMatsumoto\]).
In fact, one can even define a generalization of the problem with a parameter $\alpha$ which interpolates between both frameworks:
- the class expansion of symmetric functions in Jucys-Murphy elements corresponds to the case $\alpha=1$;
- the analog in the Hecke algebra of $(S_{2n},H_n)$ corresponds to the case $\alpha=2$.
We recall this construction in section \[SectGeneralisation\].
A very interesting point of Lassalle’s method for obtaining induction formulas is that it works almost unchanged for a general parameter $\alpha$ [@LassalleJM section 11]. Unfortunately, we are not (yet) able to extend our work to this general setting. However, computer exploration suggests that some of the results still hold in the general case and we present a conjecture in this direction in section \[SectGeneralisation\].
Organization of the paper
-------------------------
In section \[SectSymGrpAlg\], we present our results in the symmetric group algebra.
Then, in section \[SectDoubleClass\], we look at the analogous problem in the Hecke algebra of $(S_{2n},H_n)$.
Finally, in section \[SectGeneralisation\], we present a conjecture for the continuous deformation between these two models.
Induction relations {#SectSymGrpAlg}
===================
Definitions and notations
-------------------------
The combinatorics of integer partitions is important in this work, as partitions index the conjugacy classes of symmetric groups. A partition $\lambda$ of $n \geq 0$ (we write $\lambda \vdash n$) is a non-increasing finite sequence of positive integers (called parts) of sum $n$. Its number of parts is denoted $\ell(\lambda)$. We use the notation $\lambda \backslash i$ for the partition obtained from $\lambda$ by erasing one part equal to $i$ (we only use this notation when $\lambda$ has at least one part equal to $i$). In a similar fashion, $\lambda \cup i$ is the partition obtained by adding a part equal to $i$ (in an appropriate place such that the sequence remains non-increasing).
Let us denote by $S_{n}$ the symmetric group of size $n$ and by ${\mathbb{Z}}[S_{n}]$ its group algebra over the integer ring. Throughout the paper, the coefficient of a permutation $\sigma \in S_n$ in an element $x \in {\mathbb{Z}}[S_n]$ will be denoted $[\sigma] x$. If this coefficient is non-zero, we say that $\sigma$ is in $x$ (this is a small abuse of language, where we consider $x$ as its support).
The Jucys-Murphy elements $J_{i}$ (for $1\leq i \leq n$) are defined by: $$J_{i} = (1\ i) + (2\ i) + \dots + (i-1\ i)\ \ \in {\mathbb{Z}}[S_{n}].$$
Note that $J_{1}=0$ but we include it in our formulas for aesthetic reasons.
The following properties are well known:

- Jucys-Murphy elements commute with each other.
- If $F$ is a symmetric function, $F(J_{1},J_{2},\dots,J_{n})$ lies in the center of the symmetric group algebra $Z({\mathbb{Z}}[S_{n}])$.
We recall that the cycle-type of a permutation $\sigma$ in $S_{n}$, which we will denote $\operatorname{type}(\sigma)$, is by definition the non-increasing sequence of the lengths of its cycles. This is an integer partition of $n$, which determines the conjugacy class of the permutation in the group $S_{n}$.
A basis of the center of the group algebra $Z({\mathbb{Z}}[S_{n}])$ is given by the sums of the conjugacy classes, that is the family of elements $${\mathcal{C}}_{\lambda} = \sum_{\sigma \in S_{n} \atop \sigma \text{ has cycle-type } \lambda} \sigma,$$ where $\lambda$ runs over all partitions of $n$. Therefore, for any symmetric function $F$, there exist integers $a^F_{\lambda}$ such that: $$F(J_{1},\dots,J_{n}) = \sum_{\lambda \vdash n} a^F_{\lambda} {\mathcal{C}}_{\lambda}.$$ In other terms, $a^F_{\lambda}$ is the coefficient of any permutation $\sigma$ of type $\lambda$ in $F(J_{1},\dots,J_{n})$.
We will here focus on the case where $F$ is a complete symmetric function (so $a^{h_{k}}_{\lambda}$ will be denoted $a^{k}_{\lambda}$) because of the link with some integrals over unitary groups mentioned in the introduction. Nevertheless, paragraph \[SubsectOtherSym\] is devoted to the case of other symmetric functions.
As an illustration, let us look at the case $k=2$ and $n=3$: $$\begin{aligned}
h_{2}(J_{1},J_{2},J_{3}) &= (1\ 2)^2 + \big((1\ 3) + (2\ 3)\big)^2 + (1\ 2) \cdot \big((1\ 3) + (2\ 3)\big); \\
&= \operatorname{Id}+ 2\operatorname{Id}+ (1\ 2\ 3) + (1\ 3\ 2) + (1\ 2\ 3)+(1\ 3\ 2); \\
&= 3{\mathcal{C}}_{1^3} + 2 {\mathcal{C}}_{3}.\end{aligned}$$ Note that the coefficient of a permutation at the end of the computation does depend only on its cycle-type, although $1$, $2$ and $3$ play different roles in the computation.
In other terms, we have computed the following coefficients: $$a^2_{(1^3)} = 3, \quad a^2_{(2\ 1)} = 0, \quad a^2_{(3)} = 2.$$
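The computation above is easy to mechanize, which also provides an independent numerical check for the induction relations of the next paragraphs. A minimal brute-force sketch (our own illustration):

```python
from itertools import combinations_with_replacement, product
from collections import Counter

n, k = 3, 2

def compose(p, q):                  # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def cycle_type(p):
    seen, lens = set(), []
    for i in range(n):
        if i not in seen:
            l = 0
            while i not in seen:
                seen.add(i); i = p[i]; l += 1
            lens.append(l)
    return tuple(sorted(lens, reverse=True))

# J_i as the list of transpositions (j i) with j < i; J_1 is the empty sum
J = []
for i in range(n):
    row = []
    for j in range(i):
        t = list(range(n)); t[j], t[i] = t[i], t[j]
        row.append(tuple(t))
    J.append(row)

# h_k(J_1,...,J_n) = sum over weakly increasing index sequences i_1 <= ... <= i_k
h = Counter()
for idx in combinations_with_replacement(range(n), k):
    for factors in product(*(J[i] for i in idx)):
        p = tuple(range(n))
        for f in factors:
            p = compose(p, f)
        h[p] += 1

a = {}                              # coefficients a^k_lambda, indexed by cycle type
for p, c in h.items():
    assert a.setdefault(cycle_type(p), c) == c   # centrality: same type, same coefficient
assert (a[(1, 1, 1)], a.get((2, 1), 0), a[(3,)]) == (3, 0, 2)
```

The final assertion reproduces exactly the three coefficients computed by hand above.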
A combinatorial proof of Lassalle’s formula {#SubsectLassalleNewProof}
-------------------------------------------
In this paragraph, we give an elementary proof of the following theorem, which has been proved by M. Lassalle [@LassalleJM] using sophisticated algebraic tools.
\[ThmLassalle\] For any partition $\rho$ and integer $k$, one has: $$\begin{aligned}
\label{EqLassalle1} a_{\rho \cup 1}^{k} &= a_\rho^{k} +
\sum_{i=1}^{\ell(\rho)} \rho_{i}
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+1)} ; \\
\label{EqLassalle2} \sum_{i=1}^{\ell(\rho)}
\rho_{i} a^{k}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+1)}
&= \sum_{1\leq i,j \leq \ell(\rho) \atop i \neq j}
\rho_{i} \rho_{j} a^{k-1}_{\rho \setminus (\rho_{i}, \rho_{j})
\cup (\rho_{i} + \rho_{j} +1) } \\
\nonumber & \qquad\qquad + \sum_{i=1}^{\ell(\rho)} \rho_i \sum_{r+s=\rho_{i}+1 \atop r,s \geq 1}
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (r,s) }. \end{aligned}$$
We start from the obvious induction relation $$\label{EqRecH}
h_{k}(J_{1},\dots,J_{n+1}) = h_{k}(J_{1},\dots,J_{n}) + J_{n+1} h_{k-1}(J_{1},\dots,J_{n+1})$$ and we apply to it the following operator: $${\mathbb{E}}: \begin{array}{rcl}
{\mathbb{Z}}[S_{n+1}] & \to & {\mathbb{Z}}[S_{n}] \\
\sigma & \mapsto & \left\{
\begin{array}{l}
\sigma / \{1,\dots,n\} \text{ if }\sigma(n+1) = n+1 ;\\
0 \text{ otherwise.}
\end{array}
\right.
\end{array}$$ Then we look at the coefficient of a permutation $\sigma$ of type $\rho \vdash n$ (in the following, $\sigma'$ is the image of $\sigma$ by the canonical embedding of $S_{n}$ into $S_{n+1}$, which means that we add $n+1$ as fixed point). $$\begin{aligned}
[\sigma] {\mathbb{E}}\big(h_{k}(J_{1},\dots,J_{n+1})\big) &= [\sigma'] h_{k}(J_{1},\dots,J_{n+1}) = a_{\rho \cup 1}^{k}, \label{EqCoefSigmaEHknp1} \\
[\sigma] {\mathbb{E}}\big(h_{k}(J_{1},\dots,J_n)\big) &= [\sigma] h_{k}(J_{1},\dots,J_{n}) = a_\rho^{k}, \label{EqCoefSigmaEHkn} \\
[\sigma] {\mathbb{E}}\big(J_{n+1} h_{k-1}(J_{1},\dots,J_{n+1})\big)&=
[\sigma'] \sum_{j \leq n} (j \ n+1) h_{k-1}(J_{1},\dots,J_{n+1}) \nonumber\\
& =\sum_{j \leq n} [(j \ n+1) \sigma'] h_{k-1}(J_{1},\dots,J_{n+1}) \nonumber\\
& = \sum_{j \leq n} a^{k-1}_{\operatorname{type}( (j \ n+1) \sigma' )}, \nonumber\end{aligned}$$ Let us label the cycles of $\sigma$ with the numbers $1,2,\ldots,\ell(\rho)$ such that the $i$-th cycle of $\sigma$ has length $\rho_i$. It is easy to see that $(j \ n+1) \sigma'$ has exactly the same cycle decomposition as $\sigma$ except that $n+1$ has been added right before $j$. Therefore, if $j$ is in the $i$-th cycle of $\sigma$, then $(j \ n+1) \sigma'$ has cycles of length $\rho_1,\rho_2,\dots,\rho_i+1,\dots,\rho_{\ell(\rho)}$. In other terms, its type is $\rho \setminus (\rho_i) \cup (\rho_i+1)$. As there are $\rho_i$ elements in the $i$-th cycle of $\sigma$, one obtains: $$[\sigma] {\mathbb{E}}\big(J_{n+1} h_{k-1}(J_{1},\dots,J_{n+1})\big) =
\sum_{1 \leq i \leq \ell(\rho)} \rho_i a^{k-1}_{\rho \setminus (\rho_i) \cup (\rho_i+1)}.
\label{EqCoefSigmaEJHk}$$ Putting together equations (\[EqCoefSigmaEHknp1\]), (\[EqCoefSigmaEHkn\]) and (\[EqCoefSigmaEJHk\]), we obtain the first part of the theorem.
The second equality is obtained the same way except that we multiply equation (\[EqRecH\]) by $J_{n+1}$ before applying the operator ${\mathbb{E}}$. One obtains: $$\begin{gathered}
\label{EqEJnp1TRecH}
{\mathbb{E}}\big(J_{n+1} h_{k}(J_{1},\dots,J_{n+1}) \big) = {\mathbb{E}}\big( J_{n+1} h_{k}(J_{1},\dots,J_{n}) \big) \\
+ {\mathbb{E}}\big(J_{n+1}^{2} h_{k-1}(J_{1},\dots,J_{n+1}) \big).\end{gathered}$$ The coefficient of $\sigma$ in the left-hand side has already been computed: see equation (\[EqCoefSigmaEJHk\]) (with $k-1$ replaced by $k$). Let $\tau$ be a permutation in $h_{k}(J_{1},\dots,J_{n})$. It fixes $n+1$ and, hence, $(j\ n+1) \tau$ cannot fix $n+1$ for $j=1,\dots,n$. Therefore, $$\label{EqEJnp1Hkn}
{\mathbb{E}}\big( J_{n+1} h_{k}(J_{1},\dots,J_{n}) \big) = 0.$$ For the last term, we write: $$\begin{gathered}
[\sigma] {\mathbb{E}}\big(J_{n+1}^{2} h_{k-1}(J_{1},\dots,J_{n+1}) \big) \\
= [\sigma'] \sum_{j_{1},j_{2} \leq n} (j_{1} \ n+1)\cdot (j_{2}\ n+1) \cdot
h_{k-1}(J_{1},\dots,J_{n+1}) \\
= \sum_{j_{1},j_{2} \leq n} [(j_{2} \ n+1)\cdot (j_{1}\ n+1) \cdot \sigma']
h_{k-1}(J_{1},\dots,J_{n+1}) \\
= \sum_{j_{1},j_{2} \leq n} a^{k-1}_{\operatorname{type}(
( j_{2} \ n+1)\cdot (j_{1}\ n+1) \cdot \sigma')}.\end{gathered}$$ As before, we label the cycles of $\sigma$. We split the sum into two parts, depending on whether $j_{1}$ and $j_{2}$ are in the same cycle of $\sigma$ or not:
- Suppose that both $j_{1}$ and $j_{2}$ are in the $i$-th cycle of $\sigma$. This implies that $j_{2} = \sigma^{m}(j_{1})$ for some integer $m$ between $1$ and $\rho_{i}$ (possibly $j_1=j_2$, which corresponds to $m=\rho_i$). Then $(j_{1} \ n+1)\cdot (j_{2}\ n+1) \cdot \sigma'$ has the same cycles as $\sigma$, except that its $i$-th cycle is replaced by the two cycles $$\big(j_{1}, \sigma(j_{1}), \dots \sigma^{m-1}(j_{1}) \big) \text{ and } \big(j_{2}, \sigma(j_{2}), \dots \sigma^{\rho_{i}-m-1}(j_{2}), n+1 \big).$$ Thus it has cycle-type $\rho \setminus (\rho_{i}) \cup (m,\rho_{i}-m+1)$. There are $\rho_i$ elements in the $i$-th cycle of $\sigma$ and, hence, $\rho_i$ possible values for $j_1$. For each value of $j_1$, there is exactly one value of $j_2$ corresponding to each value of $m$ between $1$ and $\rho_i$. Therefore, one has: $$\begin{aligned}
\sum_{j_{1},j_{2} \leq n \atop j_1 \sim_\sigma j_2}
a^{k-1}_{\operatorname{type}(( j_{2} \ n+1)\cdot (j_{1}\ n+1) \cdot \sigma')}
&= \sum_{i \leq \ell(\rho)} \rho_i \sum_{m =1}^{\rho_i}
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (m,\rho_{i}-m+1)} \\
&= \sum_{i \leq \ell(\rho)} \rho_i \sum_{r+s = \rho_i+1 \atop r,s \geq 1}
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (r,s)},
\end{aligned}$$ where $j_1 \sim_\sigma j_2$ means that $j_1$ and $j_2$ are in the same cycle of $\sigma$.
- Let us suppose now that $j_{1}$ and $j_{2}$ are respectively in the $i_1$-th and $i_2$-th cycles of $\sigma$ with $i_1 \neq i_2$. In this case $(j_{1} \ n+1)\cdot (j_{2}\ n+1) \cdot \sigma'$ has the same cycles as $\sigma$, except that its $i_1$-th and $i_2$-th cycles are merged into the single new cycle $$\big(j_{1}, \sigma(j_{1}), \dots \sigma^{\rho_{i_{1}}-1}(j_{1}), n+1,
j_{2}, \sigma(j_{2}), \dots \sigma^{\rho_{i_{2}}-1}(j_{2}) \big).$$ Thus $(j_{1} \ n+1)\cdot (j_{2}\ n+1) \cdot \sigma'$ has cycle-type $\rho \setminus (\rho_{i_{1}}, \rho_{i_{2}}) \cup (\rho_{i_{1}}+\rho_{i_{2}} + 1)$. As there are $\rho_{i_1}$ (resp. $\rho_{i_2}$) elements in the $i_1$-th (resp. $i_2$-th) cycle of $\sigma$, one obtains: $$\begin{aligned}
\sum_{j_{1},j_{2} \leq n \atop j_1 \nsim_\sigma j_2}
a^{k-1}_{\operatorname{type}( ( j_{2} \ n+1)\cdot (j_{1}\ n+1) \cdot \sigma')}
&= \sum_{i_1,i_2 \leq \ell(\rho) \atop i_1 \neq i_2} \rho_{i_1} \rho_{i_2}
a^{k-1}_{\rho \setminus (\rho_{i_{1}}, \rho_{i_{2}}) \cup (\rho_{i_{1}}+\rho_{i_{2}} + 1)},
\end{aligned}$$ where $j_1 \nsim_\sigma j_2$ means that $j_1$ and $j_2$ are not in the same cycle of $\sigma$.
Finally, $$\begin{gathered}
\label{EqEJ2H}
[\sigma] {\mathbb{E}}\big(J_{n+1}^{2} h_{k-1}(J_{1},\dots,J_{n+1}) \big) = \\
\sum_{i \leq \ell(\rho)} \rho_i \sum_{r+s = \rho_i+1 \atop r,s \geq 1}
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (r,s)}
+ \sum_{i_1,i_2 \leq \ell(\rho) \atop i_1 \neq i_2} \rho_{i_1} \rho_{i_2}
a^{k-1}_{\rho \setminus (\rho_{i_{1}}, \rho_{i_{2}}) \cup (\rho_{i_{1}}+\rho_{i_{2}} + 1)}.\end{gathered}$$ Putting together equations , , and , we obtain the second part of the theorem.
This theorem allows to compute inductively the coefficients $a_\rho^k$, see [@LassalleJM end of page 13].
New relations {#SubsectNewInd}
-------------
In this paragraph, we prove new induction relations on the coefficients $a^k_\rho$, using the same kind of method as above.
\[ThmNewInd\] For any partition $\rho$ and positive integers $k,m$ one has: $$\label{EqNewInd}
a^{k}_{\rho \cup (m)} = \delta_{m,1} a^k_{\rho}
+\sum_{1 \leq i \leq \ell(\rho)} \rho_i
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}
+ \sum_{r+s=m \atop r,s \geq 1} a^{k-1}_{\rho \cup (r,s)}.$$
The case $m=1$ corresponds to equation and has already been proved.
Suppose $m>1$. Once again, we begin with equation and we will look at the coefficient of some permutation $\sigma$ on both sides.
Let $n=|\rho|+m-1$ and $\sigma$ be a permutation in $S_{n+1}$ of type $\rho \cup (m)$ such that $n+1$ is in a cycle of $\sigma$ of length $m$ (in particular, as $m>1$, $n+1$ is not a fixed point of $\sigma$). By definition, $$[\sigma] h_{k}(J_{1},\dots,J_{n+1}) = a^{k}_{\rho \cup (m)}.
\label{EqCoef231}$$ Besides, as all permutations in $h_{k}(J_{1},\dots,J_{n})$ fix $n+1$, but not $\sigma$, one has: $$[\sigma] h_{k}(J_{1},\dots,J_{n}) = 0.
\label{EqCoef232}$$ We shall now compute $$\begin{aligned}
[\sigma] J_{n+1} h_{k-1}(J_{1},\dots,J_{n+1}) &=
[\sigma] \sum_{j \leq n} (j\ n+1) h_{k-1}(J_{1},\dots,J_{n+1}) \\
&= \sum_{j \leq n} [(j\ n+1) \sigma] h_{k-1}(J_{1},\dots,J_{n+1}) \\
&= \sum_{j \leq n} a^{k-1}_{\operatorname{type}( (j\ n+1) \sigma)}.
\end{aligned}$$ As before, we label the cycles of $\sigma$: the cycle containing $n+1$ gets the label $\ell(\rho)+1$, the others are labelled such that the $i$-th cycle has length $\rho_i$ (for $1\leq i\leq \ell(\rho)$). We distinguish two cases:
- Suppose that $j$ is in the $\ell(\rho)+1$-th cycle of $\sigma$ (as $n+1$). This implies that $j=\sigma^h(n+1)$ for some $h$ between $1$ and $m-1$ (as $j \neq n+1$, $h$ can not be equal to $m$). Then $(j\ n+1) \sigma$ has the same cycles than $\sigma$ except for its $\ell(\rho)+1$-th cycle, as well as two new cycles: $$\big( n+1, \sigma(n+1), \dots, \sigma^{h-1}(n+1) \big)
\text{ and } \big(j, \sigma(j), \dots, \sigma^{m-h-1}(j) \big).$$ Thus its cycle-type is ${\rho \cup (h,m-h)}$. Exactly one value of $j$ corresponds to each integer $h$ between $1$ and $m-1$. One has: $$\sum_{j \leq n \atop j \sim_\sigma n+1} a^{k-1}_{\operatorname{type}( (j\ n+1) \sigma)}
= \sum_{h=1}^{m-1} a^{k-1}_{\rho \cup (h,m-h)}
= \sum_{r+s=m \atop r,s \geq 1} a^{k-1}_{\rho \cup (r,s)}.$$
- Otherwise, $j$ is in the $i$-th cycle of $\sigma$ for some $i \leq \ell(\rho)$ (in particular, it is not in the same cycle as $n+1$). In this case, $(j\ n+1) \sigma$ has the same cycles than $\sigma$ except for its $i$-th and $\ell(\rho)+1$-th cycles, as well as one new cycle: $$\big( j, \sigma(j), \dots, \sigma^{\rho_i-1}(j),
n+1, \sigma(n+1),\dots, \sigma^{m-1}(n+1) \big).$$ Thus its cycle-type is $\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)$. As there are $\rho_i$ elements in the $i$-th cycle of $\sigma$ for each $i$, one obtains: $$\begin{aligned}
\sum_{j \leq n \atop j \nsim_\sigma n+1} a^{k-1}_{\operatorname{type}( (j\ n+1) \sigma)}
&= \sum_{1 \leq i \leq \ell(\rho)} \rho_i
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}.
\end{aligned}$$
Finally, $$[\sigma] J_{n+1} h_{k-1}(J_{1},\dots,J_{n+1}) =
\sum_{r+s=m \atop r,s \geq 1} a^{k-1}_{\rho \cup (r,s)} \\
+ \sum_{1 \leq i \leq \ell(\rho)} \rho_i
a^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}.
\label{EqCoef233}$$ The theorem follows from equations , , and .
This type of case distinctions, depending on whether some elements are in the same cycle or not, is quite classical and leads often to the same kind of induction relations, called *cut-and-join* equations: see for instance [@GouldenJacksonCutAndJoin].
This theorem implies Theorem \[ThmLassalle\]. Indeed, equation can be written as a linear combination of specializations of equation , but the converse is not true.
Our new induction relation allows to compute $a_\rho^k$ by induction over $|\rho|$ and $k$ in several different ways. Indeed, a given partition $\lambda$ can be written as $\rho \cup (m)$ in several different ways. It is not evident [*a priori*]{} that the final result does not depend on this choice. This relies on the initial conditions: $$a^1_\rho=\begin{cases}
1 \text{ if }\rho=2 1^i\text{ for some $i$;}\\
0 \text{ else.}
\end{cases}$$
Taking care of the dependence in $n$
------------------------------------
As mentioned by Lassalle [@LassalleJM paragraph 2.7], the coefficients $a^k_{\rho \cup 1^{n- |\rho|}}$, seen as functions of $n$, have a very nice structure. More precisely, let us define $c^{k}_{\lambda}$, where $\lambda$ is a partition, by induction on $|\lambda|$ by the formula: $$\label{EqLinkAC}
a^k_{\rho} = \sum_{i = 0}^{m_{1}(\rho)} c^k_{\bar{\rho} \cup 1^i} \binom{m_{1}(\rho)}{i},$$ where $m_{1}(\rho)$ is the number of parts equal to $1$ in $\rho$ and $\bar{\rho}$ is obtained from $\rho$ by erasing its parts equal to $1$.
The interesting fact now is that $c^k_{\rho}$ is equal to $0$ as soon as $|\rho| - \ell(\rho) + m_{1}(\rho)$ is bigger than $k$, while, for a given $k$, one has infinitely many non-zero $a^k_{\rho}$ (this fact is explained in paragraph \[SubsectPartialJM\]). As a consequence, coefficients $c$ are convenient to compute simultaneously the class expansion of $h_k(J_1,\dots,J_n)$ for all positive integers $n$ (the integer $k$ being fixed): see Example \[ExCalculC\] at the end of this paragraph.
Using equation , one can translate Theorems \[ThmLassalle\] and \[ThmNewInd\] into relations over the $c$’s, but it is rather technical (see [@LassalleJM section 12]). We prefer here to explain the combinatorial meaning of the $c$’s and derive directly relations over the $c$’s using this interpretation.
### Algebra of partial permutations
A good tool for that are the partial permutations introduced by Ivanov and Kerov in [@IvanovKerovPartialPermutations]. Let ${\mathcal{B}}_{\infty}$ be the following ${\mathbb{Z}}$-algebra:
- A partial permutation is a couple $(d,\sigma)$ where $d$ is a finite set of positive integers and $\sigma$ a permutation of $d$. As a ${\mathbb{Z}}$-module, ${\mathcal{B}}_{\infty}$ is the set of infinite linear combinations of partial permutations.
- the product on partial permutations is given by: $$\label{EqProdPartialPerm}
(d,\sigma) \cdot (d',\sigma') =
(d \cup d', \tilde{\sigma} \cdot \tilde{\sigma'}),$$ where $\tilde{\sigma}$ (resp. $\tilde{\sigma'}$) is the canonical continuation of $\sigma$ (resp. $\sigma'$) to $d \cup d'$ (*i.e.* we add fixed points, we will use this notation throughout the paper). It extends to ${\mathcal{B}}_{\infty}$ by biliearity: $$\left( \sum_{(d,\sigma)} c_{d,\sigma} (d,\sigma) \right) \cdot
\left( \sum_{(d',\sigma')} c_{d',\sigma'} (d',\sigma') \right)
= \! \sum_{(d,\sigma), (d',\sigma')} \! c_{d,\sigma} c_{d',\sigma'}
(d,\sigma) \cdot (d',\sigma').$$ It is easy to see that in the formula above, only a finite number of term can contribute to the coefficient of a given partial permutation $(d'',\sigma'')$ (indeed, the indices of such terms must fulfill $d,d' \subset d''$). Therefore the right-hand side is a well-defined element of ${\mathcal{B}}_\infty$.
The infinite symmetric group $S_{\infty}$ acts naturally on ${\mathcal{B}}_{\infty}$: if $\tau$ belong to $S_{\infty}$, that is $\tau$ is a permutation of ${\mathbb{N}}^{\star}$ with finite support, we define $$\tau \bullet (d,\sigma) = (\tau(d), \tau \sigma \tau^{-1}).$$ The invariants by the action of $S_{\infty}$ form a subalgebra ${\mathcal{A}}_{\infty}$ of ${\mathcal{B}}_{\infty}$. As explained in [@IvanovKerovPartialPermutations § 6], a basis of this subalgebra is $$\big({\mathcal{PC}}_{\lambda}\big)_{\lambda \text{ partition}} \text{ where }{\mathcal{PC}}_{\lambda}= \!\! \sum_{d \subset {\mathbb{N}}^\star, \ |d| = |\lambda| \atop \sigma \in S_{d},\ \text{ cycle-type}(\sigma)=\lambda} \!\! (d,\sigma).$$ The nice property of this construction is that, for each $n$, there exists a morphism $\varphi_{n}$ from ${\mathcal{B}}_{\infty}$ to the symmetric group algebra ${\mathbb{Z}}[S_{n}]$ defined by: $$\varphi_{n}(d,\sigma) = \left\{
\begin{array}{l}
\tilde{\sigma} \text{ if }d \subset \{1,\dots,n\} ;\\
0 \text{ else.}
\end{array} \right.$$ These morphisms restrict to morphisms ${\mathcal{A}}_{\infty} \to Z({\mathbb{Z}}[S_{n}])$. The image of vectors of the basis is given by [@IvanovKerovPartialPermutations equation (4.3)]: $$\varphi_{n}({\mathcal{PC}}_{\lambda}) =
\binom{n -|\lambda|+m_{1}(\lambda)}{m_{1}(\lambda)}
{\mathcal{C}}_{\lambda \cup 1^{n-|\lambda|}}.$$ We shall need a last property of the algebra ${\mathcal{B}}_\infty$. Let us define, for a partial permutation $(d,\sigma)$ its degree to be $$\deg(d,\sigma) = |d| - \#\text{ cycles of } \sigma +
\#\text{ fixed points of } \sigma.$$ We consider the subspace $({\mathcal{B}}_\infty)_{\leq \delta}$ to be the set of infinite linear combinations of partial permutations of degree smaller or equal to $\delta$.
The decomposition $\displaystyle
{\mathcal{B}}_\infty= \bigcup_{\delta \geq 1} ({\mathcal{B}}_\infty)_{\leq \delta}$ defines an algebra filtration.
Consider $\deg'$ defined by $$\deg'(d,\sigma) = |d| - \#\text{ cycles of } \sigma,$$ $\deg'$ is the minimal number of factors needed to write $\sigma$ (or $\tilde{\sigma}$) as a product of transpositions. It is known to define a filtration of ${\mathbb{Z}}[S_n]$ and hence of ${\mathcal{B}}_\infty$ (see [@IvanovKerovPartialPermutations equation (10.3)]).
We have to prove that if $(\pi,f)=(\sigma,d) \cdot (\tau,e)$, then $$\deg(\pi,f)\leq \deg(\sigma,d) + \deg(\tau,e).$$ We make an induction on the number $m_1$ of fixed points of $\pi$.
If $m_1=0$, then $$\deg(\pi,f) = \deg'(\pi,f) \leq \deg'(\sigma,d) + \deg'(\tau,e)
\leq \deg(\sigma,d) + \deg(\tau,e).$$
Otherwise, let $i \in f$ be a fixed point of $\pi$. We consider the linear operator $F_i$ $$F_i : \begin{array}{rcl}
{\mathcal{B}}_\infty &\longrightarrow & {\mathcal{B}}_\infty \\
(\sigma,d) & \longmapsto & \begin{cases}
(\sigma_{\backslash i},d \backslash \{i\}) \text{ if } i \in d\\
(\sigma,d) \text{ else,}
\end{cases}
\end{array}$$ where $\sigma_{\backslash i}$ is the permutation obtained by erasing $i$ in the expression of $\sigma$ as a product of cycles of disjoint supports. Equivalenty, by definition, $\sigma_{\backslash i}(j)=\sigma(j)$ if $j \neq \sigma^{-1}(i)$ and $\sigma_{\backslash i}(\sigma^{-1}(i))=\sigma(i)$. It is immediate to check that $\deg(F_i(\sigma,d))=\deg(\sigma,d) - 1$ unless $i$ is in a cycle of length $2$ in $\sigma$, in which case $\deg(F_i(\sigma,d))=\deg(\sigma,d)$.
Note that $F_i$ is *not* an algebra morphism. However, as $i \in \operatorname{Fix}(\pi)$, one has: $$\label{EqFiQuasiMorphism}
F_i(\pi,f)=F_i(\sigma,d) \cdot F_i(\tau,e).$$ Let us explain why this holds. First, it is obvious that $$(d \backslash \{i\}) \cup (e \backslash \{i\})= (d \cup e) \backslash \{i\} = f \backslash \{i\}.$$ Then, if $j \neq i,\tau^{-1}(i)$, then $\tau_{\backslash i}(j)=\tau(j)$ and thus $$\sigma_{\backslash i}\big(\tau_{\backslash i}(j)\big)
=\sigma_{\backslash i}\big(\tau(j)\big)
=\sigma\big(\tau(j)\big) = \pi(j)
= \pi_{\backslash i}(j)$$ because $\sigma(\tau(j))=\pi(j)$ is different from $i$ (indeed, $\pi(i)=i$ and $j \neq i$). Finally, one only has to check that: $$\sigma_{\backslash i} \big(\tau_{\backslash i} (\tau^{-1}(i)) \big)
= \pi_{\backslash i} (\tau^{-1}(i)).$$ But $\tau_{\backslash i} (\tau^{-1}(i)) =\tau(i)$ and, as $\sigma(\tau(i))=\pi(i)=i$, the left-hand side is equal to $\sigma_{\backslash i}(\tau(i))=\sigma(i)$. But, as $i$ is a fixed point of $\pi$, the permutation $\pi_{\backslash i}$ is simply $\pi |_{f \backslash \{i\}}$ and thus the right-hand side is equal to $\pi(\tau^{-1}(i))=\sigma(i)$. This ends the proof of equation .
As $F_i(\pi,f)$ has one less fixed point than $\pi,f$, we can apply the induction hypothesis and one has: $$\deg(F_i(\pi,f)) \leq \deg(F_i(\sigma,d)) + \deg(F_i(\tau,e)).$$ As mentioned above: $$\begin{aligned}
\deg(F_i(\pi,f)) &= \deg(\pi,f) +1; \\
\deg(F_i(\sigma,d)) &= \deg(\sigma,d) +1 -\delta_1; \\
\deg(F_i(\tau,e)) = \deg(\tau,e) +1 -\delta_2,
\end{aligned}$$ where $\delta_1$ (resp. $\delta_2$) is equal to $1$ if $i$ is in a cycle of length $2$ in $\sigma$ (resp. $\tau$) and $0$ else.
If one of the $\delta$’s is equal to $0$, one has $$\begin{gathered}
\deg\left( (\pi,f) \right) = \deg(F_i(\pi,f)) +1
\leq \deg(F_i(\sigma,d)) + \deg(F_i(\tau,e)) +1\\
\leq \deg(\sigma,d) + \deg(\tau,e)
\end{gathered}$$ and the proof is over in this case. So the only case we have to study is when $i$ is in cycles of length $2$ in $\sigma$ and $\tau$. Of course, as $\sigma(\tau(i))=i$, both $\sigma(i)$ and $\tau(i)$ are equal to the same number $j$. In this case, we have: $$\begin{aligned}
F_j(F_i(\pi,f))&=F_j(F_i(\sigma,d)) \cdot F_j(F_i(\tau,e));\\
\deg(\pi,f)&= \deg\big(F_j(F_i(\pi,f)))+2; \\
\deg(\sigma,d)&= \deg\big(F_j(F_i(\sigma,d)))+1; \\
\deg(\tau,e)&= \deg\big(F_j(F_i(\tau,e)))+1
\end{aligned}$$ and we can conclude by induction.
In their paper, V. Ivanov and S. Kerov considered a large family of filtrations on ${\mathcal{B}}_\infty$ [@IvanovKerovPartialPermutations Proposition 10.3], but it does not contain this one.
### Complete functions in partial Jucys-Murphy elements {#SubsectPartialJM}
It has been observed in [@FerayPartialJM Section 2] that, if we define natural analogs of Jucys-Murphy elements in ${\mathcal{B}}_{\infty}$ by: $$X_{i} = \sum_{j < i} \big(\{j,\ i\},\ (j\ i)\big) \quad \text{for }i \geq 1,$$
- then they still commute with each other;
- besides, the evaluation $F(X_{1}, X_{2}, X_{3}, \dots)$ of any symmetric function $F$ in the infinite sequence of partial Jucys-Murphy elements is well-defined and lies in ${\mathcal{A}}_{\infty}$.
Therefore there exist coefficients $c^k_{\lambda}$ such that $$h_{k}(X_{1},X_{2},X_{3}, \dots) = \sum_{\lambda} c^k_{\lambda} {\mathcal{PC}}_{\lambda}.$$ In other terms, $c^k_{\lambda}$ is the coefficient of any partial permutation $(d,\sigma)$ with $|d|=|\lambda|$ and $\sigma$ of cycle-type $\lambda$ in $h_{k}(X_{1},X_{2},X_{3}, \dots)$. Applying $\varphi_{n}$, one obtains: $$\begin{aligned}
h_{k}(J_1,\dots,J_n) &= \sum_{\lambda} c^k_{\lambda}
\binom{n -|\lambda|+m_{1}(\lambda)}{m_{1}(\lambda)}
{\mathcal{C}}_{\lambda \cup 1^{n-|\lambda|}} \\
&= \sum_{\rho \vdash n} \left(
\sum_{\lambda \text{ such that} \atop \lambda \cup 1^{n-|\lambda|} = \rho}
c^k_{\lambda} \binom{n -|\lambda|+m_{1}(\lambda)}{m_{1}(\lambda)} \right)
{\mathcal{C}}_\rho \\
&= \sum_{\rho \vdash n} \left( \sum_{i=1}^{m_1(\rho)} c^k_{\bar{\rho} \cup 1^i}
\binom{m_{1}(\rho)}{i} \right) {\mathcal{C}}_\rho\end{aligned}$$ Therefore, the numbers $c^k_\lambda$ fulfill equation and this definition is equivalent to the one given at the beginning of the subsection. Note that with this construction, it is obvious that the $c$’s are non-negative integers (fact which was observed numerically by Lassalle, private communication).
The fact that $c^k_{\rho}$ is equal to $0$ as soon as $|\rho| - \ell(\rho) + m_{1}(\rho)$ is bigger than $k$ is also natural because each $X_i$ is in $({\mathcal{B}}_\infty)_{\leq 1}$ and hence $h_{k}(X_{1},X_{2},X_{3}, \dots)$ lies in $({\mathcal{B}}_\infty)_{\leq k}$. This can of course be generalized to any symmetric function. In terms of $a$’s, using equation , this implies the following property:
Let $\rho$ be a partition and $F$ a symmetric function of degree $k$. The function $t \mapsto a^F_{\rho \cup 1^t}$ is a polynomial in $t$ of degree smaller or equal to $$k-(|\rho|-\ell(\rho)).$$
The fact that this function is a polynomial is already known [@MatsumotoNovakMonomialJM Theorem 4.4], but not the bound on the degree.
Besides, we can obtain induction relations on the $c$’s with the same kind of argument we used for the $a$’s:
\[ThmIndC\] For any partition $\rho$ and positive integers $m$ and $k$, one has $$\begin{aligned}
c^k_{\rho \cup 1} &= \sum_{i} \rho_i c^{k-1}_{\rho \setminus (\rho_i) \cup (\rho_i+1)} ;\\
c^k_{\rho \cup 2} &=\sum_{i} \rho_i c^{k-1}_{\rho \setminus (\rho_i) \cup (\rho_i+2)}+ c^{k-1}_{\rho \cup (1,1)} + 2 c^{k-1}_{\rho \cup (1)} + c^{k-1}_{\rho} ;\\
c^k_{\rho \cup m} &=\sum_{i} \rho_i c^{k-1}_{\rho \setminus (\rho_i) \cup (\rho_i+m)}+\sum_{r+s=m \atop r,s \geq 1} c_{\rho \cup (r,s)} + 2 c^{k-1}_{\rho \cup (m-1)} \text{ if }m \geq 3.\end{aligned}$$
Let $n+1=|\rho| + m$ and fix a partial permutation $(d,\sigma)$ with:
- $d = \{1,\dots,n+1\}$;
- $\sigma$ has cycle-type $\rho \cup (m)$ and $n+1$ is in a cycle of length $m$.
Let us look at the coefficient $c^k_{\rho \cup m}$ of $(d,\sigma)$ in $h_{k}(X_{1}, X_{2},\dots)$. As $n+1$ is the biggest element in $d$, it implies that every monomials in the $X_{i}$’s contributing to the coefficient of $(d,\sigma)$ contains no $X_{i}$ with $i>n+1$ and contains at least one $X_{n+1}$. Thus: $$\begin{aligned}
c^k_{\rho \cup m} &= [(d,\sigma)] h_{k}(X_{1}, X_{2},\dots) = [(d,\sigma)] X_{n+1} h_{k-1}(X_{1},\dots,X_{n+1}) ; \\
&= [(d,\sigma)] \sum_{j < n+1}
\sum_{\nu} \sum_{(d',\tau),\ |d'| = |\nu| \atop \text{cycle-type}(\tau)=\nu}
c^{k-1}_{\nu} \cdot (d' \cup \{j, n+1\}, (j\ n+1) \tilde{\tau});\\
&= \sum_{j < n+1}
\sum_{(d',\tau) \atop \text{cond. }(1)} c^{k-1}_{\operatorname{type}(\tau)},\end{aligned}$$ where condition $(1)$ is the equality $(d' \cup \{j, n+1\}, (j\ n+1) \tilde{\tau})=(d,\sigma)$. For a given integer $j$ between $1$ and $n$, we have to determine which sets $d'$ and permutations $\tau \in S_{d'}$ fulfill $d' \cup \{j, n+1\} = d$ and $(j\ n+1) \tilde{\tau} = \sigma$. Of course, one must have $\tilde{\tau}=(j\ n+1)\sigma$. As in the previous paragraphs, we make a case distinction:
- If $j$ is not in the same cycle of $\sigma$ as $n+1$, then they are in the same cycle of $\tilde{\tau}$. In particular, neither $j$ nor $n+1$ are fixed points of $\tilde{\tau}$, so both belong to $d'$. Therefore, necessarily, $d'=d$. The discussion on the possible cycle-types of $\tau=\tilde{\tau}$ is exactly the same than in paragraph \[SubsectNewInd\] and one has: $$\sum_{j < n+1 \atop j \nsim n+1} \sum_{(d',\tau) \atop \text{cond. }(1)}
c^{k-1}_{\operatorname{type}(\tau)}
= \sum_{1 \leq i \leq \ell(\rho)} \rho_i
c^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}.$$
- If $j$ is in the same cycle of $\sigma$ as $n+1$ (this implies $m>1$), we write $j=\sigma^h(n+1)$. If $d'=d$, then $\tau=\tilde{\tau}=(j\ n+1)\sigma$ and its possible cycle-types has been discussed in paragraph \[SubsectNewInd\], so one has: $$\sum_{j < n+1 \atop j \sim n+1}
\sum_{(d',\tau) \atop \text{cond. $(1)$ and }d'=d}
c^{k-1}_{\operatorname{type}(\tau)}
= \sum_{r+s=m \atop r,s \geq 1} c^{k-1}_{\rho \cup (r,s)}.$$ But, in this case, $d'$ is not necessarily equal to $d$. Indeed, when $h=1$, the permutation $\tilde{\tau}$ has $n+1$ as a fixed point. If $m>2$, $j$ can not be a fixed point in this case so $j \in d'$. Therefore $d'=d$ or $d'=d \backslash \{n+1\}$. In the last case, $\tau$ is a permutation of cycle-type $\rho \cup (m-1)$. A similar phenomenon happens when $h=m-1$: $j$ is a fixed point of $\tilde{\tau}$, but not $n+1$, so $d'$ can be equal to $d \backslash \{j\}$ and the corresponding permutation $\tau$ has cycle-type $\rho \cup (m-1)$. Therefore, if $m>2$, one has: $$\sum_{j < n+1 \atop j \sim n+1}
\sum_{(d',\tau) \atop \text{cond. $(1)$}}
c^{k-1}_{\operatorname{type}(\tau)}
= \sum_{r+s=m \atop r,s \geq 1} c^{k-1}_{\rho \cup (r,s)}
+ 2 c^{k-1}_{\rho \cup (m-1)}.$$ If $m=2$, the only possible value of $j$ is $\sigma(n+1)$ and, in this case, $\tilde{\tau}$ fixes both $j$ and $n+1$. Therefore $d'$ can be equal either to $d$, $d \backslash \{n+1\}$, $d \backslash \{j\}$ or $d \backslash \{j,n+1\}$. It is easy to see that the cycle-types of the corresponding permutations $\tau$ are respectively $\rho \cup (1,1)$, $\rho \cup (1)$, $\rho \cup (1)$ and $\rho$. Thus, for $m=2$, $$\sum_{j < n+1 \atop j \sim n+1}
\sum_{(d',\tau) \atop \text{cond. $(1)$}}
c^{k-1}_{\operatorname{type}(\tau)}=
c^{k-1}_{\rho \cup (1,1)} + 2 c^{k-1}_{\rho \cup (1)} + c^{k-1}_{\rho}.$$
Summing the different contributions in the different cases, we obtain our theorem.
\[ExCalculC\] Here are the non-zero values of $c^k_\rho$ for small values of $k$ ($k \leq 3$). It is immediate that $c^1_{(2)}$ is equal to $1$, while all other $c^1_\mu$ are $0$. Then Theorem \[ThmIndC\] allows to compute: $$\begin{aligned}
c^2_{(1,1)}&= 1\cdot c^1_{(2)}=1 ;\\
c^2_{(2,2)}&= 2 c^1_{(4)} + c^1_{(2,1,1)} + 2 c^1_{(2,1)} + c^1_{(2)} = 1 ;\\
c^2_{(3)} &= 2 c^1_{(2,1)} + 2 c^1_{(2)} = 2;\\
c^3_{(2)} &= c^2_{(1,1)} = 1;\\
c^3_{(2,1)} &= 2 c^2_{(3)} = 4;\\
c^3_{(2,1,1)} &= 2 c^2_{(3,1)} + c^2_{(2,2)} = 1; \\
c^3_{(2,2,2)} &= c^2_{(2,2)}=1; \\
c^3_{(3,2)} & = c^2_{(3)} = 2; \\
c^3_{(4)} & = c^2_{(2,2)} + 2 c^2_{(3)}=5.\end{aligned}$$ Using equation , we can compute all coefficients $a^k_\rho$ for $k=2,3$ and we find the following class expansion (true for any $n\geq 1$): $$\begin{aligned}
h_2(J_1,\ldots,J_n) & = \delta_{n\geq 3}\ 2 {\mathcal{C}}_{(3,1^{n-3})} + \delta_{n \geq 4} \ {\mathcal{C}}_{(2,2,1^{n-4})} + \binom{n}{2} {\mathcal{C}}_{1^n};\\
h_3(J_1,\ldots,J_n) & = \delta_{n\geq 4}\ 5 {\mathcal{C}}_{(4,1^{n-4})} + \delta_{n \geq 5} \ 2 {\mathcal{C}}_{(3,2,1^{n-5})} + \delta_{n \geq 6} \ {\mathcal{C}}_{(2,2,2,1^{n-6})}\\
& \qquad + \delta_{n \geq 2} \ \left(\binom{n-2}{2} + 4 \binom{n-2}{1} + \binom{n-2}{0}\right) {\mathcal{C}}_{2,1^{n-2}}.\end{aligned}$$ This kind of results could also have been obtained with Theorem \[ThmNewInd\] but the computation is a little harder (it involves discrete integrals of polynomials).
Generating series for some coefficients
---------------------------------------
S. Matsumoto and J. Novak have computed, using character theory, the following generating function [@MatsumotoNovakMonomialJM Theorem 6.7].
For any integer $n \geq 2$, one has: $$\sum_{k} a^{k}_{(n)} z^k = \frac{\operatorname{Cat}_{n-1} z^{n-1}}{(1-1^2 z^2)(1-2^2 z^2)\dots(1-(n-1)^2 z^2)},$$ where $\operatorname{Cat}_{n-1} = \frac{1}{n}\binom{2(n-1)}{n-1}$ is the usual Catalan number.
As $a^{k}_{(n)}=c^k_{(n)}$, the same result holds on the $c$’s. Unfortunately, we are not able to find a proof of their formula [*via*]{} Theorem \[ThmIndC\], but the latter can be used to derive new results of the same kind.
For instance, with $\rho=(n-1)$ and $m=1$, our induction relation writes as $c_{(n-1,1)}^k=(n-1) \cdot c_{(n)}^{k-1}$ and thus $$\begin{gathered}
\sum_{k} c^{k}_{(n-1,1)} z^k = z \sum_{k} (n-1) c^{k-1}_{(n)} z^{k-1} \\
= \frac{(n-1) \operatorname{Cat}_{n-1} z^n}{(1-1^2 z^2)(1-2^2 z^2)\dots(1-(n-1)^2 z^2)}.\end{gathered}$$ In terms of $a$’s, this result implies: $$\begin{aligned}
\nonumber \sum_{k} a^{k}_{(n-1,1)} z^k &= \sum_{k} \left(c^k_{(n-1,1)} + c^k_{(n-1)}\right) z^k \\
&= \frac{(n-1) \operatorname{Cat}_{n-1} z^{n} + (1-(n-1)^2 z^2) \operatorname{Cat}_{n-2} z^{n-2}}{(1-1^2 z^2)(1-2^2 z^2)\dots(1-(n-1)^2 z^2)}. \end{aligned}$$ This expression is simpler than the one obtained by Matsumoto and Novak for the same quantity [@MatsumotoNovakMonomialJM Proposition 6.9] and their equivalence is not obvious at all.
If we want to go further and compute other generating series, one has to solve linear systems. For instance, denoting $F_{\mu}= \sum_{k} c^k_{\mu} z^k$, Theorem \[ThmIndC\] gives: $$\begin{aligned}
F_{(n-2,1,1)} &= z \left((n-2) F_{(n-1,1)} + F_{(n-2,2)}\right);\\
F_{(n-2,2)} &= z \left((n-2) F_{(n)} + F_{(n-2,1,1)} + F_{(n-2,1)} + F_{(n-2)} \right) .\end{aligned}$$ After resolution, one has: $$\begin{aligned}
F_{(n-2,1,1)} &= \frac{z^2 \left( n (n-2) F_{(n)} + z (n-2) F_{(n-1)} + F_{(n-2)} \right) }{1-z^2} ; \\
F_{(n-2,2)} &= \frac{ z \left( (n-2) F_{(n)} + F_{(n-2,1)} + F_{(n-2)} \right) + z^2 (n-2) F_{(n-1,1)} }{1-z^2}.\end{aligned}$$ Using the results above, one can deduce an explicit generating series for the $c$’s which can be easily transformed into series for the $a$’s.
Other symmetric functions {#SubsectOtherSym}
-------------------------
Even if we have focused so far on complete symmetric functions, our method works also with power-sums.
The induction equation should be replaced by: $$\begin{aligned}
p_{k}(J_{1},\dots,J_{n+1}) &= p_{k}(J_{1},\dots,J_{n}) + J_{n+1} p_{k-1}(J_{1},\dots,J_{n+1})\\
& \qquad -J_{n+1} p_{k-1}(J_{1},\dots,J_{n}).\end{aligned}$$ Then, using similar arguments to the ones of paragraph \[SubsectNewInd\], one gets the following induction relation: $$\begin{aligned}
a^{p_k}_{\rho \cup (m)} &= \delta_{m,1} a^{p_k}_{\rho}
+\sum_{1 \leq i \leq \ell(\rho)} \rho_i
a^{p_{k-1}}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)} \\
&\quad
+ \sum_{r+s=m \atop r,s \geq 1} a^{p_{k-1}}_{\rho \cup (r,s)}
-\delta_{m>1}\ a^{p_{k-1}}_{\rho \cup (m-1)}.
\end{aligned}$$
Unfortunately, we are not able to deal with a linear basis of the symmetric function ring (as the coefficients $a^F_{\lambda}$ depend linearly on $F$, this would solve the problem for all symmetric functions).
Analogs in the Hecke algebra of $(S_{2n},H_n)$ {#SectDoubleClass}
==============================================
In this section, we consider a slightly different problem, which happens to be the analog of the one of the previous section. It was first considered recently by P. Zinn-Justin [@ZinnJustinJMWeingarten] and S. Matsumoto [@MatsumotoOddJM] in connection with integrals over orthogonal groups.
Hecke algebra of $(S_{2n},H_n)$
-------------------------------
The results of this section are quite classical. A good survey, with a more representation-theoretical point of view, can be found in I.G. Macdonald’s book[@McDo Chapter 7].
Let us consider the symmetric group of even size $S_{2n}$, whose elements are seen as permutations of the set $\{1,\bar{1},\ldots,n,\bar{n}\}$. It contains the hyperoctahedral group which is the subgroup formed by permutations $\sigma \in S_{2n}$ such that $\overline{\sigma(i)}=\sigma(\bar{i})$ (by convention, $\overline{\bar{i}}=i$). We are interested in the double cosets $H_{n} \backslash S_{2n} / H_{n}$, *i.e.* the equivalence classes for the relation: $$\sigma \equiv \tau \text{ if and only if }\exists\ h,h' \in H_{n} \text{ s.t. } \sigma = h \tau h'.$$
Conjugacy classes in the symmetric group algebra can be characterized easily using cycle-types. We recall a similar result for the double cosets: they are characterized *via* coset-types.
\[DefCosetType\] Let $\sigma$ be a permutation of $S_{2n}$. Consider the following graph $G_{\sigma}$:
- its $2n$ vertices are labelled by $\{1,\bar{1},\ldots,n,\bar{n}\}$;
- we put a solid edge between $i$ and $\bar{i}$ and a dashed one between $\sigma(i)$ and $\sigma(\bar{i})$ for each $i$.
Forgetting the types of the edges, we obtain a graph with only vertices of degree $2$. Thus, it is a collection of cycles. Moreover, due to the bicoloration of edges, it is easy to see that all these cycles have an even length.
We call coset-type of $\sigma$ the partition $\mu$ such that the lengths of the cycles of $G_{\sigma}$ are equal to $2\mu_{1}, 2\mu_{2}, \dots$
Let $n=4$ and $\sigma$ be the following permutation: $$1 \mapsto 3,\ \bar{1} \mapsto 1,\ 2 \mapsto \bar{4},\ \bar{2} \mapsto \bar{3},\ 3 \mapsto \bar{2},\ \bar{3} \mapsto 2,\ 4\mapsto 4,\ \bar{4} \mapsto \bar{1}.$$ The corresponding graph $G_{\sigma}$ is drawn on figure \[FigGSigma\].
$$\includegraphics[width=5cm]{exGsigma}$$
This graph is the disjoint union of one cycle of length $6$ ($1,3,\bar{3},\bar{4},4,\bar{1}$) and one cycle of length $2$ ($2,\bar{2}$). Thus the coset-type of $\sigma$ is the integer partition $(3,1)$.
[@McDo section 7.1] Two permutations are in the same double coset if and only if their coset-types are the same.
If $\mu$ is a partition of $n$, we denote $${\mathcal{C}}^{(2)}_{\mu}=\sum_{\sigma \in S_{2n} \atop \text{coset-type}(\sigma)=\mu} \sigma \ \in {\mathbb{Z}}[S_{2n}].$$ It is immediate that the elements ${\mathcal{C}}^{(2)}_{\mu}$, when $\mu$ runs over partitions of $n$ span linearly a subalgebra $Z_n^{(2)}$ of ${\mathbb{Z}}[S_{2n}]$. Equivalently, one can define $Z_n^{(2)}$ as the algebra of functions on $S_{2n}$, invariant by left and right multiplication by an element of $H_{n}$, endowed with the convolution product $$f \star g(\sigma) = \sum_{\tau_1, \tau_2 \in S_{2n} \atop \tau_{1} \tau_{2}=\sigma}
f(\tau_{1}) g(\tau_{2}).$$ One can prove using representation theory [@McDo section 7.2] that this algebra is commutative (in other terms, $(S_{2n},H_{n})$ is a Gelfand pair).
Odd Jucys-Murphy elements
-------------------------
In this section we will look at symmetric functions in odd-indexed Jucys-Murphy elements in $S_{2n}$. Rewriting as permutations on the set $\{1,\bar{1},2,\bar{2},\dots,n,\bar{n}\}$ (ordered by $1<\bar{1}<2<\bar{2}<\dots<n<\bar{n}$), these elements are: $$J^{(2)}_{i} = \sum_{j=1,\bar{1},\dots,i-1,\overline{i-1}} (j\ i).$$ They were considered by P. Zinn-Justin [@ZinnJustinJMWeingarten] and then S. Matsumoto [@MatsumotoOddJM].
Let us consider also the following element in ${\mathbb{Q}}[S_{2n}]$: $$p_{n}= \sum_{h \in H_{n}} h.$$ Then the following result holds, which may be seen as an analog of the fact that symmetric functions in Jucys-Murphy elements are central in the symmetric group algebra.
If $F$ is a symmetric function, then: $$x_{n,F} := F(J^{(2)}_{1}, \dots, J_{n}^{(2)}) p_{n} = p_{n} F(J_{1}^{(2)},\dots,J_{n}^{(2)}).$$ Moreover $x_{n,F}$ belongs to the algebra $Z_n^{(2)}$.
The first step is to prove by induction that $$e_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_{n} = p_{n} e_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) = \sum_{\mu \vdash n \atop |\mu| - \ell(\mu) = k} {\mathcal{C}}^{(2)}_{\mu}.$$ The result follows for all $F$ by multiplication and linear combination. See [@ZinnJustinJMWeingarten Proposition 3] and [@MatsumotoOddJM Proposition 3.1] for details.
Inspired by the results of section \[SectSymGrpAlg\], we may look at the class expansion of $x_{n,F}$, *i.e.* the coefficients $b^{F}_{\mu}$ such that: $$F(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_{n} = \sum_{\mu \vdash n} b^{F}_{\mu} {\mathcal{C}}^{(2)}_{\mu}.$$ As seen in the sketch of proof for the proposition above, the $b$’s are easy to compute in the case of elementary functions.
In the following paragraph, we will establish some induction relations for the $b$’s in the case of complete symmetric functions. We focus on this case (and thus use the short notation $b^k_{\mu}=b^{h_{k}}_{\mu}$) because these coefficients appear in the computation of the asymptotic expansion of some integrals over the orthogonal group [@MatsumotoOddJM Theorem 7.3].
A simple induction relation
---------------------------
In this paragraph, using the same method as in subsection \[SubsectNewInd\], we prove the following induction formula for the $b$’s.
\[ThmNewInd2\] For any partition $\rho$ and positive integers $k$ and $m$, one has: $$\label{EqNewInd2}
b^{k}_{\rho \cup (m)} = \delta_{m,1} b^k_{\rho} +
2 \sum_{1 \leq i \leq \ell(\rho)} \rho_i
b^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}
+ \sum_{r+s=m \atop r,s \geq 1} b^{k-1}_{\rho \cup (r,s)}
+ (m-1) b^{k-1}_{\rho \cup (m)}.$$
As before, the starting point of our proof is an induction relation on complete symmetric functions: $$\begin{gathered}
h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) = h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)})\\
+ J^{(2)}_{n+1} h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}).\end{gathered}$$ Multiplying both sides by $p_{n+1}$, one has: $$\begin{gathered}
\label{EqRecH2}
h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) \cdot p_{n+1}=
h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) \cdot p_{n+1} \\
+ J^{(2)}_{n+1} h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) \cdot p_{n+1}.\end{gathered}$$
Let us begin with the case $m=1$. We choose a permutation $\sigma \in S_{2n}$ of coset-type $\rho$ and we denote by $\sigma'$ its image under the canonical embedding $S_{2n} \hookrightarrow S_{2n+2}$. It has coset-type $\rho \cup (1)$. By definition, $$[\sigma'] h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) p_{n+1} =
b^{k}_{\rho \cup (1)}. \label{EqCoef331}$$ For the second term, we write: $$\begin{gathered}
h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) \cdot p_{n+1} =
h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) \cdot p_n \\
\cdot \left(1 + (n+1\ \overline{n+1}) +
\sum_{i=1,\bar{1},\dots,n,\bar{n}} (n+1\ i) (\overline{n+1}\ \bar{i}) \right).\end{gathered}$$ Notice that $h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_{n}$ lies in the algebra ${\mathbb{Z}}[S_{2n}] \subset {\mathbb{Z}}[S_{2n+2}] $ and hence is a linear combination of permutations fixing $n+1$ and $\overline{n+1}$. For such permutations $\tau$, neither $\tau (n+1\ \overline{n+1})$ nor $\tau (n+1\ i) (\overline{n+1}\ \bar{i})$ can be equal to $\sigma'$ (these two permutations do not fix $n+1$ and $\overline{n+1}$). Therefore, $$[\sigma'] h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)})\cdot p_{n+1}
= [\sigma] h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) \cdot p_{n} = b^k_{\rho}.
\label{EqCoef332}$$
We still have to compute: $$\begin{gathered}
\label{EqTechniqueCoef}
[\sigma'] J^{(2)}_{n+1}
h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) \cdot p_{n+1} \\
= \sum_{j = 1,\bar{1},\dots,n,\bar{n}} [(n+1\ j) \sigma']
h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)}) \cdot p_{n+1} \\
= \sum_{j = 1,\bar{1},\dots,n,\bar{n}}
b^{k-1}_{\text{coset-type}((n+1\ j) \sigma')}.\end{gathered}$$ Let us look at the coset-type of $(n+1\ j) \sigma'$. Denote by $d_{j}$ (resp. $d_{n+1}$) the other extremity of the dashed edge of extremity $j$ (resp. $n+1$) in $G_{\sigma'}$ (see definition \[DefCosetType\]). Then the graph $G_{(n+1\ j) \sigma'}$ has exactly the same edges as $G_{\sigma'}$, except for $(j,d_{j})$ and $(n+1,d_{n+1})$, which are replaced by $(j,d_{n+1})$ and $(n+1,d_{j})$.
As $(n+1,\overline{n+1})$ is a loop of length $2$ in $G_{\sigma'}$, if we assume that $j$ was in a loop of size $2 \rho_{i}$, then these two loops are replaced by a loop of size $2 \rho_{i} + 2$ in $G_{(n+1\ j) \sigma'}$ (it is a particular case of the phenomenon drawn on Figure \[FigJoinLoop\]).
$$\begin{array}{c}
\includegraphics[width=5cm]{joinloopbefore}
\end{array}
\rightarrow
\begin{array}{c}
\includegraphics[width=5cm]{joinloopafter}
\end{array}$$
Therefore $(n+1\ j) \sigma'$ has coset-type $\rho \setminus (\rho_{i}) \cup (\rho_{i} +1)$. As there are $2\rho_i$ elements in the $i$-th loop of $G_{\sigma'}$, one obtains: $$\label{EqCoef333}
[\sigma'] J^{(2)}_{n+1} h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)})
\cdot p_{n+1} = 2 \sum_{1 \leq i \leq \ell(\rho)}
\rho_i b^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+1)}.$$ Putting together equations \[EqCoef331\], \[EqCoef332\] and \[EqCoef333\], we obtain the case $m=1$ of the theorem.
Let us consider now the case $m>1$. We choose a permutation $\sigma \in S_{2n+2}$ of coset-type $\rho \cup (m)$ such that $n+1$ is in a loop of size $2m$ in $G_{\sigma}$. As $m>1$, this implies that $\overline{\sigma^{-1}(n+1)} \neq \sigma^{-1}(\overline{n+1})$. On the other hand, if $\tau$ lies in ${\mathbb{Z}}[S_{2n}] \subset {\mathbb{Z}}[S_{2n+2}]$ and $i=1,\bar{1},\dots,n,\bar{n}$, one has: $$\begin{aligned}
\big( \tau (i\ n+1) (\overline{i}\ \overline{n+1}) \big)^{-1} (n+1) &= i ;\\
\big( \tau (i\ n+1) (\overline{i}\ \overline{n+1}) \big)^{-1} (\overline{n+1}) &= \overline{i}.\end{aligned}$$ Thus, $\sigma$ cannot be written as $\tau (i\ n+1) (\overline{i}\ \overline{n+1})$ with the conditions above. It cannot be equal to $\tau$ or written as $\tau (n+1\ \overline{n+1})$ either. Therefore, $$[\sigma] h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_{n+1} = 0.$$ As a consequence, one has: $$\begin{gathered}
b^k_{\rho \cup (m)} =
[\sigma] h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)})\cdot p_{n+1}\\
= [\sigma] J^{(2)}_{n+1} h_{k-1}(J_{1}^{(2)},\dots,J_{n}^{(2)},J_{n+1}^{(2)})
\cdot p_{n+1} \\
= \sum_{j = 1,\bar{1},\dots,n,\bar{n}}
b^{k-1}_{\text{coset-type}( ( n+1\ j) \sigma)}.\end{gathered}$$ and we have to look at the possible coset-types of $(n+1\ j) \sigma$ (equation \[EqTechniqueCoef\] is still true).
Let us number the loops of the graph $G_\sigma$ with the integers $1,2,\dots,\ell(\rho)+1$ such that the $i$-th loop has length $2 \rho_i$ for $i \leq \ell(\rho)$ and the $\ell(\rho)+1$-th loop is the one containing $n+1$. As before, the graph $G_{(n+1\ j) \sigma}$ is obtained from $G_{\sigma}$ by replacing edges $(j,d_{j})$ and $(n+1,d_{n+1})$ by $(j,d_{n+1})$ and $(n+1,d_{j})$. We distinguish three cases:
“join”
:   If $j$ lies in the $i$-th loop of $G_{\sigma}$, then $G_{(n+1\ j) \sigma}$ is obtained from $G_{\sigma}$ by erasing its $i$-th and $\ell(\rho)+1$-th loops and replacing them by a loop of size $2 (\rho_{i} +m)$ (see figure \[FigJoinLoop\]). In this case, $(n+1\ j) \sigma$ has coset-type $\rho \setminus (\rho_{i}) \cup (\rho_{i} +m)$.
As there are $2 \rho_i$ elements in the $i$-th loop of $G_\sigma$, one obtains: $$\sum_{j=1,\bar{1},\dots,n,\bar{n} \atop j \nsim_{G_\sigma} n+1}
b^{k-1}_{\text{coset-type}( ( n+1\ j) \sigma)}
= 2 \sum_{1 \leq i \leq \ell(\rho)} \rho_i
b^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)},$$ where $j \nsim_{G_\sigma} n+1$ means that $j$ and $n+1$ lie in different loops of $G_\sigma$.
“twist”
:   If $j$ lies in the $\ell(\rho)+1$-th loop of $G_{\sigma}$ and if the distance between $j$ and $n+1$ is odd, then $G_{(n+1\ j) \sigma}$ is obtained from $G_{\sigma}$ by the transformation drawn in figure \[FigTwist\]. In particular, in this case, $(n+1\ j) \sigma$ has the same coset-type as $\sigma$, that is $\rho \cup (m)$.
As $j$ cannot be equal to $\overline{n+1}$, there are $m-1$ possible values for $j$ in this case. Thus, $$\sum_{j=1,\bar{1},\dots,n,\bar{n} \atop {j \sim_{G_\sigma} n+1
\atop d_G(j,n+1) \text{ odd}}}
b^{k-1}_{\text{coset-type}( ( n+1\ j) \sigma)}
= (m-1) b^{k-1}_{\rho \cup (m)}.$$
$$\begin{array}{c}
\includegraphics[width=3.5cm]{twistbefore}
\end{array}
\rightarrow
\begin{array}{c}
\includegraphics[width=3.5cm]{twistafter}
\end{array}$$
“cut”
:   We consider now the case where $j$ lies in the $\ell(\rho)+1$-th loop of $G_{\sigma}$ and the distance between $j$ and $n+1$ is even. We choose an arbitrary orientation of the $\ell(\rho)+1$-th loop of $G_{\sigma}$ (the same for every $j$ in this situation) and we denote by $2h$ ($1\leq h \leq m-1$) the distance between $n+1$ and $j$ when following the loop along this direction. Then $G_{(n+1\ j) \sigma}$ is obtained from $G_{\sigma}$ by erasing its $\ell(\rho)+1$-th loop and replacing it by two loops of length $2h$ and $2(m-h)$ (see figure \[FigCutLoop\]). Thus, in this case, $(n+1\ j) \sigma$ has coset-type $\rho \cup (h,m-h)$.
There is exactly one integer $j$ for each integer $h$ between $1$ and $m-1$, so: $$\begin{gathered}
\sum_{j=1,\bar{1},\dots,n,\bar{n} \atop {j \sim_{G_\sigma} n+1
\atop d_G(j,n+1) \text{ even}} }
b^{k-1}_{\text{coset-type}( ( n+1\ j) \sigma)}
= \sum_{h=1}^{m-1} b^{k-1}_{\rho \cup (h,m-h)}
= \sum_{r+s=m \atop r,s \geq 1} b^{k-1}_{\rho \cup (r,s)}
\end{gathered}$$
$$\begin{array}{c}
\includegraphics[width=3.5cm]{cutloopbefore}
\end{array}
\rightarrow
\begin{array}{c}
\includegraphics[width=3.5cm]{cutloopafter}
\end{array}$$
Putting the different cases together, one has $$\begin{gathered}
b^k_{\rho \cup (m)}
= 2 \sum_{1 \leq i \leq \ell(\rho)} \rho_i b^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)} + \sum_{r+s=m \atop r,s \geq 1} b^{k-1}_{\rho \cup (r,s)} + (m-1) b^{k-1}_{\rho \cup (m)},
\end{gathered}$$ which is exactly what we wanted to prove.
As in section \[SectSymGrpAlg\], define coefficients $d^k_{\rho}$ as the solution of the sparse triangular system $$\label{EqLinkBD}
b^k_{\rho} = \sum_{i = 0}^{m_{1}(\rho)} d^k_{\bar{\rho} \cup 1^i}
\binom{m_{1}(\rho)}{i}.$$ Then, for a given $k$, only finitely many $d^k_\rho$ are non-zero (see [@MatsumotoOddJM Theorem 8.4]). But, unfortunately, we have no combinatorial interpretation in this case to obtain directly induction relations on $d$. This raises the question of the existence of a partial Hecke algebra of $(S_{2n},H_n)$, which is beyond the scope of this article.
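Concretely, the system \[EqLinkBD\] is triangular in the number of parts equal to $1$ and can be solved by binomial inversion. The sketch below illustrates this; the $b$-values used are synthetic placeholders (actual coefficients $b^k_\rho$ would require the group-algebra computation), so only the inversion mechanism is being demonstrated.

```python
from math import comb

def invert_binomial(b_row):
    """Given b[m] = sum_i C(m, i) * d[i] for m = 0..M,
    recover d by binomial inversion: d[m] = sum_i (-1)^(m-i) C(m, i) b[i]."""
    return [sum((-1) ** (m - i) * comb(m, i) * b_row[i] for i in range(m + 1))
            for m in range(len(b_row))]

# Round-trip check with synthetic (hypothetical) d-values:
d = [3, 0, 5, 1]
b = [sum(comb(m, i) * d[i] for i in range(m + 1)) for m in range(len(d))]
assert invert_binomial(b) == d
```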
As in the framework of the symmetric group algebra (paragraph \[SubsectOtherSym\]), the method extends easily to power-sum symmetric functions. More precisely, the following induction relation can be proved with similar arguments: $$\begin{gathered}
b^{p_{k}}_{\rho \cup (m)} = \delta_{m,1} b^{p_k}_{\rho} +
2 \sum_{1 \leq i \leq \ell(\rho)} \rho_i
b^{p_{k-1}}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}\\
+ \sum_{r+s=m \atop r,s \geq 1} b^{p_{k-1}}_{\rho \cup (r,s)}
+ (m-1) b^{p_{k-1}}_{\rho \cup (m)} - \delta_{m>1}\ b^{p_{k-1}}_{\rho \cup (m-1)}.
\end{gathered}$$
Subleading term
---------------
The induction relation proved in the previous paragraph is a good tool to study the leading and subleading terms of $h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_n$, that is, the coefficients $b^k_{\rho}$ with $|\rho| - \ell(\rho) = k$ or $k-1$. Indeed, an immediate induction shows that if the degree condition $|\rho| - \ell(\rho) \leq k$ is not satisfied, then $b^k_{\rho} =0$. We can also recover the following result proved by S. Matsumoto [@MatsumotoOddJM Theorem 5.4].
\[PropDominant\] If $\rho$ is a partition and $k$ an integer such that $|\rho| - \ell(\rho) = k$, then $$b^k_{\rho} = \prod_{i=1}^{\ell(\rho)} \operatorname{Cat}_{\rho_{i}-1}.$$
But our induction allows us to go further and to compute the subleading term (case $|\rho| - \ell(\rho) = k-1$), proving this way a conjecture of S. Matsumoto [@MatsumotoOddJM Conjecture 9.4] corresponding to the case where $\rho$ is a hook.
Before stating and proving our result (in paragraph \[SubsubsectProofMatsumoto\]), we need a few definitions and basic lemmas on the total area of Dyck paths (paragraph \[SubsubsectArea\]).
### Area of Dyck paths {#SubsubsectArea}
If $I=(i_{1},\dots,i_{r})$ is a weak composition (*i.e.* a sequence of non-negative integers), let us define ${\mathcal{P}}_I$ as the set of Dyck paths of semilength $k=i_{1}+ \dots + i_{r}$ (that is, with $2k$ steps) whose height after $2i_{1}$, $2(i_{1}+i_{2})$, … steps is zero (such a path is the concatenation of Dyck paths of semilengths $i_{1}$, $i_2$, …).
If $C$ is a subset of Dyck paths of a given length, denote by ${\mathfrak{A}}_{C}$ the sum over the paths $c$ in $C$ of the area ${\mathfrak{A}}_c$ under $c$. In the case $C={\mathcal{P}}_{I}$, we shorten the notation and denote ${\mathfrak{A}}_I={\mathfrak{A}}_{{\mathcal{P}}_{I}}$.
For a weak composition $I=(k)$ of length $1$, the set ${\mathcal{P}}_I$ is the set of all Dyck paths of semilength $k$. In this case, the area ${\mathfrak{A}}_k$ has a closed form, which has been computed by D. Merlini, R. Sprugnoli, and M. C. Verri in [@CatalanPathArea]: $${\mathfrak{A}}_k = 4^k - \binom{2k+1}{k}.$$
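This closed form is easy to check by brute force for small $k$. The sketch below enumerates all Dyck paths of semilength $k$ and measures the area of each step as a trapezoid (a step from height $h$ to $h\pm1$ contributes $h\pm\tfrac12$), one convention consistent with the formula above:

```python
from itertools import product
from math import comb

def dyck_paths(k):
    """All Dyck paths of semilength k: 2k steps of +1/-1, never below 0."""
    for steps in product((1, -1), repeat=2 * k):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        if ok and h == 0:
            yield steps

def area(path):
    """Area between the path and the x-axis; each step contributes the
    average of its endpoint heights (a trapezoid)."""
    h, twice_area = 0, 0
    for s in path:
        twice_area += h + (h + s)   # h_before + h_after
        h += s
    return twice_area // 2          # total is always an integer

for k in range(1, 7):
    paths = list(dyck_paths(k))
    assert len(paths) == comb(2 * k, k) // (k + 1)          # Catalan count
    assert sum(area(p) for p in paths) == 4 ** k - comb(2 * k + 1, k)
```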
The general case can be deduced easily, thanks to the following lemma:
Let $C_{1}$ and $C_{2}$ be subsets of the set of Dyck paths of length $2m$ and $2n$, respectively. Define $C \simeq C_{1} \times C_{2}$ to be the set of Dyck paths of length $2(m+n)$ which are the concatenation of a path in $C_{1}$ and a path in $C_{2}$. Then $${\mathfrak{A}}_{C} = {\mathfrak{A}}_{C_{1}} \cdot |C_{2}| + |C_{1}| \cdot {\mathfrak{A}}_{C_{2}}.$$
The area under a concatenation $c_1 \cdot c_2$ of two Dyck paths $c_1$ and $c_2$ is clearly equal to the sum of the areas under $c_1$ and $c_2$. Therefore: $$\begin{gathered}
{\mathfrak{A}}_C=\sum_{c_1 \in C_1 \atop c_2 \in C_2} {\mathfrak{A}}_{c_1}+{\mathfrak{A}}_{c_2}
=\sum_{c_1 \in C_1 \atop c_2 \in C_2} {\mathfrak{A}}_{c_1}
+ \sum_{c_1 \in C_1 \atop c_2 \in C_2} {\mathfrak{A}}_{c_2} \\
= |C_2| \sum_{c_1 \in C_1} {\mathfrak{A}}_{c_1} + |C_1|\sum_{c_2 \in C_2} {\mathfrak{A}}_{c_2}
= {\mathfrak{A}}_{C_{1}} \cdot |C_{2}| + |C_{1}| \cdot {\mathfrak{A}}_{C_{2}}.
\qedhere \end{gathered}$$
Recall the classical result that the cardinality of ${\mathcal{P}}_{(k)}$ is given by the Catalan number $\operatorname{Cat}_k := \frac{1}{k+1} \binom{2k}{k}$.
An immediate induction using the lemma above gives the following corollary.
For any weak composition $I$ of length $r$, one has: $${\mathfrak{A}}_I = \sum_{j=1}^{r} {\mathfrak{A}}_{i_{j}} \prod_{k \neq j} \operatorname{Cat}_{i_k}.$$
One will also need the following induction relation in the next paragraph.
\[LemArea\] If $m$ is a positive integer, one has: $${\mathfrak{A}}_{m-1} = (m-1) \operatorname{Cat}_{m-1} + \sum_{r+s=m \atop r,s \geq 1} \left( {\mathfrak{A}}_{r-1} \operatorname{Cat}_{s-1} + {\mathfrak{A}}_{s-1} \operatorname{Cat}_{r-1} \right).$$
This is a consequence of the usual first-return decomposition of Dyck paths. Indeed, let $c$ be a Dyck path of length $2(m-1)$. We denote by $2r$ the $x$-coordinate of the first point where the path touches the $x$-axis, and set $s=m-r$. Then $c$ is the concatenation of one climbing step, a Dyck path $c_{1}$ of length $2(r-1)$, a down step and a Dyck path $c_{2}$ of length $2(s-1)$, and this decomposition is of course bijective.
$$\includegraphics[width=6cm]{Catalan}$$
The area under $c$ is the sum of the areas under $c_{1}$ and $c_{2}$, plus $2r-1$ (see figure \[FigCatalan\]). So we write: $$\begin{gathered}
{\mathfrak{A}}_{m-1} = \sum_{r+s=m \atop r,s \geq 1}
\left[ \sum_{c_1 \in {\mathcal{P}}_{r-1} \atop c_2 \in {\mathcal{P}}_{s-1}}
{\mathfrak{A}}_{c_1} + {\mathfrak{A}}_{c_2}+(2r-1) \right] \\
= \sum_{r+s=m \atop r,s \geq 1}
\left[ \sum_{c_1 \in {\mathcal{P}}_{r-1} \atop c_2 \in {\mathcal{P}}_{s-1}} {\mathfrak{A}}_{c_1}
+ \sum_{c_1 \in {\mathcal{P}}_{r-1} \atop c_2 \in {\mathcal{P}}_{s-1}} {\mathfrak{A}}_{c_2}
+ \sum_{c_1 \in {\mathcal{P}}_{r-1} \atop c_2 \in {\mathcal{P}}_{s-1}} (2r-1) \right] \\
= \sum_{r+s=m \atop r,s \geq 1}
\big[ |{\mathcal{P}}_{s-1}| \sum_{c_1 \in {\mathcal{P}}_{r-1}} {\mathfrak{A}}_{c_1}
+ |{\mathcal{P}}_{r-1}| \sum_{c_2 \in {\mathcal{P}}_{s-1}} {\mathfrak{A}}_{c_2}
+ |{\mathcal{P}}_{s-1}| \cdot |{\mathcal{P}}_{r-1}| \cdot (2r-1) \big] \\
= \sum_{r+s=m \atop r,s \geq 1} \big[ {\mathfrak{A}}_{r-1} \operatorname{Cat}_{s-1} +
{\mathfrak{A}}_{s-1} \operatorname{Cat}_{r-1} + (2r-1) \operatorname{Cat}_{s-1} \operatorname{Cat}_{r-1} \big]. \end{gathered}$$ The last part of the sum may be symmetrized in $r$ and $s$: $$\begin{gathered}
\sum_{r+s=m \atop r,s \geq 1} (2r-1) \operatorname{Cat}_{s-1} \operatorname{Cat}_{r-1} = \sum_{r+s=m \atop r,s \geq 1} \frac{1}{2} (2r-1 + 2s -1) \operatorname{Cat}_{s-1} \operatorname{Cat}_{r-1}\\
= (m-1) \sum_{r+s=m \atop r,s \geq 1} \operatorname{Cat}_{s-1} \operatorname{Cat}_{r-1} = (m-1) \operatorname{Cat}_{m-1},\end{gathered}$$ which ends the proof of the lemma.
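Since both sides of the lemma have closed forms (via the Merlini–Sprugnoli–Verri formula above), the identity can also be checked numerically; a quick sketch:

```python
from math import comb

def cat(n):
    """Catalan number Cat_n."""
    return comb(2 * n, n) // (n + 1)

def A(k):
    """Total area of Dyck paths of semilength k (Merlini et al. formula)."""
    return 4 ** k - comb(2 * k + 1, k)

# Lemma [LemArea]: A_{m-1} = (m-1) Cat_{m-1}
#                  + sum_{r+s=m} (A_{r-1} Cat_{s-1} + A_{s-1} Cat_{r-1})
for m in range(1, 12):
    rhs = (m - 1) * cat(m - 1) + sum(
        A(r - 1) * cat(m - r - 1) + A(m - r - 1) * cat(r - 1)
        for r in range(1, m))
    assert A(m - 1) == rhs
```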
### Proof of a conjecture of Matsumoto {#SubsubsectProofMatsumoto}
Computing the subleading term of $h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)})\cdot p_n$ consists in computing the coefficient $b_\mu^k$ with $k=|\mu| - \ell(\mu)+1$. Therefore, for a partition $\mu$, we denote: $${\text{SD}}_\mu = b_\mu^{|\mu|-\ell(\mu)+1}.$$ We also denote by $\mu-\bm{1}$ the sequence $\mu_1-1,\mu_2-1,\dots,\mu_{\ell(\mu)}-1$. As some terms of the sequence can be equal to 0, this is not necessarily a partition, but it is a weak composition.
\[ThmSubLeading2\] Let $\mu$ be a partition. Then $${\text{SD}}_{\mu} = {\mathfrak{A}}_{\mu-\bm{1}}.$$
Let $\mu$ be a partition and $k=|\mu| - \ell(\mu)+1$.
Suppose that we write $\mu = \rho \cup (m)$, for some partition $\rho$ and integer $m$. We will apply Theorem \[ThmNewInd2\] to $\rho$ and $m$: $$
b^{k}_{\rho \cup (m)} = \delta_{m,1} b^k_{\rho} +
2 \sum_{1 \leq i \leq \ell(\rho)} \rho_i
b^{k-1}_{\rho \setminus (\rho_{i}) \cup (\rho_{i}+m)}
+ \sum_{r+s=m \atop r,s \geq 1} b^{k-1}_{\rho \cup (r,s)}
+ (m-1) b^{k-1}_{\rho \cup (m)}.$$
If $m=1$, the partition $\rho$ fulfills $$|\rho| - \ell(\rho) = (|\mu|-1) - (\ell(\mu)-1) = k-1$$ and thus $b^k_\rho={\text{SD}}_\rho$.
For any $i \leq \ell(\rho)$, the partition $\lambda = \rho \setminus (\rho_{i}) \cup (\rho_{i}+m)$ fulfills $$|\lambda|-\ell(\lambda) = |\mu| - (\ell(\mu) -1) = k$$ and therefore, thanks to the degree condition, $b_{\lambda}^{k-1} = 0.$
In a similar way, as $|\mu| - \ell(\mu)=k-1$, the coefficient $b^{k-1}_\mu$ is simply given by Proposition \[PropDominant\]: $$b^{k-1}_{\mu} = \prod_{i=1}^{\ell(\mu)} \operatorname{Cat}_{\mu_i-1}=\operatorname{Cat}_{m-1}
\prod_{i=1}^{\ell(\rho)} \operatorname{Cat}_{\rho_i-1}.$$
For the last term, for any $r,s \geq 1$ with $r+s=m$, the partition $\lambda=\rho \cup (r,s)$ fulfills: $$|\lambda|-\ell(\lambda) = |\mu| - (\ell(\mu) +1) = k-2.$$ Therefore, the coefficient $b_{\lambda}^{k-1}$ corresponds to a subleading term and $b_{\lambda}^{k-1}={\text{SD}}_\lambda$.
Finally, Theorem \[ThmNewInd2\] becomes in this case: $$\label{EqIndSD}
{\text{SD}}_{\rho \cup (m)} = \delta_{m,1} {\text{SD}}_\rho +
(m-1) \operatorname{Cat}_{m-1} \prod_{i=1}^{\ell(\rho)} \operatorname{Cat}_{\rho_i-1}
+ \sum_{r,s \geq 1 \atop r+s=m} {\text{SD}}_{\rho \cup (r,s)}$$
This equation gives an induction relation on the coefficients ${\text{SD}}_\rho$. We will prove that ${\text{SD}}_\mu={\mathfrak{A}}_{\mu-\bm{1}}$ by a double induction, first on the size $n$ of the partition $\mu$ and then on the smallest part of $\mu$.
For $n=1$, one has only the partition $\mu=(1)$ and ${\text{SD}}_{(1)}=b^1_{(1)}=0={\mathfrak{A}}_0$.
Fix now some $n>1$ and suppose that the theorem is true for all partitions of size smaller than $n$.
If $\mu = \rho \cup (1)$ is a partition of $n$ with smallest part equal to $1$, then, by equation \[EqIndSD\] and the induction hypothesis, one has: $${\text{SD}}_{\mu} = {\text{SD}}_\rho = {\mathfrak{A}}_{\rho-\bm{1}}={\mathfrak{A}}_{\mu-\bm{1}}.$$
Let $\mu$ be a partition of $n$ with smallest part $m>1$ and suppose that ${\text{SD}}_\mu={\mathfrak{A}}_{\mu-\bm{1}}$ for all partitions of $n$ with smallest part $m'<m$. We write $\mu=\rho \cup (m)$ ([*i.e.*]{} $\rho=\mu \backslash m$). By equation \[EqIndSD\], $${\text{SD}}_{\mu} = (m-1) \operatorname{Cat}_{m-1} \prod_{i=1}^{\ell(\rho)} \operatorname{Cat}_{\rho_{i}-1}
+ \sum_{r+s=m \atop r,s \geq 1} {\text{SD}}_{\rho \cup (r,s)}.$$ By induction, $$\begin{gathered}
{\text{SD}}_{\rho \cup (r,s)} = {\mathfrak{A}}_{(\rho \cup (r,s)) -\bm{1}}
= \operatorname{Cat}_{r-1} \operatorname{Cat}_{s-1} \left(\sum_{i} {\mathfrak{A}}_{\rho_{i} - 1} \prod_{j \neq i} \operatorname{Cat}_{\rho_{j}-1}\right) \\ +
{\mathfrak{A}}_{s-1} \operatorname{Cat}_{r-1} \prod_{i} \operatorname{Cat}_{\rho_{i} - 1} +
{\mathfrak{A}}_{r-1} \operatorname{Cat}_{s-1} \prod_{i} \operatorname{Cat}_{\rho_{i} - 1} .\end{gathered}$$ If we make the substitution in the previous equation, we obtain: $$\begin{gathered}
{\text{SD}}_{\mu} =\left(\sum_{r,s \geq 1 \atop r+s=m} \operatorname{Cat}_{r-1} \operatorname{Cat}_{s-1} \right) \left(\sum_{i} {\mathfrak{A}}_{\rho_{i} - 1}
\prod_{j \neq i} \operatorname{Cat}_{\rho_{j}-1}\right) \\
+ \left((m-1)\operatorname{Cat}_{m-1} + \sum_{r+s=m \atop r,s \geq 1} \big[ {\mathfrak{A}}_{s-1} \operatorname{Cat}_{r-1} + {\mathfrak{A}}_{r-1} \operatorname{Cat}_{s-1}\big]\right) \prod_{i} \operatorname{Cat}_{\rho_{i} - 1} .\end{gathered}$$ Therefore, using both Lemma \[LemArea\] and the classical Catalan recurrence $\sum_{r+s=m} \operatorname{Cat}_{r-1} \operatorname{Cat}_{s-1}=\operatorname{Cat}_{m-1}$, one has: $${\text{SD}}_{\mu} = \sum_{i} {\mathfrak{A}}_{\mu_{i} - 1}
\prod_{j \neq i} \operatorname{Cat}_{\mu_{j}-1} = {\mathfrak{A}}_{\mu - \bm{1}}.$$ Finally, for any partition $\mu$, one has ${\text{SD}}_{\mu} ={\mathfrak{A}}_{\mu-\bm{1}}$, which is exactly what we wanted to prove.
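The recursion \[EqIndSD\], together with the base case ${\text{SD}}_{(1)}=0$, determines all the ${\text{SD}}_\mu$, so Theorem \[ThmSubLeading2\] can be verified directly on small partitions. A sketch, with partitions encoded as tuples and $\operatorname{Cat}$, ${\mathfrak{A}}$ given by their closed forms:

```python
from math import comb

def cat(n):
    return comb(2 * n, n) // (n + 1)

def A(k):
    """Total area of Dyck paths of semilength k."""
    return 4 ** k - comb(2 * k + 1, k)

def area_formula(mu):
    """A_{mu - 1} = sum_j A(mu_j - 1) * prod_{k != j} Cat(mu_k - 1)."""
    total = 0
    for j in range(len(mu)):
        term = A(mu[j] - 1)
        for k in range(len(mu)):
            if k != j:
                term *= cat(mu[k] - 1)
        total += term
    return total

def SD(mu):
    """Subleading coefficient via the induction (EqIndSD), peeling off
    the smallest part at each step."""
    mu = tuple(sorted(mu, reverse=True))
    if not mu:
        return 0
    m, rho = mu[-1], mu[:-1]
    if m == 1:
        return SD(rho)
    prod_cat = 1
    for p in rho:
        prod_cat *= cat(p - 1)
    total = (m - 1) * cat(m - 1) * prod_cat
    for r in range(1, m):
        total += SD(rho + (r, m - r))
    return total

for mu in [(1,), (2,), (3,), (2, 1), (2, 2), (3, 2), (4,), (3, 1, 1)]:
    assert SD(mu) == area_formula(mu)
```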
S. Matsumoto established a deep connection between the coefficients $b^{k}_{\mu}$ and the asymptotic expansion of orthogonal Weingarten functions [@MatsumotoOddJM Theorem 7.3]. In particular, Theorem \[ThmSubLeading2\] gives the subleading term of some matrix integrals over the orthogonal group as the dimension of the group goes to infinity.
Towards a continuous deformation? {#SectGeneralisation}
=================================
The questions studied in sections \[SectSymGrpAlg\] and \[SectDoubleClass\] may seem quite different at first sight, but there exists a continuous deformation from one to the other.
We denote by ${\mathcal{Y}}_{n}$ the set of Young diagrams (or partitions) of size $n$. For any $\alpha > 0$, we consider two families of functions on ${\mathcal{Y}}_{n}$.
- First, we call $\alpha$-content of a box of the Young diagram $\lambda$ the quantity $\alpha(j-1) - (i-1)$, where $i$ is its row index and $j$ its column index. If $A^{(\alpha)}_{\lambda}$ stands for the multiset of the $\alpha$-contents of the boxes of $\lambda$, one can look at the evaluation of complete symmetric functions $h_{k}(A^{(\alpha)}_{\lambda})$.
- Second, we consider Jack polynomials, which form a basis of the ring of symmetric functions indexed by partitions and depending on a parameter $\alpha$ (they are deformations of Schur functions). The expansion of Jack polynomials on the power-sum basis $$J_{\lambda}^{(\alpha)} = \sum_\mu \theta^{(\alpha)}_{\mu}(\lambda) p_{\mu}$$ defines a family $\theta^{(\alpha)}_{\mu}$ of functions on ${\mathcal{Y}}_n$ (we use the same normalization and notation as in [@McDo Chapter 6] for Jack polynomials).
The functions $\theta^{(\alpha)}_{\mu}$, when $\mu$ runs over the partitions of $n$, form a basis of the algebra $Z_{n}$ of functions over ${\mathcal{Y}}_{n}$.
As the cardinality of this family corresponds to the dimension of the space, it is enough to prove that it spans $Z_n$. Let $f$ be a function on ${\mathcal{Y}}_n$.
For a fixed $\alpha$, Jack polynomials form a basis of symmetric functions, therefore there exist some coefficients $d_{\mu,\lambda}^{(\alpha)}$ such that: $$p_\mu =\sum_\lambda d_{\mu,\lambda}^{(\alpha)} J_\lambda^{(\alpha)}.$$ Let us define the scalar: $$c_\mu=\sum_\lambda d_{\mu,\lambda}^{(\alpha)} f(\lambda).$$ Then one has: $$\sum_\mu c_\mu \theta^{(\alpha)}_{\mu}(\lambda)
= \sum_{\mu,\nu} \left( d_{\mu,\nu}^{(\alpha)} \theta^{(\alpha)}_{\mu}(\lambda) \right) f(\nu)
= f(\lambda),$$ where the last equality comes from the fact that the matrices $(\theta^{(\alpha)}_{\mu}(\lambda))$ and $(d_{\mu,\lambda}^{(\alpha)})$ are by definition inverses of each other.
Finally, any function $f$ on ${\mathcal{Y}}_n$ can be written as a linear combination of $\theta^{(\alpha)}_{\mu}$.
This proposition is also a consequence of the fact that suitably chosen normalizations of $\theta^{(\alpha)}_{\mu}$, when $\mu$ runs over all partitions, form a linear basis of the algebra of $\alpha$-shifted symmetric functions (see [@LassalleJackMultirectangular Section 3]). However, such a sophisticated tool is not needed when $n$ is fixed.
The proposition implies the existence of some coefficients $a^{k,(\alpha)}_{\mu}$ such that: $$h_k(A_\lambda^{(\alpha)}) = \sum_\mu a^{k,(\alpha)}_\mu \theta^{(\alpha)}_\mu(\lambda).$$
For $\alpha=1$, using the action of Jucys-Murphy element on the Young basis [@Jucys1966] and the discrete Fourier transform of $S_{n}$, one can see that $a^{k,(1)}_\mu = a^{k}_\mu$.
For $\alpha=2$, using the identification between Jack polynomials for this special value of the parameter and zonal polynomials for the Gelfand pair $(S_{2n},H_{n})$ [@McDo Chapter 7], as well as the spherical expansion of $h_{k}(J_{1}^{(2)},\dots,J_{n}^{(2)}) p_n$ established by S. Matsumoto [@MatsumotoOddJM Theorem 4.1], one has $a^{k,(2)}_\mu = b^{k}_\mu$.
It is natural to wonder if there are results similar to Theorems \[ThmNewInd\] and \[ThmNewInd2\] in the general setting. Computer exploration using Sage [@sage] leads to the following conjecture:
\[ConjGenAlpha\] The coefficients $a_\rho^{k,(\alpha)}$ fulfill the linear relation: for any $m \geq 2$, $$a_{\rho \cup (m)}^{k,(\alpha)} = \sum_{r+s=m \atop r,s \geq 1}
a^{k-1,(\alpha)}_{\rho \cup (r,s)} + \alpha \sum_{1\leq i \leq \ell(\rho)}
\rho_i a^{k-1,(\alpha)}_{\rho \backslash \rho_i \cup (\rho_i + m)} +
(\alpha - 1) \cdot (m - 1)\ a_{\rho \cup (m)}^{k-1,(\alpha)}. \label{conj}$$
Unfortunately, as we do not have a combinatorial description of the basis $\theta_\mu^{(\alpha)}$ in the algebra $Z_n$, we are not able to prove it. With Lassalle’s algebraic approach, one can prove a generalization of Theorem \[ThmLassalle\] (see [@LassalleJM Section 11]) which is weaker than Conjecture \[ConjGenAlpha\]. Nevertheless, his formula is sufficient to compute inductively the $a_\rho^{k,(\alpha)}$ and has been used in our numerical exploration.
In the author’s opinion, this conjecture is a hint towards the existence of combinatorial constructions for other values of the parameter $\alpha$ (like the conjectures of papers [@GouldenJacksonMatchingJackConjecture; @LassalleJackMultirectangular; @LassalleJackFreeCumulants]).
Acknowledgments {#acknowledgments .unnumbered}
===============
This article has been partially written during a research visit to University of Waterloo. The author would like to thank Ian Goulden for his hospitality there. He also thanks Michel Lassalle, Sho Matsumoto, Jonathan Novak and Amarpreet Rattan for stimulating discussions on the subject.
---
abstract: 'The variation of the specific intensity across the stellar disc is an essential input parameter in surface brightness reconstruction techniques such as Doppler imaging, where the relative intensity contributions of different surface elements are important in detecting starspots. We use [phoenix]{} and [atlas]{} model atmospheres to model lightcurves derived from high precision (S/N $\simeq$ 5000) HST data of the eclipsing binary SV Cam (F9V + K4V), where the variation of specific intensity across the stellar disc will determine the contact points of the binary system lightcurve. For the first time we use $\chi^2$ comparison fits to the first derivative profiles to determine the best-fitting model atmosphere. We show the wavelength dependence of the limb darkening and that the first derivative profile is sensitive to the limb-darkening profile very close to the limb of the primary star. It is concluded that there is only a marginal difference ($<$ 1$\sigma$) between the $\chi^2$ comparison fits of the two model atmospheres to the HST lightcurve at all wavelengths. The usefulness of the second derivative of the lightcurve for measuring the sharpness of the primary’s limb is investigated, but we find that the data are too noisy to permit a quantitative analysis.'
bibliography:
- 'iau\_journals.bib'
- 'master.bib'
- 'ownrefs.bib'
title: 'Hubble Space Telescope Observations of SV Cam: II. First Derivative Lightcurve Modelling using [phoenix]{} and [atlas]{} Model Atmospheres'
---
\[firstpage\]
stars: activity, stars: spots, binaries: eclipsing, stars: atmospheres, methods: numerical
Introduction
============
Limb darkening effects in stellar atmospheres have important implications throughout stellar astrophysics where a determination of the surface brightness distribution is important. Recent work using Doppler imaging and micro-lensing events has shown that commonly-used analytical limb darkening laws fail to match stellar observations at the limb of the star [@thurl04; @barnes04aephe]. Other micro-lensing results [@fields03] show that the intensity predictions from model atmospheres are discrepant in the case of a K-giant.
Surface brightness reconstruction techniques such as Doppler imaging and eclipse mapping rely on the information content of surface areas with different distances from the rotation axis (see review by @camerondoppler01). To detect starspots Doppler imaging uses the relative intensity contributions, calculated from model atmospheres, of the different surface elements. To reconstruct an accurate surface brightness distribution it is essential to know how parameters, such as the limb darkening, can alter the intensity values across the stellar disc.
High inclination eclipsing binary systems can be used as probes to determine the variation of specific intensity at the stellar limb. If limb darkening showed a smooth transition in specific intensity at the limb of the star, the contact points of eclipses would appear less abrupt and slightly displaced in phase relative to models with limb darkening laws derived from plane-parallel atmospheres, where the cutoff is very sharp. The sharp cutoff in plane parallel atmospheres results from the optical depth of the rays being infinite at the limb.
In November 2001 we were awarded 9 orbits of HST/STIS time to eclipse-map the inner face of the F9V primary of the totally eclipsing binary SV Cam. SV Cam (F9V+K4V) is a synchronously rotating RS CVn binary with a period of 0.59d. We obtained spectrophotometric lightcurves of 3 primary eclipses with a signal-to-noise ratio of 5000. The first analysis of these data, by @jeffersem05, determined the radii of the primary and secondary stars. When the resulting lightcurve was subtracted from the observed data, the residual lightcurve showed strong peaks at phases of contact. @jeffersem05 then showed that these mismatches are reduced significantly, but not eliminated, when a polar cap and a reduction in the photospheric temperature, to synthesise high spot coverage, are imposed on the image.
As there is a significant temperature difference between the primary and secondary stars, the secondary star acts as a dark occulting disc as it eclipses the primary star. The variation of brightness as a function of phase reflects the degree of limb darkening on the primary star. In this paper we determine the best-fitting model atmosphere by fitting the models to the brightness variations as the secondary scans the inner face of the primary star, using the first and second derivatives of the HST lightcurves in 10 wavelength bands. We discuss the implications of these results for Doppler imaging.
Model atmospheres
=================
In this paper, two well-established stellar atmosphere codes are used: the [phoenix]{} model atmosphere code [@hauschildt99], which uses spherical atmospheres, and the [atlas]{} model atmosphere code [@kurucz94cdrom], which uses plane-parallel atmospheres.
[atlas]{}
---------
We use the plane-parallel ATLAS9 model atmospheres from the Kurucz CD-ROMs [@kurucz94cdrom]. We integrate the intensity values over the wavelength range of our observations, 2900Å to 5700Å. We use temperature models from 3500K to 6500K, at 250K intervals, across 17 limb angles. The limb angle $\mu$ is defined by $\mu$=cos$\theta$, where $\theta$ is the angle between the line of sight and the normal vector of the local surface element. The treatment of convection is based on the mixing-length theory with approximate overshoot [@castelli97], with a mixing length to scale height ratio of 1.25. The variation of specific intensity, i.e. at $\mu$=1, as a function of wavelength and limb angle is shown in Figure \[centintwav\].
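For illustration, the limb-angle convention above can be paired with one of the commonly-used analytical limb-darkening laws mentioned in the introduction. The quadratic law sketched below uses illustrative coefficients (not values fitted to SV Cam or to either model atmosphere) and shows how the relative intensity falls from disc centre ($\mu$=1) towards the limb ($\mu$=0) while keeping the sharp plane-parallel-style cutoff:

```python
import math

def quadratic_limb_darkening(theta, a=0.5, b=0.2):
    """Relative intensity I(mu)/I(1) for a quadratic limb-darkening law,
    I(mu)/I(1) = 1 - a*(1 - mu) - b*(1 - mu)**2, where mu = cos(theta)
    and theta is the angle between the line of sight and the local
    surface normal.  The coefficients a and b are illustrative only."""
    mu = math.cos(theta)
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

# Disc centre (theta = 0, mu = 1) is brightest; intensity decreases
# monotonically towards the limb, where an analytical law of this form
# cuts off sharply with a finite value, as in a plane-parallel model.
assert quadratic_limb_darkening(0.0) == 1.0
assert abs(quadratic_limb_darkening(math.pi / 2) - 0.3) < 1e-9
```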
[phoenix]{}
-----------
The general input physics set-up of the [phoenix]{} model atmosphere code is discussed in @hauschildt99. The main advantage of using this code is that it is based on spherical geometry (spherical radiative transfer) in LTE, rather than the traditional plane-parallel structure. NLTE effects are considered to be insignificant in this application.
The synthetic spectra are based on an extension of the grid of PHOENIX model atmospheres described by @allard00. This extended grid includes the surface gravities $\log(g) > 3.0$ needed for main sequence stars. These models are as described by @hauschildt99, but include an updated molecular line list. The models are computed in spherical geometry with full atomic and molecular line blanketing, using solar elemental abundances. In these models, the stellar mass is 0.5 M$_\odot$ and the convection treatment assumes a mixing-length to pressure scale height ratio of 2 with no overshooting. There are 117 synthetic spectra in total. The effective temperature runs from 2700K to 6500K in 100K steps at three surface gravities: $\log(g)$ = 4.0, 4.5, and 5.0. The wavelength resolution of these synthetic spectra is 1Å.
The variation of specific intensity as a function of limb angle is shown in Figure \[centintwav\]. The difference between the [phoenix]{} and the [atlas]{} model atmospheres at the limb of the star results primarily from the effects of spherical geometry of the [phoenix]{} model. In models using spherical geometry, there is a finite optical depth for rays close to the limb, while with plane parallel models the optical depth of the rays is infinite at the limb providing significant intensities down to $\mu$=0. Other contributing effects include overshooting and the mixing length ratio.
In addition to the limb effects, Fig. \[centintwav\] shows that [atlas]{} intensity profiles are overall brighter than [phoenix]{} intensity profiles. We attribute this difference to the use of overshooting in the [atlas]{} models. [atlas]{} models with overshooting have been shown to have weaker limb darkening in the blue relative to models without overshooting; for instance, overshooting models have been shown to fit solar limb darkening observations better than non-overshooting models [@castelli97]. A similar result holds for Procyon (F5 IV-V); see [@aufdenberg05] for a detailed study of the effects of 1-D and 3-D convection on limb darkening.
\[centintwav\]
Model lightcurves
-----------------
We use the eclipse-mapping code DoTS [@cameron97dots] to model the primary eclipse lightcurve for each model atmosphere, and to determine which model atmosphere best fits the complete HST lightcurve. The input data for each model comprise: the variation of the specific intensity as a function of limb angle, as shown in Figure \[centintwav\] for both model atmospheres considered; the primary and secondary radii; a reduced primary photospheric temperature; and a polar spot. Gravity darkening is also included, following the description of [@cameron97dots].
We include a reduced photospheric temperature to account for a stellar surface that is peppered with small spots which are too small to be resolved through eclipse mapping. Following the results of @jeffersem05 we include a polar cap in the modelled lightcurve to optimise the fit of the lightcurve to the data. The method for modelling the photometric lightcurve to include a reduced photospheric temperature and a polar cap is described in Appendix A for the [atlas]{} model atmosphere. The binary system parameters are summarised in Table \[t-param\]. The lightcurve solutions for the two model atmospheres are in good agreement.
HST Observations
================
----------- ---------------------- ----------- ---------- -------------- ------------ --
[Visit]{} [Obs. Date]{} [UT]{} [UT]{} [Exposure]{} [No of]{}
[Start]{} [End]{} [Time(s)]{} [Frames]{}
1 [01 November 2001]{} 20:55:56 01:00:17 30 165
2 [03 November 2001]{} 14:34:29 18:38:21 30 165
3 [05 November 2001]{} 09:49:17 13:52:22 30 165
----------- ---------------------- ----------- ---------- -------------- ------------ --
: HST Observations of SV Cam, where the exposure time is per frame.
\[obsl\]
Three primary eclipses of SV Cam were observed by the HST, using the Space Telescope Imaging Spectrograph with the G430L grating. The observations used 9 spacecraft orbits and spanned 5 days, at 2-day intervals from 1-5 November 2001, as shown in Table \[obsl\]. Summing the recorded counts over the observed wavelength range 2900Å to 5700Å yields a photometric lightcurve. Figure \[visit\] shows the photometry of the 3 eclipses observed during the 9 orbits. The observations have a cadence of 40s and a photometric precision of 0.0002 magnitudes (S/N 5000) per 30s exposure. The observations and the data reduction method are explained in greater detail in @jeffersem05.
\[t-param\]
\[visit\]
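The quoted precision follows from the usual small-error relation between flux signal-to-noise and magnitude error, $\sigma_m \approx (2.5/\ln 10)/({\rm S/N})$; a one-line check (our notation, not from the paper):

```python
import math

def mag_precision(snr):
    """Approximate 1-sigma magnitude error for a given flux S/N."""
    return 2.5 / math.log(10) / snr

# S/N ~ 5000 per 30 s exposure corresponds to ~0.0002 mag, as quoted.
print(round(mag_precision(5000), 4))  # -> 0.0002
```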
------------- ------- ------ ------------- --------------- -------------------- -------------------- --------------
Model Grid Log g L/H Fitted Spotted Primary Secondary Polar Spot
Temp (K) Temp. (K) Radius (r$_\odot$) Radius (r$_\odot$) (degrees)
[phoenix]{} 4.5 2.0 6038$\pm$58 5935$\pm$28 K 1.235 $\pm$ 0.003 0.727 $\pm$ 0.003 46.5 $\pm$ 8
[atlas]{} 4.5 1.25 5972$\pm$59 5840$\pm$53 K 1.241 $\pm$ 0.003 0.729 $\pm$ 0.002 45.7 $\pm$ 9
------------- ------- ------ ------------- --------------- -------------------- -------------------- --------------
Wavelength dependence of limb darkening
=======================================
\[1to10\]
\[1n10\]
\[fd1to10\]
\[fnld\]
\[fd2n10\]
\[phx-fd1n10\]
\[pp-fd1n10\]
The variation of the HST lightcurve with wavelength was determined by phasing the three primary eclipses together and dividing each spectrum into 10 bands of equal flux, as shown in Figure \[1to10\]. The increased fluctuations at shorter wavelengths are consistent with the presence of signatures of bright magnetic activity, as they are strongest in the bluest wavelength band. The fluctuations at blue wavelengths are not a short-timescale phenomenon, but rather a mismatch in flux levels arising from a change in the star’s UV flux between the second and third visits.
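The division into bands of equal flux can be sketched as follows (a hedged outline, not the actual reduction code): normalise the cumulative flux and read off the wavelengths where it crosses multiples of 1/10.

```python
import numpy as np

def equal_flux_band_edges(wavelength, flux, n_bands=10):
    """Return n_bands+1 wavelength edges of contiguous bands that each
    contain (approximately) the same integrated flux."""
    cum = np.cumsum(flux, dtype=float)
    cum /= cum[-1]                                # normalise to [0, 1]
    targets = np.linspace(0.0, 1.0, n_bands + 1)  # 0, 1/10, ..., 1
    return np.interp(targets, cum, wavelength)
```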
The curvature of the eclipse profile between second and third contact illustrates clearly that limb darkening increases towards shorter wavelengths. The wavelength dependence of limb darkening is revealed by the models collated by @claret00ldc4, but more fundamentally by direct observations of the Sun, for example @neckel94. Both the brightness variation and the variation of the limb darkening with wavelength are illustrated by plotting the bluest and reddest wavelength bands, as shown in Figure \[1n10\].
First Derivative Profiles of Lightcurves
========================================
Basic properties
----------------
As the primary star dominates the light of the binary system, the cooler secondary acts as a dark occulting disc scanning across the equatorial region of the primary during eclipse. The variation of the specific intensity as a function of limb angle across the primary star shows the degree of limb darkening.
To determine which model atmosphere best fits the HST lightcurves it is necessary to examine the degree of curvature in the eclipse profile. We use the first derivative, with respect to phase, of the lightcurves to determine the rate of change of the eclipsed flux with phase. This is the first time that this method has been applied to photometric data of an eclipsing binary system. The numerical derivative is computed using the Interactive Data Language (IDL) routines [deriv]{} and [derivsig]{}, which employ 3-point Lagrangian interpolation.
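A self-contained Python equivalent of that 3-point Lagrangian scheme (the IDL routines themselves are not reproduced here) might look like the following; it handles unevenly spaced abscissae and is exact for quadratics:

```python
import numpy as np

def deriv3(x, y):
    """Numerical derivative dy/dx via 3-point Lagrangian interpolation,
    analogous to the IDL `deriv` routine."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    d = np.empty_like(y)
    x0, x1, x2 = x[:-2], x[1:-1], x[2:]
    y0, y1, y2 = y[:-2], y[1:-1], y[2:]
    # Interior points: derivative at x1 of the parabola through the
    # three neighbouring samples.
    d[1:-1] = (y0 * (x1 - x2) / ((x0 - x1) * (x0 - x2))
               + y1 * (2 * x1 - x0 - x2) / ((x1 - x0) * (x1 - x2))
               + y2 * (x1 - x0) / ((x2 - x0) * (x2 - x1)))
    # End points: evaluate the first/last parabola at the boundary.
    d[0] = (y[0] * (2 * x[0] - x[1] - x[2]) / ((x[0] - x[1]) * (x[0] - x[2]))
            + y[1] * (x[0] - x[2]) / ((x[1] - x[0]) * (x[1] - x[2]))
            + y[2] * (x[0] - x[1]) / ((x[2] - x[0]) * (x[2] - x[1])))
    d[-1] = (y[-3] * (x[-1] - x[-2]) / ((x[-3] - x[-2]) * (x[-3] - x[-1]))
             + y[-2] * (x[-1] - x[-3]) / ((x[-2] - x[-3]) * (x[-2] - x[-1]))
             + y[-1] * (2 * x[-1] - x[-3] - x[-2])
               / ((x[-1] - x[-3]) * (x[-1] - x[-2])))
    return d
```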
The profiles of the first derivative for each of the 10 HST wavebands are shown in Figure \[fd1to10\]. The first contact point is at phase 0.906$\pm$0.003, the second at phase 0.977$\pm$0.003, the third at 1.023$\pm$0.003, and the fourth at 1.093$\pm$0.003. The gradient of the first derivative profile between the second and third contact points is indicative of the degree of limb darkening in the lightcurve. In the first derivative of the model with no limb darkening, the profile in this region is flat (Figure \[fnld\]). The variation of limb darkening with wavelength is illustrated in Figure \[fd2n10\], where the first derivative lightcurves from the longest wavelength (5596Å) and the second shortest wavelength (3707Å) are plotted between the second and third contact points. The first derivative profile of the shortest wavelength contains too much deviation, caused by a bright feature on the primary’s surface, to provide a clear example.
The variation of the first derivative profile with wavelength is also visible in the profiles using the [phoenix]{} model atmosphere, as shown in Figure \[phx-fd1n10\]. In contrast, Figure \[pp-fd1n10\] shows the same first derivative profiles based on the [atlas]{} model atmosphere. In the [atlas]{} first derivative profiles there is more variation between the two wavelength bands than in those of the [phoenix]{} model atmosphere. The variation with wavelength relates directly to the variation of specific intensity with wavelength, as previously shown in Figure \[centintwav\].
First derivative lightcurve fitting
-----------------------------------
The first derivative profiles of the HST observations, in each of the 10 wavelength bands (Figure \[fd1to10\]), are fitted using reduced $\chi^2$ comparison fits to the first derivatives of the [phoenix]{} and [atlas]{} model atmospheres at the same wavelengths as the observations. In this analysis the larger error bars are those of the models, given the large uncertainties shown in Table \[t-param\] and the finite number of planar elements used in the modelling of the lightcurves. We estimate these errors to be of the order of 1%.
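The comparison statistic can be written down directly; a minimal sketch, in which the ~1% model errors stated above would enter as `sigma` (names are ours):

```python
import numpy as np

def reduced_chi2(obs, model, sigma, n_params=0):
    """Reduced chi-squared between an observed and a model first
    derivative profile; `sigma` holds the dominant (model) errors."""
    obs, model, sigma = map(np.asarray, (obs, model, sigma))
    dof = obs.size - n_params
    return float(np.sum(((obs - model) / sigma) ** 2) / dof)
```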
The fits to the observations are determined using (i) the totally eclipsed section of the lightcurve between the second and third contact points only, and (ii) the region of the eclipse between first and fourth contact. Figures \[chifit\] and \[chifitall\] show the results for cases (i) and (ii) respectively, at a wavelength of 4535Å. The $\chi^2$ values for all of the lightcurves are summarised in Table \[tchires\].
The results show that between first and fourth contact [phoenix]{} gives the best fit, except for the wavelength bands centred at 3198Å and 3707Å. However, the best fitting model in the region of total eclipse between second and third contact shows a wavelength dependence: for the wavebands centred at 4313Å, 4539Å and 4743Å, [atlas]{} provides the best fit, while [phoenix]{} is the best fitting model atmosphere at shorter and longer wavelengths.
Wavelength [phoenix]{}(1) [atlas]{}(1) [phoenix]{}(2) [atlas]{}(2)
------------ ---------------- -------------- ---------------- --------------
3198 367.27 374.69 285.64 286.85
3707 8.41 8.82 7.73 7.66
4052 5.82 7.19 6.12 6.39
4313 5.70 5.49 4.90 5.26
4539 4.84 4.93 4.81 5.11
4753 5.00 4.21 5.04 4.81
4952 6.08 5.57 4.56 4.83
5169 4.60 5.56 4.53 5.11
5384 4.50 4.55 4.91 5.25
5598 5.09 5.53 6.62 7.06
: Best fitting reduced $\chi^2$ values for lightcurves fitted in the regions of the primary eclipse profile between (1) first and fourth contact, and (2) second and third contact. This table is shown graphically in Figure \[chi1t4\].
\[tchires\]
\[chifit\]
\[chifitall\]
\[sderiv\]
\[chi1t4\]
Second derivative lightcurve fitting
====================================
Having used the first derivative of the lightcurve to determine the rate of change of the slope in the eclipse profile, we now determine the rate of change of the first derivative profile, i.e. the second derivative of the eclipse profile. The signal-to-noise ratio of the observed data set was insufficient to determine useful second derivative profiles for each of the 10 HST lightcurves. Instead we take the second derivative of the complete HST lightcurve and fit [phoenix]{} and [atlas]{} model atmospheres centred at 4670Å. We fit the second derivative profile between first and fourth contact, as between second and third contact we would only be fitting numerical noise.
The best fitting model atmosphere is [phoenix]{} with a relative $\chi^2$ of 7.02, while the [atlas]{} model atmosphere has a relative $\chi^2$ of 8.73. The fit to the second derivative profile for the [phoenix]{} and the [atlas]{} model atmospheres are shown in Figure \[sderiv\]. The jitter in the models is caused by numerical noise arising from the finite number of planar surface elements used to model the star in the synthesis code.
Discussion and Conclusions
==========================
The best-fitting geometric parameters determined using [phoenix]{} and [atlas]{} model atmospheres are in good agreement. As shown in Figure \[centintwav\], the predicted specific intensity for the [atlas]{} models is greater at the limb than for the [phoenix]{} models, which consequently makes the fitted primary star larger. To compensate for this, the best-fitting temperature of the primary star is slightly cooler than that fitted using [phoenix]{} models. In this analysis we solved for the geometric system parameters using the complete lightcurve, rather than solving for the parameters individually for each of the 10 sub-lightcurves. From Figure \[centintwav\], we would expect the best-fitting radii, solved using [phoenix]{} and [atlas]{}, to be closer at redder wavelengths than at bluer wavelengths, but they would not differ enough to alter the conclusions of this analysis.
We have shown the wavelength dependence of limb darkening by sub-dividing the HST lightcurve into 10 bands of equal flux. The variation of flux between first and fourth contact shows that the limb darkening decreases towards longer wavelengths, confirming published limb darkening values, for example those of @claret00ldc4, as observed on the Sun [@neckel94] and interferometrically in K giants by @mozurkewich03. The splitting of the HST lightcurve into 10 wavelength bands also highlights the presence of a time-variable bright feature, possibly an active region or plage on the surface of the primary star, visible in the bluest wavelength band (3198Å). The temporal variation of the bright feature is of the order of 3 days, as it comes into view on the primary star between the second and third HST visits.
The ratio of the temperatures of the two stars has the effect that the secondary star acts as a dark occulting disc that scans the surface of the primary star. Both during partial eclipse (i.e. between first and second, and third and fourth contacts) and during total eclipse (between second and third contacts), the curvature of the lightcurve provides information about the variation of the specific intensity with limb angle on the primary star. The first derivative profile for each of the 10 HST wavelength bands clearly indicates the change in slope as a function of phase. Figure \[fd2n10\] shows the wavelength variation of the gradient of the first derivative profile of the HST lightcurves centred at 3707Å and 5596Å. The slope at the shorter wavelength is steeper than at the longer wavelength, consistent with the limb darkening decreasing towards longer wavelengths.
The best fitting model atmosphere is determined using a $\chi^2$ comparison fit. The first derivative profiles of the modelled lightcurves, generated with [phoenix]{} and [atlas]{} model atmospheres, were fitted to the HST lightcurves. The majority of the fits differ by less than 1$\sigma$, making the differences between the two models largely insignificant.
Surface brightness reconstruction techniques such as Doppler imaging and eclipse mapping rely on the information content of surface areas at different distances from the rotation axis (see the review by @camerondoppler01). To detect starspots, Doppler imaging uses the relative intensity contributions, calculated from model atmospheres, to represent the different surface elements. To reconstruct an accurate surface brightness distribution it is essential to know how parameters such as the limb darkening can alter the intensity values across the stellar disc. To date there are many surface brightness images reconstructed using Doppler imaging and eclipse mapping techniques on stars with spectral types similar to SV Cam. Examples include: He 699, G2V [@jeffers02]; AE Phe, G0V+F8V [@barnes04aephe]; LQ Lup, G2V [@donati00rxj1508]; and R58, G2V [@marsden05], all reconstructed using [atlas]{} plane-parallel model atmospheres. The lightcurve modelling results of this work clearly show that there is no distinguishable difference between the two models using the high signal-to-noise SV Cam observations.
The first derivative profiles show a small excess in the observed flux at phase $\approx$ 0.9825 (just after second contact), compared with the fitted model atmospheres. In contrast, there is a slight decrease in the observed flux at phase 1.015, i.e. just before third contact. The increase and decrease of the first derivative at these points could be evidence for additional emission. As this light excess is located just after the second contact point, and the reverse just before the third contact point, it could indicate that the very edge of the secondary’s limb is transparent to the light of the primary star.
The signal-to-noise ratio of this data set was not high enough to determine useful second derivative profiles for each of the 10 HST lightcurves. Instead we take the second derivative of the complete HST lightcurve and fit [phoenix]{} and [atlas]{} model atmospheres centred at 4670Å to the observed lightcurve. The jitter in the models is caused by numerical noise arising from the finite number of planar surface elements used to model the star in the synthesis code. We fit the lightcurve between first and fourth contact as there is insufficient structure to fit between second and third contact. The results show that the [phoenix]{} model atmosphere code gives a marginally better fit at the limb of the star. However, [phoenix]{} does not provide an exact fit, which could indicate that the observed cut-off in the limb intensity is steeper than predicted. This could explain why @jeffersem05 could not completely remove the strong discontinuities in the observed minus computed residual.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank J.R.Barnes for useful discussions. SVJ acknowledges support from a PPARC research studentship and a scholarship from the University of St Andrews while at St Andrews University, and currently acknowledges support at OMP from a personal Marie Curie Intra-European Fellowship within the 6$^{th}$ European Community Framework Programme.
JPA was funded in part by a Harvard-Smithsonian CfA Postdoctoral Fellowship and in part under contract with the Jet Propulsion Laboratory (JPL) funded by NASA through the Michelson Fellowship Program. JPL is managed for NASA by the California Institute of Technology.
Determination of the percentage of spot coverage and polar cap temperature for ATLAS model atmospheres
======================================================================================================
In this appendix we describe the method used to determine the reduced photospheric temperature, due to the presence of many unresolvable spots, and the polar cap size, as quoted in Table \[t-param\] for the ATLAS model atmospheres. In a related paper, @jeffersem05 showed that in order to fit the HST lightcurve of SV Cam it was necessary to reduce the photospheric temperature, to mimic the peppering of the primary star’s surface with small starspots, and to include a polar cap. In this appendix we determine the best-fitting lightcurve solution to the SV Cam lightcurve using the [atlas]{} model atmosphere. It is important to do this so as not to introduce an inherent bias into the results of this paper.
Unresolved spot coverage
------------------------
In this section we determine the unresolved spot coverage, following the method of [@jefferspc05]. We assume that the unresolved spot coverage comprises many small spots peppering the primary star, together with a polar cap, neither of which can be resolved from a photometric lightcurve.
### Temperature Fitting
In the method of [@jefferspc05] the combination of the SV Cam lightcurve and the Hipparcos parallax is used to determine the primary and secondary temperatures. Knowing the radii of the two stars we can evaluate the flux contribution from the secondary star relative to that of the primary star. The best-fitting combination of primary and secondary temperatures is determined using $\chi^2$ minimisation, where a scaling factor $\gamma$ is included to ensure that the shape of the spectrum is fitted rather than the absolute flux levels. The resulting $\chi^2$ landscape plot is shown in Figure \[cont\_at\]. The minimum value occurs at 5973$\pm$31K and 4831$\pm$103K for the primary and secondary stars respectively.
\[cont\_at\]
The temperature of the primary star is then determined by isolating the primary star’s spectrum. To achieve this we subtracted a spectrum outside of the primary eclipse from one inside of the eclipse which results in the spectrum of the primary star but with the radius of the secondary star. The temperature is fitted using $\chi^2$ minimisation, where the minimum temperature is determined by a parabolic fit. This results in a minimum primary temperature of 5872$\pm$59K (Figure \[pri\_temp\_atls\]), with the errors determined by setting $\Delta\chi^2$=1.
\[pri\_temp\_atls\]
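The parabolic-fit step can be sketched as follows: for $\chi^2(T) \approx a(T-T_0)^2 + c$ near the minimum, the best-fitting temperature is $T_0$ and the $\Delta\chi^2=1$ error is $1/\sqrt{a}$. The function below is our sketch, not the paper's code:

```python
import numpy as np

def parabolic_minimum(temps, chi2):
    """Best-fitting temperature and 1-sigma (delta chi^2 = 1) error
    from a parabolic fit chi^2(T) = a*(T - T0)^2 + c."""
    t = np.asarray(temps, float)
    c2 = np.asarray(chi2, float)
    tm = t.mean()                        # centre abscissae for stability
    a, b, _ = np.polyfit(t - tm, c2, 2)  # fit the parabola
    t0 = tm - b / (2.0 * a)              # location of the minimum
    sigma = 1.0 / np.sqrt(a)             # delta chi^2 = 1 half-width
    return t0, sigma
```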
### Fractional Starspot Coverage
Following the conclusions of [@jefferspc05] we attribute the missing flux to be indicative of small unresolvable spots on the primary star’s surface. The dark starspot filling factor is given by:
$$\alpha = 1 - \gamma
\label{e-alpha}$$
where $\alpha$ is the fractional starspot coverage and $\gamma$ is the scaling factor. The interpolated scaling factor for the primary temperature is determined as shown in Figure \[pri\_scal\_atls\]. The scaling factor is 0.76, resulting in a fractional coverage of dark starspots of 24%.
\[pri\_scal\_atls\]
### Polar Cap
The determination of the spot coverage fraction only accounts for the flux deficit in the eclipsed equatorial latitudes of the primary star. Extending the 24% spot coverage to the entire surface of the primary star (as described by [@jefferspc05]) we find that there is an additional 13.5% flux deficit. The binary eclipse-mapping code DoTS is used to model artificial polar spots on the surface of SV Cam, including effects resulting from the star being a sphere rather than a disc: limb and gravity darkening and spherical oblateness. We model the fractional decrease in stellar flux as a function of polar spot size and determine the polar spot radius to be 43.5$\pm$6$^\circ$ (Fig. \[pcap\_atls\]).
\[pcap\_atls\]
Lightcurve Modelling
--------------------
To compare ATLAS and PHOENIX model atmospheres we need to determine the best-fitting binary system parameters to the observed SV Cam lightcurve for each model atmosphere separately, to avoid introducing an inherent bias into our results. We include the presence of high unresolvable spot coverage and polar caps in the lightcurve fit by using the method of [@jeffersem05]. Such spot coverage has been shown by that method to have a significant impact on the binary system parameters.
### Reduced Photospheric Temperature
The peppering of small starspots poses a limitation on image reconstruction techniques from photometric lightcurves, such as Maximum Entropy eclipse mapping [@jeffersfs05]. To model the presence of many dark unresolvable starspots we decrease the apparent photospheric temperature of the star. To determine the reduction in the photospheric temperature of the star we model starspot distributions equating to 1.8%, 6.1%, 18%, 48% and 100% of the stellar surface on an immaculate SV Cam. For each model the initial photospheric temperature is 5904K and the spot temperature is 4400K. Each of these starspot distributions is modelled as a photometric lightcurve. Using the Maximum Entropy eclipse mapping method we determine the best-fitting temperature to each model lightcurve using a $\chi^2$ grid-search method. A quadratic fit to the best-fitting temperatures shows that for a starspot coverage of 24%, the apparent photospheric temperature is 5840K (Figure \[spotredats\]).
\[spotredats\]
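The interpolation step can be sketched as below; the temperature values are invented placeholders for the five modelled coverages, not the paper's actual fits:

```python
import numpy as np

# Hypothetical (coverage fraction, best-fit temperature) pairs for the
# five modelled spot distributions; only the method is illustrated.
coverage = np.array([0.018, 0.061, 0.18, 0.48, 1.0])
fit_temp = np.array([5898.0, 5886.0, 5855.0, 5780.0, 5650.0])

coeffs = np.polyfit(coverage, fit_temp, 2)   # quadratic fit
t_at_24pc = np.polyval(coeffs, 0.24)         # apparent temp at 24% coverage
```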
### Polar Cap
We include a polar cap in our analysis to verify the results of the previous section, following the method of [@jeffersem05]. The polar spot is assumed to be at 4500K, circular, centred at the pole, and is in addition to the peppered spot distribution described above. For each polar spot size (from 40$^\circ$ to 50$^\circ$) the minimum primary and secondary radii are determined using a $\chi^2$ contour map. These minimum $\chi^2$ values are plotted as a function of polar spot size in Figure \[chipcap\]. The best fitting polar cap size, 45.7$^\circ$, was determined from the minimum of a quadratic function fitted to these points. The grid search of radii is then repeated using a fixed value for the polar spot size. The results of the $\chi^2$ minimisation are summarised in Table \[t-param\] and shown as a contour map in Figure \[pcaprad\].
Summary
-------
We have shown in this section that the best fitting binary system parameters determined using the [atlas]{} model atmosphere are in agreement with those determined using [phoenix]{} model atmospheres [@jefferspc05]. This further shows that there is no significant difference between [phoenix]{} and [atlas]{} model atmospheres for the F9V+K4V spectral types. The results are summarised in Table \[t-param\].
\[lastpage\]
|
---
abstract: 'In this paper, we present **IVO: Inverse Velocity Obstacles**, an ego-centric framework that improves the real time implementation of collision avoidance. The proposed method stems from the concept of the velocity obstacle and can be applied to both single-agent and multi-agent systems. It focuses on computing collision-free maneuvers without any knowledge or assumption of the pose and the velocity of the robot. This is primarily achieved by reformulating the velocity obstacle to adapt to an ego-centric framework. This is a significant step towards improving real time implementations of collision avoidance in dynamic environments, as there is no dependency on state estimation techniques to infer the robot pose and velocity. We evaluate IVO in different scenarios for both single-agent and multi-agent systems and show its efficacy over the existing formulations. We also show the real time scalability of the proposed methodology.'
author:
- 'P. S. Naga Jyotish'
- Yash Goel
- 'A. V. S. Sai Bhargav Kumar'
- 'K. Madhava Krishna'
bibliography:
- 'air.bib'
title: 'IVO: Inverse Velocity Obstacles for Real Time Navigation '
---
Introduction
============
Autonomous navigation has gained a lot of attention in recent years, with applications in fields such as self-driving cars, crowd simulation, rescue operations, and payload transfer. All these applications require a collision avoidance scheme for safe navigation of the system to its goal. Approaches such as [@van2008reciprocal][@van2011reciprocal] present collision avoidance schemes, but they are computationally complex due to the non-convex nature of the collision avoidance constraint. These schemes also generally estimate whether the agent is on a collision course with the other participants based on the states of the agent and the participants. A slight variance in the state estimation can lead to false detection, which keeps propagating and can lead to system failure. In this paper, we present a novel methodology for collision avoidance that removes the reliance on the state of the robot. Our approach stems from the concepts of the Velocity Obstacle [@fiorini1998motion] and ego-centric motion planning.
Contribution and Main Results
-----------------------------
The principal contribution of the present work is the construction of an efficient collision avoidance scheme for autonomous navigation, called Inverse Velocity Obstacles (IVO). Our approach is a variant of the Velocity Obstacle method presented in [@fiorini1998motion], which is a widely used technique for collision avoidance in dynamic environments. Our method inherits all its salient features and incorporates the capability to handle the uncertainty in collision detection that occurs due to errors in state estimation. This is achieved by implementing the algorithm in an ego-centric framework. Due to the very nature of the implementation, it can be easily extended to the multi-agent collision avoidance problem by implicitly assigning each agent the same collision avoidance scheme. We also show that the low computational complexity and the lower noise in collision detection of the approach significantly improve the chances for real time implementation, as there is no dependency on state estimation techniques for inferring the self states of each agent.
On the implementation side, we show the efficacy of the Inverse Velocity Obstacles method by evaluating it in various scenarios for both single and multi-agent systems. Our simulations show that safe motions can be generated even for as many as 50 agents. We also show that the variance of false collision detection is reduced significantly compared to a Velocity Obstacle approach. We have also shown the real time potential of the presented approach by implementing it on a real drone; moreover, the approach can be easily parallelized, as each agent's computation is independent.
Layout of the paper
-------------------
The rest of the paper is organized as follows, Section \[rel\_work\] presents a brief overview of the previous works. Section \[back\] reviews the concepts of Velocity Obstacle. In Section \[IVO\] we present our approach, Inverse Velocity Obstacles and derive its formulation. Section \[Nav\_Agents\] describes the implementation details for the navigation of single and multi-agent systems. In Section \[res\] we evaluate our method in different scenarios and demonstrate the performance in real time. We conclude our work in Section \[concl\].
Related Work {#rel_work}
============
In this section, we present an overview of previous approaches to collision avoidance and navigation in dynamic environments. Quite a few approaches [@borenstein1991vector], [@faverjon1987local], [@fox1997dynamic], [@kanehiro2008local] assume that the obstacles are static and plan the controls to avoid collision. In the case of moving obstacles, they replan based on the updated positions of the obstacles, but fail to generate safe trajectories around fast moving obstacles. In [@fulgenzi2007dynamic], [@de1994avoidance], [@hsu2002randomized], [@martinez2009collision] the future positions of obstacles are computed by extrapolating the current velocity to handle high velocities, but these approaches cannot handle the reactive nature of the other agents. Many works like [@pettre2006real], [@treuille2006continuum], [@sud2008real], [@gayle2007reactive] have focused on crowd simulation, in which each agent considers the other agents as obstacles and navigates independently.
Centralized planning schemes on a given configuration space for multiple agents are presented in [@lavalle1998optimal], [@sanchez2002using]. These works focus mainly on optimal coordination and cannot be scaled up for real time implementation. A velocity-based method called the Velocity Obstacle is presented in [@fiorini1998motion] for moving obstacles, which provides the robot with a condition to avoid collision with an obstacle of known velocity. A variant called Recursive Velocity Obstacles [@kluge2004reflective] has been proposed, which considers the reactive behaviour of the other participants. However, this approach leads to oscillations of the agents which sometimes may not converge. To address this issue, an extension of the Velocity Obstacle called the Reciprocal Velocity Obstacle (RVO) [@van2008reciprocal] was presented, where both agents on a collision course select velocities that bring them outside the RVO generated by the other agent. However, this requires knowledge of the current pose and velocity of the obstacle, which might bottleneck the update rates during real time implementation. There are several other extensions of the Velocity Obstacle, such as [@singh2013reactive][@kumar2018novel].
To address this, in this paper we present an ego-centric framework called Inverse Velocity Obstacles (IVO), which does not require knowledge of the robot’s pose and velocity. This eliminates the state estimation layer, reducing both the computational time (for state estimation) and false collision detections, which aids real time implementation.
Preliminaries {#back}
=============
Velocity Obstacle
-----------------
In this section, we briefly review the original concept of the Velocity Obstacle and analyze its behaviour in the presence of state, actuation and perception uncertainties.
### Definition
Consider a mobile robot (our agent) and an obstacle, each taking the shape of a disc, of radius $R_A$ and $R_B$ respectively, denoted by $A$ and $B$. The velocity obstacle for robot $A$ induced by obstacle $B$, denoted by $VO_{A|B}$, is the set of velocities of $A$ which can result in a collision with $B$ at some point in the future. Let $C_A$ and $C_B$ represent the centres of $A$ and $B$ respectively. The robot and obstacle are geometrically modified such that the robot becomes a point object and the obstacle's radius grows to $R_A+R_B$. If $B$ is a static obstacle, a cone can be constructed with its vertex at $A$ and its edges tangent to $B$, as shown in the figure \[fig:VO\]. This cone represents the set of velocities of $A$ which lead to a collision. In case the obstacle is in motion, it is treated as static by considering the velocity of $A$ relative to it.
![Velocity obstacle for agent $A$ induced by obstacle $B$[]{data-label="fig:VO"}](img/VO.pdf){width="1\linewidth"}
### Implementation problems
The definition of the velocity obstacle implicitly requires tracking the velocity of the robot along with the position and velocity of the obstacle. When planning trajectories in a global frame, we additionally need the positions of the robot and the obstacle with respect to that frame. Even when planning in the robot’s frame, we still need an estimate of the robot’s own velocity. This is generally taken as the instantaneous velocity reported by a sensor, which introduces additional estimation noise on top of the noise already present in the obstacle states. Other prominent approaches estimate the state via SLAM, which is less reliable than a direct sensor feed since SLAM methods tend to break down when complex maneuvers are involved.
Inverse Velocity Obstacle {#IVO}
=========================
In this section, we propose a new concept of “Inverse Velocity Obstacle” to minimize the uncertainty in collision detection during the planning phase. This integrates into our optimization framework which provides controls leading to collision free and smooth trajectories.
Definition
----------
The idea is simple: instead of assuming that the obstacle is stationary, we assume that the robot is stationary and attach the relative velocity vector to the obstacle. The robot then sits at the origin (since we work in an ego-frame). We again reduce the obstacles to point objects and grow the radius of the robot to $R_A+R_B$. We then look for a relative velocity for our robot (stationary in the relative frame) that lies outside the collision cone. A simple case is demonstrated in figure \[fig:ivo-explanation\], where $\textbf{x}_i(t) = [x_i(t) \ \ y_i(t)]^T$ and $\textbf{v}_i(t) = [\dot{x_i} \ \ \dot{y_i}]^T$. We show that the relative velocity of the obstacle as seen by the agent can be computed from the ego-centric observations of the obstacle at two consecutive time instances, here $t$ and $t + \delta$, as shown in \[eq:rel-velo\]
$$\begin{bmatrix}
\dot{x^r_o} \\ \dot{y^r_o}
\end{bmatrix} = \begin{pmatrix}\begin{bmatrix}
x^r_o(t+\delta) \\ y^r_o(t+\delta)
\end{bmatrix} - \begin{bmatrix}
x^r_o(t) \\ y^r_o(t)
\end{bmatrix}\end{pmatrix}/\delta
\label{eq:rel-velo}$$
For any time instant $t$, let the global positions of the obstacle (moving with velocity $\textbf{v}_o$) and the agent (moving with velocity $\textbf{v}_r$) be $\textbf{x}_o(t)$ and $\textbf{x}_r(t)$ respectively. At the next instant, the global positions of the obstacle and agent are $\textbf{x}_o(t+\delta)$ and $\textbf{x}_r(t+\delta)$ respectively. The ego-centric observations of the obstacle by the agent at these instants are $\textbf{x}^r_o(t)$ and $\textbf{x}^r_o(t+\delta)$, taken in the agent frames $\textbf{F}_t$ and $\textbf{F}_{t+\delta}$ respectively.
So, the global position of the obstacle at the first instant is
$$\textbf{x}_o(t) = {^g_t}{\textbf{T}}.{\textbf{x}}{^r_o}(t)$$
$$\textbf{x}_o(t) = {\textbf{x}}{^r_o}(t) + {\textbf{x}}{_r}(t)$$
Similarly for the second instance we have $$\textbf{x}_o(t+\delta) = {^g_{t+\delta}}{\textbf{T}}.{\textbf{x}}{^r_o}(t+\delta)$$ $$\textbf{x}_o(t+\delta) = {\textbf{x}}{^r_o}(t+\delta) + {\textbf{x}}{_r}(t+\delta) \\$$ $$\textbf{x}_o(t+\delta) = {\textbf{x}}{^r_o}(t+\delta) + {\textbf{x}}{_r}(t) + \textbf{v}_r*\delta \\$$
Therefore the obstacle velocity in the global frame is $$\textbf{v}_o = (\textbf{x}_o(t+\delta) - \textbf{x}_o(t))/\delta$$ $$\textbf{v}_o = ({\textbf{x}}{^r_o}(t+\delta) - {\textbf{x}}{^r_o}(t) + \textbf{v}_r*\delta)/\delta \\$$
And hence the relative velocity of the obstacle with respect to the agent is $$\textbf{v}{_o^r} = \textbf{v}_o - \textbf{v}_r$$ $$\textbf{v}{_o^r} = ({\textbf{x}}{^r_o}(t+\delta) - {\textbf{x}}{^r_o}(t) + \textbf{v}_r*\delta)/\delta - \textbf{v}_r$$ $$\textbf{v}{_o^r} = ({\textbf{x}}{^r_o}(t+\delta) - {\textbf{x}}{^r_o}(t))/\delta$$
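The derivation above is easy to verify numerically: differencing two ego-centric observations recovers exactly $\textbf{v}_o - \textbf{v}_r$, without the agent's own velocity ever being measured. A minimal sketch (the scenario values are illustrative, not from the paper's implementation):

```python
import numpy as np

def relative_velocity(obs_t, obs_t2, delta):
    """Eq. (rel-velo): ego-frame obstacle velocity from two observations."""
    return (np.asarray(obs_t2, float) - np.asarray(obs_t, float)) / delta

# ground-truth global states (unknown to the IVO planner)
delta = 0.1
x_r, v_r = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # agent
x_o, v_o = np.array([3.0, 1.0]), np.array([0.0, 1.0])   # obstacle

# what the agent actually observes at t and t + delta
obs_t  = x_o - x_r
obs_t2 = (x_o + v_o * delta) - (x_r + v_r * delta)

v_rel = relative_velocity(obs_t, obs_t2, delta)          # equals v_o - v_r
```

Note that `v_rel` matches $\textbf{v}_o - \textbf{v}_r$ exactly even though neither $\textbf{v}_r$ nor $\textbf{v}_o$ was used to compute it.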
![$\textbf{x}_r$, $\textbf{v}_r$ denote the position and velocity of the agent while $\textbf{x}_o$ and $\textbf{v}_o$ denote the position and velocity of the obstacle in global frame. $\textbf{x}^r_o$ and $\textbf{v}^r_o$ denote the position and velocity of the obstacle as seen from the agent’s frame (agent is at origin and stationary in this frame).[]{data-label="fig:ivo-explanation"}](img/ivo-exp.pdf){width="1\linewidth"}
Now, we write the collision cone using inverse velocity obstacles,
$$f = \frac{(\textbf{r}^T \textbf{v})^2}{||\textbf{v}||^2} - ||\textbf{r}||^2 + (R_A+R_B)^2
\label{eq:vo}$$
$$\textbf{r} = \begin{bmatrix}
x^r_o(t) \\
y^r_o(t)
\end{bmatrix},
\textbf{v} = \begin{bmatrix}
\dot{x^r_o(t)} \\
\dot{y^r_o(t)}
\end{bmatrix}$$
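A direct evaluation of the collision-cone function makes the sign convention concrete: $f > 0$ flags a future collision, $f \le 0$ is safe. A small sketch (the numbers are illustrative; the relative velocity is assumed non-zero so the division is well defined):

```python
import numpy as np

def collision_cone(r, v, R):
    """f of eq. (vo); r, v are the ego-frame obstacle position and
    relative velocity, R = R_A + R_B.  Assumes a non-zero v."""
    r = np.asarray(r, float); v = np.asarray(v, float)
    return (r @ v) ** 2 / (v @ v) - r @ r + R ** 2

# obstacle 5 m ahead, combined radius R_A + R_B = 1
f_headon     = collision_cone([5.0, 0.0], [-1.0, 0.0], 1.0)  # approaching
f_tangential = collision_cone([5.0, 0.0], [0.0, 1.0], 1.0)   # passing by
```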
Navigating agents {#Nav_Agents}
=================
Single Agent
------------
Let us start with the case of a single agent that follows a holonomic motion model and obstacles that have no complex behaviour but move with some constant velocity. Consider the following optimization over the variables $\textbf{u} = [u_x \ \ u_y]^T$, the controls applied to the agent at time instant $t$. The goal position in the agent’s frame is denoted by $\textbf{g}^r$, and the control $\textbf{u}$ given to the agent is, in this case, the change in velocity. $\textbf{r}$ and $\textbf{v}$ represent the position and velocity of the obstacle as seen by the agent. The smoothing factor $\lambda$ can be adjusted as required. Let the maximum attainable speed of the agent be $v_{max}$.
\[eq:single-agent-goal\] $$\min_{u_x, u_y} J = ||\textbf{v}_{desired} - (\textbf{v}_{r}+\textbf{u})||^2 + \lambda||\textbf{u}||^2
% \min_{u_x, u_y} J = ||\textbf{g}^r - \textbf{u}||^2 + \lambda||\textbf{u}||^2$$ $$\textbf{v}_{desired} = \frac{\textbf{g}^r}{|\textbf{g}^r|}*v_{max}$$ $$f(.) \leq 0: \frac{(\textbf{r}^T \textbf{v})^2}{||\textbf{v}||^2} - ||\textbf{r}||^2 + (R_A+R_B)^2 \leq 0$$ $$\textbf{g}^r = \begin{bmatrix}
g^r_x\\
g^r_y
\end{bmatrix}, u = \begin{bmatrix}
u_x\\
u_y
\end{bmatrix}$$ $$\textbf{r} = \begin{bmatrix}
x^r_o(t) \\
y^r_o(t)
\end{bmatrix},
\textbf{v} = \begin{bmatrix}
\dot{x^r_o(t)} - u_x \\
\dot{y^r_o(t)} - u_y
\end{bmatrix}$$
The collision avoidance constraint $f(.)$ is imposed for every obstacle within the sensor range of the agent. In section \[section:results-single-agent\], we experimentally show that this formulation is valid and that the agent successfully avoids the obstacles and reaches the goal.
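The optimization above can be prototyped with a coarse grid search over the control space standing in for a proper NLP solver; the closed-form pieces ($\textbf{v}_{desired}$, the cost $J$, the constraint $f$) follow the equations directly, while the grid resolution and parameter defaults below are our illustrative choices:

```python
import numpy as np

def collision_cone(r, v, R):
    """f of eq. (vo), with a guard for a degenerate zero relative velocity."""
    r = np.asarray(r, float); v = np.asarray(v, float)
    vv = v @ v
    if vv < 1e-12:                      # no relative motion: safe iff outside radius
        return -(r @ r) + R ** 2
    return (r @ v) ** 2 / vv - r @ r + R ** 2

def plan_control(goal_r, v_r, obstacles, v_max, lam=0.5, R=1.0):
    """One planning cycle of the single-agent optimization.

    obstacles: list of (r, v) pairs -- ego-frame obstacle position and
    relative velocity before the control u is applied."""
    goal_r = np.asarray(goal_r, float)
    v_des = goal_r / np.linalg.norm(goal_r) * v_max
    best_u, best_J = None, np.inf
    for ux in np.linspace(-v_max, v_max, 41):
        for uy in np.linspace(-v_max, v_max, 41):
            u = np.array([ux, uy])
            if any(collision_cone(r, np.asarray(v) - u, R) > 0
                   for r, v in obstacles):
                continue                # violates f(.) <= 0
            J = np.sum((v_des - (v_r + u)) ** 2) + lam * np.sum(u ** 2)
            if J < best_J:
                best_u, best_J = u, J
    return best_u
```

For a head-on obstacle, the returned control is the smallest lateral velocity change (on the grid) that pushes the relative velocity out of the cone.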
Multiple Agents
---------------
Let us consider $n$ agents, each using the optimization routine in equation \[eq:single-agent-goal\]. In this case, the obstacles no longer move with constant velocity. For the sake of simplicity, we assume that every agent moves with a constant velocity over each control interval $dt$. We then scale the single-agent problem to $n$ agents by treating every other agent as an obstacle. Following this idea, a navigation algorithm for the multi-agent scenario is described in Algorithm \[algo:multiagent\].
$x^r_j(t) \gets \text{Position of an obstacle in agent's frame}$ $\dot{x^r_j}(t)\gets (x^r_j(t) - x^r_j(t-dt)) / dt$ $R_j \gets \text{Radius of the obstacle}$ $c_{avoid}(j) \gets f(x^r_j(t), \dot{x^r_j}(t), R_j)$ $\textbf{u}_i \gets \min\limits_{u_x, u_y} J$ $\textbf{u} = [\textbf{u}_1 \ \textbf{u}_2 \ldots \textbf{u}_n]^T$
In section \[section:results-multiagent\], we experimentally show that the algorithm works for multiple agents with large values of $n$.
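Per agent, the algorithm reduces to a finite difference over consecutive ego-centric observations followed by an independent solve. A sketch of one cycle (the data layout and the stub solver are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def multi_agent_step(obs_now, obs_prev, dt, radii, plan):
    """One cycle of the multi-agent algorithm: agent i differences its
    ego-centric observations of every participant j to estimate the
    ego-frame velocity, builds one avoidance constraint per participant,
    and solves its own optimization independently via `plan`."""
    controls = []
    for i, frame in enumerate(obs_now):
        constraints = []
        for j, x_rj in frame.items():
            v_rj = (x_rj - obs_prev[i][j]) / dt   # finite difference
            constraints.append((x_rj, v_rj, radii[j]))
        controls.append(plan(constraints))
    return controls

# two agents approaching each other head-on
obs_prev = [{1: np.array([2.1, 0.0])}, {0: np.array([-2.1, 0.0])}]
obs_now  = [{1: np.array([2.0, 0.0])}, {0: np.array([-2.0, 0.0])}]
# stub solver: return the estimated ego-frame velocity of the first constraint
u = multi_agent_step(obs_now, obs_prev, 0.1, {0: 0.5, 1: 0.5},
                     plan=lambda cons: cons[0][1])
```

Any single-agent solver (such as the grid-search sketch above) can be plugged in as `plan`; the stub here merely exposes the velocity estimates for inspection.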
Experimental Results {#res}
====================
To evaluate the performance of the presented methodology, we tested it in both single-agent and multi-agent scenarios. All simulations were performed on an Intel i7 processor at a 3.2 GHz clock speed. The methodology is also validated on a real quadrotor, a Parrot Bebop 2. Detailed videos of all the simulations and real-time implementations are available at \[[this link](https://sites.google.com/view/inverse-velocity-obstacle/)\].
Single agent {#section:results-single-agent}
------------
First, we validate our formulation in the single-agent case. Figure \[fig:single-agent\] shows a scenario where a single agent navigates among five dynamic obstacles. All participants in the environment have the same radius and the same speed limits. As can be seen, the agent executes safe trajectories, avoids all the obstacles, and reaches the goal. The computation time for each cycle in this scenario is around 10 ms, giving an update rate of 100 Hz.
[0.5]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/1.png "fig:"){width="\textwidth"} \[SA:a\]
[0.505]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/2.png "fig:"){width="\textwidth"}
\[SA:b\]
[0.5]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/3.png "fig:"){width="\textwidth"}
\[SA:c\]
[0.5]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/4.png "fig:"){width="\textwidth"}
\[SA:d\]
[0.5]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/5.png "fig:"){width="\textwidth"}
\[SA:e\]
[0.5]{} ![Blue disc represents the agent while rest are the obstacles with simple behaviour[]{data-label="fig:single-agent"}](img/results/single_agent/6.png "fig:"){width="\textwidth"}
\[SA:f\]
Multiple agents {#section:results-multiagent}
---------------
In this section, we evaluate the performance of Inverse Velocity Obstacles in multi-agent collision scenarios. We first evaluate a 6-agent antipodal case. All agents have the same radius and the same speed and acceleration limits. Figure \[fig:6-agents\] shows the scenario. Each agent plans independently, considering all other participants as potential obstacles. As can be seen, all agents generate safe motions, avoid each other, and reach their goals. The computation time for each cycle in this scenario is 15 ms, an update rate of 66 Hz.
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/1.png "fig:"){width="\textwidth"}
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/2.png "fig:"){width="\textwidth"}
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/3.png "fig:"){width="\textwidth"}
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/4.png "fig:"){width="\textwidth"}
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/5.png "fig:"){width="\textwidth"}
[0.5]{} ![Multi agent scenario: 6 agents[]{data-label="fig:6-agents"}](img/results/6_agents/6.png "fig:"){width="\textwidth"}
Next, we increased the number of agents in the same scenario with the same settings to examine how IVO scales as the number of agents grows. Figure \[fig:10-agents\] presents the evaluated 10-agent scenario. The computation time increases with the number of agents; in this scenario it is around 15 ms per cycle, with update rates close to 50 Hz. Even though the computation time grows with the number of agents, the update rates remain high enough for easy real-time implementation.
[0.5]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/1.jpg "fig:"){width="\textwidth"}
[0.505]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/2.jpg "fig:"){width="\textwidth"}
[0.5]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/3.jpg "fig:"){width="\textwidth"}
[0.5]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/4.jpg "fig:"){width="\textwidth"}
[0.5]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/5.jpg "fig:"){width="\textwidth"}
[0.5]{} ![Multiagent scenario: 10 agents[]{data-label="fig:10-agents"}](img/results/10_agents/6.jpg "fig:"){width="\textwidth"}
Additional simulation results are available at [https://sites.google.com/view/inverse-velocity-obstacle](https://sites.google.com/view/inverse-velocity-obstacle).
Real time Experiments
---------------------
In this section, we evaluate the performance of Inverse Velocity Obstacles in a real-time implementation. We used a Parrot Bebop 2 quadrotor, which accepts yaw, pitch, and roll angles as control inputs. Since our algorithm is developed in velocity-control space, we developed a PID velocity controller on top of the inbuilt controller; this lets us pass velocity commands to the drone. We used April Tags [@olson2011tags] of the family Tag36h11 for better state estimation of the other participants in the environment. The self-state estimation module is bypassed entirely, as our framework does not need the agent’s own state for collision detection and avoidance. Figures (\[real:a\])–(\[real:f\]) show snapshots of the real-time implementation of the proposed method on the quadrotor in a dynamic environment.
[0.5]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/1.png "fig:"){width="\textwidth"}
[0.505]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/2.png "fig:"){width="\textwidth"}
[0.5]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/3.png "fig:"){width="\textwidth"}
[0.5]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/4.png "fig:"){width="\textwidth"}
[0.5]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/5.png "fig:"){width="\textwidth"}
[0.5]{} ![Real time implementation with one dynamic obstacle[]{data-label="fig:single-agent-real"}](img/results/1_real/6.png "fig:"){width="\textwidth"}
Comparisons with Velocity Obstacle for Collision Detection
----------------------------------------------------------
In this section, we compare the presented approach with the Velocity Obstacle and show that collision detection with IVO is more reliable than with the traditional Velocity Obstacle. For this, equation \[eq:vo\] is rewritten in terms of the controls as follows, $$\label{eq:vel-cone-alg}
f = c_1\dot{x_r}^2 + c_2\dot{y_r}^2 + c_3\dot{x_r}\dot{y_r} + c_4\dot{x_r} + c_5\dot{y_r} + c_6$$
Similarly, the original Velocity Obstacle equation can be rearranged into the form of equation \[eq:vel-cone-alg\]. In a real scenario, each coefficient $c_i$ becomes a random variable, owing to the uncertainties in state, actuation and perception.
$$c_i = \alpha P_i(x_r, y_r, x_o, y_o, \dot{x_o}, \dot{y_o})$$
$P_i(.)$ denotes the PDF of $c_i$. The advantage of IVO is that these random variables do not depend on $x_r$ and $y_r$. In figure \[fig:pdf-cdf\], we compare the probability distributions of the error in the collision cone for the velocity obstacle and the inverse velocity obstacle. The noise in the agent and obstacle states was assumed to be zero-mean Gaussian. The distributions clearly show a reduction in the noise: the 99% confidence region for the inverse velocity obstacle lies in the error range 0 to 0.14, versus $-0.03$ to 0.56 for the velocity obstacle. This leaves better scope for handling the noise simply by inflating the radius of the obstacle.
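The qualitative claim — fewer noisy inputs enter the IVO cone than the VO cone — can be illustrated with a small Monte-Carlo sketch. This is a simplification of the paper's analysis: we perturb each measured quantity with i.i.d. zero-mean Gaussian noise, so VO accumulates noise from four separately estimated global quantities (agent and obstacle position and velocity) while IVO only sees noise on the two ego-centric relative measurements; all scenario numbers are illustrative:

```python
import numpy as np

def cone(r, v, R):
    return (r @ v) ** 2 / (v @ v) - r @ r + R ** 2

rng = np.random.default_rng(0)
R, sigma, N = 1.0, 0.05, 5000
x_r, v_r = np.array([0.0, 0.0]), np.array([1.0, 0.0])
x_o, v_o = np.array([5.0, 0.0]), np.array([0.0, 0.0])
f_true = cone(x_o - x_r, v_o - v_r, R)

err_vo, err_ivo = [], []
for _ in range(N):
    n = lambda: rng.normal(0.0, sigma, 2)
    # VO: agent pose/velocity and obstacle pose/velocity estimated separately
    f_vo = cone((x_o + n()) - (x_r + n()), (v_o + n()) - (v_r + n()), R)
    # IVO: only the relative position/velocity observations are noisy
    f_ivo = cone((x_o - x_r) + n(), (v_o - v_r) + n(), R)
    err_vo.append(f_vo - f_true)
    err_ivo.append(f_ivo - f_true)
```

Under this toy model the spread of the IVO cone error is visibly smaller than that of the VO cone error, consistent with the distributions in figure \[fig:pdf-cdf\].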
[0.5]{} ![Distributions for collision cone[]{data-label="fig:pdf-cdf"}](img/pdf.pdf "fig:"){width="\textwidth"}
[0.5]{} ![Distributions for collision cone[]{data-label="fig:pdf-cdf"}](img/cdf.pdf "fig:"){width="\textwidth"}
![Computational time for different number of obstacles[]{data-label="fig:computational-time"}](img/results/computational-time.pdf){width="\linewidth"}
Conclusion {#concl}
==========
In this paper, we presented a new concept, the Inverse Velocity Obstacle, for the safe navigation of autonomous agents in dynamic environments. In contrast to previous works, we developed an ego-centric framework which eliminates the reliance on the robot’s own state for collision detection. This also decreases the computational complexity, improving real-time performance. The presented formulation is a natural extension of the Velocity Obstacle and is easy to implement. We also applied it to multi-agent navigation and showed its efficacy in generating natural paths for systems of up to 50 agents in very tight environments.
Our future work includes investigating applications of the Inverse Velocity Obstacle in domains such as crowd simulation and rescue operations. We are also exploring extensions of the method to handle the non-parametric uncertainty that arises from perception and localization errors.
---
abstract: 'We give a complete classification of the reductive symmetric pairs $(G,H)$ for which the homogeneous space $(G \times H)/\operatorname{diag}H$ is real spherical in the sense that a minimal parabolic subgroup has an open orbit. Combining with a criterion established in \[T. Kobayashi–T. Oshima, Adv. Math. 2013\], we give a necessary and sufficient condition for a reductive symmetric pair $(G,H)$ such that the multiplicities for the branching law of the restriction of any admissible smooth representation of $G$ to $H$ have finiteness/boundedness property.'
author:
- 'Toshiyuki KOBAYASHI [^1] and Toshihiko MATSUKI [^2]'
title: 'Classification of finite-multiplicity symmetric pairs\'
---
[**[Keywords:]{}**]{} branching law, restriction of representation, reductive group, real spherical variety, symmetric pair, admissible representations.
[**[MSC 2010:]{}**]{} primary 22E46; secondary 14M15, 53C35.
Introduction and statement of main results
==========================================
A complex manifold $X_{\mathbb{C}}$ with an action of a complex reductive group $G_{\mathbb{C}}$ is called [*[spherical]{}*]{} if a Borel subgroup of $G_{\mathbb{C}}$ has an open orbit in $X_{\mathbb{C}}$. In the real setting, in search of a good framework for global analysis on homogeneous spaces broader than the usual ones ([*[e.g.]{}*]{} symmetric spaces), we advocated in [@xtoshi95] the importance of an analogous notion for real reductive groups $G$ and proposed the following definition:
\[def:realsp\] [ We say a smooth manifold $X$ with $G$-action is [*[real spherical]{}*]{} if a minimal parabolic subgroup $P_G$ of $G$ has an open orbit in $X$. ]{}
In the case where $G$ acts transitively on $X$, $P_G$ has finitely many orbits in $X$ if $X$ is real spherical (see [@xtoshitoshima Remark 2.5 (4)] and references therein).
Suppose that $H$ is a closed subgroup which is reductive in $G$. Let $P_H$ be a minimal parabolic subgroup of $H$.
\[[[@xtoshitoshima]]{}\] \[def:pp\]
We say the pair $(G,H)$ satisfies (PP) if one of the following four equivalent conditions is satisfied.
1. $(G \times H)/\operatorname{diag}H$ is real spherical as a $(G \times H)$-space.
2. $G/P_H$ is real spherical as a $G$-space.
3. $G$ has an open orbit in $G/P_G \times G/P_H$ via the diagonal action.
4. There are finitely many $G$-orbits in $G/P_G \times G/P_H$ via the diagonal action.
The above four equivalent conditions are determined only by the Lie algebras ${\mathfrak {g}}$ and ${\mathfrak {h}}$ of the Lie groups $G$ and $H$, respectively. Therefore we also say that the pair $({\mathfrak {g}}, {\mathfrak {h}})$ of Lie algebras satisfies (PP).
A natural question is to find all the pairs $({\mathfrak {g}}, {\mathfrak {h}})$ of real reductive Lie algebras satisfying (PP) when ${\mathfrak {h}}$ is maximal reductive in ${\mathfrak {g}}$.
We say $(G,H)$ is a [*[reductive symmetric pair]{}*]{} if $H$ is an open subgroup of the fixed point subgroup $G^{\sigma}$ of some involutive automorphism $\sigma$ of $G$. Reductive symmetric pairs $(G,H)$ give typical examples of maximal reductive subalgebras ${\mathfrak {h}}$ in ${\mathfrak {g}}$, and provide important setups in branching laws of the restriction $G \downarrow H$.
The main goal of this paper is to establish a complete classification of reductive symmetric pairs $(G,H)$ having the geometric condition [[(PP)]{}]{}.
\[thm:1.1\] Suppose $(G,H)$ is a reductive symmetric pair. Then the following two conditions are equivalent:
1. $(G,H)$ satisfies [[(PP)]{}]{}, namely, $(G \times H)/\operatorname{diag}H$ is real spherical.
2. The pair $({\mathfrak{g}},{\mathfrak{h}})$ of the Lie algebras is isomorphic (up to outer automorphisms) to the direct sum of the following pairs:
1. [[Trivial case:]{}]{} ${\mathfrak{g}}={\mathfrak{h}}$.
2. [[Abelian case:]{}]{} ${\mathfrak {g}}={\mathbb{R}}$, ${\mathfrak {h}}=\{0\}$.
3. [[Compact case:]{}]{} ${\mathfrak {g}}$ is the Lie algebra of a compact simple Lie group.
4. [[Riemannian symmetric pair:]{}]{} ${\mathfrak {h}}$ is the Lie algebra of a maximal compact subgroup $K$ of a non-compact simple Lie group $G$.
5. [[Split rank one case ($\operatorname{rank}_{{\mathbb{R}}}G=1$):]{}]{}
1. $({\mathfrak{o}}(p+q,1),
{\mathfrak{o}}(p)+{\mathfrak{o}}(q,1))$ $(p+q \ge 2)$.
2. $({\mathfrak{su}}(p+q,1),
{\mathfrak{s}}({\mathfrak {u}}(p)+{\mathfrak{u}}(q,1)))$ $(p+q \ge 1)$.
3. $({\mathfrak{sp}}(p+q,1),
{\mathfrak{sp}}(p)+{\mathfrak{sp}}(q,1))$ $(p+q \ge 1)$.
4. $({\mathfrak{f}}_{4(-20)},
{\mathfrak{o}}(8,1))$.
6. [[Strong Gelfand pairs and their real forms:]{}]{}
1. $({\mathfrak{sl}}(n+1,{\mathbb{C}}),
{\mathfrak{gl}}(n,{\mathbb{C}}))$ $(n\ge 2)$.
2. $({\mathfrak{o}}(n+1,{\mathbb{C}}),
{\mathfrak{o}}(n,{\mathbb{C}}))$ $(n\ge 2)$.
3. $({\mathfrak{sl}}(n+1,{\mathbb{R}}),
{\mathfrak{gl}}(n,{\mathbb{R}}))$ $(n\ge 1)$.
4. $({\mathfrak{su}}(p+1,q),{\mathfrak{u}}(p,q))$ $(p+q\ge 1)$.
5. $({\mathfrak{o}}(p+1,q),{\mathfrak{o}}(p,q))$ $(p+q\ge 2)$.
7. $({\mathfrak{g}}, {\mathfrak{h}})=
({\mathfrak{g}}'+{\mathfrak{g}}', \operatorname{diag} {\mathfrak{g}}')$ [[Group case:]{}]{}
1. ${\mathfrak{g}}'$ is the Lie algebra of a compact simple Lie group.
2. $({\mathfrak{o}}(n,1)+{\mathfrak{o}}(n,1), \operatorname{diag}
{\mathfrak{o}}(n,1))$ $(n \ge 2)$.
8. [[Other cases:]{}]{}
1. $({\mathfrak{o}}(2n, 2),
{\mathfrak{u}}(n,1))$ $(n \ge 1)$.
2. $({\mathfrak{su}}^{\ast}(2n+2),
{\mathfrak{su}}(2)+{\mathfrak{su}}^{\ast}(2n)+\mathbb{R})$ $(n\ge 1)$.
3. $({\mathfrak{o}}^{\ast}(2n+2),
{\mathfrak{o}}(2)+{\mathfrak{o}}^{\ast}(2n))$ $(n\ge 1)$.
4. $({\mathfrak{sp}}(p+1,q),
{\mathfrak{sp}}(p,q)+{\mathfrak{sp}}(1))$.
5. $({\mathfrak{e}}_{6(-26)},
{\mathfrak{so}}(9,1)+{\mathbb{R}})$.
In the above description of the classification, we do not intend to write irreducible symmetric pairs in an exclusive way. Indeed some of the above pairs are isomorphic to each other when ${\mathfrak {g}}$ is of small dimension. For instance, (E1) with $(p,q)=(4,1)$ is isomorphic to (H2) with $n=1$, namely, $$({\mathfrak{o}}(5,1), {\mathfrak{o}}(4)+{\mathfrak{o}}(1,1))
\simeq
({\mathfrak{su}}^{\ast}(4),
{\mathfrak{su}}(2)+{\mathfrak{su}}^{\ast}(2)+{\mathbb{R}}).$$
\[rem:Dynkin\] [[ It would be interesting to give a complete list of the pairs $({\mathfrak {g}}, {\mathfrak {h}})$ of reductive Lie algebras having the property (PP) by dropping the assumption that $({\mathfrak {g}}, {\mathfrak {h}})$ is a symmetric pair. (Cf. Dynkin [@Dynkin] for the description of maximal reductive Lie algebras in simple Lie algebras over ${\mathbb{C}}$.) In view of the classification in Theorem \[thm:1.1\] it is plausible that there are not many non-symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (PP) if $H$ is noncompact. ]{}]{}
Next we also consider another property, to be denoted by (BB), which is stronger than (PP). For this, suppose further that $G$ is an algebraic reductive group and $H$ is a reductive subgroup defined algebraically over ${\mathbb{R}}$. Let $G_{\mathbb{C}}$ be a complex Lie group with Lie algebra ${\mathfrak{g}}_{\mathbb{C}}={\mathfrak{g}}\otimes_{\mathbb{R}}{\mathbb{C}}$, and $H_{\mathbb{C}}$ a subgroup of $G_{\mathbb{C}}$ with complexified Lie algebra ${\mathfrak{h}}_{\mathbb{C}}={\mathfrak{h}}\otimes_{\mathbb{R}}{\mathbb{C}}$. Let $B_G$ and $B_H$ be Borel subgroups of $G_{\mathbb{C}}$ and $H_{\mathbb{C}}$, respectively.
\[def:BB\]
We say the pair $(G,H)$ (or the pair $({\mathfrak{g}}, {\mathfrak{h}})$) satisfies [[(BB)]{}]{} if one of the following equivalent conditions is satisfied:
1. $(G_{\mathbb{C}} \times H_{\mathbb{C}})/
\operatorname{diag}H_{\mathbb{C}}$ is spherical as a $(G_{\mathbb{C}} \times H_{\mathbb{C}})$-space.
2. $G_{\mathbb{C}}/B_H$ is spherical as a $G_{\mathbb{C}}$-space.
3. $G_{\mathbb{C}}$ has an open orbit in $G_{\mathbb{C}}/B_G \times G_{\mathbb{C}}/B_H$ via the diagonal action.
4. There are finitely many $G_{\mathbb{C}}$-orbits in $G_{\mathbb{C}}/B_G \times G_{\mathbb{C}}/B_H$ via the diagonal action.
It follows from [@xtoshitoshima Lemmas 4.2 and 5.3] that we have the implication $$\text{(BB)} \Rightarrow \text{(PP)}.$$
Among the pairs $({\mathfrak {g}}, {\mathfrak {h}})$ in Theorem \[thm:1.1\] satisfying (PP), we list the pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (BB) as follows:
\[prop:B\] Suppose $({\mathfrak {g}}, {\mathfrak {h}})$ is a reductive symmetric pair. Then the following conditions are equivalent:
1. $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(BB)]{}]{}.
2. The pair of the Lie algebras $({\mathfrak{g}},{\mathfrak{h}})$ is isomorphic [[(]{}]{}up to outer automorphisms[[)]{}]{} to the direct sum of pairs [[(A)]{}]{}, [[(B)]{}]{} and [[(F1)]{}]{} – [[(F5)]{}]{}.
\[rem:1.6\]
The classification in Theorem \[thm:1.1\] (see also Theorem \[thm:1.2\]) was known earlier in the following special cases:
1. $({\mathfrak {g}}, {\mathfrak {h}})$ complex pairs: $\Leftrightarrow$ [[(F1)]{}]{} or [[(F2)]{}]{} (M. Kr[ä]{}mer [@Kr]).
2. $\operatorname{rank}_{\mathbb{R}}G=1$: (PP) $\Leftrightarrow$ [[(E1)]{}]{} – [[(E4)]{}]{} (B. Kimelfeld [@Kimelfeld]).
3. $({\mathfrak {g}}, {\mathfrak {h}})
=({\mathfrak {g}}'+{\mathfrak {g}}', \operatorname{diag}
{\mathfrak {g}}')$ (): $\Leftrightarrow$ [[(G1)]{}]{} or [[(G2)]{}]{} ([@xtoshi95]).
Concerning Remark \[rem:1.6\] (2), neither the concept (PP) nor a minimal parabolic subgroup of $H$ appeared in [@Kimelfeld], but one might read (E1)–(E4) from his work. The case (1) was studied in connection with finite-dimensional representations of compact Lie groups, and the case (3) in connection with the tensor product of two (infinite-dimensional) representations; see Corollary \[cor:1.2-copy\] for more details.
The significance of these geometric conditions (PP) and (BB) is their applications to branching problems of infinite-dimensional representations of real reductive groups $G$ to subgroups $H$:
\[[[@xtoshitoshima Theorems C and D]]{}\] \[fact:1.4\] Suppose $G$ is a real reductive Lie group, and $H$ a reductive subgroup defined algebraically over ${\mathbb{R}}$.
1. [[(]{}]{}Finite-multiplicity for branching[[)]{}]{} The pair $(G,H)$ satisfies [[(PP)]{}]{} if and only if $$\dim \operatorname{Hom}_H(\pi|_H, \tau)<\infty$$ for any admissible smooth representation $\pi$ of $G$ and for any admissible smooth representation $\tau$ of $H$.
2. [[(]{}]{}Bounded-multiplicity for branching[[)]{}]{} The pair $(G,H)$ satisfies [[(BB)]{}]{} if and only if there exists a constant $C< \infty$ such that $$\dim \operatorname{Hom}_H(\pi|_H, \tau) \le C$$ for any irreducible smooth representation $\pi$ of $G$ and for any irreducible smooth representation $\tau$ of $H$.
In Section \[sec:fm\] we review briefly some basic notion on admissible smooth representations of real reductive groups and discuss applications of our classification results to branching problems in details.
[**[Organization of the paper.]{}**]{} We give an outline of the proof of Theorem \[thm:1.1\] in Section \[sec:strategy\] by dividing it into five steps. Sections \[sec:method\] to \[sec:Ke\] are devoted to the proof of Theorem \[thm:1.1\] of the paper.
In Section \[sec:fm\] we explain our initial motivation for studying the real spherical property (PP) from the viewpoint of the (infinite-dimensional) representation theory of real reductive groups, and give an application of our geometric results to branching problems of smooth admissible representations.
Throughout this paper we set ${\mathbb{R}}_+ :=\{t \in {\mathbb{R}}:t >0\}$ and ${\mathbb{R}}_{\ge 0} :=\{t \in {\mathbb{R}}:t \ge0\}$. The first author was partially supported by Grant-in-Aid for Scientific Research (A) (25247006), JSPS.
Strategy of the proof {#sec:strategy}
=====================
We give an outline of the proof of Theorem \[thm:1.1\] by dividing it into five steps.
[[**[Step 1.]{}**]{}]{} Reduction to irreducible symmetric pairs.
A reductive symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ is said to be [*[irreducible]{}*]{} if ${\mathfrak {g}} \not \simeq {\mathbb{R}}$, ${\mathfrak{h}}$ and if $({\mathfrak {g}}, {\mathfrak {h}})$ is not isomorphic to the direct sum of two reductive symmetric pairs $({\mathfrak {g}}_1, {\mathfrak {h}}_1)$ and $({\mathfrak {g}}_2, {\mathfrak {h}}_2)$. The proof of Theorem \[thm:1.1\] reduces to the case where $({\mathfrak {g}}, {\mathfrak {h}})$ is an irreducible symmetric pair. This consists of two families up to outer automorphisms:
1. (group case) $({\mathfrak{g}}'+{\mathfrak{g}}',\operatorname{diag}{\mathfrak{g}}')$ with ${\mathfrak{g}}'$ simple.
2. $({\mathfrak{g}},{\mathfrak{h}})$ with ${\mathfrak{g}}$ simple.
Therefore the task of this article is to carry out the following classification:
\[thm:1.2\] For an irreducible symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$, the following two conditions are equivalent:
1. $(G \times H)/\operatorname{diag} H$ is real spherical.
2. $({\mathfrak {g}}, {\mathfrak {h}})$ is isomorphic to one of [[(C)–(H)]{}]{} up to outer automorphisms.
The main case is when ${\mathfrak {g}}$ is simple. The case (G) is relatively easy, and the classification of those pairs satisfying (PP) was already given in [@xtoshi95], but we supply a proof here for the sake of completeness.
[**[Step 2.]{}**]{} Condition (QP).
Suppose $\sigma$ is an involutive automorphism of $G$. In general there is no $\sigma$-stable minimal parabolic subgroup of $G$. We introduce a condition (QP) which is slightly weaker than (PP) by using a $\sigma$-stable parabolic subgroup (Subsection \[subsec:Q\]). The difference between (QP) and (PP) is described as in Theorem \[thm:QpPp\] below.
[**[Step 3.]{}**]{} Linearization of (PP) and (QP).
We find a necessary and sufficient condition for a pair $(G,H)$ to satisfy the conditions (PP) (and also (QP)), by means of the open-orbit property of a certain linear action (Theorems \[thm:pp\] and \[thm:qp\]). The case (QP) is easier because the parabolic subgroup $Q$ is $\sigma$-stable, whereas the criterion of (PP) is more involved since the parabolic subgroup $P_G$ (or its conjugate) is not necessarily $\sigma$-stable.
[**[Step 4.]{}**]{} The proof for (ii) $\Rightarrow$ (i) in Theorem \[thm:1.1\].
The proof is carried out in Sections \[sec:cpx\], \[sec:classical\] and \[sec:rank1\]. We shall verify the existence of an open orbit of the adjoint action of $(M_H \cap M_G)A_H$ in ${\mathfrak {n}}^{-\sigma}$, by using the criterion of (PP) in Step 3.
Section \[sec:classical\] deals with specific symmetric pairs in a case-by-case fashion, for instance, $({\mathfrak {g}}, {\mathfrak {h}})=
({\mathfrak {o}}(i+j, k+l), {\mathfrak {o}}(i,k)+{\mathfrak {o}}(j,l))$. We use the invariant theory of quivers (Subsection \[subsec:upq\]). In that section we classify not only the irreducible symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (PP) but also those satisfying (QP); see Theorem \[thm:QpPp\] below. This is precisely where the proof for (ii) $\Rightarrow$ (i) in Theorem \[thm:1.1\] is given. For some families of symmetric pairs $(G,H)$ we give two proofs.
$$\begin{aligned}
{2}
&\text{Proposition \ref{prop:cpx}}
\qquad
&&
\text{(F1)(F2)(F3)(F4)(F5)}
\\
&\text{Proposition \ref{prop:upq}}
\qquad
&&
\text{(E1)(E2)(E3)(F4)(F5)(H4)}
\\
&\text{Proposition \ref{prop:glgl}}
\qquad
&&
\text{(F1)(F3)(H2)}
\\
&\text{Proposition \ref{prop:somn}}
\qquad
&&
\text{(F2)}
\\
&\text{Proposition \ref{prop:sostar}}
\qquad
&&
\text{(H3)}
\\
&\text{Proposition \ref{prop:e6}}
\quad
&&
\text{(H5)}
\\
&\text{Proposition \ref{prop:rankH}}
\qquad
&&
\text{(E1)(E2)(E3)(E4)}\end{aligned}$$
[**[Step 5.]{}**]{} The proof for (i) $\Rightarrow$ (ii) in Theorem \[thm:1.1\].
The proof is carried out together with the classification of a larger set of irreducible symmetric pairs satisfying (QP). We divide irreducible symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ into the following three cases. Some of the results for concrete examples in Section \[sec:classical\] are used in Sections \[sec:nonKe\] and \[sec:Ke\]. $$\begin{aligned}
{2}
\text{Case 5a. \enspace (Section \ref{sec:rank1})}
\qquad
&\operatorname{rank}_{\mathbb{R}}H=1.
&&
\\
\text{Case 5b. \enspace (Section \ref{sec:nonKe})}
\qquad
&\operatorname{rank}_{\mathbb{R}}H \ge 2,
\qquad
&&\text{$({\mathfrak {g}}, {\mathfrak {g}}^{\sigma \theta})$
does not belong to the $K_{\varepsilon}$-family.}
\\
\text{Case 5c. \enspace (Section \ref{sec:Ke})}
\qquad
&\operatorname{rank}_{\mathbb{R}}H \ge 2,
\qquad
&&\text{$({\mathfrak {g}}, {\mathfrak {g}}^{\sigma \theta})$
belongs to the $K_{\varepsilon}$-family.}\end{aligned}$$ As a byproduct of the proof of Theorem \[thm:1.1\], we obtain a complete list of the irreducible symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (QP):
\[thm:QpPp\] Irreducible symmetric pairs satisfying [[(QP)]{}]{} but not satisfying [[(PP)]{}]{} are listed as follows: $$\begin{aligned}
{3}
\operatorname{I_{\mathbb{R}}}&:
\enspace
&&({\mathfrak{o}} (p+1,q), {\mathfrak{o}} (p)+{\mathfrak{o}} (1,q))
&&(p,q \ge 2),
\\
\operatorname{I_{\mathbb{C}}}&:
\enspace
&&({\mathfrak{su}} (p+1,q), {\mathfrak{s}}({\mathfrak {u}}(p)+{\mathfrak{u}} (1,q)))
\qquad
&&
(p,q \ge 2),
\hphantom{MMMMMMMMMMMMM}
\\
\operatorname{I_{\mathbb{H}}}&:\enspace
&&({\mathfrak{sp}} (p+1,q), {\mathfrak{sp}} (p)+{\mathfrak{sp}} (1,q))
&&(p,q \ge 2)
\\
{\operatorname{II}}&:
\enspace
&&({\mathfrak{o}} (n+1, {\mathbb{C}}), {\mathfrak{o}} (n,1))
&&(n \ge 4)
\\
{\operatorname{III}}&:
\enspace
&&({\mathfrak{o}}^{\ast} (2n+2), {\mathfrak{u}} (n,1))
&&(n \ge 4). \end{aligned}$$
We shall see in Proposition \[prop:rankH\] that $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies (QP) if $({\mathfrak {g}}, {\mathfrak {h}})$ is one of ${\rm{I}}_{\mathbb{R}}$, ${\rm{I}}_{\mathbb{C}}$, ${\rm{I}}_{\mathbb{H}}$, ${\rm{II}}$ or ${\rm{III}}$. Parts of this assertion also follow from Proposition \[prop:upq\] when $({\mathfrak {g}}, {\mathfrak {h}})$ is ${\rm{I}}_{\mathbb{R}}$, ${\rm{I}}_{\mathbb{C}}$, or ${\rm{I}}_{\mathbb{H}}$, and from Proposition \[prop:OUpq\] when $({\mathfrak {g}}, {\mathfrak {h}})$ is ${\rm{III}}$. The exhaustion of this list is a crucial part of Step 5.
By the classification in Theorem \[thm:QpPp\], we obtain
\[cor:QP\] For irreducible symmetric pairs $({\mathfrak{g}}, {\mathfrak {h}})$ with $\operatorname{rank}_{\mathbb{R}} {\mathfrak{h}} \ge 2$, [[(PP)]{}]{} $\Leftrightarrow$ [[(QP)]{}]{}.
Linearization of the open-orbit conditions [[(PP)]{}]{} and [[(QP)]{}]{} {#sec:method}
========================================================================
The goal of this section is to give a criterion for [[(PP)]{}]{} by linearization. The main result is Theorem \[thm:pp\]. The proof for the implication (ii) $\Rightarrow$ (i) in Theorem \[thm:1.2\] is carried out by this criterion in later sections. In order to optimise the proof for the exhaustion of the list (C)–(H), we introduce another geometric condition (QP), which is slightly weaker than (PP). Then the condition [[(QP)]{}]{} becomes a stepping-stone in the proof of the implication (i) $\Rightarrow$ (ii) by removing most of the symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ that do not satisfy [[(PP)]{}]{}. The condition [[(QP)]{}]{} is also linearized in Theorem \[thm:qp\]. We shall further analyze the condition [[(QP)]{}]{} in Propositions \[prop:cdual\] and \[prop:QPrank\].
Parabolic subgroup $Q$ associated to $(G,H)$ {#subsec:Q}
--------------------------------------------
Let $G$ be a real reductive linear Lie group. Suppose that $\sigma$ is an involutive automorphism of $G$. The set $G^{\sigma}:=\{g \in G: \sigma g =g\}$ of fixed points by $\sigma$ is a closed subgroup of $G$. We say $(G,H)$ is a [*[reductive symmetric pair]{}*]{} if $H$ is an open subgroup of $G^{\sigma}$ for some $\sigma$.
We take a Cartan involution $\theta$ of $G$ commuting with $\sigma$, and set $K:=G^{\theta}$, a maximal compact subgroup of $G$. The Lie algebras will be denoted by lower German letters such as ${\mathfrak {g}}$, ${\mathfrak {h}}$, ${\mathfrak {k}}$, $\cdots$, and we shall use the same letters $\sigma$ and $\theta$ for the induced automorphisms of the Lie algebra ${\mathfrak {g}}$. If $\tau$ is an involutive endomorphism of a real vector space $V$, then $\tau$ is diagonalizable with eigenvalues $\pm 1$. We write the eigenspace decomposition as $$V=V^{\tau} +V^{-\tau}$$ where $V^{\pm \tau}:=\{X \in V:\tau X=\pm X\}$. With the above notation, ${\mathfrak {h}}={\mathfrak {g}}^{\sigma}$, ${\mathfrak {k}}={\mathfrak {g}}^{\theta}$, and ${\mathfrak {g}}={\mathfrak {g}}^{\theta} + {\mathfrak {g}}^{-\theta}$ is a Cartan decomposition of the Lie algebra ${\mathfrak {g}}$.
We fix a maximal abelian subspace ${\mathfrak {a}}_H$ in ${\mathfrak {h}}^{-\theta}$, and extend it to a maximal abelian subspace ${\mathfrak {a}}_G$ in ${\mathfrak {g}}^{-\theta}$. The split rank of $H$ will be denoted by $$\operatorname{rank}_{\mathbb{R}}H
:=
\dim_{\mathbb{R}}{\mathfrak {a}}_H.$$ For $\alpha \in {\mathfrak {a}}_G^{\ast}$, we write $${\mathfrak {g}}({\mathfrak {a}}_G; \alpha)
:=\{X \in {\mathfrak {g}}
:
[H,X]=\alpha(H)X
\quad
\text{ for }\,\, H \in {\mathfrak {a}}_G
\},$$ and denote by $\Sigma({\mathfrak {g}}, {\mathfrak {a}}_G)$ the set of nonzero $\alpha$ such that ${\mathfrak {g}}({\mathfrak {a}}_G; \alpha)\ne \{0\}$. Similar notation is applied to ${\mathfrak {a}}_H$. Then the set of nonzero weights $\Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)$ satisfies the axiom of root systems ([@OS Theorem 2.1]) as well as $\Sigma({\mathfrak {g}}, {\mathfrak {a}}_G)$. We choose [*[compatible]{}*]{} positive systems $\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_G)$ and $\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)$ in the sense that $$\alpha|_{\mathfrak {a}_H}
\in \Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H) \cup \{0\}
\text{ for any }
\alpha \in \Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_G).$$ We write ${\mathfrak {g}}({\mathfrak {a}}_G; \alpha)$ and ${\mathfrak {g}}({\mathfrak {a}}_H; \lambda)$ for the root space of $\alpha \in \Sigma({\mathfrak {g}}, {\mathfrak {a}}_G)$ and $\lambda \in \Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)$, respectively. We set $$\begin{aligned}
{\mathfrak {n}}:=&
\bigoplus
_{\alpha|_{{\mathfrak {a}_H}} \in \Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)}
{\mathfrak {g}}({\mathfrak {a}}_G; \alpha)
=
\bigoplus
_{\lambda \in \Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)}
{\mathfrak {g}}({\mathfrak {a}}_H; \lambda),
\\
{\mathfrak {n}}_G:=&
\bigoplus
_{\alpha \in \Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_G)}
{\mathfrak {g}}({\mathfrak {a}}_G; \alpha). \end{aligned}$$ Clearly ${\mathfrak {n}} \subset {\mathfrak {n}}_G$. We remark that ${\mathfrak {n}}_G$ is not necessarily $\sigma$-stable, but ${\mathfrak {n}}$ is $\sigma$-stable. So we have a direct sum decomposition: $${\mathfrak {n}}={\mathfrak {n}}^{\sigma}+{\mathfrak {n}}^{-\sigma}.$$ We write $\Delta({\mathfrak {n}}^{\pm\sigma})$ for the set of ${\mathfrak {a}}_H$-weights in ${\mathfrak {n}}^{\pm\sigma}$. Then we have $$\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)
=
\Delta({\mathfrak {n}}^{\sigma}) \cup \Delta({\mathfrak {n}}^{-\sigma}),$$ which is not disjoint in general. Let $P_G$ be the minimal parabolic subgroup of $G$ that normalizes ${\mathfrak {n}}_G$, $\overline{P_G}$ the opposite parabolic, and $Q$ and $\overline Q$ the parabolic subgroups of $G$ corresponding to $\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)$ and $-\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_H)$, respectively. Then ${\mathfrak {p}}_H:={\mathfrak {q}} \cap {\mathfrak {h}}$ is a minimal parabolic subalgebra of ${\mathfrak {h}}$. We set $$\begin{aligned}
M_G:=& Z_{K}({\mathfrak {a}}_G),
\\
M_H:=& Z_{H \cap K}({\mathfrak {a}}_H),
\\
A_H:=&\exp({\mathfrak {a}}_H),
\\
L:=& Z_G({\mathfrak {a}}_H),
L_H:=Z_H({\mathfrak {a}}_H)=M_H A_H.
\\
\intertext{Then we have}
Q=&LN=L \exp ({\mathfrak {n}}),
\quad
P_H=L_H N ^{\sigma}=M_HA_HN^{\sigma}. \end{aligned}$$ We note $P_G \subset Q \supset P_H$ and $Q \cap \overline{P_G} = L \cap \overline{P_G}$.
In the Introduction, we considered the following two properties: $$\begin{aligned}
&\text{({\rm{PP}})\quad
$P_H$ has an open orbit on the real flag variety $G/P_G$,
}
\\
&
\text{({\rm{BB}})\quad
$B_H$ has an open orbit
on the complex flag variety $G_{\mathbb{C}}/B_G$.
}
\intertext{
We note that the conditions (PP) and (BB)
are independent
of coverings or connectedness
of the groups,
and depend only on the pair
of the Lie algebras $({\mathfrak {g}}, {\mathfrak {h}})$.
\vskip 0.3pc
In addition to the properties
{\rm{(PP)}} and {\rm{(BB)}},
we consider
}
&
\text{
{\rm{(QP)}}
\quad
$P_H$ has an open orbit
on the real generalized flag variety $G/\overline Q$.
}\end{aligned}$$ Among the three properties, we have:
\[lem:BPQ\] Let $({\mathfrak {g}}, {\mathfrak {h}})$ be a symmetric pair. Then we have
1. [[(BB)]{}]{} $\Rightarrow$ [[(PP)]{}]{} $\Rightarrow$ [[(QP)]{}]{}.
2. If $\operatorname{rank}_{\mathbb{R}}H
=\operatorname{rank}_{\mathbb{R}}G$, then [[(PP)]{}]{} $\Leftrightarrow$ [[(QP)]{}]{}.
1) The implication [[(BB)]{}]{} $\Rightarrow$ [[(PP)]{}]{} follows from [@xtoshitoshima Lemmas 4.2 and 5.3]. The implication [[(PP)]{}]{} $\Rightarrow$ [[(QP)]{}]{} is obvious because $P_G$ is conjugate to $\overline P_G$ and $\overline P_G \subset \overline Q$.
2) If ${\mathfrak {a}}_H={\mathfrak {a}}_G$, then $P_G$ coincides with $Q$. Thus [[(PP)]{}]{} is equivalent to [[(QP)]{}]{}.
1) We defined (PP) and (BB) without assuming that $({\mathfrak {g}}, {\mathfrak {h}})$ is a symmetric pair; however, (QP) is defined only for symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$.
2) The equivalence (PP) $\Leftrightarrow$ (QP) also holds for any irreducible symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ with ${\operatorname{rank}}_{\mathbb{R}} {\mathfrak {h}} \ge 2$ (see Corollary \[cor:QP\]).
Criterion for (PP) and (QP) {#subsec:PPQP}
---------------------------
We are ready to state a necessary and sufficient condition for the property [[(PP)]{}]{}, and that for [[(QP)]{}]{} in terms of the adjoint action of $Z_H({\mathfrak {a}}_H)=M_H A_H$ on ${\mathfrak {n}}^{-\sigma}$.
\[thm:pp\] The following two conditions are equivalent:
1. $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(PP)]{}]{}.
2. $(M_H \cap M_G)A_H$ has an open orbit on ${\mathfrak {n}}^{-\sigma}$ via the adjoint action.
\[thm:qp\] Let $(G,H)$ be a reductive symmetric pair. Then the following two conditions are equivalent:
1. $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(QP)]{}]{}.
2. $Z_H({\mathfrak {a}}_H)=M_H A_H$ has an open orbit on ${\mathfrak {n}}^{-\sigma}$.
For the proof of Theorems \[thm:pp\] and \[thm:qp\], we need a basic structural result on the centralizers of ${\mathfrak {a}}_H$ and ${\mathfrak {a}}_G$.
\[lem:LIwasawa\] [[1)]{}]{}$
Z_{{\mathfrak{h}} \cap {\mathfrak{k}}}({\mathfrak{a}}_H)
\cap
Z_{{\mathfrak{k}}}({\mathfrak{a}}_G)
=
Z_{{\mathfrak{h}}\cap{\mathfrak{k}}}({\mathfrak{a}}_G).
$
2. $
Z_{{\mathfrak{h}} \cap {\mathfrak{k}}}({\mathfrak{a}}_H)
+
Z_{{\mathfrak{k}}}({\mathfrak{a}}_G)
=
Z_{{\mathfrak{k}}}({\mathfrak{a}}_H).
$
3. $
Z_{{\mathfrak{g}}}({\mathfrak{a}}_H)
=
Z_{{\mathfrak{h}}\cap {\mathfrak{k}}}({\mathfrak{a}}_H)
+
(Z_{{\mathfrak{g}}}({\mathfrak{a}}_H) \cap
\overline{\mathfrak{p}}_G)
$.
1\) Clear from ${\mathfrak {a}}_H \subset {\mathfrak {a}}_G$.
2) If $\alpha|_{{\mathfrak {a}}_H}=0$ then $\sigma \theta \alpha=\alpha$, and therefore the involution $\sigma \theta$ stabilizes ${\mathfrak{g}}({\mathfrak{a}}_G;\alpha)$ for $\alpha|_{{\mathfrak{a}}_H}=0$. Thus we have a direct sum decomposition $${\mathfrak{g}}({\mathfrak{a}}_{G};\alpha)
=
{\mathfrak{g}}^{\sigma\theta}({\mathfrak{a}}_G;\alpha)
+
{\mathfrak{g}}^{-\sigma\theta}({\mathfrak{a}}_G;\alpha).$$ We claim that $
{\mathfrak{g}}^{-\sigma\theta}({\mathfrak{a}}_G;\alpha)
=\{0\}
$ for any $\alpha \in \Sigma({\mathfrak{g}}, {\mathfrak{a}}_G)$ with $\alpha|_{{\mathfrak {a}}_H}=0$. In fact, suppose that a nonzero element $X \in {\mathfrak{g}}({\mathfrak{a}}_G;\alpha)$ satisfies $\sigma \theta X=-X$. Then $X+ \sigma X \ne 0$ because $\sigma X \in {\mathfrak{g}}({\mathfrak{a}}_G; \sigma \alpha)
={\mathfrak {g}}({\mathfrak {a}}_G;-\alpha)$ and ${\mathfrak{g}}({\mathfrak{a}}_G;\alpha)
\cap {\mathfrak{g}}({\mathfrak{a}}_G;-\alpha)=\{0\}$ if $\alpha \ne 0$. On the other hand, $$X + \sigma X = X - \theta X
\in {\mathfrak{h}}^{-\theta}.$$ Since $[{\mathfrak{a}}_H, X + \sigma X]=\{0\}$, it contradicts the maximality of ${\mathfrak{a}}_H$ as an abelian subspace in ${\mathfrak{h}}^{-\theta}$. Thus we have shown the claim.
Therefore we have the following direct sum decomposition $$Z_{\mathfrak{g}}({\mathfrak{a}}_H)
=\bigoplus_{\alpha|_{{\mathfrak{a}}_H}=0}
{\mathfrak{g}}({\mathfrak{a}}_G;\alpha)
={\mathfrak{g}}({\mathfrak{a}}_G;0)
\oplus
\bigoplus_{\substack{\alpha|_{{\mathfrak{a}}_H}=0 \\ \alpha \ne0}}
{\mathfrak{g}}^{\sigma\theta}({\mathfrak{a}}_G;\alpha).$$ Taking the intersection with ${\mathfrak {k}}$, we get the identity $$\begin{aligned}
Z_{\mathfrak{k}}({\mathfrak{a}}_H)
=&
({\mathfrak{g}}({\mathfrak{a}}_G;0) \cap {\mathfrak{k}})
\oplus
\bigoplus
_{\substack{\alpha|_{{\mathfrak{a}}_H}=0,\\\alpha \ne 0}}
({\mathfrak{g}}^{\sigma\theta}({\mathfrak{a}}_G;\alpha) \cap {\mathfrak{k}})
\\
=&Z_{\mathfrak{k}}({\mathfrak{a}}_G)
+
Z_{\mathfrak {h} \cap \mathfrak{k}}({\mathfrak{a}}_H). \end{aligned}$$
3) By the Iwasawa decomposition of the reductive subalgebra $Z_{\mathfrak{g}}({\mathfrak{a}}_H)$, we have $$Z_{\mathfrak{g}}({\mathfrak{a}}_H)
=
Z_{\mathfrak{k}}({\mathfrak{a}}_H)
+
(Z_{\mathfrak{g}}({\mathfrak{a}}_H)
\cap
\overline {\mathfrak{p}}_G).$$ Combining this with the second statement, we have $$Z_{\mathfrak{g}}({\mathfrak{a}}_H)
=
Z_{\mathfrak {h} \cap \mathfrak{k}}({\mathfrak{a}}_H)
+
Z_{\mathfrak{k}}({\mathfrak{a}}_G)
+
(Z_{\mathfrak{g}}({\mathfrak{a}}_H)
\cap
\overline {\mathfrak{p}}_G)
=
Z_{\mathfrak {h} \cap \mathfrak{k}}({\mathfrak{a}}_H)
+
(Z_{\mathfrak{g}}({\mathfrak{a}}_H)
\cap
\overline {\mathfrak{p}}_G)$$ because $Z_{\mathfrak{k}}({\mathfrak{a}}_G)
\subset
Z_{\mathfrak{g}}({\mathfrak{a}}_H)
\cap
\overline {\mathfrak{p}}_G$.
The following lemma is known ([@Benoist]), but we give a proof for the sake of completeness.
\[lem:Nsigma\] Suppose $N$ is a simply connected nilpotent Lie group with an involutive automorphism $\sigma$.
1. The exponential map $\exp :{\mathfrak {n}} \to N$ induces bijections ${\mathfrak {n}}^{\sigma} \overset \sim \to N^{\sigma}$ and ${\mathfrak {n}}^{-\sigma} \overset \sim \to N^{-\sigma}$.
2. The following map is also bijective: $$\label{eqn:Nsigma}
{\mathfrak {n}}^{\sigma} + {\mathfrak {n}}^{-\sigma}
\to N,
\quad
(X, Y) \mapsto \exp X \exp Y.$$
[[1)]{}]{} Since $N$ is simply connected and nilpotent, the exponential map is bijective. We write $\log : N \to {\mathfrak {n}}$ for its inverse. Then, for the first statement, it is sufficient to prove the surjectivity of the restrictions ${\mathfrak {n}}^{\pm \sigma}
\to N^{\pm\sigma}$. Take an arbitrary $y \in N$ such that $\sigma(y)=y^{\pm 1}$. Then $Y:=\log y$ satisfies $\exp (\sigma Y)=\exp(\pm Y)$, whence $\sigma Y = \pm Y$. Thus $\exp : {\mathfrak {n}}^{\sigma} \to N^{\sigma}$ and ${\mathfrak {n}}^{-\sigma} \to N^{-\sigma}$ are both surjective.
[[2)]{}]{} Clearly the map is injective. To see that it is surjective, we take $z \in N$. Since $z^{-1} \sigma(z) \in N^{-\sigma}$, we have $Y:=-\frac 1 2 \log(z^{-1} \sigma(z)) \in {\mathfrak {n}}
^{-\sigma}$. We set $x:=z \exp (-Y)$. Then $
x \sigma(x)^{-1} = z \exp (-2 Y) \sigma(z)^{-1}=e.
$ Thus $X:=\log (x) \in {\mathfrak {n}}^{\sigma}$ and $z =\exp X \exp Y$. Hence we have shown that the map is surjective, too.
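The factorization $z=\exp X \exp Y$ just proved can be illustrated numerically. The sketch below (our own illustration, not part of the text) takes $N$ to be the group of $3\times 3$ upper unitriangular matrices with the involution $\sigma = \operatorname{Ad}(\operatorname{diag}(1,-1,1))$, so that ${\mathfrak n}^{\sigma}$ is spanned by $E_{13}$ and ${\mathfrak n}^{-\sigma}$ by $E_{12}, E_{23}$; the helper names are ours, and the truncated series for $\exp$ and $\log$ are exact here because cubes of strictly upper triangular $3\times3$ matrices vanish:

```python
import numpy as np

I = np.eye(3)
d = np.diag([1.0, -1.0, 1.0])

def sigma(g):
    # involutive automorphism: conjugation by diag(1, -1, 1)
    return d @ g @ d

def expm(M):
    # exact for strictly upper triangular 3x3 matrices (M^3 = 0)
    return I + M + M @ M / 2

def logm(U):
    # exact inverse of expm on unipotent matrices (N^3 = 0)
    N = U - I
    return N - N @ N / 2

# an arbitrary element z of N
z = expm(np.array([[0.0, 1.3, -0.7],
                   [0.0, 0.0,  2.1],
                   [0.0, 0.0,  0.0]]))

# the recipe from the proof: Y := -1/2 log(z^{-1} sigma(z)), x := z exp(-Y)
Y = -0.5 * logm(np.linalg.inv(z) @ sigma(z))   # lies in n^{-sigma}
X = logm(z @ expm(-Y))                          # lies in n^{sigma}

assert np.allclose(sigma(Y), -Y)                # Y in n^{-sigma}
assert np.allclose(sigma(X), X)                 # X in n^{sigma}
assert np.allclose(expm(X) @ expm(Y), z)        # z = exp X exp Y
```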
\[lem:Qdouble\] We let $Z_H({\mathfrak {a}}_H)=M_H A_H$ act linearly on ${\mathfrak {n}}^{-\sigma}$. Then the natural inclusion ${\mathfrak {n}}^{-\sigma} \overset {\exp}\to N^{-\sigma}
\hookrightarrow Q$ induces the following bijections: $$\begin{aligned}
\label{eqn:Qdouble}
{\mathfrak {n}}^{-\sigma}/(M_H \cap M_G)A_H
&\overset \sim \to P_H\backslash Q / (L \cap \overline{P_G}),
\\
\label{eqn:QLdouble}
{\mathfrak {n}}^{-\sigma}/M_H A_H
&\overset \sim \to P_H\backslash Q / L. \end{aligned}$$
It follows from Lemmas \[lem:LIwasawa\] and \[lem:Nsigma\] that $$Q=NL=NM_H(L \cap \overline{P_G})
=M_H N^{\sigma} \exp ({\mathfrak {n}}^{-\sigma})
(L \cap \overline{P_G}).$$ Thus the map (\[eqn:Qdouble\]) is surjective, and so is (\[eqn:QLdouble\]).
1) Suppose two elements $X_1$, $X_2 \in {\mathfrak {n}}^{-\sigma}$ have the same image in (\[eqn:Qdouble\]). This means that there exist $l_H \in L_H =M_HA_H$, $n_H \in N^{\sigma}$, and $l \in L \cap \overline{P_G}$ such that $x_i = \exp (X_i)$ ($i=1,2$) satisfy $x_1=l_Hn_Hx_2 l$. Then we have $$L \ni l_H^{-1}l^{-1}
=(l_H^{-1} x_1^{-1} l_H)n_H x_2
\in N^{-\sigma}N^{\sigma}N^{-\sigma}=N,$$ and therefore $l=l_H^{-1}$, $n_H =e$, and $l_H^{-1}x_1 l_H=x_2$. Hence $\operatorname{Ad} (l_H)X_2=X_1$. Since $l_H=l^{-1}$ belongs to $$M_H A_H \cap (L \cap \overline P_G)
=
(M_H \cap M_G)A_H,$$ the map (\[eqn:Qdouble\]) is injective.
2) The proof for (\[eqn:QLdouble\]) parallels that for (\[eqn:Qdouble\]). The only difference is that $l \in L$ instead of the previous condition $l \in L \cap \overline P_G$, and thus $l_H=l^{-1}$ belongs to $M_H A_H \cap L = M_H A_H$. Hence $X_1$ and $X_2$ give the same equivalence class under the action of $M_HA_H$.
We are ready to complete the proof of Theorems \[thm:pp\] and \[thm:qp\].
\[Proof of Theorem \[thm:pp\]\] Since any two minimal parabolic subgroups are conjugate to each other by an inner automorphism, [[(PP)]{}]{} is equivalent to the existence of an open $P_H$-orbit in $G/\overline P_G$. By the Bruhat decomposition, the $Q$-orbit through the origin $o=e \overline P_G$ in $G/\overline P_G$ is open dense because $P_G \subset Q$. This open orbit is given by $Q/(Q \cap \overline P_G) = Q/(L \cap \overline P_G)$ as a homogeneous space of $Q$. Since $Q$ contains $P_H$, the condition [[(PP)]{}]{} is equivalent to the existence of an open $P_H$-orbit in $Q/(L \cap \overline P_G)$. By Lemma \[lem:Qdouble\], this amounts to the existence of an open $(M_H \cap M_G)A_H$-orbit in ${\mathfrak {n}}^{-\sigma}$.
\[Proof of Theorem \[thm:qp\]\] The proof is similar to that for Theorem \[thm:pp\]. In fact, since the $Q$-orbit through the origin $o=e \overline Q$ in $G/\overline Q$ is open dense and given by $Q/(Q \cap \overline Q) = Q/L$, the condition [[(QP)]{}]{} is equivalent to the existence of an open $P_H$-orbit in $Q/L$, which in turn is equivalent to the existence of an open $M_H A_H$-orbit in ${\mathfrak {n}}^{-\sigma}$ by Lemma \[lem:Qdouble\].
$c$-dual of symmetric pairs and (QP) {#subsec:cdual}
------------------------------------
For a symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ defined by an involutive automorphism $\sigma$ of ${\mathfrak {g}}$, we write $${\mathfrak {g}}={\mathfrak {g}}^{\sigma} + {\mathfrak {g}}^{-\sigma}$$ for the eigenspace decomposition of $\sigma$ with eigenvalues $+1$ and $-1$ as before. Then ${\mathfrak {h}}={\mathfrak {g}}^{\sigma}$. We set $${\mathfrak {g}}^c:={\mathfrak {g}}^{\sigma}+\sqrt{-1}{\mathfrak {g}}^{-\sigma}.$$ Then the vector space ${\mathfrak {g}}^c$ carries a natural Lie algebra structure, and the pair $({\mathfrak {g}}^c, {\mathfrak {h}})$ forms a symmetric pair by the restriction of the complex linear extension of $\sigma$ to ${\mathfrak {g}}_{\mathbb{C}} = {\mathfrak {g}} \otimes _{\mathbb{R}}
{\mathbb{C}}$. The pair $({\mathfrak {g}}^c, {\mathfrak {h}})$ is called the [*[$c$-dual]{}*]{} of the symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$. We note that ${\mathfrak {g}}$ is reductive if and only if ${\mathfrak {g}}^c$ is reductive.
\[ex:cdual\]
1) The $c$-dual of the pair $({\mathfrak {g}}\oplus {\mathfrak {g}}, \operatorname{diag}{\mathfrak {g}})$ is isomorphic to $({\mathfrak {g}}_{\mathbb{C}}, {\mathfrak {g}})$, where the involution of ${\mathfrak {g}}_{\mathbb{C}}$ is given by the complex conjugation with respect to the real form ${\mathfrak {g}}$.
2) The complex symmetric pair $({\mathfrak {g}}_{\mathbb{C}}, {\mathfrak {h}}_{\mathbb{C}})$ is self-$c$-dual.
\[prop:cdual\] A reductive symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(QP)]{}]{} if and only if the $c$-dual $({\mathfrak {g}}^c, {\mathfrak {h}})$ satisfies [[(QP)]{}]{}.
By the criterion in Theorem \[thm:qp\], the reductive symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ (respectively, the $c$-dual $({\mathfrak {g}}^c, {\mathfrak {h}})$) satisfies [[(QP)]{}]{} if and only if the group $Z_H({\mathfrak {a}}_H)$ has an open orbit in ${\mathfrak {n}}^{-\sigma}$ (respectively, in $\sqrt{-1}{\mathfrak {n}}^{-\sigma}$). Since ${\mathfrak {n}}^{-\sigma}$ and $\sqrt{-1}{\mathfrak {n}}^{-\sigma}$ are isomorphic to each other as modules of the group $Z_H({\mathfrak {a}}_H)$, we get the proposition.
\[rem:cdual\] [[ An analogous statement to Proposition \[prop:cdual\] does not hold for [[(PP)]{}]{} in general. ]{}]{}
Further properties for (QP) {#subsec:QPmore}
---------------------------
In order to screen the symmetric pairs that do not satisfy (QP), it is convenient to find a necessary condition for (QP) in terms of the restricted root system.
Here is the one that we frequently use in later sections:
\[prop:QPrank\] If $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(QP)]{}]{}, then elements of $\Delta({\mathfrak {n}}^{-\sigma})$ are linearly independent. In particular, we have $$\label{eqn:rn}
\operatorname{rank}_{\mathbb{R}}H
\ge
\# \Delta({\mathfrak {n}}^{-\sigma}),$$ where $\# \Delta({\mathfrak {n}}^{-\sigma})$ denotes the cardinality of the weights of ${\mathfrak {a}}_H$ in ${\mathfrak {n}}^{-\sigma}$ without counting the multiplicities.
The converse statement of Proposition \[prop:QPrank\] is not true; however, we shall see that the condition (\[eqn:rn\]) is a fairly good criterion for (QP). For example, if $(G,H)=(SO^{\ast}(2p+2q),SO^{\ast}(2p) \times SO^{\ast}(2q))$, then (\[eqn:rn\]) is equivalent to (QP) except for $(p,q)=(2,2)$; see Proposition \[prop:sostar\].
For $\lambda \in {\mathfrak {a}}_H^{\ast}
={\operatorname{Hom}}_{\mathbb{R}}({\mathfrak {a}}_H, {\mathbb{R}})$, let $\chi_{\lambda}$ be the one-dimensional real representation of the abelian group $A_H$ given by $$\chi_\lambda(\exp Y)
=\exp \langle \lambda, Y \rangle
\quad
\text{ for }
Y \in {\mathfrak {a}}_H,$$ and write ${\mathbb{R}}_\lambda$ $(\simeq {\mathbb{R}})$ for the representation space of ${\chi}_{\lambda}$. To prove the proposition, we need the following elementary lemma:
\[lem:li\] Let $F$ be a finite subset of ${\mathfrak {a}}_H^{\ast}$. If $A_H$ has an open orbit in the vector space $\bigoplus_{\lambda \in F}{\mathbb{R}}_\lambda$, then $F$ consists of linearly independent elements. In particular, $\# F \le \dim {\mathfrak {a}}_H$.
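The content of Lemma \[lem:li\] is elementary linear algebra: at a point of $\bigoplus_{\lambda \in F}{\mathbb{R}}_\lambda$ with all coordinates nonzero, the differential of the $A_H$-orbit map $Y \mapsto (\chi_\lambda(\exp Y)\,x_\lambda)_{\lambda \in F}$ has rank equal to the rank of the matrix whose rows are the weights $\lambda \in F$, so an open orbit exists if and only if the elements of $F$ are linearly independent. A minimal numerical sketch (the function name is ours):

```python
import numpy as np

def has_open_orbit(F):
    """Open A_H-orbit test on V = sum of R_lambda over lambda in F.

    At a point whose coordinates are all nonzero, the differential of
    the orbit map  t -> (exp<lambda, t> * x_lambda)  has rank equal to
    the rank of the matrix whose rows are the weights lambda, so an
    open orbit exists iff F is linearly independent (hence #F <= dim a_H).
    """
    M = np.array(F, dtype=float)
    return np.linalg.matrix_rank(M) == len(F)

# dim a_H = 2: two independent weights admit an open orbit ...
assert has_open_orbit([(1, 0), (0, 1)])
# ... but three weights in a 2-dimensional a_H^* never do
assert not has_open_orbit([(1, 0), (0, 1), (1, 1)])
# a linearly dependent pair also fails
assert not has_open_orbit([(1, 1), (2, 2)])
```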
We return to the proof of Proposition \[prop:QPrank\]:
\[Proof of Proposition \[prop:QPrank\]\] Since ${\mathfrak {a}}_H$ is a maximal abelian subspace in ${\mathfrak {h}}^{-\theta}$, we have $Z_H({\mathfrak {a}}_H)
= M_H A_H$ with $M_H=Z_{H \cap K}({\mathfrak {a}}_H)$ compact. We equip ${\mathfrak {n}}^{-\sigma}$ with an $M_H$-invariant inner product such that the decomposition $${\mathfrak {n}}^{-\sigma}
\simeq
\bigoplus_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})}
{\mathfrak{g}}^{-\sigma}({\mathfrak{a}}_H;\lambda)$$ is orthogonal.
Let $O_{\lambda}$ be the orthogonal group of the subspace ${\mathfrak {g}}^{-\sigma}({\mathfrak {a}}_H;\lambda)$. Then the quotient space of ${\mathfrak {g}}^{-\sigma}({\mathfrak {a}}_H;\lambda)$ by $O_{\lambda}$ is given by the half-line: $${\mathfrak {g}}^{-\sigma}({\mathfrak {a}}_H;\lambda)/O_{\lambda}
\simeq ({\mathbb{R}}_{\lambda})_{\ge 0}.$$
Since the compact group $M_H$ preserves the inner product on ${\mathfrak {n}}^{-\sigma}$, we have a natural surjective map between the quotient spaces of ${\mathfrak {n}}^{-\sigma}$: $${\mathfrak {n}}^{-\sigma}/M_H
\to
{\mathfrak {n}}^{-\sigma}
/
(\prod_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})} O_{\lambda})
\simeq
\prod_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})}
({\mathfrak {g}}^{-\sigma}({\mathfrak {a}}_H;\lambda)/O_{\lambda})
\simeq
\prod_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})}
(
{\mathbb{R}}_\lambda
)_{\ge 0}.$$ Now, if $Z_{H \cap K}({\mathfrak {a}}_H)A_H$ has an open orbit in ${\mathfrak {n}}^{-\sigma}$ via the adjoint representation, then $A_H$ has an open orbit in the quotient of ${\mathfrak {n}}^{-\sigma}$ by $M_H$. Therefore $A_H$ has an open orbit in $$\prod_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})}
({\mathbb{R}}_{\lambda})_{\ge 0}
\subset
\bigoplus_{\lambda \in \Delta({\mathfrak {n}}^{-\sigma})}
{\mathbb{R}}_{\lambda},$$ too. Applying Lemma \[lem:li\], we conclude that the elements of $\Delta({\mathfrak {n}}^{-\sigma})$ are linearly independent, and therefore $\# \Delta({\mathfrak {n}}^{-\sigma}) \le \dim {\mathfrak {a}}_H = \operatorname{rank}_{\mathbb{R}}H$. Hence the proposition is proved.
Next we analyze the inequality in Proposition \[prop:QPrank\]. For this, we denote by $W_H$ the Weyl group of the restricted root system $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$. Then $W_H$ acts on the finite set $\Delta({\mathfrak {n}}^{-\sigma})
\cup (-\Delta({\mathfrak {n}}^{-\sigma}))$, and consequently, we have an obvious inequality $$2 \# \Delta({\mathfrak {n}}^{-\sigma})
\ge \# (W_H \cdot \lambda)$$ for any $\lambda \in \Delta({\mathfrak {n}}^{-\sigma})$. Hence the inequality (\[eqn:rn\]) implies that for any $\lambda \in \Delta({\mathfrak {n}}^{-\sigma})$, we have $$\label{eqn:aW}
2 \dim {\mathfrak {a}}_H
\ge \# (W_H \cdot \lambda).$$ The inequality (\[eqn:aW\]) gives strong constraints on both the root system $\Sigma({\mathfrak{h}},{\mathfrak{a}}_H)$ and the set $\Delta({\mathfrak{n}}^{-\sigma})$. Let us examine (\[eqn:aW\]) in an abstract setting (corresponding to the case where ${\mathfrak {h}}$ is simple) as follows:
\[lem:2.6\] Let $\Delta$ be an irreducible root system on a vector space $E$, and $W$ the Weyl group of $\Delta$. If there exists $\lambda \in E \setminus \{0\}$ such that $$\label{eqn:EW}
2 \dim E \ge \# (W \cdot \lambda),$$ then $\Delta$ is a classical root system. For an (irreducible) classical root system $\Delta$, we take a standard basis and the set $\Pi$ of simple roots as follows: $$\begin{aligned}
\text{Case 1}:&
\Delta =A_n,
\quad\hphantom{m}
\Pi =\{\alpha_i= e_i-e_{i+1}: 1 \le i \le n\}
\text{ in }
{\mathbb{R}}^n/{\mathbb{R}}(e_1 + \cdots + e_{n+1}),
\\
\text{Case 2}:&
\Delta =
\begin{cases}
B_n,
&\Pi=\{\alpha_i =e_i - e_{i+1}: 1 \le i \le n-1\}
\cup \{\alpha_n=e_n\},
\\
C_n,
&\Pi=\{\alpha_i = e_i- e_{i+1}: 1 \le i \le n-1\}
\cup \{\alpha_n=2e_n\},
\\
D_n,
&\Pi=\{\alpha_i = e_i-e_{i+1}: 1 \le i \le n-1 \}
\cup \{\alpha_n=e_{n-1}+e_n\}.
\end{cases}\end{aligned}$$ Here we assume $n \ge 1$ for $\Delta=A_n$, $n \ge 2$ for $\Delta=B_n$, $n \ge 3$ for $\Delta=C_n$, and $n \ge 4$ for $\Delta=D_n$.
Then $\lambda$ satisfying (\[eqn:EW\]) must be of the following form: $$\begin{aligned}
{3}
& \lambda \in {\mathbb{R}}e_i /({\mathbb{R}}(e_1+\cdots +e_{n+1}))
\quad
&&\text{for some $i$ \,$(1 \le i \le n+1)$}
\quad
&&\text{in Case 1},
\\
& \lambda \in {\mathbb{R}}e_i
\quad
&&\text{for some $i$ \,\,$(1 \le i \le n)$}
&&\text{in Case 2}. \end{aligned}$$
For a root system $\Delta$, we consider the minimum cardinality of $W$-orbits defined by $$c(\Delta):=\inf_{\lambda \in E \setminus \{0\}}
\# (W \cdot \lambda).$$ Let us compute $c(\Delta)$. For this, we fix a positive system $\Delta^+$, and write $\Pi=\{\alpha_1, \cdots, \alpha_n\}$ for the set of simple roots, and $\{\omega_1, \cdots, \omega_n\}$ for the set of fundamental weights. In order to compute the cardinality of the orbit $W \cdot \lambda$, we may assume $\lambda \in \overline{C_+}\setminus \{0\}$ without loss of generality, where $\overline{C_+}$ is the dominant chamber defined by $$\overline{C_+}
:=\{\sum_{i=1}^{n} a_i \omega_i:
a_1, \cdots, a_n \ge 0\}.$$
We define a partial order on $\overline{C_+}$ by $$\lambda \succ \mu
\quad
\text{if }
\lambda-\mu \in \overline{C_+}.$$ We denote by $W_{\lambda}$ the isotropy subgroup of $W$ at $\lambda \in E$. Then $\# (W \cdot \lambda)=\# W/\# W_{\lambda}$. If $\lambda, \mu \in \overline{C_+}$ satisfies $\lambda \succ \mu$, then there is an inclusion relation $W_{\lambda} \subset W_{\mu}$, and therefore $\# W \cdot \lambda \ge \# W \cdot \mu$. Thus $\# W \cdot \lambda$ attains its minimum only if $\lambda$ lies in the most singular part of the Weyl chamber, namely, only if $\lambda \in {\mathbb{R}}_+ \omega_i$ ($1 \le i \le n$). In this case, $W_{\lambda}$ coincides with the Weyl group $W({\mathfrak {l}}_i)$ of the Levi part ${\mathfrak {l}}_i$ of the maximal parabolic subgroup defined by the simple root $\alpha_i$. Thus we have $$c(\Delta)=\frac{\# W}{\max_{1 \le i \le n} \# W({\mathfrak {l}}_i)}.$$ This formula yields the explicit value of $c(\Delta)$ as in the table below, and also tells precisely when $\# W \cdot \lambda$ attains its minimum.
------------- ------- ------- ------- ------- -------------------- -------------------- -------------------- -------------------- --------------------
$\Delta$ $A_n$ $B_n$ $C_n$ $D_n$ ${\mathfrak{e}}_6$ ${\mathfrak{e}}_7$ ${\mathfrak{e}}_8$ ${\mathfrak{f}}_4$ ${\mathfrak{g}}_2$
$c(\Delta)$ $n+1$ $2n$ $2n$ $2n$ $27$ $56$ $240$ $24$ $6$
------------- ------- ------- ------- ------- -------------------- -------------------- -------------------- -------------------- --------------------
: $c(\Delta)$ for simple root systems $\Delta$[]{data-label="tab:cDelta"}
For $\Delta=A_n$, we label the simple roots as in Case 1 above. Then by a simple computation, we see that $\#W({\mathfrak {l}}_i)$ attains its maximum $n!$ at $i=1$ and $i=n$, and $\# W/\# W({\mathfrak {l}}_i) > 2n$ for $2 \le i \le n-1$. Thus the inequality (\[eqn:EW\]) holds for $\lambda \in \overline{C_+} \setminus \{0\}$ if and only if $\lambda \in {\mathbb{R}}_+ \omega_1$ or ${\mathbb{R}}_+ \omega_n$, namely, $\lambda \in {\mathbb{R}}_+ e_1$ or $\lambda \in {\mathbb{R}}_- e_{n+1}
\mod {\mathbb{R}}(e_1 + \cdots + e_{n+1})$.
For $\Delta =B_n$ ($n \ge 2$), $C_n$ ($n \ge 3$), or $D_n$ ($n \ge 4$), $\#W({\mathfrak {l}}_i)$ attains its maximum only at $i=1$, and the inequality (\[eqn:EW\]) actually becomes an equality when $\lambda \in {\mathbb{R}} \omega_1$.
For exceptional root systems $\Delta$, it is immediate from Table \[tab:cDelta\] that (\[eqn:EW\]) does not hold. Hence Lemma \[lem:2.6\] is proved.
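The values of $c(\Delta)$ in Table \[tab:cDelta\] can be checked by brute force: in the basis of fundamental weights, the simple reflection acts by $s_i(\lambda)_j=\lambda_j-\lambda_i A_{ij}$, where $A$ is the Cartan matrix, and a breadth-first search over the $W$-orbit of each $\omega_i$ recovers $c(\Delta)=\min_i \#(W\cdot\omega_i)$. A sketch for small ranks (assuming the standard Cartan matrices; the helper names are ours):

```python
def chain(n):
    """Cartan matrix of type A_n (simply laced chain)."""
    A = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = A[i + 1][i] = -1
    return A

def cartan(kind, n):
    """Cartan matrix of a classical or small exceptional type."""
    A = chain(n)
    if kind == 'B':
        A[n - 2][n - 1] = -2
    elif kind == 'C':
        A[n - 1][n - 2] = -2
    elif kind == 'D':   # last node re-attached to node n-2
        A[n - 2][n - 1] = A[n - 1][n - 2] = 0
        A[n - 3][n - 1] = A[n - 1][n - 3] = -1
    elif kind == 'E':   # E6: last node attached to the middle of an A5 chain
        A[n - 2][n - 1] = A[n - 1][n - 2] = 0
        A[2][n - 1] = A[n - 1][2] = -1
    elif kind == 'F':   # F4: double bond in the middle of the chain
        A[1][2] = -2
    elif kind == 'G':
        A[1][0] = -3
    return A

def orbit_size(A, lam):
    """#(W . lam) by BFS; weights in the fundamental-weight basis,
    with s_i(lam)_j = lam_j - lam_i * A[i][j]."""
    seen, frontier = {lam}, [lam]
    while frontier:
        new = []
        for mu in frontier:
            for i, mi in enumerate(mu):
                if mi:
                    nu = tuple(mu[j] - mi * A[i][j] for j in range(len(mu)))
                    if nu not in seen:
                        seen.add(nu)
                        new.append(nu)
        frontier = new
    return len(seen)

def c(A):
    """Minimum W-orbit size, attained at a fundamental weight."""
    n = len(A)
    e = lambda i: tuple(1 if j == i else 0 for j in range(n))
    return min(orbit_size(A, e(i)) for i in range(n))

assert c(cartan('A', 3)) == 4                         # n + 1
assert c(cartan('B', 3)) == c(cartan('C', 3)) == 6    # 2n
assert c(cartan('D', 4)) == 8                         # 2n
assert c(cartan('G', 2)) == 6
assert c(cartan('F', 4)) == 24
assert c(cartan('E', 6)) == 27
```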
We end this section with an easy-to-check necessary condition for (QP) when $\operatorname{rank}_{\mathbb{R}} G=\operatorname{rank}_{\mathbb{R}}H$. Proposition \[prop:QPineq\] below will be used in Section \[sec:Ke\] when we deal with exceptional Lie algebras. For a real reductive Lie group $G$ with a Cartan involution $\theta$, we take a maximal abelian subspace ${\mathfrak {a}}_G$ in ${\mathfrak {g}}^{-\theta}$ and fix a positive system $\Sigma^+({\mathfrak {g}}, {\mathfrak {a}}_G)$ as before. We set $$\begin{aligned}
m(G):= \max_{\alpha \in \Sigma ({\mathfrak {g}}, {\mathfrak {a}}_G)}
\dim_{\mathbb{R}}{\mathfrak {g}}({\mathfrak {a}}_G;\alpha),
\label{eqn:mG}
\\
n(G):= \sum_{\alpha \in \Sigma^+ ({\mathfrak {g}}, {\mathfrak {a}}_G)}
\dim_{\mathbb{R}}{\mathfrak {g}}({\mathfrak {a}}_G;\alpha).
\label{eqn:nG}\end{aligned}$$ We note that $n(G)$ is equal to the dimension of the real flag variety $G/P_G$.
\[prop:QPineq\] Assume $\operatorname{rank}_{\mathbb{R}}G=\operatorname{rank}_{\mathbb{R}}H$. If the symmetric pair $(G,H)$ satisfies [[(QP)]{}]{}, then $$\label{eqn:QPineq}
n(G) -n(H) \le m(G) \operatorname{rank}_{\mathbb{R}}H.$$
Since ${\mathfrak{a}}_H={\mathfrak{a}}_G$ by the real rank assumption, we have $$m(G) \# \Delta({\mathfrak{n}}^{-\sigma})
\ge \dim {\mathfrak{n}}^{-\sigma}
=n(G)-n(H).$$ Hence the inequality $\operatorname{rank}_{\mathbb{R}}
H \ge \# \Delta ({\mathfrak{n}}^{-\sigma})$ implies the desired inequality $n(G) -n(H) \le m(G) \operatorname{rank}_{\mathbb{R}}H$. Thus Proposition \[prop:QPineq\] follows from Proposition \[prop:QPrank\].
Strong Gelfand pairs and their real forms {#sec:cpx}
=========================================
This section focuses on (BB), which in general is much stronger than (PP) for a real reductive pair $({\mathfrak {g}}, {\mathfrak {h}})$ unless both ${\mathfrak {g}}$ and ${\mathfrak {h}}$ are quasi-split Lie algebras. We begin with the case where ${\mathfrak {g}}$ and ${\mathfrak {h}}$ are complex Lie algebras. In this case a pair satisfying the condition (BB) is also referred to as a [*[strong Gelfand pair]{}*]{}.
\[prop:cpx-1\] Suppose that $({\mathfrak {g}}, {\mathfrak {h}})$ is a symmetric pair such that ${\mathfrak {g}}$ is a complex simple Lie algebra and ${\mathfrak {h}}$ is a complex subalgebra. Then the following three conditions are equivalent:
1. The pair $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(PP)]{}]{}.
2. The pair $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(BB)]{}]{}.
3. $({\mathfrak {g}}, {\mathfrak {h}})$ is isomorphic to $({\mathfrak {sl}}(n+1, {\mathbb{C}}),
{\mathfrak {gl}}(n, {\mathbb{C}}))$ or $({\mathfrak {so}}(n+1, {\mathbb{C}}),
{\mathfrak {so}}(n, {\mathbb{C}}))$ up to outer automorphisms.
The equivalence (ii) $\Leftrightarrow$ (iii) was proved by Kr[ä]{}mer [@Kr]. Since any minimal parabolic subgroup is a Borel subgroup for complex reductive Lie groups, the equivalence (i) $\Leftrightarrow$ (ii) is obvious.
Alternatively, the proof of Proposition \[prop:cpx-1\] is covered by special cases of our propositions in later sections: $$\begin{aligned}
{2}
&\text{Proposition \ref{prop:somn}}
\qquad&& ({\mathfrak {o}}(m+n,{\mathbb{C}}), {\mathfrak {o}}(m,{\mathbb{C}})+{\mathfrak {o}}(n,{\mathbb{C}})).
\\
&\text{Proposition \ref{prop:rankH}}
&& \operatorname{rank}_{\mathbb{R}} H=1.
\\
&\text{Proposition \ref{prop:nonKe}}
&& \operatorname{rank}_{\mathbb{R}} H \ge 2
\text{ or }
({\mathfrak {g}}, {\mathfrak {h}})
\not \simeq
({\mathfrak {o}}(m+n,{\mathbb{C}}), {\mathfrak {o}}(m,{\mathbb{C}})+{\mathfrak {o}}(n,{\mathbb{C}})). \end{aligned}$$ See also Propositions \[prop:glgl\] and \[prop:sp\] for an alternative and direct proof for the pairs $({\mathfrak {sl}}(m+n,{\mathbb{C}}),
{\mathfrak {s}}({\mathfrak {gl}}(m,{\mathbb{C}})+{\mathfrak {gl}}(n,{\mathbb{C}})))$ and $({\mathfrak {sp}}(m+n,{\mathbb{C}}), {\mathfrak {sp}}(m,{\mathbb{C}})+{\mathfrak {sp}}(n,{\mathbb{C}}))$, respectively.
\[rem:cpx\] [[ In [@Cooper; @Kr], the pair $({\mathfrak {so}}(8,{\mathbb{C}}), {\mathfrak {spin}}(7,{\mathbb{C}}))$ also appears in the classification. However, it is isomorphic to the pair $({\mathfrak {so}}(8,{\mathbb{C}}),
{\mathfrak {so}}(7,{\mathbb{C}}))$ by an outer automorphism of ${\mathfrak {so}}(8,{\mathbb{C}})$. The automorphism arises from the triality of $D_4$ (see also Lemma \[lem:SS\]). ]{}]{}
Since the condition (BB) is determined by the complexification of the pair $({\mathfrak {g}}, {\mathfrak {h}})$, we have:
\[prop:cpx\] Let $({\mathfrak {g}}, {\mathfrak {h}})$ be an irreducible symmetric pair. Then the following two conditions are equivalent:
1. $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies [[(BB)]{}]{}.
2. $({\mathfrak {g}}, {\mathfrak {h}})$ is isomorphic to [[(F1)]{}]{}– [[(F5)]{}]{}.
[[ For ${\mathfrak {g}}'$ simple, the pair $({\mathfrak {g}}'+{\mathfrak {g}}', \operatorname{diag}{\mathfrak {g}}')$ satisfies [[(BB)]{}]{} if and only if ${\mathfrak {g}}' \simeq {\mathfrak {sl}}(2,{\mathbb{R}})$ or ${\mathfrak {g}}' \simeq {\mathfrak {sl}}(2,{\mathbb{C}})$. These pairs are included as special cases of [[(F2)]{}]{} and [[(F5)]{}]{}: $$\begin{aligned}
({\mathfrak{sl}}(2,{\mathbb{R}})+{\mathfrak{sl}}(2,{\mathbb{R}}),
\operatorname{diag}{\mathfrak{sl}}(2,{\mathbb{R}}))
\approx&({\mathfrak{o}}(2,2), {\mathfrak{o}}(2,1))
\\
\approx&
({\mathfrak{o}}(2,1)+{\mathfrak{o}}(2,1),
\operatorname{diag}{\mathfrak{o}}(2,1)).\end{aligned}$$ $$\begin{aligned}
({\mathfrak{sl}}(2,{\mathbb{C}})+{\mathfrak{sl}}(2,{\mathbb{C}}),
\operatorname{diag}{\mathfrak{sl}}(2,{\mathbb{C}}))
\simeq
&({\mathfrak{so}}(4,{\mathbb{C}}), {\mathfrak{so}}(3,{\mathbb{C}}))
\\
\simeq
&
({\mathfrak{o}}(3,1)+{\mathfrak{o}}(3,1),
\operatorname{diag}{\mathfrak{o}}(3,1)).\end{aligned}$$ ]{}]{}
Some classical and exceptional cases {#sec:classical}
====================================
In this section, we deal with some classical symmetric pairs $(G,H)$ in matrix forms and one exceptional symmetric pair. Classical symmetric spaces have parameters such as $i$, $j$, $k$ and $l$ in $(G,H)=(O(i+j,k+l),O(i,k) \times O(j,l))$. We determine for which parameters they satisfy [[(PP)]{}]{} or [[(QP)]{}]{} by using the criteria, Theorems \[thm:pp\] and \[thm:qp\]. The cases we treat in Section \[sec:classical\] cover (C)–(H) in Theorem \[thm:1.1\] except for (E4) (see Step 4 in Section \[sec:strategy\]). In particular, the implication (ii) $\Rightarrow$ (i) in Theorem \[thm:1.2\] is proved in this section except for (E4). The case (E4) will be treated in Section \[sec:rank1\] together with other symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ with $\operatorname{rank}_{\mathbb{R}}{\mathfrak {h}}=1$.
$(G,H)=(U(i+j, k+l;{\mathbb{F}}),
U(i, k;{\mathbb{F}}) \times U(j,l;{\mathbb{F}}))$ {#subsec:upq}
-------------------------------------------------
The open orbit properties (BB), (PP), and (QP) do not change if we replace $(G,H)$ by their coverings, connected components, or the quotients $(G/Z, H/H \cap Z)$ by a central subgroup $Z$ of $G$. Thus, we shall treat the disconnected group $O(p,q)$ rather than the connected group $SO_0(p,q)$, and the reductive group $U(p,q)$ rather than the semisimple group $SU(p,q)$.
The goal of this subsection is to prove the following:
\[prop:upq\] Let $(G,H):=(U(i+j, k+l;{\mathbb{F}}),
U(i, k;{\mathbb{F}}) \times U(j,l;{\mathbb{F}}))$ with ${\mathbb{F}}={\mathbb{R}}$, ${\mathbb{C}}$ or the quaternion algebra ${\mathbb{H}}$. Suppose that $l \le \min\{i,j,k\}$.
[[1)]{}]{}The pair $(G,H)$ satisfies [[(QP)]{}]{} if and only if $$l =0 \quad\text{ and }\,\, \min(i,j,k)=1.$$
[[2)]{}]{}The pair $(G,H)$ satisfies [[(PP)]{}]{} if and only if $$l =0 \quad\text{ and }\,\, \min(j,k)=1.$$
In particular, Proposition \[prop:upq\] proves the implication (ii) $\Rightarrow$ (i) in Theorem \[thm:1.1\] for (E1), (E2), (E3), (F4), (F5), and (H4).
In order to prove Proposition \[prop:upq\], we begin with the following:
\[lem:upql\] If $(G, H)$ satisfies [[(QP)]{}]{}, then $l=0$.
By Theorem \[thm:qp\], (QP) is equivalent to the existence of an open orbit of $M_H A_H$ on ${\mathfrak {n}}^{-\sigma}$. Then the idea of the proof of the lemma is to find a non-trivial $M_H A_H$-invariant function on ${\mathfrak {n}}^{-\sigma}$ for $l >0$. By a simple matrix computation, we have natural isomorphisms of groups and vector spaces: $$\begin{aligned}
M_H A_H
\simeq&
({\mathbb{F}}^{\times})^{\min(i,k)}
\times U(|i-k|,{\mathbb{F}}) \times({\mathbb{F}}^{\times})^l
\times U(j-l,{\mathbb{F}}),
\\
{\mathfrak {n}}^{-\sigma}
\simeq &
M(i,l;{\mathbb{F}}) \oplus M(j,\min(i,k);{\mathbb{F}}). \end{aligned}$$ Via these identifications, the adjoint action of an element $(a,A, b, B) \in M_H A_H$ on the vector space ${\mathfrak {n}}^{-\sigma}$ is given as $$\label{eqn:MAI}
(X,Y)
\mapsto
\begin{cases}
(
\begin{pmatrix}
a & \\
& A
\end{pmatrix}
Xb^{-1},
\begin{pmatrix}
b & \\
& B
\end{pmatrix}
Y a^{-1}
)
&\text{for }
i \ge k,
\\
(a X b^{-1}, \begin{pmatrix}
b & \\
& B
\end{pmatrix}
Y a^{-1})
&\text{for }
i \le k.
\end{cases}$$ In the above formula, we have identified $b \in ({\mathbb{F}}^{\times})^l$ with a diagonal matrix in $GL(l,{\mathbb{F}})$, and likewise for $a \in ({\mathbb{F}}^{\times})^{\min(i,k)}$.
According to the partition $j=l+(j-l)$, we write $Y \in M(j,\min(i,k);{\mathbb{F}})$ as a block matrix $Y=\begin{pmatrix} Y' \\ \ast \end{pmatrix}$ with $Y' \in M(l,\min(i,k);{\mathbb{F}})$.
Consider the matrix $XY' \in M(i,\min(i,k);{\mathbb{F}})$. In view of the formula of the $M_H A_H$-action on ${\mathfrak {n}}^{-\sigma}$, the $(1,1)$-component of $XY'$, to be denoted by $z$, is transformed as $$z \mapsto a_1 z a_1^{-1},$$ where $a_1 \in {\mathbb{F}}^{\times}$ is the first component of $a$. This formula means that the real algebraic function $$\psi:{\mathfrak {n}}^{-\sigma} \to {\mathbb{R}},
\quad
(X,Y) \mapsto |z|^2$$ is $M_H A_H$-invariant. Clearly, $\psi$ is non-zero if $l>0$. Hence $M_H A_H$ cannot have an open orbit in ${\mathfrak {n}}^{-\sigma}$ if $l >0$.
To complete the proof of Proposition \[prop:upq\] we need the following elementary lemma:
\[lem:FGauss\] Suppose $0 \le p' \le p$ and $p,q \ge 1$. We let $S:= U(1, {\mathbb{F}})^{p'}
\times U(p-p',{\mathbb{F}}) \times ({\mathbb{F}}^{\times})^q$ act on $M(p,q;{\mathbb{F}})$ by $$X \mapsto \begin{pmatrix} a & \\ & A\end{pmatrix} X b^{-1}
\quad
\text{for }
\,\,
(a,A,b) \in S
\text{ and }
X \in M(p,q;{\mathbb{F}}).$$ Then the group $S$ has an open orbit in $M(p,q;{\mathbb{F}})$ if and only if $p=1$ or ($q=1$ and $p'=0$).
First we observe that $({\mathbb{F}}^{\times})^q$ acts on the quotient space $U(p,{\mathbb{F}}) \backslash M(p,q;{\mathbb{F}})$ from the right. This action has an open orbit if and only if $p=1$ or $q=1$ by the Gauss decomposition for $U(p,{\mathbb{F}}) \backslash M(p,q;{\mathbb{F}})$. Hence the equivalence assertion of Lemma \[lem:FGauss\] is proved for $p'=0$. What remains to prove is that there is no open orbit if $p'>0$ and $q=1$, but this is obvious.
\[Proof of Proposition \[prop:upq\]\] 1)By Lemma \[lem:upql\], we may and do assume $l=0$. Then the group $$M_H A_H \simeq ({\mathbb{F}}^{\times})^{\min(i,k)}
\times U(|i-k|,{\mathbb{F}}) \times U(j,{\mathbb{F}})
\ni (a,A,B)$$ acts on ${\mathfrak{n}}^{-\sigma} \simeq M(j,\min(i,k);{\mathbb{F}})$ by $Y \mapsto BYa^{-1}$. We observe that the second factor $U(|i-k|, {\mathbb{F}})$ of $M_H A_H$ acts trivially on ${\mathfrak{n}}^{-\sigma}$. Then by the criterion in Theorem \[thm:qp\], the first statement follows as a special case of Lemma \[lem:FGauss\] with $p=j$ and $q=\min(i,k)$.
2)We need to prove that (PP) holds if $l=0$ and $\min(j,k)=1$, and fails if $l=0$, $i=1$ and $\min(j,k)>1$. We shall use Theorem \[thm:pp\] and Lemma \[lem:FGauss\].
[**[Case 2-1.]{}**]{}$k=1$. Then $
\operatorname{rank}_{{\mathbb{R}}}H
=
\operatorname{rank}_{{\mathbb{R}}}G$ $(=1)$, and therefore (QP) is equivalent to (PP) by Lemma \[lem:BPQ\] (2). Hence (PP) holds.
[**[Case 2-2.]{}**]{}$j=1$. We apply Lemma \[lem:FGauss\] with $p=1$ and $q=\min(i,k)$ to conclude that the action of $(M_H \cap M_G) A_H$ has an open orbit in $
{\mathfrak {n}}^{-\sigma} \simeq {\mathbb{F}}^{\min(i,k)}
$.
[**[Case 2-3.]{}**]{}$i=1$. We apply Lemma \[lem:FGauss\] with $p=j$, $p'=\min(j,k-1)$ and $q=1$ to conclude that the action of $(M_H \cap M_G) A_H$ does not have an open orbit if $j,k >1$.
Hence the proof of Proposition \[prop:upq\] is completed.
$(G,H)=(GL(p+q,{\mathbb{F}}),
GL(p,{\mathbb{F}}) \times GL(q,{\mathbb{F}}))$ {#subsec:GLGL}
-----------------------------------------------
Next we treat the symmetric pair $(G,H)=(GL(p+q,{\mathbb{F}}),
GL(p,{\mathbb{F}}) \times GL(q,{\mathbb{F}}))$ for ${\mathbb{F}}={\mathbb{R}}$, ${\mathbb{C}}$ and ${\mathbb{H}}$. Surprisingly, the property (PP) behaves uniformly for all ${\mathbb{F}}={\mathbb{R}}$, ${\mathbb{C}}$ and ${\mathbb{H}}$ for this pair. In contrast, the property (BB) behaves differently for ${\mathbb{F}}={\mathbb{H}}$ (see Remark \[rem:GLGL\] below).
\[prop:glgl\] Let $p,q \ge 1$ and $$(G,H)=(GL(p+q,{\mathbb{F}}), GL(p,{\mathbb{F}}) \times GL(q,{\mathbb{F}})),
\quad
{\mathbb{F}}={\mathbb{R}}, \,{\mathbb{C}}\text{ or }{\mathbb{H}}.$$ Then the following three conditions are equivalent:
1. The pair $(G,H)$ satisfies [[(QP)]{}]{}.
2. The pair $(G,H)$ satisfies [[(PP)]{}]{}.
3. $\min(p,q)=1$.
\[rem:GLGL\] [[ For $\min(p,q)=1$, $(G,H)$ satisfies (BB) if and only if ${\mathbb{F}}={\mathbb{R}}$ or ${\mathbb{C}}$. In fact, for ${\mathbb{F}}={\mathbb{H}}$, the complexified Lie algebra $$({\mathfrak {g}}_{\mathbb{C}},
{\mathfrak {h}}_{\mathbb{C}})
\simeq
({\mathfrak {gl}}(2(p+q),{\mathbb{C}}),
{\mathfrak {gl}}(2p,{\mathbb{C}})
+ {\mathfrak {gl}}(2q,{\mathbb{C}}))$$ cannot be a strong Gelfand pair (see Proposition \[prop:cpx-1\]). ]{}]{}
\[Proof of Proposition \[prop:glgl\]\] Via the isomorphisms $$M_H A_H \simeq ({\mathbb{F}}^{\times})^{p+q},
\quad
{\mathfrak {n}}^{-\sigma} \simeq M(p,q;{\mathbb{F}}),$$ the adjoint action of $M_H A_H$ on ${\mathfrak {n}}^{-\sigma}$ is given as the action of $({\mathbb{F}}^{\times})^{p} \times ({\mathbb{F}}^{\times})^{q}$ on $M(p,q;{\mathbb{F}})$ by the left and right multiplication.
If $p,q \ge 2$, then $$M(p,q;{\mathbb{F}}) \to {\mathbb{R}},
\quad
X \mapsto |X_{11}X_{22}|^2/|X_{12}X_{21}|^2$$ is well-defined on an open dense subset of $M(p,q;{\mathbb{F}})$ and is invariant by the action of $
({\mathbb{F}}^{\times})^{p} \times ({\mathbb{F}}^{\times})^{q},
$ and thus there is no open orbit.
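The invariance of this rational function can be confirmed numerically. Below is a minimal sketch for ${\mathbb{F}}={\mathbb{R}}$ (not part of the proof); `act` and `psi` are our transcriptions of the entrywise action $X_{ij} \mapsto a_i X_{ij} b_j^{-1}$ and of the invariant above.

```python
import random

def act(a, b, X):
    # (a, b) in (F^x)^p x (F^x)^q acts on X in M(p, q; R) by X -> a X b^{-1},
    # i.e. entrywise X_ij -> a_i * X_ij / b_j
    p, q = len(X), len(X[0])
    return [[a[i] * X[i][j] / b[j] for j in range(q)] for i in range(p)]

def psi(X):
    # the rational invariant |X_11 X_22|^2 / |X_12 X_21|^2 from the proof
    return (X[0][0] * X[1][1]) ** 2 / (X[0][1] * X[1][0]) ** 2

random.seed(0)
p = q = 3
X = [[random.uniform(1, 2) for _ in range(q)] for _ in range(p)]
a = [random.uniform(1, 2) for _ in range(p)]
b = [random.uniform(1, 2) for _ in range(q)]
# psi is unchanged by the two-sided diagonal action, so no open orbit exists
assert abs(psi(act(a, b, X)) - psi(X)) < 1e-9
print("ok")
```

The cancellation is visible directly: the factors $a_1 a_2$ and $b_1 b_2$ appear in both numerator and denominator.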
Conversely, if $q=1$, then clearly $({\mathbb{F}}^{\times})^{p}$ has an open orbit in ${\mathbb{F}}^{p}$, and so does $M_H A_H$ in ${\mathfrak {n}}^{-\sigma}$. Thus the equivalence (i) $\Leftrightarrow$ (iii) follows from Theorem \[thm:qp\].
Since $\operatorname{rank}_{\mathbb{R}}H
=\operatorname{rank}_{\mathbb{R}}G$, the equivalence (i) $\Leftrightarrow$ (ii) holds. Hence Proposition \[prop:glgl\] is proved.
$(G,H)=(O(m+n,{\mathbb{C}}),O(m,{\mathbb{C}}) \times O(n,{\mathbb{C}}))$ {#subsec:OmnC}
------------------------------------------------------------------------
The main goal of this section is to prove the following proposition. The equivalence (i) $\Leftrightarrow$ (ii) $\Leftrightarrow$ (iii) is a special case of Proposition \[prop:cpx-1\]. We shall use Proposition \[prop:somn\] in Proposition \[prop:nonKe\].
\[prop:somn\] Let $$(G,H)=(O(m+n,{\mathbb{C}}), O(m,{\mathbb{C}}) \times O(n,{\mathbb{C}}))
\text{ with }
m, n \ge 1.$$ Then the following four conditions are equivalent:
[[(i)]{}]{}The pair $(G,H)$ satisfies [[(QP)]{}]{}.
[[(ii)]{}]{}The pair $(G,H)$ satisfies [[(PP)]{}]{}.
[[(iii)]{}]{}$m=1$, $n=1$ or $(m,n)=(2,2)$.
[[(iv)]{}]{}$\operatorname{rank}_{\mathbb{R}}
H \ge \# \Delta({\mathfrak {n}}^{-\sigma})$.
The equivalence (iii) $\Leftrightarrow$ (iv): From the table below, we see that $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies (iv) if and only if $m=1$, $n=1$, or $(m,n)=(2,2)$. $$\begin{aligned}
{4}
&\quad m
&&\quad n
&& \operatorname{rank}_{\mathbb{R}} H
\qquad
&& \# \Delta({\mathfrak {n}}^{-\sigma})
\\
& 2p+1
\qquad
&& 2q+1
\qquad
&&\, p+q
&& 2pq + p+q
\\
& 2p+1
&& 2 q
&&\, p+q
&& 2pq + q
\\
& 2p
&& 2q
&&\, p+q
&& 2pq\end{aligned}$$
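As a sanity check of the equivalence (iii) $\Leftrightarrow$ (iv), the following Python sketch (ours, not part of the paper) compares $\operatorname{rank}_{\mathbb{R}} H$ with $\#\Delta({\mathfrak {n}}^{-\sigma})$ using the parity formulas from the table; the even/odd case $m=2p$, $n=2q+1$ is obtained by symmetry.

```python
def rank_H(m, n):
    # real rank of o(m,C) + o(n,C) is [m/2] + [n/2]
    return m // 2 + n // 2

def card_Delta(m, n):
    # #Delta(n^{-sigma}) from the table, by the parities of m and n
    p, q = m // 2, n // 2
    if m % 2 and n % 2:
        return 2 * p * q + p + q   # m = 2p+1, n = 2q+1
    if m % 2:
        return 2 * p * q + q       # m = 2p+1, n = 2q
    if n % 2:
        return 2 * p * q + p       # m = 2p,   n = 2q+1 (by symmetry)
    return 2 * p * q               # m = 2p,   n = 2q

good = {(m, n) for m in range(1, 13) for n in range(1, 13)
        if rank_H(m, n) >= card_Delta(m, n)}
expect = {(m, n) for m in range(1, 13) for n in range(1, 13)
          if m == 1 or n == 1 or (m, n) == (2, 2)}
assert good == expect
print("ok")
```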
$(G,H)=(O^{\ast}(2p+2q), O^{\ast}(2p) \times O^{\ast}(2q))$
-----------------------------------------------------------
The goal of this subsection is to prove the following:
\[prop:sostar\] Let $$(G,H)=(O^{\ast}(2p+2q), O^{\ast}(2p) \times O^{\ast}(2q))
\text{ with }
p,q \ge 1.$$ The following three conditions are equivalent:
[[(i)]{}]{}The pair $(G,H)$ satisfies [[(QP)]{}]{}.
[[(ii)]{}]{}The pair $(G,H)$ satisfies [[(PP)]{}]{}.
[[(iii)]{}]{}$\min(p,q)=1$.
In particular, Proposition \[prop:sostar\] shows the implication (ii) $\Rightarrow$ (i) in Theorem \[thm:1.1\] for (H3).
In order to give a proof of the implication (i) $\Rightarrow$ (iii) in Proposition \[prop:sostar\], we use Proposition \[prop:QPrank\]. For this, we need:
\[lem:sostar\] $\operatorname{rank}_{\mathbb{R}}
H \ge \# \Delta({\mathfrak {n}}^{-\sigma})$ if and only if $\min(p,q)=1$ or $(p,q)=(2,2)$.
Lemma \[lem:sostar\] is an immediate consequence of the formulae: $$\operatorname{rank}_{\mathbb{R}} H = \dim {\mathfrak {a}}_H
=[\frac p 2]+[\frac q 2]
\quad
\text{ and }
\#\Delta({\mathfrak {n}}^{-\sigma})=[\frac {pq} 2].$$
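These floor-function formulae make Lemma \[lem:sostar\] a finite check in each small range; the following sketch (ours) confirms the stated equivalence on a grid of parameters.

```python
# rank_R H = [p/2] + [q/2] versus #Delta(n^{-sigma}) = [pq/2]
hits = [(p, q) for p in range(1, 15) for q in range(1, 15)
        if p // 2 + q // 2 >= (p * q) // 2]
expect = [(p, q) for p in range(1, 15) for q in range(1, 15)
          if min(p, q) == 1 or (p, q) == (2, 2)]
assert hits == expect
print("ok")
```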
The implication (ii) $\Rightarrow$ (i) holds in general by Lemma \[lem:BPQ\] (1). By Lemma \[lem:sostar\] and Proposition \[prop:QPrank\], (QP) holds only if $\min(p,q)=1$ or $(p,q)=(2,2)$. What remains to prove is:
1. (QP) fails if $(p,q)=(2,2)$.
2. (PP) holds if $q=1$.
In view of the isomorphism of symmetric pairs: $$({\mathfrak {o}}^{\ast}(8),
{\mathfrak {o}}^{\ast}(4)+{\mathfrak {o}}^{\ast}(4))
\simeq
({\mathfrak {o}}(2,6), {\mathfrak {o}}(2,2)+{\mathfrak {o}}(4)),$$ the first claim is regarded as a special case of Proposition \[prop:upq\], which we have already proved.
To see the second claim, suppose $q=1$. Then we have the following natural isomorphisms of groups and vector spaces, respectively: $$\begin{aligned}
M_H A_H
\simeq&
\begin{cases}
({\mathbb{H}}^{\times})^{\frac{p}{2}} \times {\mathbb{T}}
\quad
&\text{($p$:even)},
\\
({\mathbb{H}}^{\times})^{\frac{p-1}{2}} \times {\mathbb{T}}^2
\quad
&\text{($p$:odd)},
\end{cases}
\\
(M_H \cap M_G)A_H
\simeq&
({\mathbb{H}}^{\times})^{[\frac{p}{2}]} \times {\mathbb{T}},
\\
{\mathfrak {n}}^{-\sigma}
\simeq & {\mathbb{H}}^{[\frac p 2]}. \end{aligned}$$ Via these isomorphisms, the adjoint action of the first factor of $(M_H \cap M_G)A_H$ on ${\mathfrak {n}}^{-\sigma}$ is given by the natural action of $({\mathbb{H}}^{\times})^{[\frac{p}{2}]}$ on ${\mathbb{H}}^{[\frac{p}{2}]}$, which obviously has an open dense orbit. By Theorem \[thm:pp\], we conclude that (PP) holds if $q=1$.
Hence the proof of Proposition \[prop:sostar\] is completed.
$(G,H)$ is of type $(C_n, A_n)$ {#subsec:CA}
-------------------------------
In this subsection, we treat the reductive symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ that have the following three properties:
$$\begin{aligned}
&\text
{
The root system $\Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)$
is of type $C_n$,
}
\label{eqn:CA1}
\\
&\text{
The root system
$\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$
is of type $A_n$,
}
\label{eqn:CA2}
\\
&\text{
Either $m^+(\lambda)=0$
or $m^-(\lambda)=0$
for each $\lambda \in \Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)$.}
\label{eqn:CA3}\end{aligned}$$
Here we define $$m^{\pm}(\lambda)= \dim_{\mathbb{R}}
{\mathfrak {g}}^{\pm\sigma}({\mathfrak {a}}_H;\lambda).$$ The main results of this subsection are Propositions \[prop:ugl\] and \[prop:ug2\].
\[prop:ugl\] Let $(G,H)$ be one of the following symmetric pairs: $$\begin{aligned}
&(U(n,n;{\mathbb{F}}),GL(n,{\mathbb{F}}))
\quad
{\mathbb{F}}={\mathbb{C}}\text{ or }{\mathbb{H}},
\\
&(Sp(n,{\mathbb{R}}),GL(n,{\mathbb{R}})),
\\
&(O^{\ast}(4n),GL(n,{\mathbb{H}})). \end{aligned}$$ Then the following three conditions are equivalent:
1. The pair $(G,H)$ satisfies [[(QP)]{}]{}.
2. The pair $(G,H)$ satisfies [[(PP)]{}]{}.
3. $n=1$.
To begin with, we observe the following:
\[lem:CA\] The three families of symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ in Proposition \[prop:ugl\] satisfy the three properties above.
We take the standard basis $\{e_1, \cdots, e_n\}$ of ${\mathfrak {a}}_H^{\ast}$ such that $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)
=\{\pm(e_i-e_j):
1\le i < j \le n\}$. We set $d=\dim_{\mathbb{R}} {\mathbb{F}}$. Then the pair of multiplicities $\begin{pmatrix} m^+(\lambda)\\ m^-(\lambda)\end{pmatrix}$ is given as follows:
$$\begin{array}{ccccc}
{\mathfrak {g}} & {\mathfrak {h}} & e_i-e_j & e_i+e_j & 2 e_l
\\
{\mathfrak {u}}(n,n;{\mathbb{F}}) & {\mathfrak {gl}}(n,{\mathbb{F}}) & \begin{pmatrix} d \\ 0\end{pmatrix} & \begin{pmatrix} 0 \\ d\end{pmatrix} & \begin{pmatrix} 0 \\ d-1 \end{pmatrix}
\\
{\mathfrak {sp}}(n,{\mathbb{R}}) & {\mathfrak {gl}}(n,{\mathbb{R}}) & \begin{pmatrix} 1 \\ 0\end{pmatrix} & \begin{pmatrix} 0 \\ 1\end{pmatrix} & \begin{pmatrix} 0 \\ 1\end{pmatrix}
\\
{\mathfrak {so}}^{\ast}(4n) & {\mathfrak {gl}}(n,{\mathbb{H}}) & \begin{pmatrix} 4 \\ 0\end{pmatrix} & \begin{pmatrix} 0 \\ 4\end{pmatrix} & \begin{pmatrix} 0 \\ 1\end{pmatrix}
\end{array}$$

: Root multiplicities $\begin{pmatrix} m^+(\lambda)\\ m^-(\lambda)\end{pmatrix}$[]{data-label="tab:4.1"}
The lemma is clear from Table \[tab:4.1\].
\[Proof of Proposition \[prop:ugl\]\] By the three properties above, $\#\Delta({\mathfrak {n}}^{-\sigma})$ is equal to half the difference of the cardinalities of $\Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)$ and $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$, namely, $$\#\Delta({\mathfrak {n}}^{-\sigma})
=
\frac 1 2 (2n^2-(n^2-n))
=\frac 1 2 n(n+1).$$ Therefore, the inequality $\operatorname{rank}_{\mathbb{R}} H \ge \#\Delta({\mathfrak {n}}^{-\sigma})$ amounts to $n \ge \frac 1 2 n (n+1)$, namely, $n=1$. By Proposition \[prop:QPrank\], we have shown the implication (i) $\Rightarrow$ (iii).
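The root count above can be double-checked by direct enumeration: the $C_n$ system has $2n^2$ roots, and the subsystem $\{\pm(e_i-e_j)\}$ has $n(n-1)$. A small Python sketch (our own, not part of the proof):

```python
from itertools import combinations

def C_roots(n):
    # roots of type C_n in the basis e_1, ..., e_n, as coordinate tuples
    roots = set()
    for i, j in combinations(range(n), 2):
        for s in (1, -1):
            for t in (1, -1):
                v = [0] * n
                v[i], v[j] = s, t          # the roots +-e_i +- e_j
                roots.add(tuple(v))
    for i in range(n):
        for s in (2, -2):
            v = [0] * n
            v[i] = s                       # the long roots +-2e_i
            roots.add(tuple(v))
    return roots

def A_sub(n):
    # the subsystem {+-(e_i - e_j)} coming from Sigma(h, a_H)
    roots = set()
    for i, j in combinations(range(n), 2):
        v = [0] * n
        v[i], v[j] = 1, -1
        roots.add(tuple(v))
        roots.add(tuple(-x for x in v))
    return roots

for n in range(1, 8):
    assert len(C_roots(n)) == 2 * n * n
    assert len(A_sub(n)) == n * (n - 1)
    # half the difference gives #Delta(n^{-sigma}) = n(n+1)/2
    assert (len(C_roots(n)) - len(A_sub(n))) // 2 == n * (n + 1) // 2
print("ok")
```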
The equivalence (i) $\Leftrightarrow$ (ii) follows from Lemma \[lem:BPQ\] (2) because $\operatorname{rank}_{\mathbb{R}} H
=\operatorname{rank}_{\mathbb{R}} G$.
Finally, the implication (iii) $\Rightarrow$ (i) follows as a special (and easy) case of other families, which we have already shown to satisfy [[(PP)]{}]{}. In fact, $$\begin{aligned}
({\mathfrak {u}}(1,1), {\mathfrak {gl}}(1,{\mathbb{C}}))
\simeq\,&
({\mathfrak {o}}(2,1), {\mathfrak {o}}(1,1))
+({\mathbb{R}}, {\mathbb{R}}),
\\
({\mathfrak {sp}}(1,1), {\mathfrak {gl}}(1,{\mathbb{H}}))
\simeq\,& ({\mathfrak {o}}(1,4), {\mathfrak {o}}(1,1)+{\mathfrak {o}}(3)),
\\
({\mathfrak {sp}}(1,{\mathbb{R}}), {\mathfrak {gl}}(1,{\mathbb{R}}))
\simeq\,& ({\mathfrak {o}}(2,1), {\mathfrak {o}}(1,1)),
\\
({\mathfrak {o}}^{\ast}(4), {\mathfrak {gl}}(1,{\mathbb{H}}))
\simeq\,& ({\mathfrak {o}}(3),{\mathfrak {o}}(3))
\oplus
({\mathfrak {o}}(1,2),{\mathfrak {o}}(1,1)). \end{aligned}$$ We know that the symmetric pairs on the right-hand side satisfy (PP) as special cases of Proposition \[prop:upq\]. Thus we have proved Proposition \[prop:ugl\].
We end this subsection with the symmetric pair $(U(n,n;{\mathbb{F}}), GL(n,{\mathbb{F}}))$ for ${\mathbb{F}}={\mathbb{R}}$, which was excluded from Proposition \[prop:ugl\].
\[prop:ug2\] Let $(G,H)=(O(n,n), GL(n,{\mathbb{R}}))$ $(n \ge 2)$. Then [[(QP)]{}]{}$\Leftrightarrow$ [[(PP)]{}]{}$\Leftrightarrow$ $n=2$ or $3$.
The root multiplicities are given in the first row of Table \[tab:4.1\]. We observe that the long roots $\pm 2 e_l$ do not appear because $d=1$ for ${\mathbb{F}}={\mathbb{R}}$. As a result, we have $\#\Delta({\mathfrak {n}}^{-\sigma})
=\frac 1 2 n(n-1)$, and the inequality $\operatorname{rank}_{\mathbb{R}}H \ge
\#\Delta({\mathfrak {n}}^{-\sigma})$ amounts to $n\ge \frac 1 2 n(n-1)$, namely, $n=2$ or $3$. Thus we have proved the implications (PP) $\Leftrightarrow$(QP) $\Rightarrow$ $n=2$ or $3$ by Proposition \[prop:QPrank\].
Conversely, for $n=2$, $3$, we observe the following isomorphisms: $$\begin{aligned}
({\mathfrak {o}}(2,2), {\mathfrak {gl}}(2,{\mathbb{R}}))
\simeq\, &
({\mathfrak {o}}(1,2),{\mathfrak {o}}(1,2)) \oplus
({\mathfrak {o}}(1,2),{\mathfrak {o}}(1,1)),
\\
({\mathfrak {o}}(3,3), {\mathfrak {gl}}(3,{\mathbb{R}}))
\simeq\, &
({\mathfrak {sl}}(4,{\mathbb{R}}),{\mathfrak {gl}}(3,{\mathbb{R}})). \end{aligned}$$ They satisfy (PP) as special cases of Propositions \[prop:upq\] and \[prop:cpx\], respectively.
$(G,H)=(Sp(p+q,{\mathbb{F}}),
Sp(p,{\mathbb{F}})\times Sp(q,{\mathbb{F}}))$, ${\mathbb{F}}={\mathbb{R}}$ or ${\mathbb{C}}$ {#subsec:Sp}
---------------------------------------------------------------------------------------------
\[prop:sp\] Let $p, q \ge 1$ and $$(G,H)=(Sp(p+q,{\mathbb{F}}),
Sp(p,{\mathbb{F}}) \times Sp(q,{\mathbb{F}})),
\qquad
{\mathbb{F}}={\mathbb{R}}\,\,\text{ or }\,\, {\mathbb{C}}.$$ Then [[(QP)]{}]{} $\Leftrightarrow$ [[(PP)]{}]{} $\Leftrightarrow$ $(p,q)=(1,1)$.
Take the standard basis $\{f_1, \cdots, f_{p+q}\}$ of ${\mathfrak {a}}_H^{\ast}={\mathfrak {a}}_G^{\ast}$ such that $$\Delta({\mathfrak {n}}^{-\sigma})
=
\{f_i \pm f_j: 1 \le i \le p, p+1 \le j \le p+q\}.$$ Then the inequality $\operatorname{rank}_{\mathbb{R}}H \ge \# \Delta({\mathfrak {n}}^{-\sigma})$ amounts to $p+q \ge 2 pq$, which holds only if $(p,q)=(1,1)$. Therefore, if $(G,H)$ satisfies (QP), then $(p,q)=(1,1)$ by Theorem \[thm:qp\] and Proposition \[prop:QPrank\]. Conversely, if $(p,q)=(1,1)$, then $$\begin{aligned}
&({\mathfrak {sp}}(2,{\mathbb{R}}),
{\mathfrak {sp}}(1,{\mathbb{R}})+{\mathfrak {sp}}(1,{\mathbb{R}}))
\simeq
({\mathfrak {o}}(3,2), {\mathfrak {o}}(2,2)),
\\
& ({\mathfrak {sp}}(2,{\mathbb{C}}),
{\mathfrak {sp}}(1,{\mathbb{C}})+{\mathfrak {sp}}(1,{\mathbb{C}}))
\simeq
({\mathfrak {o}}(5,{\mathbb{C}}), {\mathfrak {o}}(4,{\mathbb{C}})),\end{aligned}$$ which satisfy (PP) as we have seen in Propositions \[prop:upq\] and \[prop:somn\], respectively. Hence Proposition \[prop:sp\] is proved.
$(G,H)=(O^{\ast}(2p+2q), U(p,q))$ {#subsec:OU}
---------------------------------
As a final example of classical symmetric pairs, we consider $({\mathfrak {g}}, {\mathfrak {h}})
=({\mathfrak {o}}^{\ast}(2p+2q), {\mathfrak {u}}(p,q))$ which is the $c$-dual of the symmetric pair $({\mathfrak {o}}(2p+2q), {\mathfrak {u}}(p,q))$.
\[prop:OUpq\] Let $$(G,H)=(O^{\ast}(2p+2q), U(p,q))
\quad
\text{with }
p \ge q \ge 1.$$
1. The pair $(G,H)$ satisfies [[(QP)]{}]{} if and only if $q=1$.
2. The pair $(G,H)$ satisfies [[(PP)]{}]{} if and only if $(p,q)=(3,1), (2,1)$ or $(1,1)$.
We take the standard basis $\{e_1, \cdots, e_q\}$ of ${\mathfrak {a}}_H^{\ast}$ such that $$\Sigma({\mathfrak {g}}, {\mathfrak {a}}_H)
\subset \{\pm e_i \pm e_j: 1 \le i < j \le q\}
\cup
\{\pm e_l, \pm 2e_l: 1 \le l \le q\}.$$ The inclusion is actually an equality if and only if $p>q$. Further, the root multiplicities $m^{\pm}(\lambda)$ are given according to the parity of $p+q$ as follows:
[**[Case 1.]{}**]{}$p \equiv q \mod 2$.
$$\begin{array}{ccccc}
&\lambda
&\,\,\pm e_i \pm e_j\,\,
&\pm e_l
&\pm 2 e_l
\\
&m^{+}(\lambda)
& 2
&\,\,2(p-q)\,\,
& 1
\\
&m^{-}(\lambda)
&2
&\,\,2(p-q)\,\,
&0
\end{array}$$
[**[Case 2.]{}**]{}$p \equiv q+1 \mod 2$. $$\begin{array}{ccccc}
&\lambda
&\,\,\pm e_i \pm e_j\,\,
&\pm e_l
&\pm 2 e_l
\\
&m^{+}(\lambda)
& 2
&\,\,2(p-q+1)\,\,
& 1
\\
&m^{-}(\lambda)
&2
&2(p-q+1)
&0
\end{array}$$ Thus we can take a positive system such that $$\Delta({\mathfrak {n}}^{-\sigma})
=\{e_i \pm e_j: 1 \le i < j \le q\}
\,\,
(\cup
\{e_l: 1 \le l \le q\}
\,\,\text{ for }\,\, p>q).$$ Hence the inequality $\operatorname{rank}_{\mathbb{R}}H
\ge \# \Delta({\mathfrak {n}}^{-\sigma})$ implies $$q \ge
\begin{cases}
q(q-1)
\qquad
&\text{for }\,\, p=q,
\\
q(q-1)+q
\qquad
&\text{for }\,\, p>q.
\end{cases}$$ By Proposition \[prop:QPrank\], if $(G,H)$ satisfies (QP) then $(p,q)=(2,2)$ or $q=1$.
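The case analysis behind this conclusion is easy to verify numerically; the sketch below (ours) enumerates which $(p,q)$ with $p \ge q \ge 1$ satisfy the inequality.

```python
ok = set()
for p in range(1, 12):
    for q in range(1, p + 1):                 # p >= q >= 1 as in the proposition
        # rhs is q(q-1) for p = q and q(q-1) + q for p > q
        rhs = q * (q - 1) if p == q else q * (q - 1) + q
        if q >= rhs:
            ok.add((p, q))
# only q = 1 (any p) and (p, q) = (2, 2) survive
expect = {(p, 1) for p in range(1, 12)} | {(2, 2)}
assert ok == expect
print("ok")
```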
Conversely, suppose that $(p,q)=(2,2)$. In view of the isomorphism $$({\mathfrak{o}}^{\ast}(8),{\mathfrak{u}}(2,2))
\simeq
({\mathfrak{o}}(6,2),{\mathfrak{o}}(4,2)+{\mathfrak{o}}(2)),$$ we see that $(G,H)$ does not satisfy (QP) by Proposition \[prop:upq\] (1).
Suppose now that $q=1$. Then $\operatorname{rank}_{\mathbb{R}}H =1$, and we shall show in Proposition \[prop:rankH\] that $(G,H)$ satisfies (QP) for any $p$ and (PP) for $p \le 3$ (see III in Table \[tab:5.2\]). This completes the proof of Proposition \[prop:OUpq\].
$({\mathfrak {g}}, {\mathfrak {h}})=({\mathfrak {e}}_{6(-26)}, {\mathfrak {so}}(9,1)+{\mathbb{R}})$ {#subsec:e6}
---------------------------------------------------------------------------------------------------
The exceptional real Lie algebra $
{\mathfrak {g}}:=
{\mathfrak {e}}_{6(-26)}
$ is a simple Lie algebra with the following property: $${\mathfrak {k}} \simeq {\mathfrak {f}}_{4(-52)}
\,\,
\text{and }
\,\,
\operatorname{rank}_{\mathbb{R}}{\mathfrak {g}}=2.$$ The goal of this subsection is to prove the following:
\[prop:e6\] Let $(G,H)$ be a symmetric pair with Lie algebras $$({\mathfrak {g}}, {\mathfrak {h}})
=
({\mathfrak {e}}_{6(-26)}, {\mathfrak {so}}(9,1)+{\mathbb{R}}).$$ Then $(G,H)$ satisfies [[(PP)]{}]{} and [[(QP)]{}]{}.
We begin with the Lie algebra ${\mathfrak {g}}={\mathfrak {e}}_{6(-26)}$. Then the Lie algebra ${\mathfrak {m}}_G=Z_{{\mathfrak {k}}}({\mathfrak {a}}_G)
\simeq {\mathfrak {so}}(8)$ acts on ${\mathfrak {n}}\simeq {\mathbb{R}}^{24}$ via the adjoint action as the direct sum of the following three non-isomorphic 8-dimensional irreducible representations: $$\begin{aligned}
{2}
&
&&\text{highest weight}
\\
&\text{Natural representation $i$}
&&
\lambda_1=(1,0,0,0),
\\
&\text{Half spin representation $\operatorname{spin}^+$}
\qquad
&&
\lambda_2=\frac 1 2 (1,1,1,1),
\\
&\text{Half spin representation $\operatorname{spin}^-$}
&&
\lambda_3=\frac 1 2 (1,1,1,-1). \end{aligned}$$ Here the highest weights are expressed by means of the standard basis of $D_4$ as in the proof of Lemma \[lem:2.6\].
These representations are the differentials of the representations of $Spin(8)$, to be denoted by the same letters $i$, $\operatorname{spin}^+$, and $\operatorname{spin}^-$, respectively, which in turn induce three actions on the 7-dimensional sphere $S^7 \simeq ({\mathbb{R}}^8 - \{0\}) /{\mathbb{R}}_{>0}$. We need the following:
\[lem:SS\] Let $Spin(8)$ act diagonally on the direct product manifold $S^7 \times S^7$ via any choice of two distinct 8-dimensional representations among $i$, $\operatorname{spin}^+$, and $\operatorname{spin}^-$. Then the action is transitive.
The automorphism of the Dynkin diagram $D_4$ gives rise to the triality in $Spin(8)$. We denote by $\sigma$ the outer automorphism of $Spin(8)$ of order three corresponding to the outer automorphism of $D_4$ as described in the figure below.
(Figure: the Dynkin diagram of $D_4$, whose rotational symmetry of order three permutes the three outer nodes.)
Then $\sigma$ induces the permutation of the set $\{\lambda_1, \lambda_2, \lambda_3\}$, by $\lambda_1 \mapsto
\lambda_2 \mapsto \lambda_3 \mapsto \lambda_1$, and thus the representations $i$, $\operatorname{spin}^+$, and $\operatorname{spin}^-$ are mutually equivalent under the outer automorphism group of $Spin(8)$ ([*[triality]{}*]{} of $D_4$). Hence, without loss of generality, we may and do assume that $Spin(8)$ acts on $S^7 \times S^7$ via $i \oplus \operatorname{spin}^+$.
First, we consider the action of $Spin(8)$ on the first factor $S^7$ via the natural representation $Spin(8) \overset i \to SO(8)$, which is a transitive action and gives rise to a natural diffeomorphism $Spin(8)/Spin(7) \simeq S^7$.
Second, we consider the action of the isotropy subgroup $Spin(7)$ on the second factor $S^7$ via the following composition: $$Spin(7) \hookrightarrow Spin(8) \overset{\operatorname{spin}^+} \to
SO(8).$$ This action is again transitive, and gives a natural diffeomorphism $Spin(7)/G_2 \simeq S^7$. Thus we have shown that $Spin(8)$ acts transitively on $S^7 \times S^7$ via $i \oplus {\operatorname{spin}^+}$.
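The transitivity argument is consistent with a simple dimension count ($\dim \operatorname{Spin}(n) = n(n-1)/2$, $\dim G_2 = 14$), which the trivial check below records; it is of course no substitute for the fibration argument itself.

```python
# dimensions of the groups and spheres appearing in the two-step fibration
dim = {"Spin(8)": 8 * 7 // 2,   # 28
       "Spin(7)": 7 * 6 // 2,   # 21
       "G2": 14,
       "S7": 7}

# first step:  Spin(8)/Spin(7) ~ S^7
assert dim["Spin(8)"] - dim["Spin(7)"] == dim["S7"]
# second step: Spin(7)/G_2 ~ S^7
assert dim["Spin(7)"] - dim["G2"] == dim["S7"]
# hence Spin(8) acts transitively on S^7 x S^7 (dimension 14) with isotropy G_2
assert dim["Spin(8)"] - 2 * dim["S7"] == dim["G2"]
print("ok")
```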
For the Lie algebra ${\mathfrak {h}}={\mathfrak {o}}(9,1) +{\mathbb{R}}$, the adjoint action of the Lie algebra ${\mathfrak {m}}_H$ on ${\mathfrak {n}}^{\sigma}
={\mathfrak {n}} \cap {\mathfrak {h}}$ is isomorphic to the natural representation of ${\mathfrak {so}}(8)$ on ${\mathbb{R}}^8$.
We are ready to complete the proof of Proposition \[prop:e6\].
\[Proof of Proposition \[prop:e6\]\] Since $\operatorname{rank}_{\mathbb{R}}H=\operatorname{rank}_{\mathbb{R}}G$ ($=2$), (PP) is equivalent to (QP) by Lemma \[lem:BPQ\].
The identity component $(M_H)_0$ of $M_H$ is isomorphic to $Spin(8)$, and the adjoint action of $(M_H)_0$ on ${\mathfrak {n}}^{-\sigma}
\simeq {\mathfrak {n}}/{\mathfrak {n}}^{\sigma}$ is isomorphic to the direct sum ${\operatorname{spin}^+}\oplus {\operatorname{spin}^-}$ of the half-spin representations of $Spin(8)$ on ${\mathbb{R}}^{16}={\mathbb{R}}^8 \oplus {\mathbb{R}}^8$. Thus it induces a transitive action of $Spin(8)$ on $S^7 \times S^7$ by Lemma \[lem:SS\]. On the other hand, since there are two distinct weights of ${\mathfrak {a}}_H$ on ${\mathfrak {n}}^{-\sigma}$, we conclude that the adjoint action of $M_H A_H$ has an open dense orbit in ${\mathfrak {n}}^{-\sigma} \simeq {\mathbb{R}}^{16}$. By Theorem \[thm:qp\], $(G,H)$ satisfies (QP).
Symmetric pair $(G,H)$ with ${\operatorname{rank}}_{\mathbb{R}} H=1$ {#sec:rank1}
====================================================================
Since a minimal parabolic subgroup of a compact Lie group $K$ is $K$ itself, the following proposition is obvious by the Iwasawa decomposition $G=K A_G N= K P_G$:
\[prop:GK\] Any Riemannian symmetric pair $(G,K)$ satisfies [[(PP)]{}]{} and [[(QP)]{}]{}.
Among reductive symmetric pairs $(G,H)$, the Riemannian symmetric pairs are characterized by the condition ${\operatorname{rank}}_{\mathbb{R}} H=0$. In this section, as the next case beyond Proposition \[prop:GK\], we highlight the case where ${\operatorname{rank}}_{\mathbb{R}}H=1$ and give a classification of $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (PP); see (E1)–(E4), (G2), and (H1) in Theorem \[thm:1.1\].
In Table \[tab:5.1\], we give a list of all irreducible symmetric pairs $({\mathfrak {g}}, {\mathfrak{h}})$ with $\operatorname{rank}_{{\mathbb{R}}}{\mathfrak {h}}=1$. Since $({\mathfrak {g}}, {\mathfrak{h}})$ and its $c$-dual $({\mathfrak {g}}^c, {\mathfrak{h}})$ have the same root multiplicities $m^{\pm}(\lambda)= \dim {\mathfrak {g}}^{\pm \sigma}({\mathfrak {a}}_H;\lambda)$, we write them in the same row. Some of the symmetric pairs are labelled ${\rm{I}}_{{\mathbb{R}}}$, ${\rm{I}}_{{\mathbb{R}}}^c$, $\cdots$; for these we give more detailed data in Table \[tab:5.2\]. We are now ready to state the main result of this section, which completes the proof of Theorems \[thm:1.1\] and \[thm:QpPp\] for the classification of $({\mathfrak {g}}, {\mathfrak{h}})$ with (PP) and (QP), respectively, under the assumption that ${\operatorname{rank}}_{\mathbb{R}} H=1$.
\[prop:rankH\] Suppose $(G,H)$ is an irreducible symmetric pair with $\operatorname{rank}_{{\mathbb{R}}}H=1$.
1. The following two conditions [[(i)]{}]{} and [[(ii)]{}]{} on the pair $(G,H)$ are equivalent:
1. The pair $(G,H)$ satisfies [[(QP)]{}]{}.
2. The pair $({\mathfrak {g}}, {\mathfrak {h}})$ is one of ${\rm{I}}_{{\mathbb{F}}}$, ${\rm{I}}_{{\mathbb{F}}}^c$ $({\mathbb{F}}={\mathbb{R}}$, ${\mathbb{C}}$, ${\mathbb{H}}$, or ${\mathbb{O}})$, ${\rm{II}}$, ${\rm{II}}^c$, ${\rm{III}}$ or ${\rm{III}}^c$.
2. The following two conditions [[(iii)]{}]{} and [[(iv)]{}]{} of the pair $(G,H)$ are equivalent:
1. The pair $(G,H)$ satisfies [[(PP)]{}]{}.
2. The pair $({\mathfrak {g}}, {\mathfrak {h}})$ is one of ${\rm{I}}_{{\mathbb{R}}}^c$, ${\rm{I}}_{{\mathbb{C}}}^c$, ${\rm{I}}_{{\mathbb{H}}}^c$, ${\rm{I}}_{{\mathbb{O}}}^c$, ${\rm{II}}^c$, ${\rm{III}}^c$, ${\rm{I}}_{{\mathbb{F}}}$ $({\mathbb{F}}={\mathbb{R}}, {\mathbb{C}}, {\mathbb{H}})$ with $p=0$ or $q=1$, ${\rm{II}}$ with $m=1, 2$, or ${\rm{III}}$ with $m=1,2$.
We divide the proof into three steps.
[**[Step 1.]{}**]{} For the implication (i) $\Rightarrow$ (ii), we apply Proposition \[prop:QPrank\]. Since $\operatorname{rank}_{{\mathbb{R}}}H=1$, if $(G,H)$ satisfies (QP), then $\# \Delta({\mathfrak {n}}^{-\sigma}) \le 1$, that is, $m^-(\lambda)\, m^-(2\lambda) = 0$. In light of Table \[tab:5.1\], we have shown that (i) implies (ii).
[**[Step 2.]{}**]{} In order to prove the equivalence (iii) $\Leftrightarrow$ (iv), it is sufficient to deal with symmetric pairs $(G,H)$ satisfying (QP) because (PP) implies (QP) (see Lemma \[lem:BPQ\]). In particular, we may assume that $(G,H)$ satisfies (ii) by Step 1. For the pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (ii), we give a list of the vector spaces ${\mathfrak {n}}^{-\sigma}$ on which the subalgebras $({\mathfrak {m}}_H \cap {\mathfrak {m}}_G)+{\mathfrak {a}}_H
\subset {\mathfrak {m}}_H + {\mathfrak {a}}_H$ act via the adjoint representation in Table \[tab:5.2\]. For ${\rm{I}}_{{\mathbb{F}}}$ $({\mathbb{F}}={\mathbb{R}}$, ${\mathbb{C}}$, or ${\mathbb{H}})$ in this table, the action of ${\mathfrak {u}}(|p-q|;{\mathbb{F}})$ on ${\mathbb{F}}^q$ is trivial if $p \ge q$ and is the natural action on the first factor of the decomposition ${\mathbb{F}}^q={\mathbb{F}}^{q-p} \oplus {\mathbb{F}}^p$ if $q \ge p$. In view of this table, we see that $(M_H \cap M_G)A_H$ has an open orbit in ${\mathfrak {n}}^{-\sigma}$ if and only if (iv) holds. Hence the equivalence (iii) $\Leftrightarrow$ (iv) follows from Theorem \[thm:pp\].
[**[Step 3.]{}**]{} The converse implication (i) $\Leftarrow$ (ii) follows from Step 2 and Proposition \[prop:cdual\] because ${\rm{I}}_{{\mathbb{F}}}^c$, ${\rm{II}}^c$, and ${\rm{III}}^c$ are the $c$-duals of ${\rm{I}}_{{\mathbb{F}}}$, ${\rm{II}}$, and ${\rm{III}}$, respectively.
|   | ${\mathfrak {g}}$ | ${\mathfrak {g}}^c$ | ${\mathfrak {h}}$ | $\begin{pmatrix} m^+(\lambda) & m^+(2\lambda) \\ m^-(\lambda) & m^-(2\lambda)\end{pmatrix}$ |
|----|----|----|----|----|
| ${\rm{I}}_{\mathbb{R}}$, ${\rm{I}}_{\mathbb{R}}^c$ | ${\mathfrak {so}}(p+1,q+1)$ | ${\mathfrak {so}}(p+q+1,1)$ | ${\mathfrak {so}}(q) + {\mathfrak {so}}(p+1,1)$ | $\begin{pmatrix} p & 0 \\ q & 0 \end{pmatrix}$ |
| ${\rm{I}}_{\mathbb{C}}$, ${\rm{I}}_{\mathbb{C}}^c$ | ${\mathfrak {u}}(p+1,q+1)$ | ${\mathfrak {u}}(p+q+1,1)$ | ${\mathfrak {u}}(q) + {\mathfrak {u}}(p+1,1)$ | $\begin{pmatrix} 2p & 1 \\ 2q & 0 \end{pmatrix}$ |
| ${\rm{I}}_{\mathbb{H}}$, ${\rm{I}}_{\mathbb{H}}^c$ | ${\mathfrak {sp}}(p+1,q+1)$ | ${\mathfrak {sp}}(p+q+1,1)$ | ${\mathfrak {sp}}(q) + {\mathfrak {sp}}(p+1,1)$ | $\begin{pmatrix} 4p & 3 \\ 4q & 0 \end{pmatrix}$ |
| ${\rm{I}}_{\mathbb{O}}={\rm{I}}_{\mathbb{O}}^c$ | ${\mathfrak {f}}_{4(-20)}$ | ${\mathfrak {f}}_{4(-20)}$ | ${\mathfrak {so}}(8,1)$ | $\begin{pmatrix} 0 & 7 \\ 8 & 0 \end{pmatrix}$ |
|   | ${\mathfrak {sl}}(m+2,{\mathbb{R}})$ | ${\mathfrak {sl}}(m+1,{\mathbb{R}})$ | ${\mathfrak {so}}(m+1,1)$ | $\begin{pmatrix} m & 0 \\ m & 1 \end{pmatrix}$ |
|   | ${\mathfrak {sp}}(m+2,{\mathbb{R}})$ | ${\mathfrak {sp}}(m+1,{\mathbb{R}})$ | ${\mathfrak {u}}(m+1,1)$ | $\begin{pmatrix} 2m & 1 \\ 2m & 2 \end{pmatrix}$ |
|   | ${\mathfrak {f}}_{4(4)}$ | ${\mathfrak {f}}_{4(-20)}$ | ${\mathfrak {sp}}(2,1)+{\mathfrak {su}}(2)$ | $\begin{pmatrix} 4 & 3 \\ 4 & 4 \end{pmatrix}$ |
| ${\rm{II}}$, ${\rm{II}}^c$ | ${\mathfrak {so}}(m+2,{\mathbb{C}})$ | ${\mathfrak {h}}+{\mathfrak{h}}$ | ${\mathfrak {so}}(m+1,1)$ | $\begin{pmatrix} m & 0 \\ m & 0 \end{pmatrix}$ |
|   | ${\mathfrak {sl}}(m+2,{\mathbb{C}})$ | ${\mathfrak {h}}+{\mathfrak{h}}$ | ${\mathfrak {su}}(m+1,1)$ | $\begin{pmatrix} 2m & 1 \\ 2m & 1 \end{pmatrix}$ |
|   | ${\mathfrak {sp}}(m+2,{\mathbb{C}})$ | ${\mathfrak {h}}+{\mathfrak{h}}$ | ${\mathfrak {sp}}(m+1,1)$ | $\begin{pmatrix} 4m & 3 \\ 4m & 3 \end{pmatrix}$ |
|   | ${\mathfrak {f}}_{4}({\mathbb{C}})$ | ${\mathfrak {h}}+{\mathfrak {h}}$ | ${\mathfrak {f}}_{4(-20)}$ | $\begin{pmatrix} 8 & 7 \\ 8 & 7 \end{pmatrix}$ |
| ${\rm{III}}$, ${\rm{III}}^c$ | ${\mathfrak {so}}^{\ast}(2m+4)$ | ${\mathfrak {so}}(2m+2,2)$ | ${\mathfrak {u}}(m+1,1)$ | $\begin{pmatrix} 2m & 1 \\ 2m & 0 \end{pmatrix}$ |
|   | ${\mathfrak {su}}^{\ast}(2m+4)$ | ${\mathfrak {su}}(2m+2,2)$ | ${\mathfrak {sp}}(m+1,1)$ | $\begin{pmatrix} 4m & 3 \\ 4m & 1 \end{pmatrix}$ |
|   | ${\mathfrak {e}}_{6(-26)}$ | ${\mathfrak {e}}_{6(-14)}$ | ${\mathfrak {f}}_{4(-20)}$ | $\begin{pmatrix} 8 & 7 \\ 8 & 1 \end{pmatrix}$ |
|   | ${\mathfrak {sl}}(3,{\mathbb{C}})$ |   | ${\mathfrak {so}}(3,{\mathbb{C}})$ | $\begin{pmatrix} 2 & 0 \\ 2 & 2 \end{pmatrix}$ |
|   | ${\mathfrak {su}}(3,3)$ | ${\mathfrak {su}}^{\ast}(6)$ | ${\mathfrak {so}}^{\ast}(6)$ | $\begin{pmatrix} 4 & 1 \\ 4 & 3 \end{pmatrix}$ |
|   | ${\mathfrak {e}}_{6(2)}$ | ${\mathfrak {e}}_{6(-26)}$ | ${\mathfrak {sp}}(3,1)$ | $\begin{pmatrix} 8 & 3 \\ 8 & 5 \end{pmatrix}$ |
: Irreducible symmetric pairs $({\mathfrak{g}}, {\mathfrak {h}})$ with $\operatorname{rank}_{\mathbb{R}}{\mathfrak {h}}=1$[]{data-label="tab:5.1"}
\
|   | $({\mathfrak {m}}_H \cap {\mathfrak {m}}_G)+{\mathfrak {a}}_H$ | ${\mathfrak {m}}_H + {\mathfrak {a}}_H$ | ${\mathfrak {n}}^{-\sigma}$ |
|----|----|----|----|
| ${\rm{I}}_{\mathbb{R}}$ | ${\mathfrak {o}}(\lvert p-q\rvert)+{\mathbb{R}}$ | ${\mathfrak {o}}(q)+{\mathfrak {o}}(p)+{\mathbb{R}}$ | ${\mathbb{R}}^q$ |
| ${\rm{I}}_{\mathbb{R}}^c$ | ${\mathfrak {o}}(q) + {\mathfrak {o}}(p)+{\mathbb{R}}$ | ${\mathfrak {o}}(q)+{\mathfrak {o}}(p)+{\mathbb{R}}$ | ${\mathbb{R}}^q$ |
| ${\rm{I}}_{\mathbb{C}}$ | ${\mathfrak {u}}(\lvert p-q\rvert) +(\sqrt{-1}{\mathbb{R}})^{\min(p,q)}+{\mathbb{R}}$ | ${\mathfrak {u}}(q)+{\mathfrak {u}}(p) +{\mathbb{C}}$ | ${\mathbb{C}}^q$ |
| ${\rm{I}}_{\mathbb{C}}^c$ | ${\mathfrak {u}}(q)+{\mathfrak {u}}(p)+{\mathbb{C}}$ | ${\mathfrak {u}}(q)+{\mathfrak {u}}(p) +{\mathbb{C}}$ | ${\mathbb{C}}^q$ |
| ${\rm{I}}_{\mathbb{H}}$ | ${\mathfrak {sp}}(\lvert p-q\rvert)+{\mathfrak {sp}}(1)^{\min(p,q)}+{\mathbb{R}}$ | ${\mathfrak {sp}}(q)+ {\mathfrak {sp}}(p)+{\mathbb{H}}$ | ${\mathbb{H}}^q$ |
| ${\rm{I}}_{\mathbb{H}}^c$ | ${\mathfrak {sp}}(q)+{\mathfrak {sp}}(p)+{\mathbb{H}}$ | ${\mathfrak {sp}}(q)+ {\mathfrak {sp}}(p)+{\mathbb{H}}$ | ${\mathbb{H}}^q$ |
| ${\rm{I}}_{\mathbb{O}}={\rm{I}}_{\mathbb{O}}^c$ | ${\mathfrak{spin}}(7) + {\mathbb{R}}$ | ${\mathfrak{spin}}(7) + {\mathbb{R}}$ | ${\mathbb{R}}^8$ |
| ${\rm{II}}$ | ${\mathbb{T}}^{[\frac m 2]}+{\mathbb{R}}$ | ${\mathfrak {o}}(m)+{\mathbb{R}}$ | ${\mathbb{R}}^m$ |
| ${\rm{II}}^c$ | ${\mathfrak {o}}(m)+{\mathbb{R}}$ | ${\mathfrak {o}}(m)+{\mathbb{R}}$ | ${\mathbb{R}}^m$ |
| ${\rm{III}}$ | ${\mathfrak {sp}}(1)^{[\frac m 2]}+{\mathbb{C}}$ | ${\mathfrak {u}}(m)+{\mathbb{C}}$ | ${\mathbb{C}}^m$ |
| ${\rm{III}}^c$ | ${\mathfrak {u}}(m)+{\mathbb{R}}$ | ${\mathfrak {u}}(m)+{\mathbb{C}}$ | ${\mathbb{C}}^m$ |
: Irreducible symmetric pairs with (QP) and $\operatorname{rank}_{{\mathbb{R}}}{\mathfrak {h}}=1$[]{data-label="tab:5.2"}
\
Associated symmetric pairs of non-$K_{\varepsilon}$-family {#sec:nonKe}
==========================================================
In this section and the next section, we complete the proof of the classification of symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying (PP) (or (QP)) and ${\operatorname{rank}}_{\mathbb{R}}H \ge 2$. For this, we make use of the $K_{\varepsilon}$-family introduced in [@OS1] (See Definition \[def:Ke\] below), which is a fairly large class of reductive symmetric pairs.
We recall that if a reductive symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ is defined by an involutive automorphism $\sigma$ of ${\mathfrak {g}}$ then we can define another involution $\sigma \theta$ by taking a Cartan involution $\theta$ commuting with $\sigma$. The symmetric pair $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma \theta})$ is called the [*[associated symmetric pair]{}*]{} of $({\mathfrak {g}}, {\mathfrak {h}})
\equiv ({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$. Our strategy is based on the following ideas.
1. Very few pairs $({\mathfrak {g}}, {\mathfrak {h}})
\equiv({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ satisfy (QP) if $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ does not belong to the $K_{\varepsilon}$-family (Proposition \[prop:nonKe\]).
2. $\operatorname{rank}_{\mathbb{R}}G =\operatorname{rank}_{\mathbb{R}}H$ if $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ belongs to the $K_{\varepsilon}$-family.
In this section we treat the case where the associated symmetric pair $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ does not belong to the $K_{\varepsilon}$-family, and in the next section we discuss the opposite case where $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ belongs to the $K_{\varepsilon}$-family. To be more precise, let us review the definition of $K_{\varepsilon}$-family. Suppose ${\mathfrak {a}}_G$ is a maximal abelian subspace of ${\mathfrak {g}}^{-\theta}$ as before.
\[def:Ke\] [[ A map $\varepsilon :\Sigma ({\mathfrak {g}}, {\mathfrak {a}}_G)
\cup \{0\} \to \{\pm 1\}$ is said to be a [*[signature]{}*]{} if $$\varepsilon(\alpha+ \beta)
=
\varepsilon(\alpha)\varepsilon(\beta)
\quad
\text{for any}
\,\,
\alpha, \beta
\,\,
\text{and }
\,\, \alpha+ \beta
\in \Sigma ({\mathfrak {g}}, {\mathfrak {a}}_G)
\cup \{0\}.$$ We note that $\varepsilon(0)=1$ and $\varepsilon(\alpha)=\varepsilon(-\alpha)$ for any $\alpha \in \Sigma ({\mathfrak {g}}, {\mathfrak {a}}_G)$. We define another involution $\theta_{\varepsilon}$ by $$\theta_{\varepsilon} (X)
:=
\varepsilon(\alpha) \theta(X)
\quad \text{for }\,\, X \in {\mathfrak {g}}({\mathfrak {a}}_G;\alpha),$$ and set ${\mathfrak {k}}_{\varepsilon}
:=\{X \in {\mathfrak {g}}:\theta_{\varepsilon}(X)=X\}
$. If $\varepsilon \equiv 1$ then ${\mathfrak {k}}_{\varepsilon}={\mathfrak {k}}$. The reductive symmetric pairs $\{({\mathfrak {g}}, {\mathfrak {k}}_{\varepsilon}):
\varepsilon\text{ is a signature}\}$ are called the [*[$K_{\varepsilon}$-family]{}*]{}. ]{}]{}
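A minimal illustration of a signature and of the resulting member of the $K_{\varepsilon}$-family (our example, not taken from the classification tables): let ${\mathfrak {g}}={\mathfrak{sl}}(2,{\mathbb{R}})$ with $\theta(X)=-{}^t X$, ${\mathfrak {a}}_G$ the diagonal subalgebra, and $\Sigma({\mathfrak {g}},{\mathfrak {a}}_G)=\{\pm\alpha\}$. The nontrivial signature $\varepsilon(\pm\alpha)=-1$ gives
$$\theta_{\varepsilon}\begin{pmatrix} a & b \\ c & -a \end{pmatrix}
=\begin{pmatrix} -a & c \\ b & a \end{pmatrix},
\qquad
{\mathfrak {k}}_{\varepsilon}={\mathbb{R}}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\simeq {\mathfrak{so}}(1,1),$$
so the $K_{\varepsilon}$-family of ${\mathfrak{sl}}(2,{\mathbb{R}})$ consists of $({\mathfrak{sl}}(2,{\mathbb{R}}), {\mathfrak{so}}(2))$ and $({\mathfrak{sl}}(2,{\mathbb{R}}), {\mathfrak{so}}(1,1))$.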
Here is the main result of this section:
\[prop:nonKe\] Let $(G,H)$ be an irreducible symmetric pair defined by an involution $\sigma$. Assume that the following two conditions are fulfilled: $$\begin{aligned}
&\text{The associated pair
$({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$
does not belong to the $K_{\varepsilon}$-family. }
\label{eqn:nonKe(a)}
\\
&\operatorname{rank}_{\mathbb{R}}H >1.
\label{eqn:nonKe(c)}\end{aligned}$$ Then either $\operatorname{rank}_{\mathbb{R}}H
< \# \Delta({\mathfrak {n}}^{-\sigma})$ or $$\label{eqn:nonKe(b)}
\text{
$({\mathfrak {g}}, {\mathfrak {h}})$
is a symmetric pair
treated in Propositions
\ref{prop:upq}, \ref{prop:somn} and \ref{prop:sostar}.
}$$ In particular, there is no irreducible symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ with \eqref{eqn:nonKe(a)} and \eqref{eqn:nonKe(c)} other than those listed in Propositions \[prop:upq\], \[prop:somn\] and \[prop:sostar\].
Suppose $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ does not belong to the $K_{\varepsilon}$-family. Then by using the classification [@OS Table V] and by computing the correspondence $({\mathfrak {g}}, {\mathfrak {h}})
\equiv({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})
\leftrightarrow
({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$, we observe that either $H$ is a simple Lie group up to a compact torus or \eqref{eqn:nonKe(b)} holds.
From now on, we assume that the irreducible symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ satisfies \eqref{eqn:nonKe(a)} and \eqref{eqn:nonKe(c)} but does not satisfy \eqref{eqn:nonKe(b)}. Then the restricted root system $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$ is irreducible. Hence, by Lemma \[lem:2.6\], the condition $\operatorname{rank}_{\mathbb{R}}H
\ge \# \Delta({\mathfrak {n}}^{-\sigma})$ gives strong constraints on both the irreducible root system $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$ and the set $\Delta({\mathfrak {n}}^{-\sigma})$, namely, $\operatorname{rank}_{\mathbb{R}}H
\ge \# \Delta({\mathfrak {n}}^{-\sigma})$ implies that $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$ is one of type $B_l$, $C_l$, $D_l$ or $BC_l$ and that $\Delta({\mathfrak {n}}^{-\sigma})$ is contained in either $\{\pm e_i: 1 \le i \le l\}$ or $\{\pm 2 e_i: 1 \le i \le l\}$. Furthermore, $m^-(\lambda) \le 1$ and $m^-(\lambda)m^-(2\lambda)=0$ for all $\lambda$.
In turn, in view of the classification of irreducible symmetric pairs satisfying \eqref{eqn:nonKe(a)} and the formulae for the multiplicities $m^-(\lambda_i)$ and $m^-(2\lambda_i)$ in [@OS Table V], we see that this does not happen. To verify it, we remark that the role of $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ and $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ in their table is opposite to our notation here, but the role of the multiplicities $m^{\pm}(\lambda)$ is the same. With this remark in mind, we obtain the following small list from [@OS Table V] by picking up those having the above constraints on $\Sigma({\mathfrak {h}}, {\mathfrak {a}}_H)$ and $\Delta({\mathfrak {n}}^{-\sigma})$ and by skipping those belonging to the families in Propositions \[prop:upq\], \[prop:somn\] and \[prop:sostar\]: $$\begin{array}{lll}
{\mathfrak{g}} & {\mathfrak{g}}^{\sigma\theta} & {\mathfrak{g}}^{\sigma}={\mathfrak{h}} \\
{\mathfrak{sl}}(4,{\mathbb{R}}) & {\mathfrak{sl}}(2,{\mathbb{C}})+\sqrt{-1}{\mathbb{R}} & {\mathfrak{sp}}(2,{\mathbb{R}}) \\
{\mathfrak{su}}(2,2) & {\mathfrak{so}}^{\ast}(4) & {\mathfrak{sp}}(2,{\mathbb{R}}) \\
{\mathfrak{so}}^{\ast}(8) & {\mathfrak{so}}^{\ast}(4)+{\mathfrak{so}}^{\ast}(4) & {\mathfrak{u}}(2,2) \\
{\mathfrak{so}}(4,4) & {\mathfrak{u}}(2,2) & {\mathfrak{u}}(2,2) \\
{\mathfrak{sl}}(4,{\mathbb{C}}) & {\mathfrak{su}}^{\ast}(4) & {\mathfrak{sp}}(2,{\mathbb{C}})
\end{array}$$ However, these exceptional cases are actually included in the family of symmetric pairs in Propositions \[prop:upq\] and \[prop:somn\] via the following isomorphisms: $$\begin{aligned}
({\mathfrak {sl}}(4,{\mathbb{R}}), {\mathfrak {sp}}(2,{\mathbb{R}}))
\simeq
&({\mathfrak {so}}(3,3), {\mathfrak {so}}(3,2)),
\\
({\mathfrak {su}}(2,2), {\mathfrak {sp}}(2,{\mathbb{R}}))
\simeq
& ({\mathfrak {so}}(4,2), {\mathfrak {so}}(3,2)),
\\
({\mathfrak {so}}^{\ast}(8), {\mathfrak {u}}(2,2))
\simeq
& ({\mathfrak {so}}(6,2), {\mathfrak {so}}(4,2)+{\mathfrak {so}}(2)),
\\
({\mathfrak {so}}(4,4), {\mathfrak {u}}(2,2))
\simeq
& ({\mathfrak {so}}(4,4), {\mathfrak {so}}(4,2)+{\mathfrak {so}}(2)),
\\
({\mathfrak {sl}}(4,{\mathbb{C}}), {\mathfrak {sp}}(2,{\mathbb{C}}))
\simeq
& ({\mathfrak {so}}(6,{\mathbb{C}}), {\mathfrak {so}}(5,{\mathbb{C}})). \end{aligned}$$ Thus we have proved that $\operatorname{rank}_{\mathbb{R}}H < \#
\Delta ({\mathfrak {n}}^{-\sigma})$ if \eqref{eqn:nonKe(a)} and \eqref{eqn:nonKe(c)} are satisfied and \eqref{eqn:nonKe(b)} is not satisfied.
Associated symmetric pairs of $K_{\varepsilon}$-family {#sec:Ke}
======================================================
In this section we consider irreducible symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})
\equiv({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ such that the associated symmetric pair $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ belongs to the $K_{\varepsilon}$-family. In this case, $\operatorname{rank}_{\mathbb{R}}H=\operatorname{rank}_{\mathbb{R}} G$ holds from the definition of $K_{\varepsilon}$-family, and consequently, the condition [[(QP)]{}]{} is equivalent to [[(PP)]{}]{} by Lemma \[lem:BPQ\].
Let ${\mathfrak{g}}_{\mathbb{C}}$ be the complexification of ${\mathfrak{g}}$. For a simple Lie algebra ${\mathfrak{g}}$ over ${\mathbb{R}}$, ${\mathfrak{g}}_{\mathbb{C}}$ is a complex simple Lie algebra if and only if ${\mathfrak{g}}$ itself does not carry a complex Lie algebra structure. We divide the proof into the following three cases:
[**[Case 1.]{}**]{} ${\mathfrak{g}}_{\mathbb{C}}$ is not simple.
[**[Case 2.]{}**]{} ${\mathfrak{g}}_{\mathbb{C}}$ is a simple classical Lie algebra.
[**[Case 3.]{}**]{} ${\mathfrak{g}}_{\mathbb{C}}$ is a simple exceptional Lie algebra.
In Case 1, the pair $({\mathfrak{g}}, {\mathfrak{h}})$ was treated in Proposition \[prop:cpx\]. In fact, ${\mathfrak{g}}$ is a complex simple Lie algebra. Further, ${\mathfrak{g}}^{\sigma \theta}$ is a real form of ${\mathfrak{g}}$ as noted in [@OS1 Appendix], and consequently ${\mathfrak {h}}={\mathfrak {g}}^{\sigma}$ is a complex Lie subalgebra. Hence $({\mathfrak{g}}, {\mathfrak{h}})$ is a complex symmetric pair such that $\operatorname{rank}{\mathfrak{h}}
=\operatorname{rank}{\mathfrak{g}}$.
[**[Case 2.]{}**]{} Suppose that ${\mathfrak{g}}_{\mathbb{C}}$ is a classical simple Lie algebra.
By the classification of $K_{\varepsilon}$-family (see [@OS Table 1]), the pair $({\mathfrak{g}}, {\mathfrak{h}})$ is one of the following pairs up to the center of ${\mathfrak{g}}$. $$\begin{aligned}
&({\mathfrak{gl}}(p+q, {\mathbb{F}}),
{\mathfrak{gl}}(p, {\mathbb{F}})+{\mathfrak{gl}}(q, {\mathbb{F}})),
\quad
{\mathbb{F}}={\mathbb{R}}, {\mathbb{H}},
\\
&({\mathfrak{sp}}(p+q, {\mathbb{R}}),
{\mathfrak{sp}}(p, {\mathbb{R}})
+{\mathfrak{sp}}(q, {\mathbb{R}})),
\\
&({\mathfrak{u}}(n,n;{\mathbb{F}}),
{\mathfrak{gl}}(n, {\mathbb{F}})),
\quad
{\mathbb{F}}={\mathbb{R}}, {\mathbb{C}}, {\mathbb{H}},
\\
&({\mathfrak{sp}}(n, {\mathbb{R}}),
{\mathfrak{gl}}(n, {\mathbb{R}})),
\\
& ({\mathfrak{so}}^{\ast}(4n), {\mathfrak{gl}}(n, {\mathbb{H}})),
\\
\intertext{or the following two families}
&
({\mathfrak{u}}(i+j, k+l;{\mathbb{F}}),
{\mathfrak{u}}(i, k;{\mathbb{F}})+{\mathfrak{u}}(j, l;{\mathbb{F}})),
\quad
{\mathbb{F}}={\mathbb{R}}, {\mathbb{C}}, {\mathbb{H}},
\\
&
({\mathfrak{o}}^{\ast}(2p+2q), {\mathfrak{o}}^{\ast}(2p)+
{\mathfrak{o}}^{\ast}(2q)). \end{aligned}$$ In the last two cases, the condition that $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma \theta})$ belongs to the $K_{\varepsilon}$-family imposes certain constraints on the parameters ([*[e.g.]{}*]{} $pq$ is even in the last case).
The first five cases were treated in Propositions \[prop:glgl\], \[prop:sp\], \[prop:ugl\], and \[prop:ug2\]. The last two cases are covered by Propositions \[prop:upq\] and \[prop:sostar\] without constraints on parameters, respectively. Thus no symmetric pair $({\mathfrak {g}}, {\mathfrak {h}})$ in Case 2 satisfies (QP) beyond those already treated in these propositions.
[**[Case 3.]{}**]{} ${\mathfrak{g}}_{\mathbb{C}}$ is an exceptional simple Lie algebra.
In this case, we prove the following:
\[prop:except\] Let $({\mathfrak {g}},{\mathfrak {h}})\equiv
({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ be a symmetric pair such that its associated symmetric pair $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ belongs to the $K_{\varepsilon}$-family. If ${\mathfrak {g}}_{\mathbb{C}}$ is a simple exceptional Lie algebra, then the following three conditions are equivalent:
1. $({\mathfrak {g}},{\mathfrak {h}})$ satisfies [[(QP)]{}]{}.
2. $({\mathfrak {g}},{\mathfrak {h}})$ satisfies [[(PP)]{}]{}.
3. $({\mathfrak {g}},{\mathfrak {h}})$ is either $({\mathfrak {e}}_{6(-26)},{\mathfrak {so}}(9,1)+{\mathbb{R}})$ or $({\mathfrak {f}}_{4(-20)},{\mathfrak {so}}(8,1))$.
The equivalence (i) $\Leftrightarrow$ (ii) holds because ${\operatorname{rank}}_{\mathbb{R}}G=
{\operatorname{rank}}_{\mathbb{R}}H$. We have already proved the implication (iii) $\Rightarrow$ (ii) in Propositions \[prop:e6\] and \[prop:rankH\]. The remaining implication (i) $\Rightarrow$ (iii) is deduced from the following two lemmas.
\[lem:eso82\] The symmetric pair $({\mathfrak {e}}_{6(-14)}, {\mathfrak {so}}(8,2)+\sqrt{-1}{\mathbb{R}})$ does not satisfy [[(QP)]{}]{}.
We take the standard basis $\{e_1, e_2\}$ of ${\mathfrak {a}}_H^{\ast}={\mathfrak {a}}_G^{\ast}$ in such a way that $\Sigma^+({\mathfrak {h}}, {\mathfrak {a}}_H)
=\{e_1, e_2, e_1 \pm e_2\}$. Then the root multiplicities $m^{\pm}(\lambda)$ are given as follows: $$\begin{array}{c|ccc}
\lambda & \pm e_i \ (i=1,2) & \pm 2e_i \ (i=1,2) & \pm e_1 \pm e_2 \\
\hline
m^+(\lambda) & 6 & 0 & 1 \\
m^-(\lambda) & 0 & 1 & 7
\end{array}$$ Thus $\Delta({\mathfrak {n}}^{-\sigma})
=\{2e_1, 2e_2, e_1 \pm e_2\}$, and $\# \Delta({\mathfrak {n}}^{-\sigma})=4
> \operatorname{rank}_{\mathbb{R}}H=2$. Now the lemma follows from Proposition \[prop:QPrank\].
For the remaining cases, we use Proposition \[prop:QPineq\], which provides an easy-to-check necessary condition for [[(QP)]{}]{}. We obtain the following:
\[lem:Kex\] Let ${\mathfrak {g}}$ be an exceptional simple Lie algebra and $({\mathfrak {g}}, {\mathfrak {h}})$ a symmetric pair such that its associated symmetric pair belongs to the $K_{\varepsilon}$-family. Then the inequality holds if and only if the pair $({\mathfrak {g}}, {\mathfrak {h}})$ is one of the following: $$({\mathfrak {e}}_{6(-14)}, {\mathfrak {so}}(8,2) +\sqrt{-1}{\mathbb{R}}),
\quad
({\mathfrak {e}}_{6(-26)}, {\mathfrak {so}}(9,1) +{\mathbb{R}}),
\quad
({\mathfrak {f}}_{4(-20)}, {\mathfrak {so}}(8,1)).$$
In Table \[tab:7.1\], we list all the symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})
\equiv({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ such that ${\mathfrak {g}}_{\mathbb{C}}$ is an exceptional simple Lie algebra and that $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ belongs to the $K_{\varepsilon}$-family. In this table, we also list the data $m(G)$ (see ); $n(G)$, $n(H)$ (see ), and $\operatorname{rank}_{\mathbb{R}}G
(=
\operatorname{rank}_{\mathbb{R}}H$). Now Lemma \[lem:Kex\] follows from the computation of the signature of $n(G)-n(H)-m(G) {\operatorname{rank}}_{\mathbb{R}}H$.
| $G$ | $\operatorname{rank}_{\mathbb{R}} G$ | $m(G)$ | $n(G)$ | $H$ | $n(H)$ | $m(G) \operatorname{rank}_{\mathbb{R}} G$ vs. $n(G)-n(H)$ |
|----|----|----|----|----|----|----|
| ${\mathfrak{e}}_{6(6)}$ | 6 | 1 | 36 | ${\mathfrak {sl}}(6,{\mathbb{R}})+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 16 | $6<20$ |
| ${\mathfrak{e}}_{6(6)}$ | 6 | 1 | 36 | ${\mathfrak{so}}(5,5)+{\mathbb{R}}$ | 20 | $6<16$ |
| ${\mathfrak{e}}_{6(2)}$ | 4 | 2 | 36 | ${\mathfrak{so}}(6,4)+\sqrt{-1}{\mathbb {R}}$ | 20 | $8<16$ |
| ${\mathfrak{e}}_{6(2)}$ | 4 | 2 | 36 | ${\mathfrak{su}}(3,3)+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 16 | $8<20$ |
| ${\mathfrak{e}}_{6(-14)}$ | 2 | 8 | 30 | ${\mathfrak{su}}(5,1)+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 10 | $16<20$ |
| ${\mathfrak{e}}_{6(-14)}$ | 2 | 8 | 30 | ${\mathfrak{so}}(8,2)+\sqrt{-1}{\mathbb {R}}$ | 14 | $16=16$ |
| ${\mathfrak{e}}_{6(-26)}$ | 2 | 8 | 24 | ${\mathfrak{so}}(9,1)+{\mathbb{R}}$ | 8 | $16>8$ |
| ${\mathfrak{e}}_{7(7)}$ | 7 | 1 | 63 | ${\mathfrak {sl}}(8,{\mathbb{R}})$ | 28 | $7<35$ |
| ${\mathfrak{e}}_{7(7)}$ | 7 | 1 | 63 | ${\mathfrak{so}}(6,6)+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 31 | $7<32$ |
| ${\mathfrak{e}}_{7(7)}$ | 7 | 1 | 63 | ${\mathfrak{e}}_{6(6)}+{\mathbb{R}}$ | 36 | $7<27$ |
| ${\mathfrak{e}}_{7(-5)}$ | 4 | 4 | 60 | ${\mathfrak{so}}(8,4)+{\mathfrak {su}}(2)$ | 28 | $16<32$ |
| ${\mathfrak{e}}_{7(-5)}$ | 4 | 4 | 60 | ${\mathfrak{so}}^{\ast}(12)+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 28 | $16<32$ |
| ${\mathfrak{e}}_{7(-25)}$ | 3 | 8 | 51 | ${\mathfrak{e}}_{6(-26)}+{\mathbb{R}}$ | 24 | $24<27$ |
| ${\mathfrak{e}}_{7(-25)}$ | 3 | 8 | 51 | ${\mathfrak{so}}(10,2)+{\mathfrak {sl}}(2,{\mathbb{R}})$ | 19 | $24<32$ |
| ${\mathfrak{e}}_{8(8)}$ | 8 | 1 | 120 | ${\mathfrak{so}}(8,8)$ | 56 | $8<64$ |
| ${\mathfrak{e}}_{8(8)}$ | 8 | 1 | 120 | ${\mathfrak{e}}_{7(7)}+{\mathfrak{sl}}(2,{\mathbb{R}})$ | 64 | $8<56$ |
| ${\mathfrak{e}}_{8(-24)}$ | 4 | 8 | 108 | ${\mathfrak{so}}(12,4)$ | 44 | $32<64$ |
| ${\mathfrak{e}}_{8(-24)}$ | 4 | 8 | 108 | ${\mathfrak{e}}_{7(-25)}+{\mathfrak{sl}}(2,{\mathbb{R}})$ | 52 | $32<56$ |
| ${\mathfrak{f}}_{4(4)}$ | 4 | 1 | 24 | ${\mathfrak{so}}(5,4)$ | 16 | $4<8$ |
| ${\mathfrak{f}}_{4(4)}$ | 4 | 1 | 24 | ${\mathfrak{sp}}(3,{\mathbb{R}})+{\mathfrak{sl}}(2,{\mathbb{R}})$ | 10 | $4<14$ |
| ${\mathfrak{f}}_{4(-20)}$ | 1 | 8 | 15 | ${\mathfrak{so}}(8,1)$ | 7 | $8=8$ |
| ${\mathfrak{g}}_{2(2)}$ | 2 | 1 | 6 | ${\mathfrak{sl}}(2,{\mathbb{R}})+{\mathfrak{sl}}(2,{\mathbb{R}})$ | 2 | $2<4$ |
: Exceptional symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})
\equiv({\mathfrak {g}}, {\mathfrak {g}}^{\sigma})$ with $({\mathfrak {g}}, {\mathfrak {g}}^{\sigma\theta})$ in $K_{\varepsilon}$-family[]{data-label="tab:7.1"}
\
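For instance, the equality case for ${\mathfrak{f}}_{4(-20)}$ in the last column follows directly from the listed data:
$$n(G)-n(H) = 15 - 7 = 8 = 8 \cdot 1 = m(G)\operatorname{rank}_{\mathbb{R}}G,$$
so $({\mathfrak{f}}_{4(-20)}, {\mathfrak{so}}(8,1))$ survives the test of Lemma \[lem:Kex\], whereas, for example, ${\mathfrak{g}}_{2(2)}$ fails it since $1 \cdot 2 = 2 < 4 = 6 - 2$.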
Applications to branching problems {#sec:fm}
==================================
This section is devoted to applications of our classification results (Theorem \[thm:1.1\] and Proposition \[prop:B\]) to branching problems of real reductive groups. Given an irreducible representation $\pi$ of $G$, we wish to understand how $\pi$ behaves when restricted to a subgroup $H$ ([*[branching problems]{}*]{}). A basic quantity is the dimension of the space of continuous $H$-homomorphisms $$m(\pi, \tau):=\dim \operatorname{Hom}_H(\pi|_H, \tau),$$ for irreducible representations $\tau$ of $H$. Concrete analysis of the restriction $\pi|_H$ can be reasonably developed under the condition that $m(\pi, \tau)< \infty$. However, the finiteness of the multiplicities does not always hold, even when $H$ is a maximal subgroup of $G$ (see [@Kb2; @xkeastwood60] for good and bad behaviors of the restriction $\pi|_H$). The initial motivation of our work is to single out a framework for pairs $(G,H)$ of reductive groups in which the branching laws $\pi|_H$ can be expected to behave reasonably for [*[arbitrary]{}*]{} irreducible representations $\pi$.
Admissible smooth representations {#subsec:adm}
---------------------------------
We begin with a quick review of some basic notion of (infinite-dimensional) continuous representations of real reductive groups.
Suppose $G$ is a real reductive linear Lie group (or its finite cover) and $K$ is a maximal compact subgroup.
Let $\pi$ be a continuous representation of $G$ on a complete, locally convex vector space ${\mathcal{H}}$. The space ${\mathcal{H}}^{\infty}$ of $C^{\infty}$-vectors of $(\pi,{\mathcal{H}})$ is naturally endowed with a Fr[é]{}chet topology, and we obtain a continuous representation $\pi^{\infty}$ of $G$ on ${\mathcal{H}}^{\infty}$.
Suppose that $(\pi, {\mathcal{H}})$ is of finite length, in other words, suppose that there are only finitely many closed invariant subspaces in ${\mathcal{H}}$. We say $\pi$ is [*[admissible]{}*]{} (or $K$-[*[admissible]{}*]{}) if $$\dim \operatorname{Hom}_K(\tau, \pi|_K)< \infty$$ for any irreducible finite-dimensional representation $\tau$ of $K$. For an admissible representation $(\pi, {\mathcal{H}})$ such that ${\mathcal{H}}$ is a Banach space, we say $(\pi^{\infty}, {\mathcal{H}}^{\infty})$ is an [*[admissible smooth representation]{}*]{}. By the Casselman–Wallach globalization theory, there is a canonical equivalence of categories between the category of $({\mathfrak {g}}, K)$-modules of finite length and the category of admissible smooth representations of $G$. An admissible smooth representation is sometimes referred to as a smooth Fr[é]{}chet representation of moderate growth ([@WaI Chapter 11]). An irreducible admissible smooth representation of $G$ is said to be an [*[irreducible smooth representation]{}*]{} for simplicity throughout this article.
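As a concrete illustration (our example, not taken from the text): for $G=SL(2,{\mathbb{R}})$ and $K=SO(2)$, admissibility of $\pi$ means that every character of $K$ occurs with finite multiplicity in $\pi|_K$:
$$\dim \operatorname{Hom}_{SO(2)}(\chi_n, \pi|_{SO(2)}) < \infty
\qquad \text{for every } n \in {\mathbb{Z}}, \quad \chi_n(k_{\theta}):=e^{\sqrt{-1}\,n\theta}.$$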
Finite multiplicity property in branching laws {#subsec:fm}
----------------------------------------------
Suppose that $G$ is a real reductive linear Lie group and $H$ is a reductive subgroup defined algebraically over ${\mathbb{R}}$. In what follows, the results remain true if we replace $(G,H)$ by their finite coverings or by their finite-index subgroups. Following the terminology in [@xtoshitoshima], we formulate a finite-multiplicity property on the pair $(G,H)$ for the restriction of admissible representations:
1. [[(FM)]{}]{} (Finite-multiplicity property) $\dim \operatorname{Hom}_H(\pi|_H, \tau)<\infty$, for any admissible smooth representation $\pi$ of $G$ and for any admissible smooth representation $\tau$ of $H$.
Here $\operatorname{Hom}_H(\, ,\,)$ denotes the space of continuous $H$-homomorphisms.
As a direct consequence of Theorem \[thm:1.1\] and Fact \[fact:1.4\], we obtain a complete classification of the reductive symmetric pairs $(G,H)$ having the finite-multiplicity property [[(FM)]{}]{}.
\[thm:fm\] Suppose $(G,H)$ is a reductive symmetric pair. Then the following two conditions are equivalent:
1. $(G,H)$ satisfies the finite-multiplicity property [[(FM)]{}]{} for the restriction of admissible smooth representations.
2. The pair $({\mathfrak {g}}, {\mathfrak {h}})$ of Lie algebras is isomorphic to the direct sum of the pairs [[(A)]{}]{}–[[(H)]{}]{} in Theorem \[thm:1.1\] up to outer automorphisms.
\[rem:fm\]
Here are some features of the implication (ii) $\Rightarrow$ (i) in Theorem \[thm:fm\] for the following special settings among (A)–(H):
1. For the pairs (B) and (C), the finite-multiplicity property (FM) is obvious because $\pi$ is a finite-dimensional representation.
2. For the pairs (D) ([*[i.e.]{}*]{} $H=K$), the finite-multiplicity property (FM) is trivial by the definition of admissible representations. (However, there are a number of equivalent conditions on admissibility, and the proof of Fact \[fact:1.4\] given in [@xtoshitoshima] is not a tautology for $H=K$ but includes a microlocal proof of the classical fact that quasisimple irreducible representations are admissible, which was first proved by Harish-Chandra [@HC].)
3. For the pairs (F), we have a uniform estimate of the multiplicities, as we shall see in Subsection \[subsec:BM\].
4. For the pairs (G), [*[i.e.]{}*]{}, $(G,H)=(G' \times G', \operatorname{diag}G')$, the finite-multiplicity property (FM) can be interpreted as the finiteness of linearly independent invariant trilinear forms, see Subsection \[subsec:group\].
\[rem:fm2\] [[ The property (FM) is a condition on the pair $(G,H)$ of groups that assures the finiteness of the multiplicity $m(\pi, \tau)$ for [*[arbitrary]{}*]{} $\pi$ and $\tau$. On the other hand, we may discuss a condition on the triple $(G,H, \pi)$ for which $m(\pi, \tau)$ is finite for arbitrary $\tau$. This direction was pursued in [@Kb2] under the additional assumption of discrete decomposability of branching laws (referred to as [*[$H$-admissible restriction]{}*]{}), and the classification theory has been recently studied in [@decoAq; @xtoshiyoshima], particularly for infinite-dimensional representations $\pi$ of $G$ ([*[e.g.]{}*]{}, Zuckerman’s derived functor modules, minimal representations, [*[etc.]{}*]{}). ]{}]{}
Uniformly bounded multiplicities {#subsec:BM}
--------------------------------
In addition to the aforementioned finite-multiplicity property [**[(FM)]{}**]{}, we consider the following two properties on a pair of reductive groups $(G,H)$:
1. ([*[Bounded-multiplicity restriction]{}*]{}) There exists a constant $C< \infty$ such that $$\dim \operatorname{Hom}_H(\pi|_H, \tau)
\le C,$$ for any irreducible smooth representation $\pi$ of $G$ and for any irreducible smooth representation $\tau$ of $H$.
2. ([*[Multiplicity-free restriction]{}*]{}) $$\dim \operatorname{Hom}_H (\pi|_H, \tau) \le 1$$ for any irreducible smooth representation $\pi$ of $G$ and for any irreducible smooth representation $\tau$ of $H$.
Clearly, we have $
\text{(MF) $\Rightarrow$ (BM) $\Rightarrow$ (FM)}.
$ Fact \[fact:1.4\] is summarized by the following equivalences in the vertical direction: $$\begin{aligned}
{6}
& \text{(MF)}
&& \quad\,\,\Rightarrow
&& \text{(BM)}
&& \Rightarrow
&& \text{(FM)}
&& \cdots \text{Representation Theory}
\\
&
&& {\text{\small{\cite[Theorem D]{xtoshitoshima}}}}
&& \,\,\Updownarrow
&&
&& \,\,\Updownarrow {\text{\small{\cite[Theorem C]{xtoshitoshima}}}}
&&
\\
&
&&
&& \text{(BB)}
&& \Rightarrow
&& \text{(PP)}
&& \cdots \text{Geometry of flag varieties}
\\\end{aligned}$$
We note that the properties (FM) and (BM) depend only on the Lie algebras $({\mathfrak {g}}, {\mathfrak {h}})$. Moreover, the bounded-multiplicity property (BM) depends only on the complexified Lie algebras $({\mathfrak {g}}_{\mathbb{C}}, {\mathfrak {h}}_{\mathbb{C}})$, as was proved in [@xtoshitoshima]. On the other hand, the multiplicity-free property (MF) is not determined by the pair of Lie algebras $({\mathfrak {g}}, {\mathfrak {h}})$, but depends on the groups $G$ and $H$. For example, the best constant is $C=2$ for $(G,H)=(SL(2,{\mathbb{R}}), SO(1,1))$ and $C=1$ for $(G',H')=(O(2,1), O(1,1))$, although the Lie algebras $({\mathfrak {g}}, {\mathfrak {h}})$ and $({\mathfrak {g}}', {\mathfrak {h}}')$ are isomorphic to each other.
As a corollary of Fact \[fact:1.4\] and Proposition \[prop:cpx\], we have a classification of symmetric pairs $({\mathfrak {g}}, {\mathfrak {h}})$ satisfying the property (BM):
\[cor:B\] Suppose $({\mathfrak {g}}, {\mathfrak {h}})$ is a real reductive symmetric pair. Then the following three conditions are equivalent:
1. For any real reductive Lie groups $G \supset H$ with Lie algebras ${\mathfrak {g}} \supset {\mathfrak {h}}$, respectively, the pair $(G,H)$ satisfies the bounded multiplicity property [[(BM)]{}]{} for restriction.
2. There exists a pair of [[(]{}]{}possibly disconnected[[)]{}]{} real reductive Lie groups $G \supset H$ such that $(G,H)$ satisfies the multiplicity-free property [[(MF)]{}]{} for restriction.
3. The pair of the Lie algebras $({\mathfrak{g}},{\mathfrak{h}})$ is isomorphic [[(]{}]{}up to outer automorphisms[[)]{}]{} to the direct sum of pairs [[(A)]{}]{}, [[(B)]{}]{} and [[(F1)]{}]{} – [[(F5)]{}]{}.
The implication (ii) $\Rightarrow$ (i) is obvious as mentioned. The equivalence (i) $\Leftrightarrow$ (iii) is given in [@xtoshitoshima Theorem D]. The implication (iii) $\Rightarrow$ (ii) was proved in Sun–Zhu [@SZ]. (Thus there are two different proofs for the implication (iii) $\Rightarrow$ (i).) As a more refined form of the implication (iii) $\Rightarrow$ (ii), Gross and Prasad [@GP] formulated a conjecture about the restriction of an irreducible admissible tempered representation of an inner form $G$ of the group $O(n)$ over a local field to a subgroup which is an inner form of $O(n-1)$ ([*[cf.]{}*]{} (F2) and (F4) for the Archimedean field).
\[ex:sbon\] [[ For the pair $(G,H)
=(O(n+1,1), O(n,1))$, the space $\operatorname{Hom}_H(\pi|_H, \tau)$ of continuous $H$-homomorphisms was classified in [@xtsbon] for all spherical principal series representations $\pi$ and $\tau$ of $G$ and $H$, respectively. This corresponds to a special case of (F5) in Corollary \[cor:B\]. The classification was based on the explicit orbit decomposition [@xtsbon Chapter 5] $$G \backslash (G \times G)/(P_G \times P_G)
\simeq
P_G \backslash G / P_G,$$ and a meromorphic family of $H$-intertwining operators were constructed for each orbit. ]{}]{}
Invariant trilinear forms {#subsec:group}
-------------------------
A special case of a symmetric pair is the group case $$(G,H)=(G' \times G', {\operatorname{diag}} G'),$$ for which the branching problem deals with the decomposition of the tensor product of two irreducible representations of the group $G'$.
Furthermore, the pair $(G' \times G', \operatorname{diag}G')$ satisfies [[(PP)]{}]{} if and only if the homogeneous space $(G' \times G' \times G')/\operatorname{diag} G'$ is a real spherical variety in view of the following isomorphism: $$(P_{G'}\times P_{G'}\times P_{G'})\backslash(G' \times G' \times G')/
\operatorname{diag}G'
\simeq
(P_{G'} \times P_{G'})\backslash(G' \times G')/ P_{G'}.$$
By these observations, we can interpret Theorem \[thm:fm\] and Corollary \[cor:B\] in the following form (cf. [@xtoshi95]):
\[cor:1.2-copy\] Suppose $G$ is a simple Lie group. Then the following three conditions on $G$ are equivalent:
1. For any triple of admissible smooth representations $\pi_1$, $\pi_2$, and $\pi_3$ of $G$, $$\dim \operatorname{Hom}_G(\pi_1 \otimes \pi_2,
\pi_3)< \infty.$$
2. For any triple of admissible smooth representations $\pi_1$, $\pi_2$ and $\pi_3$ of $G$, the space of invariant trilinear forms is finite-dimensional: $$\dim \operatorname{Hom}_G(\pi_1 \otimes \pi_2 \otimes \pi_3, {\mathbb{C}})<\infty.$$
3. Either $G$ is compact or ${\mathfrak {g}}$ is isomorphic to ${\mathfrak{o}}(n,1)$ $(n \ge 2)$.
Suppose $G$ is a simple Lie group. Then the following three conditions on $G$ are equivalent:
1. There exists a constant $C< \infty$ such that $$\dim \operatorname{Hom}_G(\pi_1 \otimes \pi_2, \pi_3) \le C,$$ for any irreducible smooth representations $\pi_1$, $\pi_2$, and $\pi_3$ of $G$.
2. There exists a constant $C< \infty$ such that $$\dim \operatorname{Hom}_G(\pi_1 \otimes \pi_2 \otimes \pi_3, {\mathbb{C}})
\le C,$$ for any irreducible smooth representations $\pi_1$, $\pi_2$, and $\pi_3$ of $G$.
3. The Lie algebra ${\mathfrak {g}}$ is isomorphic to one of $
{\mathfrak {su}}(2) \simeq {\mathfrak {o}}(3)
$, $
{\mathfrak {su}}(1,1) \simeq {\mathfrak {sl}}(2,{\mathbb{R}})
\simeq
{\mathfrak {o}}(2,1)
$ or $
{\mathfrak {sl}}(2,{\mathbb{C}})
\simeq
{\mathfrak {o}}(3,1)
$.
Building on the properties established in Corollary \[cor:1.2-copy\], a meromorphic family of invariant trilinear forms on principal series representations of the Lorentz group $O(n,1)$ was studied in [@CKOP].
[99]{} M. Berger, [*[Les espaces sym[é]{}triques non compacts]{}*]{}, Ann. Sci. [É]{}cole Norm. Sup. [**[74]{}**]{} (1957), 85–177.
Y. Benoist, [*[Multiplicit[é]{} un pour les espaces sym[é]{}triques exponentiels]{}*]{}, M[é]{}m. Soc. Math. France (N.S.) No. 15 (1984), 1–37.
A. Cooper, [*The classifying ring of groups whose classifying ring is commutative*]{}, MIT Thesis (1975), unpublished.
J.-L. Clerc, T. Kobayashi, B. Ørsted, and M. Pevzner, [*[Generalized Bernstein–Reznikov integrals]{}*]{}, Math. Ann. **349** (2011), [395–431](http://dx.doi.org/10.1007/s00208-010-0516-4).
E. B. Dynkin, [*[Semisimple subalgebras of semisimple Lie algebras]{}*]{}, Amer. Math. Soc. Transl. [**[6]{}**]{} (1957), 111–244.
B. Gross, D. Prasad, [*[On the decomposition of a representation of $SO_n$ when restricted to $SO_{n-1}$]{}*]{}, Canad. J. Math. [**44**]{} (1992), 974–1002.
Harish-Chandra, [*[Representations of semisimple Lie groups on a Banach space]{}*]{}, Proc. Nat. Acad. Sci. U. S. A. [**37**]{} (1951), 170–173.
B. Kimelfeld, [*[Homogeneous domains on flag manifolds]{}*]{}, J. Math. Anal. Appl. [**121**]{} (1987), 506–588.
T. Kobayashi, *Discrete decomposability of the restriction of $A_{\mathfrak{q}}(
\lambda)$ with respect to reductive subgroups and its applications*, Invent. Math. **117** (1994), [181–205](http://dx.doi.org/10.1007/BF01232239); Part II, Ann. of Math. (2) **147** (1998), [709–729](http://dx.doi.org/10.2307/120963); Part III, Invent. Math. **131** (1998), [229–256](http://dx.doi.org/10.1007/s002220050203).
, [*[Introduction to harmonic analysis on real spherical homogeneous spaces]{}*]{}, Proceedings of the 3rd Summer School on Number Theory in Nagano (F. Sato, ed.), 1995, 22–41 (in Japanese).
, [*[F-method for symmetry breaking operators]{}*]{}, Differential Geom. Appl. [**[33]{}**]{} (2014), 272–289, Special issue in honour of M. Eastwood, [DOI:10.1016/j.difgeo.2013.10.003](http://dx.doi.org/10.1016/j.difgeo.2013.10.003). Published online 20 November 2013, (available at [arXiv:1303.3541](http://arxiv.org/abs/1303.3541)).
, [*[Shintani functions, real spherical manifolds, and symmetry breaking operators]{}*]{}, preprint, 36 pages, [[arXiv:1401.0117](http://arxiv.org/abs/1401.0117)]{}.
T. Kobayashi, T. Oshima, [*[Finite multiplicity theorems for induction and restriction]{}*]{}, Adv. Math. [**[248]{}**]{} (2013), [921–944](http://dx.doi.org/10.1016/j.aim.2013.07.015), (available at [arXiv:1108.3477](http://arxiv.org/abs/1108.3477)).
T. Kobayashi, Y. Oshima, [*[Classification of discretely decomposable $A_{\mathfrak{q}}(\lambda)$ with respect to reductive symmetric pairs]{}*]{}, Adv. Math., [**[231]{}**]{} (2012), [2013–2047](http://dx.doi.org/10.1016/j.aim.2012.07.006).
, [*[Classification of symmetric pairs with discretely decomposable restrictions of $({\mathfrak {g}}, K)$-modules]{}*]{}, Crelle's Journal, published online 13 July 2013, 19 pp. [doi:10.1515/crelle-2013-0045](http://dx.doi.org/10.1515/crelle-2013-0045).
T. Kobayashi, B. Speh, [*Intertwining operators and the restriction of representations of rank one orthogonal groups*]{}, C. R. Acad. Sci. Paris, Ser. I, [**[352]{}**]{} (2014), [[89–94](http://dx.doi.org/10.1016/j.crma.2013.11.018)]{}; [*[Symmetry breaking for representations of rank one orthogonal groups]{}*]{}, 131 pages, [arXiv:1310.3213](http://arxiv.org/abs/1310.3213).
M. Krämer, *Multiplicity free subgroups of compact connected Lie groups*, Arch. Math. (Basel) **27** (1976), [28–36](http://dx.doi.org/DOI:10.1007/BF01224637).
T. Oshima, J. Sekiguchi, [*[Eigenspaces of invariant differential operators on an affine symmetric space]{}*]{}, Invent. Math. [**[57]{}**]{} (1980), 1–81.
, [*[The restricted root system of a semisimple symmetric pair]{}*]{}, Adv. Stud. Pure Math., [**[4]{}**]{} (1984), 433–497.
B. Sun, C.-B. Zhu, *Multiplicity one theorems: the Archimedean case*, Ann. of Math. [**[175]{}**]{} (2012), 23–44.
N. R. Wallach, Real reductive groups. I, II, Pure and Applied Mathematics, [**[132]{}**]{} Academic Press, Inc., Boston, MA, 1988. xx+412 pp.
[^1]: Kavli IPMU (WPI) and Graduate School of Mathematical Sciences, the University of Tokyo, Meguro-ku, Tokyo, 153-8914, Japan, E-mail address: [email protected]
[^2]: Faculty of Letters, Ryukoku University, Kyoto, 612-8577, Japan, E-mail address: [email protected]
---
abstract: |
Our computations show that there is a total of $40$ pairs of degree six coprime polynomials $f,g$ where $f(x)=(x-1)^6$, $g$ is a product of cyclotomic polynomials, $g(0)=1$ and $f,g$ form a primitive pair. The aim of this article is to determine whether the corresponding $40$ symplectic hypergeometric groups with a maximally unipotent monodromy follow the same dichotomy between arithmeticity and thinness that holds for the $14$ symplectic hypergeometric groups corresponding to the pairs of degree four polynomials $f,g$ where $f(x)=(x-1)^4$ and $g$ is as described above. As a result we prove that at least $18$ of these $40$ groups are arithmetic in $\mathrm{Sp}(6)$.
In addition, we extend our search to all degree six symplectic hypergeometric groups. We find that there is a total of $458$ pairs of polynomials (up to scalar shifts) corresponding to such groups. For $211$ of them, the absolute values of the leading coefficients of the difference polynomials $f-g$ are at most $2$ and the arithmeticity of the corresponding groups follows from Singh and Venkataramana, while the arithmeticity of one more hypergeometric group follows from Detinko, Flannery and Hulpke.
In this article, we show the arithmeticity of $160$ of the remaining $246$ hypergeometric groups.
address:
- 'Mathematisches Institut, Georg-August-Universität Göttingen, Germany'
- 'Mathematisches Institut, Georg-August-Universität Göttingen, Germany'
- 'Department of Mathematics, Indian Institute of Technology Bombay, Mumbai, India'
- 'Department of Mathematics, Indian Institute of Technology Bombay, Mumbai, India'
author:
- 'Jitendra Bajpai, Daniele Dona, Sandip Singh and Shashank Vikram Singh'
bibliography:
- 'BDSS.bib'
nocite: '\nocite{}'
title: Symplectic Hypergeometric Groups of degree six
---
Introduction {#sec:intro}
============
A hypergeometric differential equation of order $n$ is an ordinary differential equation of order $n$ with three regular singular points, defined on the thrice-punctured Riemann sphere ${\mathbb{P}}^{1}({{\mathbb{C}}})\backslash \{ 0,1,\infty\}$. Let $\theta = z \frac{d}{dz}$ and $$\alpha = (\alpha_1, \ldots , \alpha_n) , \beta = ( \beta_1, \ldots , \beta_n) \in {{\mathbb{C}}}^{n}.$$ We define the hypergeometric differential equation of order $n$ by $$\label{hde}
[z(\theta + \alpha_1) \cdots (\theta + \alpha_n) - (\theta+\beta_1 -1)\cdots (\theta+\beta_{n} -1 )] u(z) =0\,.$$
This has $n$ linearly independent solutions which can be explicitly expressed as hypergeometric functions of type ${}_{n} F_{n-1}$ around any point $z \in {\mathbb{P}}^{1}({{\mathbb{C}}})\backslash \{ 0,1,\infty\}$. For $\alpha=(\alpha_1, \ldots , \alpha_n) $ and $\beta=(\beta_1, \ldots , \beta_{n-1})$, we define
$${}_{n} F_{n-1}(\alpha_1,\ldots, \alpha_n; \beta_1, \ldots, \beta_{n-1} | z)= \sum_{k=0}^{\infty} \frac{(\alpha_1)_{k} \ldots (\alpha_n)_{k} }{(\beta_1)_{k} \ldots (\beta_{n-1})_{k}} \frac{z^{k}}{k!}\,,$$ where $(\alpha)_{k}= \frac{\Gamma(\alpha+k)}{\Gamma(\alpha)}$. Then $n$ linearly independent solutions $u(z)$ of the equation (\[hde\]) are given by the functions $$z^{1-\beta_j} {}_{n} F_{n-1}(1+\alpha_1-\beta_j,\ldots, 1+\alpha_n-\beta_j; 1+\beta_1-\beta_j, \ldots, \overline{1+\beta_j -\beta_j}, \ldots, 1+\beta_{n} -\beta_j | z)$$ where $\overline{1+\beta_j -\beta_j}$ indicates that this term is omitted from the list of lower parameters.
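As a quick sanity check on the series definition, the partial sums can be computed directly. The following Python sketch is our own illustration (the helper name `hyper_series` and the truncation parameter are ours, not from the original sources); it updates the Pochhammer ratio incrementally rather than evaluating Gamma functions at every term, and it can be tested against the classical identity ${}_2F_1(1,1;2\,|\,z)=-\log(1-z)/z$.

```python
import math

def hyper_series(alphas, betas, z, terms=80):
    """Partial sum of nF_{n-1}(alphas; betas | z) for |z| < 1.

    The ratio prod (a_i)_k / prod (b_j)_k is updated incrementally:
    multiplying by prod(a_i + k) and dividing by prod(b_j + k) turns
    the k-th ratio into the (k+1)-th one."""
    total, ratio = 0.0, 1.0
    for k in range(terms):
        total += ratio * z ** k / math.factorial(k)
        ratio *= math.prod(a + k for a in alphas)
        ratio /= math.prod(b + k for b in betas)
    return total
```

For instance, `hyper_series([1, 1], [2], 0.5)` agrees with $-\log(1-z)/z$ at $z=1/2$ to machine precision, since the general term reduces to $z^k/(k+1)$.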
Now it follows that the fundamental group $\pi_1$ of ${\mathbb{P}}^{1}({{\mathbb{C}}})\backslash \{ 0,1,\infty\}$ acts on the (local) solution space of the hypergeometric equation (\[hde\]) and we get the monodromy representation $\rho:\pi_1\longrightarrow {\mathrm{GL}}(V)$, where $V$ is the $n$-dimensional solution space of the differential equation (\[hde\]) in a small neighbourhood of a point $z_0$ (say) in ${\mathbb{P}}^{1}({{\mathbb{C}}})\backslash \{ 0,1,\infty\}$. The subgroup $\rho(\pi_1)$ of ${\mathrm{GL}}(V)$ is said to be the monodromy group of the hypergeometric differential equation (\[hde\]). We also call it the hypergeometric group associated to the parameters $\alpha = (\alpha_1, \ldots , \alpha_n) , \beta = ( \beta_1, \ldots , \beta_n) \in {{\mathbb{C}}}^{n}.$
Levelt [@BH Theorem 3.5] showed that if $\alpha_j-\beta_k\notin{{\mathbb{Z}}}$ for all $1\le j,k\le n$, then there exists a basis of the solution space of the hypergeometric equation with respect to which the hypergeometric group corresponding to the parameters $\alpha = (\alpha_1, \ldots, \alpha_n), \beta = (\beta_1, \ldots, \beta_n) \in {{\mathbb{C}}}^{n}$ is the subgroup of ${\mathrm{GL}}_n({{\mathbb{C}}})$ generated by the companion matrices of the polynomials $$f(x)=\prod_{j=1}^n(x-e^{2\pi i\alpha_j}),\quad g(x)=\prod_{j=1}^n(x-e^{2\pi i\beta_j})$$ and any other hypergeometric group having the same parameters is a conjugate of this one. Note that the condition $\alpha_j-\beta_k\notin{{\mathbb{Z}}}$ for all $1\le j,k\le n$ ensures that the polynomials $f$ and $g$ do not have any common root.
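Levelt's generators are easy to realize concretely. The NumPy sketch below is our own illustration (the helper names `companion` and `evaluate` are ours): it builds companion matrices with $1$s on the subdiagonal and negated coefficients in the last column, for the pair $f=\Phi_1(x)^6$, $g=\Phi_3(x)\Phi_6(x)^2$ discussed below, and checks the Cayley–Hamilton identities $f(A)=g(B)=0$, which hold exactly because the characteristic polynomial of a companion matrix is the polynomial itself.

```python
import numpy as np

def companion(coeffs):
    # Companion matrix of x^n + c_{n-1}x^{n-1} + ... + c_0 for
    # coeffs = [c_0, ..., c_{n-1}]: 1s on the subdiagonal,
    # negated coefficients in the last column.
    n = len(coeffs)
    M = np.zeros((n, n), dtype=np.int64)
    for i in range(1, n):
        M[i, i - 1] = 1
    M[:, -1] = [-c for c in coeffs]
    return M

def evaluate(coeffs, M):
    # Evaluate the monic polynomial with coefficients [c_0, ..., c_{n-1}, 1]
    # at the integer matrix M, using exact integer arithmetic.
    R = np.zeros_like(M)
    P = np.eye(M.shape[0], dtype=M.dtype)
    for c in coeffs + [1]:
        R = R + c * P
        P = P @ M
    return R

# f(x) = Phi_1(x)^6 = (x-1)^6 and g(x) = Phi_3(x) Phi_6(x)^2;
# by Levelt's theorem, Gamma(f, g) is generated by A and B.
f = [1, -6, 15, -20, 15, -6]
g = [1, -1, 2, -1, 2, -1]
A, B = companion(f), companion(g)
```

Since the constant terms of $f$ and $g$ equal $1$, both $A$ and $B$ lie in ${\mathrm{GL}}_6({\mathbb{Z}})$.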
Now we consider the case $n=6$ and the pair of polynomials $f,g$ that are products of cyclotomic polynomials, do not have any common root, form a primitive pair (that is, there do not exist polynomials $f_1,g_1\in{{\mathbb{Z}}}[x]$ so that $f(x)=f_1(x^k), g(x)=g_1(x^k)$ for $k\ge 2$), and $f(0)=g(0)=1$. Then, it follows from Beukers and Heckman [@BH Theorem 6.5] that the corresponding hypergeometric group $\Gamma(f,g)$ preserves a non-degenerate symplectic form $\Omega$ on ${{\mathbb{Q}}}^6$ and $\Gamma(f,g)$ is Zariski dense inside the corresponding symplectic group ${\mathrm{Sp}}_\Omega$. So in our case $\Gamma(f,g)\subseteq{\mathrm{Sp}}_\Omega({{\mathbb{Z}}})$ and we determine the pairs $f,g$ corresponding to which $\Gamma(f,g)$ has finite index in ${\mathrm{Sp}}_\Omega({{\mathbb{Z}}})$; whenever this occurs we call $\Gamma(f,g)$ arithmetic in the corresponding symplectic group.
Note that we count all such pairs $f,g$ up to scalar shifts. By this we mean that it is equivalent to study the hypergeometric groups $\Gamma(f,g)$ and $\Gamma(f', g')$ when $f'(x)=f(-x)$ and $g'(x)=g(-x)$. This equivalence can be explained following Remark 1.2 of [@S17], by making the appropriate transition from $4 \times 4$ matrices to $6 \times 6$ matrices. We illustrate this with the following example: consider the pair of polynomials $f(x)=\Phi_{1}(x)^{6}$, $g(x)=\Phi_3(x)\Phi_{6}(x)^{2}$ associated to the parameters $\alpha=(0,0,0,0,0,0)$ and $\beta=(\frac{1}{3}, \frac{2}{3},\frac{1}{6}, \frac{5}{6},\frac{1}{6}, \frac{5}{6} )$, and the pair $f(-x)=\Phi_{2}(x)^{6}$, $g(-x)=\Phi_6(x)\Phi_{3}(x)^{2}$ associated to the parameters $\alpha'=(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})$ and $\beta'=(\frac{1}{3}, \frac{2}{3},\frac{1}{3}, \frac{2}{3},\frac{1}{6}, \frac{5}{6} )$. The pairs $\alpha', \beta'$ and $\alpha, \beta$ are transformed into one another by simply adding $\frac{1}{2}$ to each of their entries. Here $\Phi_{n}(x)$ denotes the $n$-th cyclotomic polynomial. The pairs $\alpha, \beta$, or equivalently the pairs $f,g$, satisfying all the conditions discussed above are the [*qualified pairs*]{} considered in this article, and we find $458$ such pairs.
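The effect of the scalar shift on cyclotomic factors can be checked mechanically: $x \mapsto -x$ exchanges $\Phi_1 \leftrightarrow \Phi_2$ and $\Phi_3 \leftrightarrow \Phi_6$ up to sign, and the signs cancel in even-degree products. The following NumPy sketch is our own (helper names ours; coefficient arrays are listed highest degree first):

```python
import numpy as np
from functools import reduce

# A few cyclotomic polynomials as coefficient arrays, highest degree first.
Phi = {1: np.array([1, -1]), 2: np.array([1, 1]),
       3: np.array([1, 1, 1]), 6: np.array([1, -1, 1])}

def prod(polys):
    # Product of a list of polynomials given by coefficient arrays.
    return reduce(np.polymul, polys)

def sub_minus_x(p):
    # p(x) -> p(-x): negate every odd-degree coefficient.
    degrees = np.arange(len(p))[::-1]
    return p * (-1) ** degrees

f = prod([Phi[1]] * 6)              # Phi_1(x)^6 = (x-1)^6
g = prod([Phi[3], Phi[6], Phi[6]])  # Phi_3(x) Phi_6(x)^2
```

Here `sub_minus_x(f)` coincides with the coefficients of $\Phi_2(x)^6$ and `sub_minus_x(g)` with those of $\Phi_6(x)\Phi_3(x)^2$, matching the shift of the parameters by $\frac{1}{2}$ described above.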
The arithmeticity of the degree six symplectic hypergeometric groups has also been investigated by Detinko, Flannery and Hulpke [@DFH], who found one arithmetic group associated to the pair of polynomials $f=\Phi_{3}(x) \Phi_5(x)$ and $g=\Phi_{14}(x)$; it is listed in Table 2 of [@DFH]. For the complete list of their investigation see [@H]. Notice that all the arithmetic groups in their list, except the ones associated to the pair of polynomials $f=\Phi_{3}(x) \Phi_5(x)$, $g=\Phi_{14}(x)$ and the pair $f'=\Phi_6(x) \Phi_{10}(x)$, $g'=\Phi_7(x)$ (which is simply a scalar shift of the pair $f, g$), are arithmetic by the criterion of Singh and Venkataramana [@SV Theorem 1.1].
The following proposition easily follows from Singh and Venkataramana, see [@SV Remark 5.1].
\[Proposition\] Let $f,g$ be a pair of degree $n$ polynomials which are products of cyclotomic polynomials, do not have any common roots, form a primitive pair and have constant terms equal to $1$ (these conditions ensure that $n$ must be even). Suppose the leading coefficient of the difference polynomial $f-g$ has absolute value $\ge 3$. Let $e_1,e_2,\ldots,e_n$ be the standard basis vectors of ${{\mathbb{Q}}}^n$ over ${{\mathbb{Q}}}$ and $I$ be the $n\times n$ identity matrix. Let $A,B$ be the companion matrices of the polynomials $f,g$, respectively, and $v=(A^{-1}B-I)(e_n)$.
If there exists an element $\gamma\in\Gamma(f,g)$ such that the three vectors $v, \gamma (v), \gamma^{-1}(v)$ are [*linearly independent*]{} and the coefficient of $e_n$ in $\gamma(v)$ is either $\pm 2$ or $\pm 1$, then the corresponding hypergeometric group $\Gamma(f,g)$ is arithmetic in the corresponding symplectic group.
The above proposition is proved by replacing either $A^k$ or $B^k$ (depending on whether $\{v,A^k(v), A^{-k}(v)\}$ or $\{v,B^k(v), B^{-k}(v)\}$ is linearly independent, cf. [@SV Lemma 4.2]) by $\gamma$ in the proof of [@SV Theorem 1.1]. For the sake of completeness we provide a proof of the above proposition using [@SV Theorem 1.2] in Section \[ProofoftheProposition\].
It follows then that to show the arithmeticity of a symplectic hypergeometric group we only need to find an element $\gamma\in\Gamma(f,g)$ that satisfies the hypotheses of Proposition \[Proposition\]. To apply this criterion we look at the hypergeometric groups $\Gamma(f,g)$ (where $f,g$ are products of cyclotomic polynomials) inside ${\mathrm{Sp}}(6)$ and find that there are in total $458$ hypergeometric groups satisfying the conditions of Beukers and Heckman [@BH] so that they are Zariski dense inside the corresponding symplectic groups (cf. Tables A, B, C and D). Out of these $458$ groups, there are $211$ (cf. Table C) satisfying the criterion of Singh and Venkataramana [@SV Theorem 1.1] and their arithmeticity follows. There are $247$ remaining groups (cf. Tables A, B and D) which do not satisfy the criterion of Singh and Venkataramana [@SV Theorem 1.1] and out of them there are $161$ (cf. Table A and Table B) which satisfy the hypotheses of Proposition \[Proposition\] and their arithmeticity follows.
Note that the linear independence condition in Proposition \[Proposition\] is not redundant: it is not always true that if the coefficient of $e_n$ in $\gamma(v)$ has absolute value $1$ or $2$ for some $\gamma\in\Gamma(f,g)$, then the three vectors $v, \gamma (v), \gamma^{-1}(v)$ are linearly independent. We have the following two examples.
In case $n=4$, let $$\alpha=\left(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3}\right),\ \beta=\left(\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4}\right).$$ In this case the corresponding polynomials are $$f(x)=(x+1)^2(x^2+x+1)=x^4+3x^3+4x^2+3x+1,\ g(x)=(x^2+1)^2=x^4+2x^2+1.$$ Now, if we denote, respectively, by $A$ and $B$ the companion matrices of $f$ and $g$, then $v=(3,2,3,0)$ and for $\gamma=BA$, the coefficient of $e_4$ in $\gamma(v)$ is $2$ but the vectors $v, \gamma (v), \gamma^{-1}(v)$ are [*not*]{} linearly independent.
In case $n=6$, let $$\alpha=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{5}{6}\right),\ \beta=\left(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\right).$$ In this case the corresponding polynomials are $$f(x)=(x+1)^4(x^2-x+1)={x}^{6}+3\,{x}^{5}+3\,{x}^{4}+2\,{x}^{3}+3\,{x}^{2}+3\,x+1,\ g(x)=x^6+x^3+1.$$ Now, if we denote, respectively, by $A$ and $B$ the companion matrices of $f$ and $g$, then $v=(3,3,1,3,3,0)$ and for $\gamma=B^2A$, the coefficient of $e_6$ in $\gamma(v)$ is $1$ but the vectors $v, \gamma (v), \gamma^{-1}(v)$ are [*not*]{} linearly independent.
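Both counterexamples can be verified directly. The sketch below is our own check (helper names ours; the companion-matrix convention, with $1$s on the subdiagonal and negated coefficients in the last column, is the one used by the SageMath program in Section \[se:sage\]): it computes $v$, $\gamma(v)$ and $\gamma^{-1}(v)$ for the two examples and confirms that the three vectors span only a two-dimensional space, while the coefficient of $e_n$ in $\gamma(v)$ is $2$ and $1$, respectively.

```python
import numpy as np

def companion(coeffs):
    # Companion matrix of a monic polynomial with coefficients
    # coeffs = [c_0, ..., c_{n-1}]: 1s on the subdiagonal,
    # negated coefficients in the last column.
    n = len(coeffs)
    M = np.zeros((n, n), dtype=np.int64)
    for i in range(1, n):
        M[i, i - 1] = 1
    M[:, -1] = [-c for c in coeffs]
    return M

def inv(M):
    # The matrices here have determinant +-1, so the inverse is integral;
    # we round the float inverse and check it exactly.
    Minv = np.rint(np.linalg.inv(M.astype(float))).astype(np.int64)
    assert np.array_equal(Minv @ M, np.eye(M.shape[0], dtype=np.int64))
    return Minv

def triple(fc, gc, word):
    # Returns (v, gamma(v), gamma^{-1}(v)) where v = (A^{-1}B - I)e_n and
    # gamma is the product of the companion matrices named in `word`.
    A, B = companion(fc), companion(gc)
    n = len(fc)
    e_n = np.zeros(n, dtype=np.int64)
    e_n[-1] = 1
    v = inv(A) @ B @ e_n - e_n
    gamma = np.eye(n, dtype=np.int64)
    for ch in word:
        gamma = gamma @ {'A': A, 'B': B}[ch]
    return v, gamma @ v, inv(gamma) @ v

# n = 4: f = (x+1)^2(x^2+x+1), g = (x^2+1)^2, gamma = BA
v4, gv4, giv4 = triple([1, 3, 4, 3], [1, 0, 2, 0], 'BA')
# n = 6: f = (x+1)^4(x^2-x+1), g = x^6+x^3+1, gamma = B^2A
v6, gv6, giv6 = triple([1, 3, 3, 2, 3, 3], [1, 0, 0, 1, 0, 0], 'BBA')
```

In the first case one finds the explicit relation $\gamma^{-1}(v)=-4v-\gamma(v)$, and in the second $\gamma^{-1}(v)=-2v-\gamma(v)$, so the rank of the triple is $2$ in both cases.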
Therefore, we cannot drop the linear independence condition from the above proposition if we want to use the method of the proof of [@SV Theorem 1.1].
One of the starting motivations behind the work of this article lies in an attempt to answer a question asked by N. Katz during the workshop on “Thin Groups and Super Approximation" held at IAS Princeton in March 2016, where the first and the third author were among the participants. He asked whether the degree six symplectic hypergeometric groups with a maximally unipotent monodromy follow the same pattern as the 14 degree four symplectic hypergeometric groups with a maximally unipotent monodromy: we know in fact from [@S15S; @SV] that 7 of the 14 degree four groups are arithmetic, and the other 7 are thin by [@BT], and one may wonder whether a similar dichotomy occurs in this particular family of degree six symplectic hypergeometric groups.
We summarize our effort to answer the question above in the following theorem.
\[thm1\] There are $40$ degree six symplectic hypergeometric groups with a maximally unipotent monodromy, listed in Table A, out of which at least $18$ are arithmetic.
In addition to these $18$ arithmetic hypergeometric groups, we extend our search to the remaining hypergeometric groups: one of them is known to be arithmetic by [@DFH], and we are able to find $142$ more. More precisely, we conclude the following.
\[thm2\] The $143$ hypergeometric groups appearing in Table B in Section \[arithmeticexamples\] are arithmetic.
The arithmeticity of the groups mentioned in Theorem \[thm1\] and Theorem \[thm2\] above follows from Proposition \[Proposition\].
Proof of Proposition \[Proposition\] {#ProofoftheProposition}
====================================
It follows that the hypergeometric group $\Gamma(f,g)$ preserves a non-degenerate symplectic form $\Omega$ and $\Gamma(f,g)\subseteq{\mathrm{Sp}}_\Omega({{\mathbb{Z}}})$ is a Zariski dense subgroup (cf. [@BH Theorem 6.5]). Hence to use [@SV Theorem 1.2] we need to find three transvections $C_1,C_2,C_3\in \Gamma(f,g)$ and vectors $w_1,w_2,w_3\in{{\mathbb{Z}}}^n$ so that the set $\{w_1,w_2,w_3\}$ is linearly independent, ${{\mathbb{Z}}}w_i=(C_i-1)({{\mathbb{Z}}}^n),\ \forall 1\le i\le 3$ and $\Omega(w_i,w_j)\neq 0$ for some $1\le i,j\le 3$. With these conditions it follows that each $C_i$ maps $W$, the subspace spanned by the vectors $w_1,w_2,w_3$, into itself and then we also need to show that the group generated by the restrictions $C_1|_W,C_2|_W,C_3|_W$ contains a nontrivial element of the unipotent radical of ${\mathrm{Sp}}_W$.
We consider the following transvections $C_1=C=A^{-1}B$, $C_2=\gamma^{-1}C\gamma$, $C_3=\gamma C\gamma^{-1}$ and $w_1=v=(C-1)(e_n)$, $w_2=\gamma ^{-1}(v)$ and $w_3=\gamma (v)$. Note that $(C_1-1)(w)=\lambda v$ for all $w\in{{\mathbb{Z}}}^n$ and for some $\lambda\in{{\mathbb{Z}}}$ depending on $w$, and then it follows that for each $1\le i\le 3$, $(C_i-1)(w)=\lambda_iw_i$ for all $w\in{{\mathbb{Z}}}^n$ and for some $\lambda_i\in{{\mathbb{Z}}}$ depending on $w$. Hence the condition that $(C_i-1)({{\mathbb{Z}}}^n)={{\mathbb{Z}}}w_i, \forall 1\le i\le 3$ is satisfied. Also, it is part of the hypotheses of the proposition that the vectors $w_1,w_2,w_3$ are linearly independent.
Just by using the invariance of $\Omega$ under the action of $C$ and its non-degeneracy, we find that $\Omega(v,e_j)=0, \forall 1\le j\le n-1$ and $\Omega(v,e_n)\neq 0$. If $c$ is the coefficient of $e_n$ in $\gamma (v)$, $\Omega(v,\gamma^{-1} v)=\Omega(\gamma v,v)=-c\Omega(v,e_n)\neq 0$ implies that $\Omega(w_1,w_2)\neq0$.
Now, we consider the $3$ dimensional subspace $W=\sum_{j=1}^3{{\mathbb{Q}}}w_j$ and show that the group generated by the restrictions of the $C_i$ (for $1\le i\le 3$) to $W$ contains a nontrivial element of the unipotent radical of the symplectic group of $W$. Since $\dim W=3$ (an odd number), the restriction $\Omega|_W$ of $\Omega$ on $W$ is degenerate and $W\cap W^\perp$ is one dimensional as $\Omega(w_1,w_2)\neq 0$. Let $e\in W$ be a vector such that $W\cap W^\perp=\left<e\right>$. Note that $e$ cannot be written as linear combination of $w_1$ and $w_2$, and hence the set $\{e,w_1,w_2\}$ is linearly independent and gives a basis of $W$. With respect to this basis, ${\mathrm{Sp}}(W)={{\mathbb{Q}}}^2\rtimes{\mathrm{SL}}_2({{\mathbb{Q}}})$ can be realized as $$\left\{\begin{pmatrix}1&u_1&u_2\\0&a_1&a_2\\0&b_1&b_2
\end{pmatrix}: u_1,u_2,a_1,a_2,b_1,b_2\in{{\mathbb{Q}}}, a_1b_2-a_2b_1=1 \right\}$$and $$\left\{\begin{pmatrix}1&u_1&u_2\\0&1&0\\0&0&1
\end{pmatrix}: u_1,u_2\in{{\mathbb{Q}}}\right\}$$is the unipotent radical of ${\mathrm{Sp}}(W)$.
Now, we only need to check that $$\begin{pmatrix}1&u_1&u_2\\0&1&0\\0&0&1
\end{pmatrix}\in\left<C_1|_W,C_2|_W,C_3|_W\right>$$for some $(u_1,u_2)\neq (0,0)\in{{\mathbb{Z}}}^2$.
Since $C_j$ is unipotent and $C_j(e)\in W\cap W^\perp$, it follows that $C_j(e)=e$, $\forall 1\le j\le 3$.
By an easy check we find that $C_1(w_1)=w_1$, $C_1(w_2)=C(\gamma^{-1}(v))=\gamma^{-1}(v)-cv=w_2-cw_1$ and it follows that $$C_1|_W=\begin{pmatrix}
1&0&0\\0&1&-c\\0&0&1
\end{pmatrix}.$$
Also, $C_2(w_1)=\gamma^{-1}C\gamma(v)=\gamma^{-1}(\gamma(v)+cv)=v+c\gamma^{-1}(v)=w_1+cw_2$, $C_2(w_2)=\gamma^{-1}C\gamma(\gamma^{-1}(v))=\gamma^{-1}(v)=w_2$ and it follows that $$C_2|_W=\begin{pmatrix}
1&0&0\\0&1&0\\0&c&1
\end{pmatrix}.$$ Note that, for $c=\pm1,\pm2$, the two matrices $$\begin{pmatrix}
1&-c\\0&1
\end{pmatrix}, \begin{pmatrix}
1&0\\c&1
\end{pmatrix}$$ generate a finite index subgroup of ${\mathrm{SL}}_2({{\mathbb{Z}}})$.
Now, we write the matrix representation of $C_3|_W$ with respect to the basis $\{e,w_1,w_2\}$. Indeed, $C_3(w_1)=\gamma C\gamma^{-1}(v)=\gamma C(-ce_n+v')=\gamma(-c(e_n+v)+v')=\gamma((-ce_n+v')-cv)=\gamma(\gamma^{-1}(v)-cv)=v-c\gamma (v)=w_1-cw_3$, where we write $\gamma^{-1}(v)=-ce_n+v'$ with $v'$ a linear combination of the vectors $e_1,e_2,\ldots,e_{n-1}$ (the coefficient of $e_n$ in $\gamma^{-1}(v)$ equals $-c$ by the computation of $\Omega(v,\gamma^{-1}v)$ above), and hence $v'$ is fixed under the action of $C$. If we write $C_3(w_1)=l_1 e+l_2w_1+l_3w_2$ for some $l_1,l_2,l_3\in{{\mathbb{Q}}}$, then it follows that $l_1\neq0$ as $w_1,w_2,w_3$ are linearly independent. Now, $C_3(w_2)=m_1e+m_2w_1+m_3w_2$ for some $m_1,m_2,m_3\in{{\mathbb{Q}}}$. Then, it follows that
$$C_3|_W=\begin{pmatrix}
1&l_1&m_1\\0&l_2&m_2\\0&l_3&m_3
\end{pmatrix}$$with $l_1\neq0$.
Since $C_3$ is unipotent and for $c=\pm1$ or $\pm2$, $\begin{pmatrix}
1&-c\\0&1
\end{pmatrix}, \begin{pmatrix}
1&0\\c&1
\end{pmatrix}$ generate a finite index subgroup of ${\mathrm{SL}}_2({{\mathbb{Z}}})$, the $2\times 2$ matrix $u=\begin{pmatrix}
l_2&m_2\\l_3&m_3
\end{pmatrix}$ is unipotent and hence there exists an integer $m$ such that $$u^m\in \left<\begin{pmatrix}
1&-c\\0&1
\end{pmatrix}, \begin{pmatrix}
1&0\\c&1
\end{pmatrix}\right>$$and there exists $h\in \langle C_1|_W,C_2|_W\rangle$ such that $$(C_3|_W)^m\cdot h^{-1}=\begin{pmatrix}
1&t_1&t_2\\0&1&0\\0&0&1\end{pmatrix}
\in \langle C_1|_W,C_2|_W,C_3|_W\rangle$$where $(t_1,t_2)=(l_1,m_1)(1+u+u^2+\cdots+u^{m-1})\neq (0,0)$ (since $u$ is unipotent and hence $1+u+u^2+\cdots+u^{m-1}$ is nonsingular; and $l_1\neq0$). Thus the element $(C_3|_W)^m\cdot h^{-1}$ of the group $\left< C_1|_W,C_2|_W,C_3|_W\right>$ is a non-trivial element of the unipotent radical of ${\mathrm{Sp}}_W$ and it now follows from [@SV Theorem 1.2] that the group $\Gamma(f,g)$ satisfying the hypotheses of Proposition \[Proposition\], is arithmetic.
Sage Code {#se:sage}
=========
In this section we present the program that aided us in detecting arithmetic hypergeometric groups. The program is written in SageMath, version 8.9 [@sage]; the computations are quite elementary and could equally well have been performed in other languages, but SageMath is open-source, which is why we chose it.
The program takes two polynomials $f,g$ and an integer $k$, and determines whether there exists some $\gamma\in\Gamma(f,g)$ that satisfies the hypotheses of Proposition \[Proposition\] and can be written as a product of at most $k$ matrices in $\{A,B,A^{-1},B^{-1}\}$, where $A$ and $B$ are the companion matrices of $f$ and $g$ respectively. Example values have already been inserted, with $f=\Phi_{1}(x)^{6}$, $g=\Phi_{3}(x)^{2}\Phi_{6}(x)$ and $k=9$, corresponding to the parameters of entry $17$ of Table A; an interested reader only needs to modify these values to use the code (see the lines immediately after “`# Here the main program starts`”). The code is commented throughout, to improve legibility and ease verification.
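A rough measure of the search space: since the recursive search skips only the letter that would cancel the previous one, the number of words of length $\ell$ it considers is $4\cdot 3^{\ell-1}$. A plain-Python sketch (helper names are ours) cross-checking this count by brute force and totalling the space for the default $k=9$:

```python
# Number of words of length l over {A, B, A^(-1), B^(-1)} with no
# adjacent pair X * X^(-1): 4 choices for the first letter, then 3
# for each subsequent one (the inverse of the last letter is skipped).
def reduced_words(l):
    return 4 * 3 ** (l - 1) if l >= 1 else 1

# Brute-force enumeration as a cross-check, for small lengths.
# Letters: 'A', 'B' and their inverses 'a', 'b'.
def enumerate_words(k):
    inv = {'A': 'a', 'a': 'A', 'B': 'b', 'b': 'B'}
    words = ['']
    count = {0: 1}
    for l in range(1, k + 1):
        words = [w + c for w in words for c in 'AaBb'
                 if not w or c != inv[w[-1]]]
        count[l] = len(words)
    return count

counts = enumerate_words(5)
assert all(counts[l] == reduced_words(l) for l in range(1, 6))

# Total number of candidate words for k = 9 (identity word included):
total = 1 + sum(reduced_words(l) for l in range(1, 10))
print(total)  # 39365
```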
#####
# Given an integer k and two polynomials f,g of the same degree (say n),
# the program takes their companion matrices A,B and the vector
# v=(A^(-1)B-I)e_n, then it finds out whether there is
# a product M of at most k matrices in {A,B,A^(-1),B^(-1)}
# such that the n-th entry of the vector Mv is in {+1,-1,+2,-2}
# and the three vectors M^(-1)v,v,Mv are linearly independent.
#####
# The following subroutine converts the vector vf of the coefficients
# of a polynomial f into the companion matrix of f.
def companion_internal(vf):
    le=len(vf)-1
    M=matrix(le,le)
    M[0,le-1]=-vf[0]
    i=1
    while i<le:
        M[i,i-1]=1
        M[i,le-1]=-vf[i]
        i+=1
    return M
#####
# The following subroutine converts a polynomial func
# into its companion matrix.
def companion(func):
    return companion_internal(func.list())
#####
# The following subroutine takes two nxn-matrices A,B and returns
# the vector v=(A^(-1)B-I)e_n.
def othervec(A,B,n):
    en=vector([0]*n)
    en[n-1]=1
    return (A^(-1)*B-matrix.identity(n))*en
#####
# The following subroutine checks if the last entry of a vector v
# is in {+1,-1,+2,-2}.
def checklast(v):
    entry=v[len(v)-1]
    if entry==1 or entry==-1 or entry==2 or entry==-2:
        return True
    else:
        return False
#####
# The following subroutine checks if M^(-1)v,v,Mv are
# linearly independent (in Q). It returns True if they
# are, and False if they are not.
def independent(M,v):
    E=QQ^len(v)
    return not E.are_linearly_dependent([M^(-1)*v,v,M*v])
#####
# The following subroutine tries both the "checklast" subroutine
# on a vector v and the "independent" subroutine on v and a matrix M.
# If both are True, it returns [True,M,s]; if at least one is False,
# it returns [False].
def tryone(M,v,s):
    if checklast(M*v):
        if independent(M,v):
            return [True,M,s]
        return [False]
    return [False]
#####
# The following is the main subroutine: given an integer k, a vector v,
# 5 matrices A,B,C,D,M (with C=A^(-1),D=B^(-1)), a string s
# of A,B,A^(-1),B^(-1) corresponding to M, and an indicator that says
# what the last matrix in the decomposition of M was, first it checks
# whether M and v satisfy the two desired properties; then if they
# do not and |s|<k, it calls recursively the same subroutine for
# M*A,M*B,M*A^(-1),M*B^(-1) (actually, only 3 of them, the ones that
# do not involve a X*X^(-1) at the end). If at any point there is one M
# that satisfies the two desired properties, it returns [True,M,s];
# if there is not, it returns [False].
def tryall(s,k,A,B,C,D,M,v,lastguy):
    check=tryone(M,v,s)
    if len(s)<k:
        if lastguy==0:
            Next=[['A',A],['B',B],['A^(-1)',C],['B^(-1)',D]]
        elif lastguy=='A':
            Next=[['A',A],['B',B],['B^(-1)',D]]
        elif lastguy=='B':
            Next=[['A',A],['B',B],['A^(-1)',C]]
        elif lastguy=='A^(-1)':
            Next=[['B',B],['A^(-1)',C],['B^(-1)',D]]
        elif lastguy=='B^(-1)':
            Next=[['A',A],['A^(-1)',C],['B^(-1)',D]]
        else:
            return 'Error'
        indnext=0
        while indnext<len(Next):
            if check==[False]:
                check=tryall(s+[Next[indnext][0]],k,A,B,C,D, \
                             M*Next[indnext][1],v,Next[indnext][0])
            indnext+=1
    return check
#####
# Here the main program starts.
# Define f here.
f=cyclotomic_polynomial(1)^6
# Define g here.
g=cyclotomic_polynomial(3)^2*cyclotomic_polynomial(6)
# Define k here: this is the maximal length that one wants to check.
k=9
# Printing the input.
print('== Input ==')
print('Polynomial f:')
print(f)
print('Polynomial g:')
print(g)
A=companion(f)
B=companion(g)
C=A^(-1)
D=B^(-1)
v=othervec(A,B,f.degree(x))
print('Matrix A:')
print(A)
print('Matrix B:')
print(B)
print('Matrix A^(-1):')
print(C)
print('Matrix B^(-1):')
print(D)
print('Vector v:')
print(v)
print('We try up to length',k,'and see if there is a matrix that works.')
# Here the main subroutine is called.
final=tryall([],k,A,B,C,D,matrix.identity(f.degree(x)),v,0)
# If "final" is [False], it means that there was no product of at most
# k instances of A,B,A^(-1),B^(-1) such that the conditions are satisfied.
# Otherwise, "final" is [True,M,s] where M is the product matrix itself
# and s is a string of A,B,A^(-1),B^(-1) that represents M.
print(' ')
print('== Output ==')
if final[0]==False:
    print('There is no word of at most',k,'letters that respects all conditions.')
elif final[0]==True:
    print('There is a word respecting all conditions!')
    s=final[2]
    le=len(s)
    strin=''
    i=0
    while i<le:
        strin=strin+s[i]
        i+=1
    print('Product:')
    print(strin)
    print('Matrix M to which it corresponds:')
    print(final[1])
    print('Vectors Mv,v,M^(-1)v:')
    print(final[1]*v)
    print(v)
    print(final[1]^(-1)*v)
else:
    print('There is some error in this code.')
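As an independent cross-check that runs outside Sage (plain Python with exact rational arithmetic; the helpers `companion` and `solve` below are our own, mirroring the Sage subroutines above), one can reproduce the vector $v=(A^{-1}B-I)e_6$ for entry 37 of Table A, i.e. $f=(x-1)^6$ and $g=\Phi_7(x)$:

```python
from fractions import Fraction

def companion(coeffs):
    # Companion matrix of a monic polynomial given by its coefficient
    # list [c0, c1, ..., 1] (constant term first), in the same layout
    # as the Sage subroutine companion_internal above.
    n = len(coeffs) - 1
    M = [[Fraction(0)] * n for _ in range(n)]
    M[0][n - 1] = -Fraction(coeffs[0])
    for i in range(1, n):
        M[i][i - 1] = Fraction(1)
        M[i][n - 1] = -Fraction(coeffs[i])
    return M

def solve(A, b):
    # Solve A x = b exactly over Q by Gauss-Jordan elimination.
    n = len(A)
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [aug[r][n] for r in range(n)]

# Entry 37 of Table A: f = (x-1)^6, g = Phi_7(x).
f_coeffs = [1, -6, 15, -20, 15, -6, 1]
g_coeffs = [1, 1, 1, 1, 1, 1, 1]
A = companion(f_coeffs)
B = companion(g_coeffs)
Be6 = [B[i][5] for i in range(6)]     # B e_6 is the last column of B
y = solve(A, Be6)                     # y = A^(-1) B e_6
v = [int(y[i]) - (1 if i == 5 else 0) for i in range(6)]
print(v)  # [-7, 14, -21, 14, -7, 0], as listed in Table A
```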
Table A: hypergeometric groups in ${\mathrm{Sp}}(6)$ with $\alpha=(0,0,0,0,0,0)$ {#sec:mum}
================================================================================
In this section, we consider an interesting family of hypergeometric groups with maximally unipotent monodromy, i.e. associated with pairs $f,g$ of degree six polynomials where $f=(x-1)^6$ (i.e. $\alpha=(0,0,0,0,0,0)$), $g$ is a product of cyclotomic polynomials satisfying $g(0)=1$, $g(1)\neq 0$, and $f,g$ form a primitive pair. Our computations show that there are 40 such qualified pairs, which we list in the table below.
In the table below, the second column records the possible $\beta=(\beta_1,\beta_2,\ldots,\beta_6)$, which determines $g$ as $g=\prod_{j=1}^6(x-e^{2\pi i\beta_j})$; the third column records the absolute value of the leading coefficient of the difference polynomial $f-g$, denoted $|lc(f-g)|$; the fourth column describes the vector $v=(A^{-1}B-I)(e_6)$ with $e_6=(0,0,0,0,0,1)$; and in the fifth column we provide a $\gamma\in \Gamma(f,g)$ for which the hypotheses of Proposition \[Proposition\] are satisfied (when we know such a $\gamma$). In the last column, the $18$ groups whose arithmeticity is proved in this article are marked with “Yes", and the $22$ examples whose arithmeticity or thinness is unknown are marked with “??". Notice that for all $40$ groups listed below $|lc(f-g)| \geq 3$, so the criterion of [@SV Theorem 1.1] cannot be applied in these cases; however, the arithmeticity of $18$ of the 40 hypergeometric groups follows from Proposition \[Proposition\].
It is expected that many of the examples in this family correspond to mirrors of genuine Calabi-Yau 5-folds; for details see [@GMP]. One such interesting example is the septic case, i.e. the parameter $\beta=\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$. The reader may think of it as the degree six analogue of the quintic case in degree four, associated to the parameters $\alpha=(0,0,0,0)$ and $\beta=\big(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$. These are also known in the literature as members of the Dwork family; for quick details see Appendices I and III in [@Katz], where the quintic and septic cases correspond to the values $d=n=5$ and $d=n=7$, respectively.
S.No. $\beta$ $|lc(f-g)|$ $v$ $\gamma$ Arithmetic
------- ---------------------------------------------------------------------------------------------- ------------- ---------------------------- ------------------------------------ ------------
1 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\big)$ 12 $(-12, 0, -40, 0, -12, 0)$ ?? ??
2 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3}\big)$ 11 $(-11, 4, -34, 4, -11, 0)$ ?? ??
3 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4}\big)$ 10 $(-10, 8, -28, 8, -10, 0)$ ?? ??
4 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{5}{6}\big)$ 9 $(-9, 12, -22, 12, -9, 0)$ ?? ??
5 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$ 10 $(-10, 7, -30, 7, -10, 0)$ ?? ??
6 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ 9 $(-9, 10, -26, 10, -9, 0)$ ?? ??
7 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ 8 $(-8, 13, -22, 13, -8, 0)$ ?? ??
8 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4}\big)$ 8 $(-8, 12, -24, 12, -8, 0)$ ?? ??
9 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ 7 $(-7, 14, -22, 14, -7, 0)$ ?? ??
10 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 9 $(-9, 11, -24, 11, -9, 0)$ ?? ??
11 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6}\big)$ 6 $(-6, 15, -22, 15, -6, 0)$ ?? ??
12 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ 8 $(-8, 14, -20, 14, -8, 0)$ ?? ??
13 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 7 $(-7, 15, -20, 15, -7, 0)$ ?? ??
14 $\big(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ 8 $(-8, 15, -18, 15, -8, 0)$ ?? ??
15 $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ 9 $(-9, 9, -27, 9, -9, 0)$ ?? ??
16 $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ 8 $(-8, 11, -24, 11, -8, 0)$ ?? ??
17 $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ 7 $(-7, 13, -21, 13, -7, 0)$ $A^2 B A^{-1}B^{4}A$ Yes
18 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4}\big)$ 7 $(-7, 12, -22, 12, -7, 0)$ $AB^{-1}A^3 B^3 A B^{-3}$ Yes
19 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ 6 $(-6, 13, -20, 13, -6, 0)$ $AB^2 AB^5 A B^{-1}A B^{-2}$ Yes
20 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ 5 $(-5,13,-19,13,-5,0)$ $B^{3}$ Yes
21 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 8 $(-8, 12, -23, 12, -8, 0)$ ?? ??
22 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ 7 $(-7,14,-20,14,-7,0)$ $AB^{5}$ Yes
23 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 6 $(-6, 14, -19, 14, -6, 0)$ $B^{6}A^{2}B^{4}A^{-1}$ Yes
24 $\big(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ 7 $(-7, 15, -19, 15, -7, 0)$ ?? ??
25 $\big(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\big)$ 6 $(-6,12,-20,12,-6,0)$ $B^{2}A$ Yes
26 $\big(\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ 5 $(-5, 12, -18, 12, -5, 0)$ $B^2 A^3 B^{-3}$ Yes
27 $\big(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 7 $(-7, 13, -22, 13, -7, 0)$ $A^4 B^4 A (AB)^{-1}$ Yes
28 $\big(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ 4 $(-4,11,-16,11,-4,0)$ $BA^{-1}B^{6}A$ Yes
29 $\big(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ 6 $(-6, 14, -20, 14, -6, 0)$ $A^2 B^2 A^{-1} B^4 A B^{-1}A^{2}$ Yes
30 $\big(\frac{1}{4},\frac{3}{4},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big) $ 5 $(-5, 13, -18, 13, -5, 0)$ $AB^2 A^{-1}B^{3}AB^{-1}$ Yes
31 $\big(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ 6 $(-6, 15, -20, 15, -6, 0)$ ?? ??
32 $\big(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\frac{1}{6},\frac{5}{6}\big)$ 6 $(-6,14,-21,14,-6,0)$ $AB^{5}$ Yes
33 $\big(\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ 3 $(-3,9,-13,9,-3,0)$ $A^2 B^4$ Yes
34 $\big(\frac{1}{6},\frac{5}{6},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ 5 $(-5,14,-20,14,-5,0)$ $B^{-4}AB^{4}$ Yes
35 $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 4 $(-4,12,-17,12,-4,0)$ $B^{4} A$ Yes
36 $\big(\frac{1}{6},\frac{5}{6},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ 5 $(-5,15,-21,15,-5,0)$ $BA^{-1}B^{6}AB^{-5}$ Yes
37 $\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$ 7 $(-7, 14, -21, 14, -7, 0)$ ?? ??
38 $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$ 6 $(-6, 15, -21, 15, -6, 0)$ ?? ??
39 $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$ 5 $(-5, 14, -19, 14, -5, 0)$ ?? ??
40 $\big(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18}\big)$ 6 $(-6, 15, -19, 15, -6, 0)$ $A^4 B^4 A (A^{2}B)^{-1}$ Yes
In the $22$ cases whose arithmeticity or thinness is unknown, we have applied the SageMath program of Section \[se:sage\] with $k=15$: this shows in particular that, if one of the corresponding groups is arithmetic and a $\gamma$ satisfying the hypotheses of Proposition \[Proposition\] exists, then such a $\gamma$ must be a product of at least $16$ matrices in $\{A,B,A^{-1},B^{-1}\}$. Among them, there are $5$ cases (entries 1, 8, 15, 37, 38) for which arithmeticity cannot be proved through Proposition \[Proposition\] at all: in these cases, the gcd of the coordinates of the vector $v$ is larger than $2$, which implies that no $\gamma(v)$ can have $\pm 1$ or $\pm 2$ as its last entry.
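This divisibility obstruction can be verified directly: every $\gamma\in\Gamma(f,g)$ is an integral matrix, so each coordinate of $\gamma(v)$ is divisible by the gcd of the coordinates of $v$. A short plain-Python check, with the vectors copied from Table A:

```python
from math import gcd
from functools import reduce

# Vectors v for entries 1, 8, 15, 37, 38 of Table A.
vs = {
    1:  (-12, 0, -40, 0, -12, 0),
    8:  (-8, 12, -24, 12, -8, 0),
    15: (-9, 9, -27, 9, -9, 0),
    37: (-7, 14, -21, 14, -7, 0),
    38: (-6, 15, -21, 15, -6, 0),
}
for entry, v in vs.items():
    d = reduce(gcd, (abs(x) for x in v))
    assert d > 2          # rules out last entry +-1, +-2 for gamma(v)
    print(entry, d)
```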
Table B: More examples of arithmetic hypergeometric groups in ${\mathrm{Sp}}(6)$ {#arithmeticexamples}
================================================================================
In this table we list all $143$ pairs of parameters $\alpha,\beta$ for which the leading coefficient of the difference polynomial $f-g$ has absolute value greater than $2$, so that the criterion of [@SV Theorem 1.1] cannot be applied, but the arithmeticity of the corresponding hypergeometric groups still follows from Proposition \[Proposition\]. Here the vector $v=(A^{-1}B-I)(e_6)$, $e_6=(0,0,0,0,0,1)$, and $\gamma\in \Gamma(f,g)$ is an element for which the hypotheses of Proposition \[Proposition\] are satisfied. Note that the values $lc(f-g)$, listed in the third column of Table A in the previous section, coincide, up to sign, with the first nonzero entry of the vectors $v$; therefore we do not list these values in the tables to follow, from Table B onwards.
S.No. $\alpha$ $\beta$ $v$ $\gamma$
------------- ----------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------- ---------------------------- ---------------------------------- --
1 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-5, -7, -3, -7, -5, 0)$ $A B^{-1} A B^{2}A$
2 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ $(-4, -5, 0, -5, -4, 0)$ $ B^{4} A$
3 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$ $(-3,-4,2,-4,-3,0)$ $B^2$
4 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(-4,-4,1,-4,-4,0)$ $B^2A^{-1}B^3$
5 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-3,-2,4,-2,-3,0)$ $B^2$
6 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(-3,-1,5,-1,-3,0)$ $B^2$
7 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(-3, -3, 2, -3, -3, 0)$ $A^3 B^{-2} A^{-1} B A^{-1}$
8 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ $(0,-5,8,-5,0,0)$ $B^3$
9 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(0,-4,7,-4,0,0)$ $B^3$
10 $(0,0,0,0,\frac{1}{2},\frac{1}{2})$ $\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$ $(-3,-2,3,-2,-3,0)$ $B^2$
11 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ $(-4, 2, -4, 2, -4, 0)$ $AB^{2}ABAB^2 A^2$
12 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ $(-3,3,-4,3,-3,0)$ $BA^2B^2A$
13 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\big)$ $(-3, 0, -2, 0, -3, 0)$ $ABAB^{2}A^{-2}$
14 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(-4,1,-4,1,-4,0)$ $B^3$
15 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-3,2,-2,2,-3,0)$ $B^2$
16 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(-3,3,-2,3,-3,0)$ $A^3B^3 A^{-2}B^{-1}$
17 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\frac{1}{6},\frac{5}{6}\big)$ $(-3,2,-3,2,-3,0)$ $B^2$
18 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ $(0,-3,5,-3,0,0)$ $BA^2B^2A$
19 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18}\big)$ $(-3,3,-1,3,-3,0)$ $AB^{-4}$
20 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-7,1,-15,1,-7,0)$ $ A(BA^3 B^3)^{-1}AB^4 A$
21 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ $(-5,5,-9,5,-5,0)$ $B^4$
22 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(-6,4,-11,4,-6,0)$ $AB^5 A^7 B^{-1}$
23 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6}\big)$ $(-3,5,-7,5,-3,0)$ $B^2$
24 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-5, 6, -8, 6, -5, 0)$ $A^2 BAB^4 AB^{-1}$
25 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(-4,6,-7,6,-4,0)$ $A^2 B^3 A B^{-1}$
26 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(-5,7,-7,7,-5,0)$ $ A^3 B^{-1}A B^6 A^{-1}B A$
27 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\frac{1}{6},\frac{5}{6}\big)$ $(-4,6,-9,6,-4,0)$ $B^5$
28 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-3,6,-8,6,-3,0)$ $B^3$
29 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(-3,7,-9,7,-3,0)$ $B^5$
30 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$ $(-3,6,-7,6,-3,0)$ $B^3$
31 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18}\big)$ $(-4, 7, -7, 7, -4, 0)$ $AB^4 A^2$
32 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4}\big)$ $(-6, 8, -16, 8, -6, 0)$ $AB^3 A B^3 A B^{-1}$
33 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-6, 10, -14, 10, -6, 0)$ $AB^4 A$
34 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(-5, 10, -13, 10, -5, 0)$ $A^2 B^3 A$
35 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(-6, 11, -13, 11, -6, 0)$ $ABA^{-1}B^6 A^2 B^{-1}$
36 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\big)$ $(-5,8,-14,8,-5,0)$ $B^3$
37 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(-6, 9, -16, 9, -6, 0)$ $A^2 B^{-2}AB^4 AB^{-1}$
38 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(-5, 10, -14, 10, -5, 0)$ $AB^{-1}A B^4 A^{-1}B^{-1}$
39 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(-4,9,-12,9,-4,0)$ $B^3$
40      $(0,0,0,0,\frac{1}{6},\frac{5}{6})$                                             $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$    $(-4, 10, -13, 10, -4, 0)$   $A^4 B A B A^{-1}B^{-3}A^{-2}$       
41 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18}\big)$ $(-5, 11, -13, 11, -5, 0)$ $A^4 B^4 A B^{-2}$
42 $(0,0,\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ $(0,-3,-4,-3,0,0)$ $B^3$
43 $(0,0,\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6}\big)$ $(3,-5,2,-5,3,0)$ $B^2$
44 $(0,0,\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ $(4, -7, 5, -7, 4, 0)$ $AB^3 A^{-1} B^{-1} AB^{-1}$
45 $(0,0,\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(3,-4,1,-4,3,0)$ $B^2$
46 $(0,0,\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-3,-7,-7,-7,-3,0)$ $B^2$
47 $(0,0,\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-3,-4,-6,-4,-3,0)$ $B^2A^2B^2A^{-1}$
48 $(0,0,\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\big)$ $(0,-3,-2,-3,0,0)$ $B^3$
49 $(0,0,\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3})$ $(\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6},\frac{5}{6})$ $(3,-6,5,-6,3,0)$ $B^2A^2B^2$
50 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-3,0,-2,0,-3,0)$ $A^3$
51 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-3,1,0,1,-3,0)$ $A^3$
52 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-3, 2, -2, 2, -3, 0)$ $(BA^3)^2 BA$
53 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-4,2,0,2,-4,0)$ $A^4$
54 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-3,0,-4,0,-3,0)$ $B^3$
55 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-4, 2, -4, 2, -4, 0)$ $A^2BA^{3}$
56 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-3, 3, -4, 3, -3, 0)$ $A^3$
57 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-4,3,-2,3,-4,0)$ $A^2B^2$
58 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-5, -3, -11, -3, -5, 0)$ $B^7A$
59 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ $(-3,1,-5,1,-3,0)$ $B^4$
60 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-4,0,-7,0,-4,0)$ $AB^{-5}$
61 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-3,2,-4,2,-3,0)$ $B^8A^4$
62 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ $(-3, 2, -5, 2, -3, 0)$ $(A^2 B)^{2}A$
63 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-4, 5, -6, 5, -4, 0)$ $(A^3 B)^2 A^2$
64 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-5, 5, -4, 5, -5, 0)$ $A^4$
65 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-5, 2, -9, 2, -5, 0)$ $AB^{-6}A^2$
66 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-4, 4, -6, 4, -4, 0)$ $AB^4 A$
67 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-3, 4, -5, 4, -3, 0)$ $A^4 B^3 A$
68 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-4, 5, -5, 5, -4, 0)$ $A^4$
69 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9})$ $(-3, 5, -7, 5, -3, 0)$ $ABA^3 B^{-2}A^{-3}$
70 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18})$ $(-3,5,-5,5,-3,0)$ $A^3$
71 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-3, -1, 0, -1, -3, 0)$ $A^2 B A^3 B A^2$
72 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-3,0,2,0,-3,0)$ $A^3$
73 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-4,-6,-7,-6,-4,0)$ $B^3$
74 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ $(-3,-4,-4,-4,-3,0)$ $B^2$
75 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(0,-3,2,-3,0,0)$ $B^3$
76 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$ $(-5,5,-12,5,-5,0)$ $B^{20}A^2$
77 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-6, 5, -13, 5, -6, 0)$ $A^2 B^5 A^2 B A^{-1} B^{-1}A^2$
78 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-5, 7, -10, 7, -5, 0)$ $AB^{4}A$
79 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-4, 7, -9, 7, -4, 0)$ $A^5 B^{-3}$
80 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-5,8,-9,8,-5,0)$ $ABA^{-30}$
81 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4})$ $(-4,5,-10,5,-4,0)$ $B^3$
82 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-5,6,-12,6,-5,0)$ $B^5$
83 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-4, 7, -10, 7, -4, 0)$ $A^4 B A^{-1}B^4 A B^{-1}$
84 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(-3,6,-8,6,-3,0)$ $B^3$
85 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-4,8,-10,8,-4,0)$ $(A^5B^2)^3$
86 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14})$ $(-3,7,-9,7,-3,0)$ $B^3$
87 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18})$ $(-4, 8, -9, 8, -4, 0)$ $A^4 B A^{-1}B^{-4}$
88 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-5, -5, -7, -5, -5, 0)$ $AB^3 A^2$
89 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ $(-4,-3,-4,-3,-4,0)$ $B^3$
90 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ $(-3,-1,-1,-1,-3,0)$ $B^2$
91 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$ $(-3,-2,-2,-2,-3,0)$ $B^2$
92 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-4,-2,-3,-2,-4,0)$ $B^4$
93 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-3,1,1,1,-3,0)$ $AB^7$
94 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-3,-1,-2,-1,-3,0)$ $B^2$
95 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(0,-3,4,-3,0,0)$ $B^3$
96 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ $(-3,0,-1,0,-3,0)$ $B^3$
97 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-6, -2, -11, -2, -6, 0)$ $B^4 A^3 B^4 A$
98 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ $(-5,0,-8,0,-5,0)$ $AB^7$
99 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ $(-4,2,-5,2,-4,0)$ $B^7$
100 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$ $(-4,1,-6,1,-4,0)$ $B^3$
101 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ $(-3,2,-4,2,-3,0)$ $B^2$
102 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-5,1,-7,1,-5,0)$ $A^2B^{10}$
103 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-4,3,-4,3,-4,0)$ $(AB^{-5}A)^4$
104 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-4, 4, -3, 4, -4, 0)$ $BAB^6 A^{-1}BA$
105 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4})$ $(-3,1,-4,1,-3,0)$ $B^2$
106 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-4,2,-6,2,-4,0)$ $B^{14}$
107 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-3,3,-4,3,-3,0)$ $B^3$
108 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(-3,4,-4,4,-3,0)$ $AB^2 A^2 B^3 (BAB^2 A^2)^{-1}$
109 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}, \frac{1}{6},\frac{5}{6})$ $(-3,3,-5,3,-3,0)$ $B^3$
110 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9})$ $(-3, 4, -5, 4, -3, 0)$ $AB^2 A^2 B^3 A^{-1}$
111 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{17}{18})$ $(-3,4,-3,4,-3,0)$ $B^4$
112 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ $(-5, -6, -5, -6, -5, 0)$ $B^2 A$
113 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ $(-4, -4, -2, -4, -4, 0)$ $A^3 B^2 A B^{-3}$
114 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ $(-3,-2,1,-2,-3,0)$ $BA$
115 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-4,-3,-1,-3,-4,0)$ $B^2A$
116 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(-3,-1,2,-1,-3,0)$ $B^2$
117 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $(-3,-2,0,-2,-3,0)$ $B^2$
118 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(0,-4,6,-4,0,0)$ $A^3$
119 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(0,-3,5,-3,0,0)$ $A^3$
120 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ $(-3,-1,1,-1,-3,0)$ $B^3$
121 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\big)$ $(3,3,7,3,3,0)$ $B^3$
122 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6}\big)$ $(4,3,9,3,4,0)$ $B^{14}$
123 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6}\big)$ $(5, 2, 11, 2, 5, 0)$ $B^4 A^{-1}B A$
124 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(3,5,7,5,3,0)$ $A^3$
125 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(4,4,9,4,4,0)$ $B^5$
126 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(3,6,7,6,3,0)$ $A^3$
127 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}, \frac{1}{6},\frac{5}{6}\big)$ $(3,5,6,5,3,0)$ $B^3A^2$
128 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(4, 5, 7, 5, 4, 0)$ $A^3 B^{-3}$
129 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(5, 3, 10, 3, 5, 0)$ $A^7 B A^{-1}B^{-1}$
130 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(4, 6, 6, 6, 4, 0)$ $AB^{2}A^{-1}B^{-1}A^{3}BA^{-1}$
131 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$ $(4, 5, 8, 5, 4, 0)$ $A^2 B^{-1}A^{-3}B^{-1}$
132 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{18},\frac{5}{18},\frac{7}{18},\frac{11}{18},\frac{13}{18},\frac{15}{18}\big)$ $(3,6,8,6,3,0)$ $A^3$
133 $(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(3,3,4,3,3,0)$ $A^3$
134 $(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(4,1,7,1,4,0)$ $A^7$
135 $(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(3,4,3,4,3,0)$ $A^2$
136 $(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$ $(3,3,5,3,3,0)$ $A^3$
137 $(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(3,0,5,0,3,0)$ $A^3$
138 $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(3,1,5,1,3,0)$ $B^4$
139 $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$ $(3,2,3,2,3,0)$ $B^4$
140 $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ $(4,0,6,0,4,0)$ $B^5$
141 $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{6},\frac{5}{6},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$ $(3,3,2,3,3,0)$ $A^3$
[*142*]{}\* $(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{14},\frac{3}{14},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{13}{14}\big)$ $(3,2,4,2,3,0)$ $A^4$
143 $(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4})$ $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$ $(0,3,-1,3,0,0)$ $A^3$
The arithmeticity of entry $142$, marked with an asterisk in the table above, has already been proved in [@DFH] by computing its index. See entries 468 and 534 of [@DFH Table 2], which represent the same group up to conjugation.
Table-C: Examples of hypergeometric groups in ${\mathrm{Sp}}(6)$ for which arithmeticity follows from Singh and Venkataramana [@SV]
===================================================================================================================================
We list here the possible pairs of the parameters $\alpha,\beta$ for which the arithmeticity of the corresponding hypergeometric groups is determined by [@SV Theorem 1.1]. That is, for these cases the leading coefficients of the difference polynomials $f-g$ have absolute values $1$ or $2$, so the arithmeticity of the corresponding hypergeometric groups follows directly from [@SV Theorem 1.1].
Table-D: Other Open Cases
=========================
Following our study, we find that there are 86 pairs of the parameters $\alpha,\beta$ for which the leading coefficients of the difference polynomials $f-g$ have absolute value greater than $2$, so the criterion of [@SV Theorem 1.1] cannot be applied to these cases. In addition, we are also unable to find a $\gamma\in\Gamma(f,g)$ satisfying the hypotheses of Proposition \[Proposition\]. Out of these, 22 are already listed in Table A in Section \[sec:mum\]. Here we list the remaining 64 pairs of parameters $\alpha,\beta$ for which the arithmeticity or thinness of the associated hypergeometric groups is still unknown.
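As an illustration (this computation is not part of the paper's text), the size of the leading coefficient of $f-g$ can be checked numerically, taking $f$ and $g$, as usual for hypergeometric groups, to be the monic degree-6 polynomials with roots $e^{2\pi i\alpha_j}$ and $e^{2\pi i\beta_j}$. The pair used below is entry 1 of the table that follows.

```python
import numpy as np

def char_poly(params):
    # Monic degree-6 polynomial with roots e^{2*pi*i*a_j}; the parameter
    # sets are closed under a -> 1-a, so the coefficients are integers.
    roots = np.exp(2j * np.pi * np.array(params))
    return np.rint(np.real(np.poly(roots))).astype(int)

# Entry 1 of Table-D below:
alpha = [0, 0, 0, 0, 1/2, 1/2]
beta = [1/3, 1/3, 2/3, 2/3, 1/6, 5/6]

diff = char_poly(alpha) - char_poly(beta)  # coefficients of f - g, highest degree first
lead = next(c for c in diff if c != 0)     # leading coefficient of f - g
print(abs(lead))                           # 3 > 2, so [SV, Theorem 1.1] does not apply
```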
All of them have been verified by the SAGE program written in Section \[se:sage\] for the values of $k$ up to $14$. If a $\gamma$ satisfying the hypotheses of Proposition \[Proposition\] exists for one of these cases, it must be a product of at least $15$ matrices in $\{A,B,A^{-1},B^{-1}\}$.
S.No. $\alpha$ $\beta$ S.No. $\alpha$ $\beta$
------- ----------------------------------------------------------------------------- ----------------------------------------------------------------------------------------- ------- ----------------------------------------------------------------------------- ------------------------------------------------------------------------------------------
1 $\big(0,0,0,0,\frac{1}{2},\frac{1}{2}\big)$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ 2 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4}\big)$
3 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{5}{6}\big)$ 4 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$
5 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 6 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$
7 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 8 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$
9 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$ 10 $(0,0,0,0,\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$
11 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3}\big)$ 12 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$
13 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6}\big)$ 14 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$
15 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{6},\frac{5}{6},\frac{1}{6},\frac{5}{6}\big)$ 16 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$
17 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 18 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$
19 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$ 20 $(0,0,0,0,\frac{1}{4},\frac{3}{4})$ $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$
21 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3}\big)$ 22 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$
23 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$ 24 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$
25 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 26 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8}\big)$
27 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10}\big)$ 28 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$
29 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ 30 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$
31 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}\big)$ 32 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{4},\frac{3}{4},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12}\big)$
33 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7}\big)$ 34 $(0,0,0,0,\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$
35 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ 36 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$
37 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ 38 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$
39 $(0,0,\frac{1}{3},\frac{2}{3},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ 40 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$
41 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ 42 $(0,0,\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4})$ $(\frac{1}{3},\frac{2}{3},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$
43 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3}\big)$ 44 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$
45 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ 46 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$
47 $(0,0,\frac{1}{4},\frac{3}{4},\frac{1}{6},\frac{5}{6})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ 48 $(0,0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$
49 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $\big(\frac{1}{2},\frac{1}{2},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3}\big)$ 50 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$
51 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ 52 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$
53 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\big)$ 54 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $\big(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4}\big)$
55 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ 56 $(0,0,\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6})$ $(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9})$
57 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ 58 $(0,0,\frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$
59 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5})$ 60 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{2},\frac{1}{2},\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$
61 $(0,0,\frac{1}{10},\frac{3}{10},\frac{7}{10},\frac{9}{10})$ $(\frac{1}{7},\frac{2}{7},\frac{3}{7},\frac{4}{7},\frac{5}{7},\frac{6}{7})$ 62 $(0,0,\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{11}{12})$ $\big(\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{4},\frac{3}{4}\big)$
63 $(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3})$ $\big(\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{5}{6},\frac{5}{6},\frac{5}{6}\big)$ 64 $(\frac{1}{3},\frac{1}{3},\frac{2}{3},\frac{2}{3},\frac{1}{3},\frac{2}{3})$ $\big(\frac{1}{9},\frac{2}{9},\frac{4}{9},\frac{5}{9},\frac{7}{9},\frac{8}{9}\big)$
Acknowledgements {#acknowledgements .unnumbered}
================
The first author would like to thank Albrecht Klemm for pointing out the possible connections of the various members of the family listed in Table A with Calabi-Yau 5-folds, and for referring us to an important article [@GMP].
The first and the third author take this opportunity to thank Wadim Zudilin for several discussions on the subject during their visit to MPIM, Bonn on various occasions. The first author extends his thanks to Peter Sarnak for interesting conversations about hypergeometric groups and his encouragement at the initial stage of this work during his visit at CIRM, Luminy in December 2016.
The work of the first and the second author is financially supported by ERC Consolidator grant 648329 (GRANT). The work of the third author is supported in part by the DST-INSPIRE Faculty Fellowship No. DST/INSPIRE/04/2015/000794 and the SEED Grant No. RD/0515-IRCCSH0-035 (IITBombay).
---
abstract: 'In this paper, we propose a general framework for the asymptotic analysis of node-based verification-based algorithms. In our analysis we let the signal length $\boldsymbol{n}$ tend to infinity. We also let the number of non-zero elements of the signal $\boldsymbol{k}$ scale linearly with $\boldsymbol{n}$. Using the proposed framework, we study the asymptotic behavior of the recovery algorithms over random sparse matrices (graphs) in the context of compressive sensing. Our analysis shows that there exists a success threshold on the density ratio $\boldsymbol{k/n}$, below which the recovery algorithms are successful, and beyond which they fail. This threshold is a function of both the graph and the recovery algorithm. We also demonstrate that there is good agreement between the asymptotic behavior of recovery algorithms and finite-length simulations for moderately large values of $\boldsymbol{n}$.'
author:
- |
Yaser Eftekhari, Amir H. Banihashemi, Ioannis Lambadaris\
Carleton University, Department of Systems and Computer Engineering, Ottawa, ON, Canada
bibliography:
- 'arXiv.bib'
title: 'An Efficient Approach Toward the Asymptotic Analysis of Node-Based Recovery Algorithms in Compressed Sensing'
---
Introduction
============
Compressive sensing was introduced with the idea of representing a signal $\underline{V}\in\mathbb{R}^n$ having $k$ non-zero elements by measurements $\underline{C}\in\mathbb{R}^m$, where $k<m\ll n$, while still being able to recover the original signal $\underline{V}$ [@D06; @CRTFeb06]. In the measuring process, also referred to as *encoding*, signal elements are mapped to measurements through a linear transformation represented by the matrix multiplication $\underline{C} = \underline{V}\textbf{G}$, where the matrix $\textbf{G}\in \mathbb{R}^{n\times m}$ is referred to as the *sensing matrix*. This linear mapping can also be characterized by a bipartite graph [@XH07], referred to as the *sensing graph*.
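As a toy sketch of the encoding step (sizes and distributions are illustrative choices, not taken from the paper), the measurement vector is simply the row vector $\underline{V}$ multiplied by a sparse 0/1 sensing matrix $\textbf{G}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d_v = 12, 6, 2   # toy sizes: signal length, measurements, ones per row

# Sparse 0/1 sensing matrix G (n x m): each row has d_v ones, i.e., each
# signal element takes part in d_v measurements.
G = np.zeros((n, m))
for i in range(n):
    G[i, rng.choice(m, size=d_v, replace=False)] = 1.0

# Sparse signal V: each element is non-zero with probability alpha,
# with non-zero values drawn from a continuous distribution.
alpha = 0.25
V = np.where(rng.random(n) < alpha, rng.standard_normal(n), 0.0)

# Encoding: C = V G, so each measurement is the sum of its neighboring
# signal elements.
C = V @ G
```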
In the recovery process, also referred to as *decoding*, we try to estimate the original signal based on the knowledge of the measurements and the sensing matrix. For given $n,m,$ and $k$, a decoder is called *successful* if it fully recovers the original signal. Two performance measures, namely the density ratio $\gamma \triangleq k/n$ and the oversampling ratio $r_o \triangleq m/k$, are used to measure and compare the performance of the recovery algorithms in the context of compressive sensing[^1].
Researchers have worked intensively in the following main areas: 1) designing **G** for given $k$ and $n$, in order to reduce the number of measurements $m$ required for a successful recovery, 2) improving the recovery algorithms for given $n$ and $m$ to be able to reconstruct signals with larger density ratio, i.e., signals with more non-zero components, and 3) analyzing performance measures of different recovery algorithms in the asymptotic regime (as $n \rightarrow \infty$) in order to compare different algorithms and be able to give an estimate of the performance for finite $n$.
Donoho in [@D06] and Candès *et al.* in [@CRTFeb06] used sensing matrices with i.i.d. Gaussian entries and the $\ell_1$-norm minimization of the signal estimate as the reconstruction method. Their random sensing matrices contain mostly non-zero elements, which makes the encoding computationally intense. Inspired by the good performance of sparse matrices in channel coding, some researchers (e.g., [@SBB206] and [@ZP08]) used sparse matrices as the sensing matrix.
From the viewpoint of recovery complexity, the $\ell_1$ minimization algorithm has a computational complexity of $O(n^3)$. To reduce this complexity, some researchers used iterative algorithms as the decoder. For example, the authors in [@SBB206] used an iterative algorithm with a computational complexity of $O(n\cdot\log n)$ over regular bipartite graphs, while Xu and Hassibi in [@XH07] discussed a different iterative algorithm with a complexity of $O(n)$ based on a class of sparse graphs called *expander graphs* [@HLW06]. The authors in [@DM109] and [@DM209] proposed and analyzed an iterative thresholding algorithm over dense graphs with complexity between $O(n\log n)$ and $O(n^2)$, depending on the sensing matrix used. The two verification-based (VB) iterative algorithms originally proposed in [@LM05] in the context of channel coding were analyzed in [@ZP09; @ZP07; @ZP07J] for the case where $k/n \rightarrow 0 \text{ as } n \rightarrow \infty$ in the context of compressed sensing. Asymptotic analyses of some iterative message-passing algorithms over sparse sensing matrices can be found in [@LMPDK08] and [@APT09]. The sensing matrices used in [@ZP09; @ZP07; @ZP07J; @LMPDK08; @APT09] are all sparse.
Our main goal in this paper is to develop a framework for the asymptotic analysis (as $n \rightarrow \infty$) of VB algorithms over sparse random sensing matrices and extend it to include recovery algorithms of similar nature such as [@XH07]. In our work we show that the overall computational complexity of the analysis is linear in the number of iterations. We will also show, through simulation, that VB algorithms when applied to signals with moderate lengths (in the order of $10^5$), are in good agreement with the asymptotic results. Using our approach we can perform a comprehensive study and comparison of performance/complexity trade-off of different VB recovery algorithms over a variety of sparse graphs.
The rest of the paper is organized as follows. In section \[Defs\], we present notations, definitions and assumptions used throughout the paper. We will also introduce VB algorithms in more detail. In section \[enc\] the encoding process and input distributions are described. Decoding algorithms are described in section \[Decoding\]. The analysis framework and its generalization will be introduced in sections \[analysis\] and \[generalization\], respectively. Simulation results will be presented in section \[simulation\].
Background {#Defs}
==========
Bipartite Sensing Graph
-----------------------
In general, the sensing matrix **G** can be thought of as the weighted incidence matrix of a weighted bipartite graph. In this case, the element in row $i$ and column $j$ of **G** is the coefficient of the $i^\text{th}$ signal element ($v_i$) in the linear combination resulting in the $j^\text{th}$ measurement $c_j$. If the weights are all 1 ($\in \mathbb{R}$), then **G** reduces to the incidence matrix of a bipartite graph.
Consider a bipartite graph with node sets $\mathcal{V}$ and $\mathcal{C}$. Following channel coding terminology, we will call $\mathcal{V}$ the set of *variable nodes* and $\mathcal{C}$ the set of *check nodes*. In the compressive sensing context, signal components and measurements are mapped to variable nodes and check nodes, respectively. We will interchangeably use the terms *variable nodes* and *signal elements* as well as *check nodes* and *measurements*.
In *regular* bipartite graphs, each variable node (check node) is incident to the same number $d_v$ ($d_c$) of check nodes (variable nodes). The numbers $d_v$ and $d_c$ are called *variable node degree* and *check node degree*, respectively. All graphs discussed in this paper are sparse regular bipartite graphs denoted by the pair $(d_v,d_c)$ and simply referred to as *graphs*.
For a variable node $v_i$ we use the notation $\mathcal{M}(v_i)\subset \mathcal{C}$ to denote the set of check nodes incident to it. The graph composed of a subset $\mathcal{V}^*$ of variable nodes, their neighboring check nodes $\mathcal{M}(\mathcal{V}^*)$ and all the edges in between is called the *subgraph induced by $\mathcal{V}^*$*.
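A minimal way to sample a $(d_v,d_c)$ graph and read off the neighborhood sets $\mathcal{M}(v_i)$ is to randomly permute edge "stubs" (a sketch only; it may create parallel edges, which a practical implementation would resample away):

```python
import numpy as np

def regular_bipartite(n, d_v, d_c, rng):
    """Sample a random (d_v, d_c)-regular bipartite graph on n variable
    nodes. Returns the number m of check nodes and, for each variable
    node v_i, the list neighbors[i] = M(v_i) of incident check nodes."""
    assert (n * d_v) % d_c == 0, "need n*d_v = m*d_c edges on both sides"
    m = n * d_v // d_c
    # One stub per edge endpoint on the check side, then a random matching.
    stubs = rng.permutation(np.repeat(np.arange(m), d_c))
    neighbors = [sorted(stubs[i * d_v:(i + 1) * d_v]) for i in range(n)]
    return m, neighbors
```

Every variable node then has degree $d_v$ and every check node degree $d_c$, matching the $(d_v,d_c)$ notation above.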
Verification Based Algorithms {#VB}
-----------------------------
Two iterative algorithms over bipartite graphs are proposed and analyzed in [@LM05] for packet-based error correction in the context of channel coding. In these algorithms, a variable node can be in one of two states: “verified” or “unknown”. Under certain circumstances, a variable node is verified and a value is assigned to it. This variable node then contributes to the verification of other nodes. The decoding process continues until either all unknown variable nodes become verified, or the process makes no further progress. Due to the verification nature of the procedure, the two algorithms in [@LM05] are called *verification-based* (VB) algorithms. When used in the context of compressive sensing, we expect VB algorithms to correctly verify signal elements in each iteration. Indeed, in section \[enc\], we define sufficient conditions under which VB algorithms recover the original signal.
As noted in [@ZP07J], the authors in [@LM05] defined the two VB algorithms using a node-based (NB) representation but analyzed them using a message-based (MB) representation. In the NB representation, the “verified” state of a variable node is a property of the node itself. In the MB representation, however, the state is reflected in the outgoing messages from a variable node. Therefore, in contrast to the NB case, multiple different states may exist for the same variable node. In [@ZP07J], the authors showed that for one of the algorithms the two versions perform the same, but for the other algorithm the NB version outperforms the MB one (in compressive sensing, this implies that the NB version can successfully recover signals with larger density ratios).
A well-known method to analyze such iterative algorithms in coding theory is density evolution [@RU01]. However, as density evolution can only be applied to the MB representations, the authors in [@ZP07J] used differential equations to analyze the NB versions in the case where $n \rightarrow \infty$. Applying their analysis to $(d_v,d_c)$ graphs, the number of differential equations is roughly $(d^2_c+3d_c)/2$, which becomes intractable for large $d_c$. Therefore, the authors used numerical calculations to determine the success or failure of the NB algorithms.
In the context of compressive sensing, the authors in [@ZP09] analyzed the MB-VB algorithms using density evolution for super-sparse signals ($k/n \rightarrow 0 \text{ as } n \rightarrow \infty$). In our work, we analyze NB-VB algorithms in the regime where $n \rightarrow \infty$ and $k$ grows linearly with $n$. In section \[analysis\] we show that the complexity of our methodology is lower than that of the method used in [@ZP07J].
Encoding and Input Distribution {#enc}
===============================
Let $\mathcal{K}$ denote the set of non-zero elements in the original signal. We refer to this set as the *support set*. In general, there are two ways to define signal elements in compressive sensing:
1. Let $k = |\mathcal{K}| = \gamma n$ be a deterministic value. Out of $n$ signal elements, $k$ of them are selected at random as the support set. The value of each non-zero element is then an i.i.d. random variable with probability distribution $f$.
2. Let $\alpha$, referred to as the *density factor*, be the probability that a signal element belongs to $\mathcal{K}$. By fixing $\alpha$, each of the $n$ signal elements is drawn i.i.d. according to the following distribution: it is zero with probability $1-\alpha$, or follows a distribution $f$ with probability $\alpha$. In this case, $k$ and $\gamma \triangleq k/n$ are random variables. Furthermore, $E[\gamma] = \alpha$ and $E[k] = \alpha n$, where $E[\cdot]$ denotes the expected value.
When $n \rightarrow \infty$, as a consequence of the law of large numbers, both cases (1) and (2) yield the same results. In the rest of the paper, we adopt the second model. In this paper, we show that when NB-VB recovery algorithms are used in the asymptotic regime $n \rightarrow \infty$, a limiting value exists for $\alpha$, below which the recovery algorithm is successful and beyond which it is not. Henceforth, we refer to this limit as the *success threshold*.
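Under the second model, signal generation can be sketched as follows (with $f$ taken as standard normal purely for illustration), together with an empirical check that $\gamma$ concentrates around $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 100_000, 0.1     # signal length and density factor (toy values)

# Each element is zero w.p. 1 - alpha, and otherwise drawn i.i.d. from a
# continuous distribution f (standard normal here, an arbitrary choice).
support = rng.random(n) < alpha
V = np.where(support, rng.standard_normal(n), 0.0)

k = np.count_nonzero(V)     # random, with E[k] = alpha * n
gamma = k / n               # density ratio; concentrates around alpha
```

By the law of large numbers, `gamma` approaches `alpha` as $n \rightarrow \infty$, which is why models (1) and (2) agree asymptotically.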
The weights of the bipartite graph, corresponding to the non-zero entries of the sensing matrix **G**, can be drawn i.i.d. according to a distribution $g$. In this work, we make the assumption that at least one of the distributions $f$ or $g$ is continuous. Similar conditions have been used in [@ZP08] and [@SBB206]. As a consequence, we introduce and prove Theorem \[unique\] below.
\[unique\] Let $c_i$ and $c_j$ be two distinct check nodes and $\mathcal{V}_i$ and $\mathcal{V}_j$ be their corresponding set of incident variable nodes in $\mathcal{K}$; i.e., $\mathcal{V}_i=\mathcal{M}(c_i)\cap \mathcal{K}$ and $\mathcal{V}_j=\mathcal{M}(c_j)\cap \mathcal{K}$. Suppose at least one of the distributions $f$ or $g$ described before is continuous. If $c_i = 0$ then $\mathcal{V}_i$ is the empty set with probability one. Moreover if $\mathcal{V}_i\neq \mathcal{V}_j$ then: $$\Pr\left(c_i = c_j\right) = 0.$$
Before proving the theorem, let us state the *Uniqueness of Samples* fact, which is used in the proof.
Let $x_i$ and $x_j$ be two independent samples drawn from a continuous distribution. It follows that: $$\Pr\left(x_i = x_j\right) = 0.$$ In other words, no two independent continuous samples will have the same value, almost surely. More generally, if $c$ denotes any constant, then $$\Pr\left(x_i = c\right) = 0.$$
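A quick numerical illustration of this fact (the sample sizes are arbitrary): independent samples of a continuous distribution never collide in practice, whereas samples of a discrete distribution collide frequently.

```python
import random

random.seed(0)
# 10,000 i.i.d. samples of a continuous (uniform) distribution:
cont = [random.random() for _ in range(10_000)]
# 10,000 i.i.d. samples of a discrete distribution over 10 values:
disc = [random.randrange(10) for _ in range(10_000)]
# Continuous samples are all distinct; discrete ones are not.
cont_all_distinct = len(set(cont)) == len(cont)
disc_all_distinct = len(set(disc)) == len(disc)
```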
The value of a check node $c_j$ is $\sum_{i:v_i\in\mathcal{M}(c_j)}{w_{ij}v_i}$, where $w_{ij}$ is the weight associated with the edge connecting the variable node $v_i$ to check node $c_j$. Thus, a check node will have a continuous distribution whenever at least one of its neighboring variable nodes belongs to the support set, and will be zero otherwise. The proof is then complete according to the *Uniqueness of Samples* fact.
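The argument of the proof can be checked on a toy $\{0,1\}$-weighted graph (all sizes below are illustrative, not taken from the paper): a check node evaluates to zero exactly when it has no neighbor in the support.

```python
import random

random.seed(1)
n, m, dv = 12, 8, 3                     # toy sizes: variables, checks, degree
# Sparse signal with continuous (Gaussian) non-zero values:
x = [random.gauss(0, 1) if random.random() < 0.4 else 0.0 for _ in range(n)]
# Each variable node connects to dv distinct check nodes (weights all 1):
edges = {v: random.sample(range(m), dv) for v in range(n)}   # M(v)
# Check node values: sum of neighboring variable values.
checks = [sum(x[v] for v in range(n) if c in edges[v]) for c in range(m)]
zero_checks = [c for c in range(m) if checks[c] == 0.0]
```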
Based on Theorem \[unique\], the following statements, referred to as S1 and S2, hold with probability one (almost surely):

- (S1) if two check nodes $c_i$ and $c_j$ have the same non-zero value, they are both neighbors to the same elements of the set $\mathcal{K}$, i.e., $\{\mathcal{M}(c_i) \cap \mathcal{K}\} \equiv \{\mathcal{M}(c_j) \cap \mathcal{K}\}$;

- (S2) if the value of a check node $c_i$ is zero, none of its neighboring variable nodes belongs to the set $\mathcal{K}$, i.e., $\{\mathcal{M}(c_i) \cap \mathcal{K}\} \equiv \emptyset$.
In VB algorithms, as we will see in the next section, variable nodes are verified based on statements similar to S1 and S2. Therefore, the continuity of $f$ or $g$ is a sufficient condition for the algorithms to converge to the true original signal. Henceforth, we assume that all the weights of the bipartite graph (and therefore the entries of **G**) are in $\{0,1\}$, and that the distribution $f$ is continuous over $\mathbb{R}$.
Decoding Process and Recovery Algorithms {#Decoding}
========================================
The decoder, knowing only the measurements and the sensing matrix, tries to recover the original signal; in particular, neither the density factor $\alpha$ nor the support set is known at the decoder.
In this section we discuss the first algorithm (LM1) used in [@ZP09] (referred to here as LM) and the algorithm used in [@SBB206] (referred to as SBB). These two algorithms are the original VB algorithms in the context of compressive sensing. With the description given in section \[VB\], the algorithm in [@XH07], referred to as XH, falls in the category of VB algorithms as well. In the original XH, only one variable node is verified at each iteration. Here we propose and discuss a parallel version of this algorithm. For the case where $n \rightarrow \infty$ and $d_v \geq 5$, by analyzing the set of all variable nodes that can potentially be verified at each iteration of the original XH, it can be shown that the verification of one variable node does not exclude another variable node from this set. Therefore, both versions of XH perform identically in terms of the success threshold. The parallel version, however, is considerably faster.
As the last algorithm, we reveal the support set to the decoder and use the conventional peeling decoder in [@LMSS01]. We will refer to this algorithm as *Genie*. The Genie performance will be an upper bound on the performance of VB algorithms[^2].
The description of these four algorithms follows. Except for the Genie, all variable nodes are initially “unknown”. Before the first iteration, all variable nodes that have at least one neighboring check node with value equal to 0 are removed from the graph. In each iteration of the four algorithms, when a variable node is verified, its verified value is subtracted from the value of all neighboring check nodes. The node is then removed from the graph along with all edges adjacent to it. Check nodes with degree 0 are also removed from the graph. At any iteration, the algorithms stop if either all variable nodes are verified, or the algorithm makes no further progress.
At each iteration $\ell$, the algorithms proceed as follows:
**LM**
- find the degree of each check node. Verify every variable node that has at least one degree-one (singly-connected) check node, assigning it the value of that check node.
**SBB**
1. sequentially go through all variable nodes. For each variable node $v$, look for two check nodes $c_i, c_j \in \mathcal{M}(v), j\neq i$ with identical value $g$.
2. Verify to zero all variable nodes $v'$ adjacent to exactly one of $c_i$ and $c_j$, i.e., all $v' \in \{\mathcal{M}(c_i)\cup\mathcal{M}(c_j)\} - \{\mathcal{M}(c_i)\cap\mathcal{M}(c_j)\}$.
3. If $v$ is the only variable node connected to both $c_i$ and $c_j$ ($v \equiv \{\mathcal{M}(c_i)\cap\mathcal{M}(c_j)\}$) verify it with the value $g$.
For the sake of presentation we have presented simplified versions of LM and SBB algorithms. In section \[generalization\], the necessary modifications that should be made in the analysis to deal with the original algorithms are discussed.
**XH**
- find all variable nodes $v_i$ such that for each $v_i$, $\lceil d_v/2 \rceil$ or more of its neighboring check nodes ($\mathcal{M}(v_i)$) have the same value $g_i$. For each such variable node, verify $v_i$ by the value $g_i$.
**Genie Algorithm**
- in the subgraph induced by the unverified variable nodes in the set $\mathcal{K}$, verify every variable node that has at least one degree-one (singly-connected) check node, assigning it the value of that check node.
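As a rough sketch (not the authors' implementation), the two rules of the simplified LM algorithm above can be coded as follows. The function name `lm_decode` and its arguments are our own, and a floating-point tolerance stands in for the exact zero test of the analysis:

```python
def lm_decode(y, edges, n, max_iter=100, eps=1e-9):
    # Simplified LM rule on a {0,1}-weighted graph. `edges[v]` lists the
    # check nodes of variable node v; `y` holds the measurements.
    x = [None] * n                        # None = still unverified
    y = list(y)
    nbrs = {c: {v for v in range(n) if c in edges[v]} for c in range(len(y))}

    def verify(v, value):
        # Bookkeeping step common to all four algorithms: subtract the
        # verified value from neighboring checks and remove the node.
        x[v] = value
        for c in edges[v]:
            if v in nbrs[c]:
                nbrs[c].discard(v)
                y[c] -= value

    for _ in range(max_iter):
        progress = False
        for c in range(len(y)):
            if abs(y[c]) < eps:           # zero check: neighbors are zero
                for v in list(nbrs[c]):
                    verify(v, 0.0)
                    progress = True
            elif len(nbrs[c]) == 1:       # degree-one check: copy its value
                (v,) = nbrs[c]
                verify(v, y[c])
                progress = True
        if not progress:
            break
    return x
```

On a tiny hand-built instance, a single zero-valued check verifies three variables to zero, after which a degree-one check recovers the lone support element.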
Asymptotic Analysis Framework {#analysis}
=============================
To describe the analysis framework, we assume a $(d_v,d_c)$ graph. Let $\mathcal{V}^* \subset \mathcal{V}$ be a subset of variable nodes and $\mathcal{G}^*$ be the left-regular graph induced by the set $\mathcal{V}^*$. We denote by $\mathcal{N}_i$, $1\leq i\leq d_c$, the set of check nodes with degree $i$ in $\mathcal{G}^*$. This partitioning is depicted in Figure \[N1\] for $\mathcal{V}^* \equiv \mathcal{K}$. For mathematical convenience we let $\mathcal{N}_0$ be the set of check nodes that have been removed from the induced subgraph. Clearly, $\mathcal{C} = \bigcup_{i=0}^{d_c}\mathcal{N}_i$. Further, let $\mathcal{X}_i \subset \mathcal{V}^*$, $0\leq i\leq d_v$, be the set of variable nodes that have $i$ edges connected to the set $\mathcal{N}_1$. Figure \[K1\] shows the partitioning of $\mathcal{V}^* \equiv \mathcal{K}$ into the sets $\mathcal{X}_i$.
At this stage, we model the verification process of Genie, LM, SBB and XH algorithms using the sets $\mathcal{X}_i$’s in the asymptotic regime where $n \rightarrow \infty$. This verification model is presented and proved in Theorem \[Model\_Iter\]. In this theorem, $\mathcal{K}' \triangleq \mathcal{K}\cup\mathcal{K}_\Delta$, where $\mathcal{K}_\Delta$ is the set of zero-valued variable nodes, in which all variable nodes have $d_v$ edges connected to the set $\mathcal{N}_S \triangleq \bigcup_{i=1}^{d_c} \mathcal{N}_i$.
![Partitioning check nodes into the sets $\mathcal{N}_i$ based on their degrees in the subgraph induced by $\mathcal{V}^* \equiv \mathcal{K}$.[]{data-label="N1"}](N1){height="100"}
![Partitioning variable nodes in $\mathcal{K}$ based on the number of their neighbors in $\mathcal{N}_1$.[]{data-label="K1"}](K1){height="100"}
\[Model\_Iter\] In each iteration, a variable node is verified asymptotically almost surely if and only if it belongs to the set $\bigcup_{i=\beta}^{d_v}\mathcal{X}_i$, where $\beta$ equals 1, 2, $\lceil d_v/2 \rceil$ for the Genie, SBB and XH, respectively. In these cases $\mathcal{V}^* \equiv \mathcal{K}$.\
In each iteration of the LM algorithm a variable node is verified asymptotically almost surely if and only if it belongs to the set $\bigcup_{i=1}^{d_v}\mathcal{X}_i$. In this case $\mathcal{V}^* \equiv \mathcal{K}'$.
We first prove the theorem for the SBB algorithm. The proof can be used also for the XH algorithm with no major changes. A variable node $v$ is resolved in the SBB algorithm if and only if it is the only unresolved variable node attached to at least two check nodes $c_{i_1}$ and $c_{i_2}$ with the same value. If $c_{i_1},c_{i_2}\in\mathcal{N}_1$, then by definition $v\in\left\{\mathcal{X}_2,\mathcal{X}_3,\cdots,\mathcal{X}_{d_v}\right\}$.\
To prove the converse, assume for simplicity that at iteration $\ell$ we have $v\in\mathcal{X}_2$. The only way that this variable node is not resolved in this iteration is that it shares its two singly-connected check nodes with at least one zero-valued variable node $v'$. This, however, means that the two variable nodes $v,v'$ form a cycle of length 4. According to [@MWW04], a random regular graph has a fixed number of short cycles regardless of its size. Thus, as the number of variable nodes tends to infinity, the probability that two variable nodes $v,v'$ form a cycle of length 4 goes to zero. In other words, a variable node $v\in\left\{\mathcal{X}_2,\mathcal{X}_3,\cdots,\mathcal{X}_{d_v}\right\}$ is resolved by the SBB algorithm asymptotically almost surely. This completes the proof.
In the LM algorithm, after removing variable nodes with at least one check node with the value equal to zero, the remaining variable nodes in the graph are in the set $\mathcal{K}'$. This justifies the use of $\mathcal{K}'$. The rest of the proof follows as before.
\[cor\] For a ($d_v,d_c$) graph, the success threshold is the highest for Genie, followed by SBB and lastly XH. This is because the number of $\mathcal{X}_i$ sets contributing to the verification of variable nodes decreases in the same order.
Corollary \[cor\] is also verified by the simulations in section \[simulation\]. Theorem \[Model\_Iter\] allows us to model the sensing graph and its evolution under the four algorithms by a graph induced by the support set $\mathcal{K}$ or $\mathcal{K}'$, along with the evolution of the sets $\mathcal{X}_i$ and $\mathcal{N}_j$, in the asymptotic regime. To formulate this evolution, we denote by $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_i}$ ($\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}$) the probability that a check node (variable node) belongs to the set $\mathcal{N}_i$ ($\mathcal{X}_i$) at iteration $\ell$; the superscript $(\ell)$ denotes the iteration number. We also denote by $\alpha^{(\ell)}$ the probability that a variable node belongs to the unverified set $\mathcal{K}^{(\ell)}$. An iteration $\ell \geq 1$ starts by knowing the probabilities $\alpha^{(\ell)}$, $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_i}$ and $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}$, continues with the calculation of $\alpha^{(\ell+1)}$, and ends with the calculation of $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{N}_i}$ and $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_i}$.
Using this analysis approach we are able to track the evolution of $\alpha^{(\ell)}$ over iterations for a given initial density factor $\alpha^{(0)}$. The analysis proceeds until either the probability $\alpha^{(\ell)}$ decreases monotonically to zero as the number of iterations increases, or it is bounded away from zero for any number of iterations. In the first case the algorithm succeeds in recovering the original signal entirely, while in the second case it fails. By examining different values of $\alpha^{(0)}$, the success threshold, defined as the supremum of the values of $\alpha^{(0)}$ for which the signal can be fully recovered as $n\rightarrow\infty$ and $\ell\rightarrow\infty$, can be determined for different $(d_v,d_c)$ pairs.
In what follows, we show the algorithm to find the update rules for different probabilities. The formulas are calculated using combinatorial enumeration and probabilistic arguments. The proofs can be found in Appendix \[Details\].
1. Based on the set of probabilities $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}$, find the probability $\raisebox{2pt}{$p$}^{(\ell)}_{r}$, that a variable node is verified in iteration $\ell+1$ from $\raisebox{2pt}{$p$}^{(\ell)}_{r} = \sum_{i=\beta}^{d_v}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}$. The value of $\beta$ can be 1, or 2, or $\lceil d_v/2 \rceil$ as in Theorem \[Model\_Iter\]. The probability $\alpha^{(\ell+1)}$ then follows from $\alpha^{(\ell+1)} = \alpha^{(\ell)}(1-\raisebox{2pt}{$p$}^{(\ell)}_r)$.
2. Find the set of probabilities $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{N}_j},j=0,\cdots,d_c$ from $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{N}_j} = \sum_{i=j}^{d_c}{p_{\mathcal{N}_{i}}^{(\ell)}p_{\mathcal{N}_{ij}}^{(\ell)}}$, where: $$\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_{10}} = \displaystyle\frac{\alpha^{(\ell)}d_c\displaystyle\sum_{i=\beta}^{d_v}{i\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}}{d_v \raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_1}},\hspace{1cm}\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_{11}} = 1-\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_{10}}, \hspace{1cm}
\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_{ij}} = {i\choose{i-j}} \left(A\right)^{i-j}\left(1-A\right)^j,\hspace{10pt} i\geq2.$$
3. Find the set of probabilities $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_i},i=0,\cdots,d_v$ from $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_i} = \frac{1}{N}\sum_{j=0}^{\min\{i,\beta-1\}}{p_{\mathcal{X}_j}^{(\ell)}p_{\mathcal{X}_{ji}}^{(\ell)}}$, where: $$\raisebox{2pt}{$p$}_{\mathcal{X}_{ij}}^{(\ell)} = {d_v-i\choose j-i}\left(B\right)^{j-i}\left(1-B\right)^{d_v-j}, \hspace{10pt} N = 1-p_r^{(\ell)}.$$ The quantities $A$ and $B$ are given by: $$\hspace{-10pt}
A =
\frac{d_v \raisebox{2pt}{$p$}^{(\ell)}_{r} - \displaystyle\sum_{i=\beta}^{d_v}{i\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}}{d_v\left(1 - \frac{\displaystyle\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_1}}{\displaystyle\alpha^{(\ell)} d_c}\right)},
\hspace{10pt}
B = \frac{\displaystyle\sum_{j=2}^{d_c}{\raisebox{2pt}{$p$}_{\mathcal{N}_j}^{(\ell)}\raisebox{2pt}{$p$}_{\mathcal{N}_{j1}}^{(\ell)}}}{\displaystyle\sum_{j=2}^{d_c}{\raisebox{2pt}{$p$}_{\mathcal{N}_j}^{(\ell)}\raisebox{2pt}{$p$}_{\mathcal{N}_{j1}}^{(\ell)}}+\displaystyle\sum_{i=2}^{d_c}{i\raisebox{2pt}{$p$}_{\mathcal{N}_i}^{(\ell+1)}}}.$$
The initial probabilities $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i}$ and $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{X}_i}$ for Genie, SBB and XH are as follows. These probabilities for LM are more involved and can be found in Appendix \[Details\]. $$\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i} = {d_c\choose i} \left(\alpha^{(0)}\right)^i \left(1-\alpha^{(0)}\right)^{d_c-i},\hspace{20pt}i=0,\cdots,d_c.$$ $$\raisebox{2pt}{$p$}^{(0)}_{\mathcal{X}_i} = {d_v\choose i}\left(p^{{(0)}}\right)^i\left(1-p^{(0)}\right)^{d_v-i},\hspace{20pt}i=0,\cdots,d_v,$$ where, $p^{{(0)}} = \raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_1}/\alpha^{(0)}d_c$, and $\alpha^{(0)} = \alpha$ (density factor). The number of update rules in each iteration is almost the same as in [@ZP07J]. Therefore, the overall computational complexity of the analysis is linear in $\ell$. The updates, however, involve only simple algebraic calculations, as opposed to the differential equations that must be solved in [@ZP07J].
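The full recursion of steps 1–3, together with the initial probabilities above, can be sketched as follows. This is a minimal transcription with our own function name and numerical guards (clamping and an early exit), not the authors' code:

```python
from math import comb

def density_evolution(alpha0, dv, dc, beta, max_iter=200, tol=1e-6):
    # Initialization (pre-phase for Genie/SBB/XH): binomial distributions.
    pN = [comb(dc, i) * alpha0**i * (1 - alpha0)**(dc - i)
          for i in range(dc + 1)]
    p0 = pN[1] / (alpha0 * dc)          # Pr[edge from K ends in N_1]
    pX = [comb(dv, i) * p0**i * (1 - p0)**(dv - i) for i in range(dv + 1)]
    alpha = alpha0
    for _ in range(max_iter):
        p_r = sum(pX[beta:])            # step 1: Pr[variable node verified]
        alpha_next = alpha * (1 - p_r)
        if alpha_next < tol:            # fully recovered
            return alpha_next
        s = sum(i * pX[i] for i in range(beta, dv + 1))
        p_ell = pN[1] / (alpha * dc)
        A = (dv * p_r - s) / (dv * (1 - p_ell))          # p_{d>1}
        A = min(max(A, 0.0), 1.0)                        # numerical guard
        pN10 = s / (dv * p_ell) if p_ell > 0 else 0.0    # p_{d=1}
        # Step 2, check side: pN_next[j] = sum_i pN[i] * p_{N_ij}.
        pN_next = [0.0] * (dc + 1)
        pN_next[0] = pN[0] + pN[1] * pN10
        pN_next[1] = pN[1] * (1 - pN10)
        for i in range(2, dc + 1):
            for j in range(i + 1):
                pN_next[j] += pN[i] * comb(i, i - j) * A**(i - j) * (1 - A)**j
        # Pr[a free edge ends in a newly created degree-one check]: B.
        pN1p = sum(pN[j] * j * A**(j - 1) * (1 - A) for j in range(2, dc + 1))
        denom = pN1p + sum(i * pN_next[i] for i in range(2, dc + 1))
        B = min(max(pN1p / denom, 0.0), 1.0) if denom > 0 else 0.0
        # Step 3, variable side, renormalized by the unverified fraction.
        pX_next = [0.0] * (dv + 1)
        for i in range(beta):
            for j in range(i, dv + 1):
                pX_next[j] += (pX[i] * comb(dv - i, j - i)
                               * B**(j - i) * (1 - B)**(dv - j))
        pX = [p / (1 - p_r) for p in pX_next]
        pN, alpha = pN_next, alpha_next
    return alpha
```

For a $(5,6)$ graph with $\beta = 2$ (SBB), the recursion should drive $\alpha^{(\ell)}$ to zero for $\alpha^{(0)}$ below the reported threshold of $0.3271$ and stall at a positive fixed point above it.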
Generalization of the Framework {#generalization}
===============================
The extra step in the original LM and SBB algorithms is as follows: at each iteration, look for check nodes whose value equals zero. If such a check node is found, verify its neighboring variable nodes to the value zero.
To analyze the original algorithms, the set of variable nodes is divided into two sets $\mathcal{K}$ and $\mathcal{K}'$ as in the simplified LM. The set of check nodes is then categorized into sets $\mathcal{N}_{ij}$, where $i$ and $j$ indicate the number of edges between the check node and the sets $\mathcal{K}$ and $\mathcal{K}'$, respectively. The recursive formulas for the new setup can be found using the same methodology as before.
Simulation Results {#simulation}
==================
In this section, we present simulation results obtained by running the recovery algorithms over random regular bipartite graphs to recover sparse signals of finite length $n$. We also present analytical results obtained by running the mathematical analysis described in Section \[analysis\] for the asymptotic regime of $n\rightarrow\infty$. The comparison of the results shows that there is a good agreement between simulation and analytical results for moderately large $n = 10^5$.
In all simulations, signal elements (variables) are drawn from a Gaussian distribution. The regular bipartite graphs are constructed randomly with no parallel edges and all the edge weights equal to one. Each simulation point is generated by averaging over 1000 random instances of the input signal.
For the analytical results, we consider the recovery algorithm successful if $\alpha^{(\ell)} < 10^{-7}$. To calculate the success threshold, a binary search is performed until the separation between start and end of the search region is less than $10^{-4}$.
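The threshold search described above is a bisection on $\alpha^{(0)}$, and can be sketched generically as follows. The predicate `recovers` (our own abstraction) stands for a full run of the analysis returning whether $\alpha^{(\ell)}$ vanished:

```python
def success_threshold(recovers, lo=0.0, hi=1.0, eps=1e-4):
    # Bisect on the initial density factor alpha^(0): `recovers` is
    # assumed monotone (True below the threshold, False above it);
    # stop when the bracketing interval is narrower than eps.
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if recovers(mid) else (lo, mid)
    return 0.5 * (lo + hi)
```

With a toy step predicate the bisection recovers the step location to within the stopping precision.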
As the first experiment, we apply the XH, SBB and LM algorithms to four randomly constructed $(5,6)$ regular graphs with $n=\{3, 15, 100, 1000\}\times 10^3$. The success rate of the algorithms vs. the initial density factor $\alpha^{(0)}$ is shown in Figure \[VarLength\]. From the figure we can see that, for all algorithms, increasing $n$ sharpens the transition part of the curves, such that the curves for $n=10^6$ practically look like step functions. In the figure we have also indicated, by arrows, the success thresholds of the algorithms for $(5,6)$ graphs obtained from the proposed analysis. As can be seen, the thresholds match the waterfall region of the simulation curves very well.
In Table \[success\_threshold\_1\], we have listed the analytical success thresholds of the iterative recovery algorithms for graphs with different $d_v$ and $d_c$. The results for the XH and SBB algorithms on $(3,4)$ graphs are missing as these algorithms perform poorly on such graphs. As expected, for every graph, the Genie algorithm has the best performance, followed by the SBB, LM and XH algorithms, respectively. Careful inspection of the results in Table \[success\_threshold\_1\] indicates that the oversampling ratio $r_o = d_v/(\alpha d_c)$ improves consistently as both $d_v$ and $d_c$ decrease. In fact, among the results presented in Table \[success\_threshold\_1\], the application of the Genie and LM to $(3,4)$ graphs results in the lowest oversampling ratios of $\approx 1.16$ and $\approx 2.51$, respectively. Note that the success threshold of the Genie over regular graphs is still far from the optimal achievable success threshold $d_v/d_c$ proved in [@WV09].
To further investigate the degree of agreement between our asymptotic theoretical analysis and finite-length simulation results, we have presented in Figure \[G4G6Evolution100k\] the evolution of density factor $\alpha^{(\ell)}$ with iterations $\ell$ for the four algorithms over a $(5,6)$ graph with $n=10^5$. For each algorithm, two values of $\alpha^{(0)}$ are selected: one above and one below the success threshold presented in Table \[success\_threshold\_1\]. The theoretical results are shown by solid lines while simulations are presented with dotted lines. As one can see, the two sets of results are in close agreement particularly for the cases where $\alpha^{(0)}$ is above the threshold and for smaller values of $\ell$.
![Success Ratio of Algorithms XH, LM and SBB vs. $\alpha^{(0)}$ for $(5,6)$ graphs with $n=3K, 15K, 100K \text{ and } 1000K$. Analytical thresholds are shown by arrows.[]{data-label="VarLength"}](VarLength){height="280"}
![Evolution of $\alpha^{(\ell)}$ vs. iteration number $\ell$ for the four recovery algorithms over a $(5,6)$ graph with $n=100K$.[]{data-label="G4G6Evolution100k"}](G4G6Evolution100k){height="230"}
$(d_v,d_c)$ $(3,4)$ $(5,6)$ $(5,7)$ $(5,8)$ $(7,8)$
------------- --------- --------- --------- --------- ---------
XH - 0.1846 0.1552 0.1339 0.1435
SBB - 0.3271 0.2783 0.2421 0.3057
LM 0.2993 0.2541 0.2011 0.1646 0.2127
Genie 0.6474 0.5509 0.4786 0.4224 0.4708
: Success Thresholds for different graphs and algorithms
\[success\_threshold\_1\]
Detailed Description of the Analysis Framework {#Details}
==============================================
***B.1. General Setup***\
To derive the formulation for the general framework, we assume that we are at iteration $\ell$. The state of the system at this iteration is fully characterized by the set $\mathcal{K}^{(\ell)}$ and probabilities $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}$, $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_i}$, and $\alpha^{(\ell)}$.\
The probabilities $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_i}$ denote the probability of a check node having $i$ connected edges to the set $\mathcal{K}^{(\ell)}$.\
The probabilities $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}$ denote the probability of a variable node in the set $\mathcal{K}^{(\ell)}$ having $i$ connected edges to the set $\mathcal{N}_1^{(\ell)}$.\
The probability $\alpha^{(\ell)}$ denotes the probability of a variable node belonging to the set $\mathcal{K}^{(\ell)}$.\
Throughout the analysis, the head and tail of an edge $e$ will be denoted by $h_e$ and $t_e$, respectively. As the direction of edges is of no consequence to our analysis, without loss of generality, we assign the head to the variable side and the tail to the check side.\
***B.2. Derivation of Formulas***\
To find the probability that a variable node is resolved, we first need to characterize the set of variable nodes resolved by each algorithm. A careful inspection of the iterative algorithms under consideration, combined with Theorem \[Model\_Iter\] of section \[analysis\], shows that in general the variable nodes in the set $\mathcal{R}_\mathcal{X} \triangleq \left\{\mathcal{X}_\beta\cup\mathcal{X}_{\beta+1}\cup\cdots\cup\mathcal{X}_{d_v}\right\}$ are recovered, while those in the set $\mathcal{R}^{c}_\mathcal{X} \triangleq \left\{\mathcal{X}_0\cup\mathcal{X}_1\cup\cdots\cup\mathcal{X}_{\beta-1}\right\}$ are left intact, where the value of $\beta$ depends on the algorithm.\
Thus, the probability $\raisebox{2pt}{$p$}^{(\ell)}_{r}$ of a variable node in $\mathcal{K}^{(\ell)}$ being recovered is: $$\raisebox{2pt}{$p$}^{(\ell)}_{r} = \sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}.
\label{eq:v}$$ Therefore, according to the total probability theorem, the probability of a variable node $v$ remaining unresolved, i.e., $v\in\mathcal{K}^{(\ell+1)}$, is: $$\alpha^{(\ell+1)} = \alpha^{(\ell)}\left(1-\raisebox{2pt}{$p$}^{(\ell)}_{r}\right).$$ When a variable node is recovered, its $d_v$ edges along with the variable node itself are removed from the subgraph induced by $\mathcal{K}^{(\ell)}$ and therefore, check nodes incident to these removed edges would face a reduction in their degree. We denote by $p^{(\ell)}_{\mathcal{N}_{ij}}$ the probability that a check node $c$ turns from degree $i$ in iteration $\ell$ (i.e. $c\in\mathcal{N}^{(\ell)}_i$) to degree $j\leq i$ in iteration $\ell+1$ (i.e. $c\in\mathcal{N}^{(\ell+1)}_j$). This happens if out of $i$ edges emanating from $c$ and incident to the set of unresolved variable nodes $\mathcal{K}^{(\ell)}$, $i-j$ of them are removed.\
On the other side of the graph, when a variable node $v\in\mathcal{X}_i$ is recovered (i.e., $\mathcal{X}_i\subset\mathcal{R}_\mathcal{X}$), by definition, out of $d_v$ edges emanating from $v$, $i$ are connected to the set $\mathcal{N}_1$ and $d_v-i$ are connected to the set $\mathcal{R}_\mathcal{N}\triangleq\left\{\mathcal{N}_2\cup\mathcal{N}_3\cup\cdots\cup\mathcal{N}_{d_c}\right\}$.\
In the asymptotic case, as $n$ grows large, we assume that the graph has a random structure in every iteration. Therefore, for each recovered variable node $v$, the set of $i$ and $d_v-i$ removed edges are distributed uniformly with respect to the check nodes in $\mathcal{N}_1$ and $\mathcal{R}_\mathcal{N}$, respectively. As we have two sets $\mathcal{N}_1$ and $\mathcal{R}_\mathcal{N}$ to deal with, we differentiate between $p^{(\ell)}_{\mathcal{N}_{10}}$ and $p^{(\ell)}_{\mathcal{N}_{ij}}$ ($i>1$). Once the probabilities $p^{(\ell)}_{\mathcal{N}_{10}}$ and $p^{(\ell)}_{\mathcal{N}_{ij}}$ are found, the new check node degree distribution $p_{\mathcal{N}_j}^{(\ell+1)}$ with respect to the subgraph induced by $\mathcal{K}^{(\ell+1)}$ can then be derived using the total probability law: $$p_{\mathcal{N}_j}^{(\ell+1)} = \sum_{i=j}^{d_c}{p_{\mathcal{N}_{i}}^{(\ell)}p_{\mathcal{N}_{ij}}^{(\ell)}},\hspace{20pt}j=0,\cdots,d_c.$$ To find the probabilities $p^{(\ell)}_{\mathcal{N}_{10}}$ and $p^{(\ell)}_{\mathcal{N}_{ij}}$, $i>1$, we denote by $\raisebox{2pt}{$p$}_{d=1}$ and $\raisebox{2pt}{$p$}_{d>1}$ the conditional probabilities that an edge in the induced subgraph is removed given that it is incident to a check node in the set $\mathcal{N}_1$ and $\mathcal{R}_\mathcal{N}$, respectively. It then follows that: $$p^{(\ell)}_{\mathcal{N}_{10}} = {1\choose{1}} \left(\raisebox{2pt}{$p$}^{(\ell)}_{d=1}\right)^{1}\left(1-\raisebox{2pt}{$p$}^{(\ell)}_{d=1}\right)^0 = \raisebox{2pt}{$p$}^{(\ell)}_{d=1},\hspace{1cm}p^{(\ell)}_{\mathcal{N}_{11}} = 1-p^{(\ell)}_{\mathcal{N}_{10}}.$$ and $$p^{(\ell)}_{\mathcal{N}_{ij}} = {i\choose{i-j}} \left(p^{(\ell)}_{d>1}\right)^{i-j}\left(1-p^{(\ell)}_{d>1}\right)^j,\hspace{20pt} i=2,\cdots,d_c,\hspace{20pt} j=0,\cdots,i.$$ The probability $\raisebox{2pt}{$p$}^{(\ell)}_{d=1}$ can then be calculated as follows: $$\begin{aligned}
\raisebox{2pt}{$p$}^{(\ell)}_{d=1} &= \Pr[h_e\in\mathcal{R}^{(\ell)}_\mathcal{X}|t_e\in\mathcal{N}^{(\ell)}_1,h_e\in\mathcal{K}^{(\ell)}]
= \displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\Pr[h_e\in\mathcal{X}^{(\ell)}_i|t_e\in\mathcal{N}^{(\ell)}_1,h_e\in\mathcal{K}^{(\ell)}]},\\
&= \displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{\Pr[t_e\in\mathcal{N}^{(\ell)}_1|h_e\in\mathcal{X}^{(\ell)}_i,h_e\in\mathcal{K}^{(\ell)}] \Pr[h_e\in\mathcal{X}^{(\ell)}_i|h_e\in\mathcal{K}^{(\ell)}]}{\Pr[t_e\in\mathcal{N}^{(\ell)}_1|h_e\in\mathcal{K}^{(\ell)}]}}
= \displaystyle\sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{\frac{i}{d_v}\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}{p^{(\ell)}}} = \displaystyle\frac{\displaystyle\sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{i\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}}{d_vp^{(\ell)}}.\end{aligned}$$ where, $p^{(\ell)}$ is the probability of an edge $e$ being adjacent to a check node in $\mathcal{N}^{(\ell)}_1$ conditioned on the fact that it is adjacent to a variable node in $\mathcal{K}^{(\ell)}$ (refer to Figure \[K1\]). By using Bayes’ rule, this probability is calculated as: $$p^{(\ell)} = \Pr[t_e\in \mathcal{N}^{(\ell)}_1|h_e\in\mathcal{K}^{(\ell)}] = \frac{\Pr[h_e\in\mathcal{K}^{(\ell)}|t_e\in \mathcal{N}^{(\ell)}_1]\Pr[t_e\in \mathcal{N}^{(\ell)}_1]}{\Pr[h_e\in\mathcal{K}^{(\ell)}]} = \frac{\frac{1}{d_c}\times \raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_1}}{\alpha^{(\ell)}} = \frac{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_1}}{\alpha^{(\ell)} d_c}.
\label{eq:p}$$ The probability $\raisebox{2pt}{$p$}^{(\ell)}_{d>1}$ can be computed following similar steps: $$\begin{aligned}
\raisebox{2pt}{$p$}^{(\ell)}_{d>1} &= \Pr[h_e\in\mathcal{R}^{(\ell)}_\mathcal{X}|t_e\in\mathcal{R}^{(\ell)}_\mathcal{N},h_e\in\mathcal{K}^{(\ell)}]
= \displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\Pr[h_e\in\mathcal{X}^{(\ell)}_i|t_e\in\mathcal{R}^{(\ell)}_\mathcal{N},h_e\in\mathcal{K}^{(\ell)}]}\\
&= \displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{\Pr[t_e\in\mathcal{R}^{(\ell)}_\mathcal{N}|h_e\in\mathcal{X}^{(\ell)}_i,h_e\in\mathcal{K}^{(\ell)}] \Pr[h_e\in\mathcal{X}^{(\ell)}_i|h_e\in\mathcal{K}^{(\ell)}]}{\Pr[t_e\in\mathcal{R}^{(\ell)}_\mathcal{N}|h_e\in\mathcal{K}^{(\ell)}]}}\\
&= \displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{\left(1-\Pr[t_e\in\mathcal{N}^{(\ell)}_1|h_e\in\mathcal{X}^{(\ell)}_i,h_e\in\mathcal{K}^{(\ell)}]\right) \Pr[h_e\in\mathcal{X}^{(\ell)}_i|h_e\in\mathcal{K}^{(\ell)}]}{\left(1-\Pr[t_e\in\mathcal{N}^{(\ell)}_1|h_e\in\mathcal{K}^{(\ell)}]\right)}}\\
&= \displaystyle\sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{\left(1-\frac{i}{d_v}\right)\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}{1-p^{(\ell)}}} = \displaystyle\frac{\displaystyle\sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}-\displaystyle\sum_{ \mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{\frac{i}{d_v}\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}}{1-p^{(\ell)}}\\
&=
\frac{d_v \raisebox{2pt}{$p$}^{(\ell)}_{r}- \displaystyle\sum_{\mathcal{X}_i\in\mathcal{R}_{\mathcal{X}}}{i\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i}}}{d_v\left(1 - p^{(\ell)}\right)}.\end{aligned}$$ Given $p^{(\ell+1)}_{\mathcal{N}_i}$, the updated set $\mathcal{K}^{(\ell+1)}$ should be re-partitioned into the sets $\mathcal{X}^{(\ell+1)}_i$. By definition, a variable node $v$ in $\mathcal{X}_i$ has $i$ connections to $\mathcal{N}_1$ and $d_v-i$ connections to the set $\mathcal{R}_\mathcal{N}$. Therefore, if one of the adjacent check nodes of $v$ in $\mathcal{R}^{(\ell)}_\mathcal{N}$ turns to a check node in $\mathcal{N}^{(\ell+1)}_1$, $v$ will move from $\mathcal{X}^{(\ell)}_{i}$ to $\mathcal{X}^{(\ell+1)}_{i+1}$. This is shown in Figure \[rec\].\
![A recovered variable node turning a neighboring check node of an unverified variable node $v$ into a degree-one check node, moving $v$ from $\mathcal{X}^{(\ell)}_{i}$ to $\mathcal{X}^{(\ell+1)}_{i+1}$.[]{data-label="rec"}](Drawing3){height="100"}
![Configuration of $\mathcal{N}_1$ after recovery.[]{data-label="x"}](x){height="80"}
We denote by $\mathcal{N}^{(\ell)+}_1$ the set of check nodes that move from $\mathcal{R}^{(\ell)}_\mathcal{N}$ to $\mathcal{N}^{(\ell+1)}_1$. The configuration of $\mathcal{N}^{(\ell+1)}_1$, $\mathcal{N}^{(\ell)}_{11}$ and $\mathcal{N}^{(\ell)+}_1$ is depicted in Figure \[x\].\
We also refer to the set of edges that have their tail in the set $\mathcal{R}^{(\ell)}_\mathcal{N}$ as *free edges*. Due to the random structure of the graph assumed in the asymptotic case, edges connected to the set $\mathcal{N}^{+}_1$ are uniformly distributed with respect to free edges.\
It can be seen that the probability $\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_{ij}}$ defined as the probability of a variable node $v\in \mathcal{X}^{(\ell)}_i$ turning to $v\in\mathcal{X}^{(\ell+1)}_j$ is calculated by: $$\label{pxij}
p_{\mathcal{X}_{ij}}^{(\ell)} = \Pr[v\in\mathcal{X}^{(\ell+1)}_j|v\in\mathcal{X}^{(\ell)}_i,v\in\mathcal{K}^{(\ell+1)}] = {d_v-i\choose j-i}\left(\raisebox{2pt}{$p$}^{(\ell)}_{x}\right)^{j-i}\left(1-\raisebox{2pt}{$p$}^{(\ell)}_{x}\right)^{d_v-j}, \hspace{20pt} j=i,\cdots,d_v,\hspace{10pt} i=0,\cdots,\beta-1.$$ where $\beta$ is the algorithm dependent parameter defined in conjunction with the set $\mathcal{R}_\mathcal{X}$, and $\raisebox{2pt}{$p$}^{(\ell)}_{x}$ is defined as the probability that a free edge corresponds to the set $\mathcal{N}^{(\ell)+}_1$. Note that such an edge will have its head in the set $\mathcal{R}^c_\mathcal{X}$ because the set $\mathcal{R}_\mathcal{X}$ is completely resolved.\
Thus, based on the total probability law we have: $$\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_j}= \frac{\displaystyle\sum_{i=0}^{\min\{j,\beta-1\}}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i} \raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_{ij}}}}{1-\raisebox{2pt}{$p$}^{(\ell)}_{r}}, \hspace{20pt} j=0,\cdots,d_v.
\label{p_x_1}$$ The denominator is just a normalization factor making $\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_j}$ a valid probability measure. It is derived as: $$\displaystyle\sum_{j=0}^{d_v}{\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{X}_j}}=\sum_{ \mathcal{X}_i\in\mathcal{R}^c_{\mathcal{X}}}{\sum_{j=i}^{d_v}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_i} \raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{X}_{ij}}}} = \sum_{\mathcal{X}_i\in\mathcal{R}^c_{\mathcal{X}}}{p_{\mathcal{X}_i}^{(\ell)}} = 1-\raisebox{2pt}{$p$}^{(\ell)}_{r}.$$ The probability $\raisebox{2pt}{$p$}^{(\ell)}_{x}$ is calculated as follows: [ $$\begin{aligned}
\raisebox{3pt}{$p$}^{(\ell)}_x &=& \Pr[t_e\in\mathcal{N}^{(\ell)+}_1|h_e\in\mathcal{K}^{(\ell+1)},t_e\notin\mathcal{N}_{11}^{(\ell)}],\nonumber\\
&=& \frac{\raisebox{2pt}{$p$}^{(\ell)+}_{\mathcal{N}_1}}{\raisebox{2pt}{$p$}^{(\ell)+}_{\mathcal{N}_1} + \displaystyle\sum_{i=2}^{d_c}{i\raisebox{2pt}{$p$}^{(\ell+1)}_{\mathcal{N}_i}}}.
\label{p_x}\end{aligned}$$ ]{} where, $$\raisebox{2pt}{$p$}^{(\ell)+}_{\mathcal{N}_1} = \sum_{j=2}^{d_c}{\raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_j} \raisebox{2pt}{$p$}^{(\ell)}_{\mathcal{N}_{j1}}}.$$ This probability will be inserted in (\[pxij\]) for the calculation of $\raisebox{3pt}{$p$}^{(\ell)}_{\mathcal{X}_{ij}}$.\
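As an illustration, the transition probability (\[pxij\]) and the normalized update (\[p\_x\_1\]) can be sketched in a few lines of Python. The values of $d_v$, $\beta$, $p^{(\ell)}_x$ and the input distribution below are hypothetical placeholders, not taken from the analysis:

```python
from math import comb

def p_xij(i, j, d_v, p_x):
    # Eq. (pxij): j - i of the d_v - i free edges of a variable node in
    # X_i connect to N_1^{(l)+}, each independently with probability p_x.
    return comb(d_v - i, j - i) * p_x ** (j - i) * (1 - p_x) ** (d_v - j)

def update_p_X(p_X, d_v, beta, p_x):
    # Eq. (p_x_1): propagate the unresolved sets X_0 .. X_{beta-1} and
    # renormalize by 1 - p_r, where p_r is the resolved probability mass.
    p_r = sum(p_X[beta:])
    new = [0.0] * (d_v + 1)
    for j in range(d_v + 1):
        for i in range(min(j, beta - 1) + 1):
            new[j] += p_X[i] * p_xij(i, j, d_v, p_x)
    return [q / (1.0 - p_r) for q in new]

# Hypothetical example: d_v = 4, beta = 2, p_x = 0.3.
p_X_next = update_p_X([0.2, 0.3, 0.25, 0.15, 0.1], 4, 2, 0.3)
```

The loop bounds mirror those in (\[p\_x\_1\]); by construction the output sums to one, as the normalization argument above requires.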
***B.3. Pre-phase Iteration for Genie, XH and SBB Algorithms***\
The initial density factor is denoted by $\alpha^{(0)}$. In a random graph, an edge emanates from a variable node in the set $\mathcal{K}^{(0)}$ with probability $\alpha^{(0)}$. Therefore, $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i}$, the probability of a check node being in the set $\mathcal{N}^{(0)}_i$, is given by the following binomial distribution. $$\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i} = {d_c\choose i} \left(\alpha^{(0)}\right)^i \left(1-\alpha^{(0)}\right)^{d_c-i},\hspace{20pt}i=0,\cdots,d_c.
\label{p_n}$$ To find the probability $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{X}_i}$, we need the probability $p^{(0)}$ defined and calculated in equation (\[eq:p\]). Knowing $p^{(0)}$ the probability $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{X}_i}$ will follow a binomial distribution as follows: $$\raisebox{2pt}{$p$}^{(0)}_{\mathcal{X}_i} = {d_v\choose i}\left(p^{{(0)}}\right)^i\left(1-p^{(0)}\right)^{d_v-i},\hspace{20pt}i=0,\cdots,d_v.
\label{eq:x}$$
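For concreteness, the two initial binomial distributions (\[p\_n\]) and (\[eq:x\]) can be computed as below; the degrees $d_c$, $d_v$ and the values of $\alpha^{(0)}$ and $p^{(0)}$ are hypothetical placeholders:

```python
from math import comb

def binomial_pmf(n, p):
    # Probability of i successes out of n independent trials, i = 0..n.
    return [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]

alpha_0 = 0.1                     # initial density factor (placeholder)
p_N_0 = binomial_pmf(6, alpha_0)  # eq. (p_n) with d_c = 6
p_0 = 0.25                        # p^{(0)} from eq. (eq:p) (placeholder)
p_X_0 = binomial_pmf(4, p_0)      # eq. (eq:x) with d_v = 4
```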
***B.4. Pre-phase Iterations for LM Algorithm***\
In this section we drop all the superscripts representing the iteration number for ease of notation. They will be reintroduced where there is potential ambiguity.\
Starting from the initial density factor $\alpha^{(0)}$, the probability $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i}$ of the check degree distribution in the subgraph induced by the set $\mathcal{K}^{(0)}$ can be calculated from (\[p\_n\]).\
In the first iteration of the LM algorithm, the variable nodes adjacent to at least one zero-valued check node are set to zero. The set of remaining variable nodes is called the *potential support set* and is denoted by $\mathcal{K}'$. This set is a combination of the real support set $\mathcal{K}$ and an additional set $\mathcal{K}_\Delta$: the set of all zero-valued variable nodes whose $d_v$ connections all terminate in the nonzero check nodes $\mathcal{N}_{\neq 0}$. The probability $\raisebox{2pt}{$p$}_{\mathcal{K}_\Delta}$ that a variable node belongs to the set $\mathcal{K}_\Delta$ is calculated as: $$\raisebox{3pt}{$p$}_{\mathcal{K}_\Delta} = \raisebox{2pt}{$p$}^{d_v}_{\Delta}.$$ where, $\raisebox{2pt}{$p$}_{\Delta}$ is the probability that an edge from a zero-valued variable node terminates in $\mathcal{N}_{\neq 0}$. This probability is: $$\begin{aligned}
\raisebox{2pt}{$p$}_{\Delta} &= \Pr[t_e\in\mathcal{N}_{\neq 0}|h_e\notin\mathcal{K}] = 1 - \Pr[t_e\in\mathcal{N}_0|h_e\notin\mathcal{K}],\\
&= 1 - \frac{\Pr[h_e\notin\mathcal{K}|t_e\in\mathcal{N}_0]\Pr[t_e\in\mathcal{N}_0]}{\Pr[h_e\notin\mathcal{K}]} = 1 - \frac{p\raisebox{-5pt}{$\scriptstyle{\mathcal{N}_0}$}}{1-\alpha},\\
&= 1- \left(1-\alpha\right)^{d_c-1}.\end{aligned}$$ In this algorithm we have to group the check nodes based on the number of connections they have to the potential support set, rather than the original support set. This brings sets, denoted by $\mathcal{N}'_i$, into play that reflect the effect of $\mathcal{K}_\Delta$ on the degree distribution of check nodes. As $\mathcal{K}_\Delta$ does not change the size of $\mathcal{N}_{\neq 0}$, to calculate the probability of each subset $\mathcal{N}'_i$, we calculate the probability of each set $\mathcal{N}_i$ as before and then account for the effect of $\mathcal{K}_\Delta$ on changing the degrees. This process can be seen in Figure \[NNpConversion\].
With an abuse of notation, we will denote by $\raisebox{2pt}{$p$}_{\mathcal{N}_{ij}}$ the probability that a check node is transferred from $\mathcal{N}_i$ to $\mathcal{N}'_j$. This probability can be calculated as follows: $$\raisebox{3pt}{$p$}_{\mathcal{N}_{ij}} = {{d_c-i}\choose{j-i}}\left(p'\right)^{j-i}\left(1-p'\right)^{d_c - j},\hspace{20pt}i=1,\cdots,d_c,\hspace{20pt}j=i,\cdots,d_c.$$ where, $p'$ is the probability that a free edge from $\mathcal{N}_{\neq 0}$ goes to $\mathcal{K}_\Delta$ and is calculated as: $$\begin{aligned}
p' &= \Pr[h_e\in\mathcal{K}_\Delta|t_e\in\mathcal{N}_{\neq 0},h_e\notin\mathcal{K}] = \frac{\Pr[t_e\in\mathcal{N}_{\neq 0}|h_e\in\mathcal{K}_\Delta]\Pr[h_e\in\mathcal{K}_\Delta|h_e\notin\mathcal{K}]}{\Pr[t_e\in\mathcal{N}_{\neq 0}|h_e\notin\mathcal{K}]} = \frac{1\times
\raisebox{3pt}{$p$}_{\mathcal{K}_\Delta}}{\raisebox{3pt}{$p$}_{\Delta}} = \raisebox{3pt}{$p$}_{\Delta}^{d_v-1}.\end{aligned}$$ Thus, $$\raisebox{3pt}{$p$}_{\mathcal{N}'_0} = \raisebox{3pt}{$p$}_{\mathcal{N}_0}, \hspace{1cm} \raisebox{3pt}{$p$}_{\mathcal{N}'_j} = \sum_{i=1}^{j}{\raisebox{3pt}{$p$}_{\mathcal{N}_i} \raisebox{3pt}{$p$}_{\mathcal{N}_{ij}}} ,\hspace{20pt}j=1,\cdots,d_c.$$ Variable nodes in $\mathcal{K}_\Delta$ are not connected to the set $\mathcal{N}_1'$. Thus, the support set $\mathcal{K}$ is divided into $\mathcal{X}_i$ according to equation (\[eq:x\]). The only difference is that $\raisebox{3pt}{$p$}_{\mathcal{N}'_1}$ should be used instead of $\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_1}$ in the calculation of $p^{(0)}$. Also, variable nodes in $\mathcal{K}_\Delta$ will contribute to $\mathcal{X}'_0$. This means that: $$\raisebox{2pt}{$p$}_{\mathcal{X}'_0} = \raisebox{2pt}{$p$}_{\mathcal{X}_0} + \raisebox{2pt}{$p$}_{\mathcal{K}_\Delta}.$$ In this algorithm, as in the Genie, $\beta = 1$ and therefore: $$\raisebox{2pt}{$p$}_{r} = \sum_{i=1}^{d_v}{\raisebox{2pt}{$p$}_{\mathcal{X}'_i}},\hspace{1cm}\alpha^{(1)}=\alpha^{(0)}\left(1-\raisebox{2pt}{$p$}_{r}\right).$$ The pre-phase in the LM algorithm has two steps before we can use the general formulation presented in section \[analysis\]. To find the probabilities $\raisebox{2pt}{$p$}^{(1)}_{\mathcal{N}'}$, we use the intermediate probabilities $\raisebox{2pt}{$p$}_{\mathcal{N}'_{ij}}$ to denote the probability that a check node goes from $\mathcal{N}_i^{'(0)}$ to $\mathcal{N}_j^{'(1)}$. The complication arises because, in the first iteration, all the released edges come from $\mathcal{K}^{(0)}$ and should not affect the connections that the sets $\mathcal{N}_i^{'(0)}$ have with $\mathcal{K}_\Delta$. The only way to deal with this case is to go back to $\mathcal{N}_{ij}$. The general picture is depicted in Figure \[Dependencies\].
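The LM pre-phase quantities derived above ($p_\Delta$, $p_{\mathcal{K}_\Delta}$, $p'$, and the redistributed check probabilities $p_{\mathcal{N}'_j}$) can be sketched compactly; the degrees and initial density below are hypothetical test values:

```python
from math import comb

def lm_prephase(alpha, d_v, d_c):
    # p_Delta: an edge from a zero-valued variable node ends in N_{!=0}.
    p_delta = 1.0 - (1.0 - alpha) ** (d_c - 1)
    # p_{K_Delta}: all d_v edges of a zero-valued node end in N_{!=0}.
    p_K_delta = p_delta ** d_v
    # p': a free edge from N_{!=0} points into K_Delta.
    p_prime = p_delta ** (d_v - 1)
    return p_delta, p_K_delta, p_prime

def redistribute(p_N, p_prime, d_c):
    # p_{N'_j}: account for extra K_Delta connections on nonzero checks;
    # N_0 is untouched, N_i (i >= 1) may gain j - i edges into K_Delta.
    out = [p_N[0]] + [0.0] * d_c
    for i in range(1, d_c + 1):
        for j in range(i, d_c + 1):
            out[j] += (p_N[i] * comb(d_c - i, j - i)
                       * p_prime ** (j - i) * (1 - p_prime) ** (d_c - j))
    return out
```

Since each row of the binomial transfer kernel sums to one, `redistribute` preserves the total probability mass, reflecting that $\mathcal{K}_\Delta$ does not change the size of $\mathcal{N}_{\neq 0}$.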
Note that:
1. $\mathcal{N}_{ij}$ has $j-i$ connections to $\mathcal{K}_\Delta$, $i$ connections to $\mathcal{K}$ and $d_c-j$ free edges.
2. Check nodes may change from $\mathcal{N}_j^{(0)}$ to $\mathcal{N}^{'(1)}_q$ when there are $j-q$ connections to $\mathcal{K}$; i.e., $j \geq q$.
Thus, to go from $\mathcal{N}_i^{(0)}$ to $\mathcal{N}^{'(1)}_q$, we need to go from $\mathcal{N}_i^{(0)}$ to intermediate $\mathcal{N}_j^{'(0)}$, where $j-q< i\leq j$, and then from $\mathcal{N}_j^{'(0)}$ to $\mathcal{N}^{'(1)}_q$. Thus, the overall formula would be: $$\raisebox{2pt}{$p$}^{(1)}_{\mathcal{N}'_q} = \sum_{j=\max\{2,q\}}^{d_c} \sum_{i=\max\{1,j-q\}}^{j}{\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i} \raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_{ij}}{i\choose{j-q}}\left(\raisebox{2pt}{$p$}_{f}\right)^{j-q}\left(1-\raisebox{2pt}{$p$}_{f}\right)^{i-j+q}} ,\hspace{20pt}q=1,\cdots,d_c$$ $$\raisebox{2pt}{$p$}^{(1)}_{\mathcal{N}'_0} = \sum_{i=1}^{d_c}{\raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_i} \raisebox{2pt}{$p$}^{(0)}_{\mathcal{N}_{ii}}\left(\raisebox{2pt}{$p$}_{f}\right)^{i}}$$ where $\raisebox{2pt}{$p$}_{f}$ is as follows: $$\begin{aligned}
\raisebox{2pt}{$p$}_{f} &= \Pr[h_e\in\mathcal{X}_r|t_e\in\mathcal{N}_u,h_e\in\mathcal{K}]\\
&= \frac{\Pr[t_e\in\mathcal{N}_u|h_e\in\mathcal{X}_r,h_e\in\mathcal{K}]\Pr[h_e\in\mathcal{X}_r|h_e\in\mathcal{K}]}{\Pr[t_e\in\mathcal{N}_u|h_e\in\mathcal{K}]}\\
&= \frac{\left(1-\displaystyle\frac{\displaystyle\sum_{i=1}^{d_v}{i \raisebox{2pt}{$p$}_{\mathcal{X}'_i}}}{d_v \raisebox{2pt}{$p$}_{r}} \right)p\raisebox{-5pt}{$\scriptstyle r$}}{1-p^{(0)}}\\
&= \frac{\raisebox{2pt}{$p$}_{r}-p^{(0)}}{1-p^{(0)}}\end{aligned}$$ where $\mathcal{N}_u=\left\{\{\mathcal{N}_1\cup\mathcal{N}_2\cup\cdots\cup\mathcal{N}_{d_c}\}\backslash \mathcal{N}^{'(0)}_1\right\}$.\
From this point forth, the formulation presented in the general framework can be used with the probabilities $p_{\mathcal{X}'}$ and $p_{\mathcal{N}'}$ replacing $p_{\mathcal{X}}$ and $p_{\mathcal{N}}$.
[^1]: For successful decoding clearly we need $r_o \geq 1$. It is desirable to have this parameter as small as possible. Indeed, in the asymptotic case ($n \rightarrow \infty$), $r_o = 1$ is achievable. This has been proved in [@WV09].
[^2]: The performance of Genie is the same as the performance of peeling algorithm over BEC.
---
author:
- |
$^{1,2 \dag}$, R. Aloisio$^{1,2}$, M. Bertaina$^{3,4}$, F. Bisconti$^{3,4}$, F. Fenu$^{3,4}$, F. Salamida$^{2,5}$\
$^1$Gran Sasso Science Institute, L’Aquila, Italy\
$^2$INFN, Laboratori Nazionali Gran Sasso, Assergi (L’Aquila), Italy\
$^3$Università di Torino, Torino, Italy\
$^4$INFN Torino, Torino, Italy\
$^5$Università dell’Aquila, Dipartimento di Scienze Fisiche e Chimiche, L’Aquila, Italy\
title: A More Complete Phenomenology of Tau Lepton Induced Air Showers
---
Introduction
============
Tau neutrinos are produced in oscillations of cosmic neutrinos as they travel from their sources to Earth. These neutrinos should produce a flux of tau leptons after propagation through the Earth, where they undergo charged and neutral current interactions, as well as energy losses and possible decay/regeneration. The exiting tau lepton can decay in the atmosphere, producing an upward moving extensive air shower, referred to as a UAS. UAS are interesting to neutrino astronomers, as they offer a method for probing large detection areas fairly easily, either by observing the side of thick mountains or by observing Earth’s surface from altitude. To date, there has been no confirmed measurement of an astrophysical tau lepton, but the observation of a UAS would help confirm the current cosmic-origin interpretation of the IceCube neutrino flux.
For the experiments seeking to measure UAS, event estimations usually assume shower development similar to proton or gamma induced air showers with some average fractional energy deposition into decay products (roughly $50\%$ of the primary tau lepton energy), taking into account the decay length of the tau in the atmosphere. This simplification is useful for two reasons: 1) the tau lepton has multiple decay branches, of which $65\%$ are hadronic and an additional $18\%$ are electromagnetic, and 2) the physics of proton and gamma induced cosmic ray air showers is fairly well known and well parameterized, due in large part to their frequent study in collider and air shower experiments. However, this raises the question: what information is lost in approximating tau showers as “decay delayed” proton showers, and does it strongly affect the number of possible neutrino events that can be recorded? The tau decay has a rich phenomenology. Additionally, the products resulting from a tau lepton decay do not have trivial energy distributions, nor are they distributed identically. The fractional energy distributions of the hadronic and leptonic channels of the tau decay are shown in figure \[Daughter\_Distributions\] as calculated with Pythia simulations [@pythia].
As of yet, there has not been a definitive effort to parameterize high energy tau lepton induced air showers. This work serves as a first step to quantify how tau lepton induced air showers are different from those induced by conventional primaries and what (if any) implications there may be for future tau neutrino observatories.
Comparison to Conventional Primaries
====================================
To simulate tau lepton air showers, we use a modified version of CORSIKA-75600 [@corsika], which allows us to initiate showers at true ground level. The geometry we use for our simulations is upgoing and nearly perfectly horizontal (to allow for a sufficient upper bound on the total slant depth to use in longitudinal profiles). We have also modified the decay times of tau leptons in CORSIKA such that they decay immediately. In this manner, we monitor the shower induced by the tau decay products without the possibility of decay outside the atmosphere. This is also to say that we ignore the ionization energy losses of the tau lepton through air, as it is a very small effect. Later, we adjust the longitudinal profiles by defining a tau decay point, distributed exponentially with mean $c \tau_{\mathrm{lifetime}} E_{\tau}/m_{\tau} \approx 5~\mathrm{km}(E_{\tau}/10^{17}~\mathrm{eV})$. The atmospheric model we use is that of the US Standard Atmosphere.
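The decay-point adjustment amounts to sampling an exponential length with mean $c\tau_{\mathrm{lifetime}}E_\tau/m_\tau$; a minimal sketch, where the tau constants are PDG-like values we assume here:

```python
import random

C_TAU_M = 87.03e-6    # tau c*tau in meters (assumed PDG-like value)
M_TAU_EV = 1.77686e9  # tau mass in eV (assumed PDG-like value)

def sample_decay_length_km(E_tau_eV, rng):
    # Mean decay length = gamma * c * tau = c * tau * E / m,
    # roughly 5 km at E_tau = 1e17 eV, as quoted in the text.
    mean_km = C_TAU_M * (E_tau_eV / M_TAU_EV) / 1000.0
    return rng.expovariate(1.0 / mean_km)

rng = random.Random(0)
lengths = [sample_decay_length_km(1e17, rng) for _ in range(10000)]
```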
We simulate 1000 tau lepton events for energies from 1 PeV to 10 EeV, equally spaced in log energy by one decade, and for each shower, record the 4 Gaisser-Hillas fit parameters ($\mathrm{N}_{\mathrm{max}}$–number of charged particles at the shower maximum, $X_{\mathrm{max}}$–atmospheric depth of the shower maximum, $X_{0}$–shower shaping parameter commonly thought of as the depth of first interaction, and $\Lambda$–shower shaping parameter which determines thickness and asymmetry of the profile) of the longitudinal charged particle profile using a least squares fit [@Gaisser]. Note that in a high energy tau decay, which may initiate deep into the atmosphere, the only parameters to be affected are $X_{\mathrm{max}}$ and $X_{0}$, leaving the width and shape of the profile intact. We then do the same for gamma and proton primaries in the same energy ranges.
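For reference, the four-parameter Gaisser-Hillas profile that these fits use has the standard form below (a sketch; depths in g/cm$^2$, parameter values in the test are arbitrary):

```python
import math

def gaisser_hillas(X, N_max, X_max, X_0, lam):
    # Longitudinal charged-particle profile N(X); peaks at X = X_max
    # with N(X_max) = N_max, and vanishes at the boundary X = X_0.
    if X <= X_0:
        return 0.0
    return (N_max
            * ((X - X_0) / (X_max - X_0)) ** ((X_max - X_0) / lam)
            * math.exp((X_max - X) / lam))
```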
To make a fair comparison against the tau-initiated showers, we perform a sampling process on the proton and gamma initiated showers. We sample a fractional energy of the tau decay which goes into showering products (hadrons and electrons) from the Pythia simulations shown in figure \[Daughter\_Distributions\] and multiply by the primary energy of interest. From this sampled energy, we calculate the different shower parameters from proton and gamma showers using the data we previously generated. For each primary energy, we sample 1000 different events.
In figure \[Tau\_Comp\], we plot as a function of energy the average $X_{\mathrm{max}}$, shower width W, and $\mathrm{N}_{\mathrm{max}}$ as well as RMS values for showers initiated by tau leptons, protons, gammas, and sampled protons and gammas. W is calculated via $2 \sigma \sqrt{2 \mathrm{ln} 2} \left[ 1+\frac{\mathrm{ln} 2}{18} z^{2}\right]$, where $z = \sqrt{\Lambda/(X_{\mathrm{max}}-X_{0})}$ and $\sigma =\sqrt{\Lambda (X_{\mathrm{max}}-X_{0})}$ [@Lipari]. Excluding points that come from muon-induced showers via external cuts (detailed in the following section), across nearly all energies the average $X_{\mathrm{max}}$ of a tau shower is slightly larger than that of proton showers (and significantly smaller than that of gamma initiated showers) of comparable energy. Similarly, the average shower width of tau initiated showers is, on average, lower than that of proton showers and higher than that of gamma initiated showers. The average $\mathrm{N}_{\mathrm{max}}$ for a tau lepton shower is very well approximated by taking $50\%$ of $\mathrm{N}_{\mathrm{max}}$ of either proton or gamma initiated showers, as many estimations do.
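The width $W$ quoted above follows directly from the Gaisser-Hillas fit parameters; a sketch of the conversion (the numerical values in the test are arbitrary):

```python
import math

def shower_width(X_max, X_0, lam):
    # W = 2 sigma sqrt(2 ln 2) [1 + (ln 2 / 18) z^2], with
    # z = sqrt(lam / (X_max - X_0)) and sigma = sqrt(lam (X_max - X_0)).
    z2 = lam / (X_max - X_0)
    sigma = math.sqrt(lam * (X_max - X_0))
    return (2.0 * sigma * math.sqrt(2.0 * math.log(2.0))
            * (1.0 + math.log(2.0) / 18.0 * z2))
```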
Tau lepton induced air showers will deposit roughly the same energy into cascading particles as proton and gamma induced air showers of comparable energy. However, they will deposit their energy, on average, deeper in the atmosphere than proton showers, and shallower than gamma showers. Similarly, for energies under $10^{18}$ eV, tau showers deposit energy over a shorter path length than proton showers, and longer path length than gamma showers. This behavior changes at higher energies due to the LPM effect on high energy gammas [@Heck]. Because the tau lepton decays electromagnetically and hadronically (with many electromagnetic showers initiated also by the decay of the $\pi^{0}$ to $2\gamma$), the most accurate shower parameterization for tau leptons will be a weighted average of the parameterizations for proton and gamma showers. However, we note that if one wants to approximate with only one species, protons are, in general, a better approximation than gammas.
Muons
=====
Muons are an interesting component of the tau shower phenomenology that is usually overlooked, despite taking up nearly 20 percent of the tau decay spectrum. Because of their comparably low interaction cross sections, it is assumed that they will not interact in the atmosphere in a noticeable way. An estimation using standardized muon cross sections demonstrates that this is not such a straightforward conclusion, especially for experiments which observe large portions of the atmosphere, as balloon and space based instruments intend to do.
For a first approximation, we shall only consider processes which may deposit large amounts of the primary muon energy in a single interaction (nuclear and electronic bremsstrahlung emission, photonuclear interactions, and electron positron pair production), as they are the most relevant processes for energies above 1 PeV and high fractional energy depositions (which may then develop into a shower) [@muon1] [@muon2]. Additionally, we will ignore the possibility of muon decay, as the decay length of a muon is $6.25\times10^{6} \Big(\frac{E}{1~\mathrm{PeV}} \Big)$ km, whereas the maximum path length through Earth’s atmosphere is roughly 1200 km. The differential muon cross sections are shown in figure \[Muon\_Cross\_Sections\].
Above 1 PeV, the nuclear bremsstrahlung cross section is nearly independent of muon energy, whereas the electronic bremsstrahlung and pair production cross sections increase as the log of energy, and the photonuclear cross section increases as the log of energy squared. Across our energy range, this leads to increases of $25\%$ for electronic bremsstrahlung and pair production and $60\%$ for photonuclear, which we ignore here for brevity, noting that for higher energy muons the probability to interact is even higher than listed here. Our values should therefore be considered a lower bound on muon shower probabilities. To verify that our cross sections well describe reality, we simulate 1000 1 PeV muon showers in CORSIKA and calculate the cumulative probability that a muon deposits at least a fractional energy $v$ in a full atmosphere, and then compare to the analytical solution via:
$$\mathrm{P}_{\mathrm{int}} (X) = 1-e^{-X \sigma N} \quad\text{where}\quad
\sigma = \int_{v}^{1} \frac{d \sigma}{d v} dv$$
with $N$ being the number of targets in one gram of air. The cumulative interaction probability as a function of fractional energy deposit calculated from these CORSIKA showers and from the muon cross sections is demonstrated in figure \[Full\_Atmos\_Prob\]a, which shows good agreement. The cumulative interaction probability as a function of depth is shown in figure \[Full\_Atmos\_Prob\]b for various fractional energy depositions.
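The analytical interaction probability above is straightforward to evaluate; a sketch, where the mean mass number of air ($A\approx14.5$) and the cross-section value in the test are assumptions for illustration:

```python
import math

N_A = 6.022e23  # Avogadro's number

def interaction_probability(X, sigma_cm2, A_air=14.5):
    # P_int(X) = 1 - exp(-X sigma N), where N = N_A / A_air is the
    # number of target nuclei per gram of air and X is in g/cm^2.
    N = N_A / A_air
    return 1.0 - math.exp(-X * sigma_cm2 * N)
```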
Figure \[Full\_Atmos\_Prob\] shows that a non-negligible percentage of muons experience large energy losses inside Earth’s atmosphere, initiating a conventional particle cascade, with over $14\%$ of muons depositing more than $10\%$ of their energy inside a full atmosphere, to state one numeric example. Therefore, a flux of high energy tau leptons exiting the Earth from neutrino interactions guarantees a flux of high energy muons, which have a non-negligible chance to develop comparably strong particle cascades initiated at significantly greater depths in the atmosphere.
This allows us to draw some preliminary conclusions as to which experimental designs this muon component may be a relevant background or even a detectable signal for. For muons to develop observable showers, very large amounts of atmosphere are necessary, independent of initial muon energy. Experiments which aim to observe tau neutrinos with energies greater than $10^{17}$ eV through the UAS technique must necessarily observe large path lengths through the atmosphere so as not to exclude tau leptons with long decay lengths. The balloon based EUSO-SPB2, and satellite based POEMMA, will view the full Earth atmosphere at the limb, for instance [@JFK]. This makes ultra-high energy tau neutrino observatories ideal candidates for observing showers induced by muons as well. For instance, we note that in the Cherenkov emission channel, for a specific angular and energy range, this signal from the muonic decay channel of the tau could be stronger than that of showers initiated by the hadronic (and electronic) decay channels [@me].
Cherenkov Emission from Tau Lepton Showers
------------------------------------------
Atmospheric extinction of visible light is remarkably strong for wavelengths below 450 nm, where Cherenkov emission is relevant. Thus, it is difficult to measure any Cherenkov photons resulting from charged particles in the lower atmosphere (exponentially more so for low inclination angles) [@Atmosphere]. However, the first interaction point of muons can lie at a depth of the order of the total thickness of the atmosphere, so any Cherenkov emission from the shower they produce suffers significantly less attenuation (compared to showers initiated by hadronic or electronic decay products) and benefits from a more focused Cherenkov cone and a shorter distance between the point of creation and the point of detection. All of these effects aid in producing a significantly higher photon flux for a high altitude instrument. Cherenkov profiles of tau leptons both with and without atmospheric attenuation and geometric correction for a space based instrument are shown in figure \[Cherenkov\_From\_Tau\] to help illustrate this effect.
We have briefly explained here the idea that high energy muons may be detectable by a space or balloon based instrument via Cherenkov emission from a charged particle shower, and that for a certain angular range, these showers can be even brighter than the corresponding showers from the hadronic decay of the tau lepton. We have begun work to show the effect that including the muon decay branch of tau decays has on conventional estimates of tau neutrino sensitivity. Additionally, we have begun work to explore the viability of measuring the muon neutrino flux using the UAS technique, noting that the severe energy losses of the muon in the Earth may be counterbalanced by the improved signal quality [@me]. This is beyond the scope of these proceedings, but it is an important conclusion to state which would not have been realized without analyzing the tau in detail.
Summary
=======
In this work, we simulated thousands of upward tau lepton showers in CORSIKA for energies between $10^{15}$ eV to $10^{19}$ eV and compared them to showers initiated by proton and gamma primaries using the Gaisser-Hillas parameterization [@Gaisser]. We found that approximation of the hadronic decay channel of the tau as a proton or gamma primary of sampled equivalent energy is reliable and reproducible, and estimations based on this assumption are also likely safe from significant scrutiny. Although, we do note that, in general, it is more accurate to use a proton parameterization than a gamma parameterization for these purposes.
Additionally, we analyzed the muon decay channel of the tau, and determined that it is not fair to treat it as a negligible signal as it often is, especially for experiments which seek to observe tau neutrinos with energies greater than $10^{17}$ eV. We illustrated that not only do a significant number of muons conventionally shower in large amounts of atmosphere, but that these showers may often appear as stronger signals than their hadronic counterparts. From this, we have begun work to show the effect this has on tau neutrino detection via the UAS approach, and to explore whether muon neutrinos can be detected in the same manner [@me].
[99]{} T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, Computer Physics Communications 191, 159 (2015), arXiv:1410.3012 \[hep-ph\].
D. Heck, J. Knapp, J. N. Capdevielle, G. Schatz, T. Thouw, Forschungszentrum Karlsruhe Report FZKA 6019 (1998).
T. Gaisser, R. Engel, E. Resconi, Cosmic Rays and Particle Physics, Cambridge University Press (2016), doi:10.1017/CBO9781139192194.
P. Lipari, “Universality in the longitudinal development of Cosmic Ray showers”, Nuclear and Particle Physics Proceedings, Vol. 279-281.
M. Risse, P. Homola, R. Engel, D. Gora, D. Heck, J. Pekala, B. Wilczynska, H. Wilczynski, Czechoslovak Journal of Physics, Vol. 51 (2001), No. 0.
D. E. Groom, N. V. Mokhov, S. Striganov, Atomic Data and Nuclear Data Tables, Vol. 76, No. 2, July 2001.
W. Lohmann, R. Kopp, R. Voss, CERN 85-03, Experimental Physics Division, 21 March 1985.
M. H. Reno, J. F. Krizmanic, T. M. Venters, “Cosmic tau neutrino detection via Cherenkov signals from air showers from Earth-emerging taus”, arXiv:1902.11287v2 \[astro-ph.HE\].
L. Elterman, UV, Visible, and IR Attenuation for Altitudes to 50 km, Tech. Rep. AFCRL-68-0153 (Air Force Cambridge Research Laboratories, 1968).
A. Cummings, R. Aloisio, M. Bertaina, F. Bisconti, F. Fenu, F. Salamida, In Preparation.
---
abstract: 'The light neutrino mass spectrum and mixing matrix of the seesaw model including three right-handed neutrinos are studied for the most general case. Approximate formulae for the mass eigenvalues, the mixing matrix, and the CP violation of neutrino oscillations are given.'
address: ' Graduate School of Science, Hiroshima University, Higashi-Hiroshima, Japan, 739-8526'
author:
- 'Takuya Morozumi [^1]'
title: ' Mass eigenstates and mass eigenvalues of seesaw$^*$'
---
Introduction
============
It is my pleasure to write a small contribution for Professor Gustavo's fest. I visited him in Lisbon when I was a postdoctoral fellow at Rockefeller University. Since then, I have learned a great deal from Gustavo and his colleagues about the SU(2)-singlet quark model [@branco], the diagonalization of mass matrices, etc. So it is appropriate for me to write on the neutrino masses of the seesaw model, in which SU(2)-singlet heavy neutral leptons are introduced.
Mass eigenvalue equation for light neutrinos
============================================
In the seesaw mechanism [@Minkowski; @Yana; @Gellmann; @MohaSen], the effective mass term for the light neutrinos is given by the famous formula, $m_{eff}=-m_D \frac{1}{M} m_D^T$. What I want to discuss are the mass eigenvalues of $m_{eff}$. It is more convenient to work with the Hermitian matrix $H=m_{eff} m_{eff}^{\dagger}$. Denoting by $\lambda$ the mass eigenvalues squared, the eigenvalue equation becomes $$\det(H-\lambda)=0.$$ This is a cubic equation, $$\lambda^3- \lambda^2 a + \lambda b -c =0,$$ where $$\begin{aligned}
a&=&{\rm Tr}\, H=H_{11}+H_{22}+ H_{33},\\
b&=& (H_{11} H_{22} -|H_{12}|^2) + (H_{22} H_{33} -|H_{23}|^2) +(H_{33} H_{11} -|H_{13}|^2),\\
c&=& {\rm det}\, H.\end{aligned}$$ The coefficients are related to the mass-squared eigenvalues $n_1^2, n_2^2, n_3^2$ as $$\begin{aligned}
a&=&n_1^2+n_2^2+n_3^2 \equiv 3 {\bar n}^2,\\
b&=& n_1^2 n_2^2 +n_2^2 n_3^2+ n_3^2 n_1^2\\
&=& 3 {\bar n}^4 + (n_1^2-{\bar n}^2)(n_2^2-{\bar n}^2) +(n_2^2-{\bar n}^2)(n_3^2-{\bar n}^2) +(n_3^2-{\bar n}^2)(n_1^2-{\bar n}^2),\\
c&=& n_1^2 n_2^2 n_3^2 = {\bar n}^6+(n_1^2-{\bar n}^2) (n_2^2-{\bar n}^2)(n_3^2-{\bar n}^2)\\
&+&{\bar n}^2 \left[ (n_1^2-{\bar n}^2)(n_2^2-{\bar n}^2)+(n_2^2-{\bar n}^2) (n_3^2-{\bar n}^2)+(n_3^2-{\bar n}^2)(n_1^2-{\bar n}^2)\right]. \end{aligned}$$ It is interesting to see that, apart from $\bar{n}^2$, the combinations $n_i^2-\bar{n}^2$ $(i=1,2,3)$ can be written in terms of the mass-squared differences measured in oscillation experiments. Now let us write the coefficients $a,b$ and $c$ explicitly in terms of the elements of $m_D$ and $M$. Without loss of generality, we can take $m_D$ to be a general $3 \times 3$ complex matrix and $M$ a $3 \times 3$ real diagonal matrix. We introduce the decomposition [@Fujihara] $$m_D=({\bf m_{D1}}, {\bf m_{D2}}, {\bf m_{D3}}) =({u_1},{u_2},{u_3}) \left(
\begin{array}{ccc} m_{D1} & 0 & 0\\
0 & m_{D2} & 0\\
0 & 0 & m_{D3}
\end{array}\right)
=U\, {\rm Diag}(m_{D1}, m_{D2}, m_{D3}),$$ where ${u_i}=\frac{\bf m_{Di}}{m_{Di}}$ and $|{u_i}|=1$ with $m_{Di}= |{\bf m_{Di}}|$. Each $u_i$ $(i=1 \sim 3)$ is a complex unit vector in $C^3$. $U=({u_1},{u_2},{u_3})$ is not unitary in general. We define $$A\equiv U^{\dagger} U= \left(
\begin{array}{ccc} 1 & { u_1^{\dagger} u_2} & {u_1^{\dagger} u_3}\\
& 1 & { u_2^{\dagger} u_3}\\
& & 1
\end{array}\right),$$ where $A$ is a Hermitian matrix. One can write $$\begin{aligned}
m_{eff}&=& -U X U^T,\\
H&=&m_{eff} m_{eff}^{\dagger}=U X A^{\ast} X U^{\dagger},\end{aligned}$$ where $X$ is a real diagonal matrix with mass dimension, $$X={\rm Diagonal}(X_1,X_2,X_3), \qquad X_i \equiv \frac{m_{Di}^2}{M_i}.$$ Using these definitions, one may write the coefficients of the cubic equation as $$\begin{aligned}
a&=&{\rm Tr}(A X A^{\ast} X)=X_1^2+X_2^2+X_3^2\\
&+&2 {\rm Re}(A_{12}^2) X_1 X_2 + 2 {\rm Re}(A_{23}^2) X_2 X_3 + 2 {\rm Re}(A_{31}^2) X_3 X_1,\\
b&=& X_1^2 X_2^2 (1-|A_{12}|^2)^2+ X_2^2 X_3^2(1-|A_{23}|^2)^2+X_3^2 X_1^2 (1-|A_{31}|^2)^2\\
&+&2 X_1 X_2 X_3^2\, {\rm Re}\!\left((A_{12}-A_{13} A_{32})^2 \right) + 2 X_2 X_3 X_1^2\, {\rm Re}\!\left((A_{23}-A_{21} A_{13})^2 \right)\\
&+& 2 X_3 X_1 X_2^2\, {\rm Re}\!\left((A_{31}-A_{32} A_{21})^2 \right),\\
c&=& ({\rm det}\, A)^2 X_1^2 X_2^2 X_3^2\\
&=& \left(1+2{\rm Re} (A_{12} A_{23} A_{31}) -|A_{12}|^2-|A_{23}|^2-|A_{31}|^2 \right)^2 X_1^2 X_2^2 X_3^2.\end{aligned}$$ In this paper, we discuss the case with $X_1 \ll X_2 \ll X_3$ and $|A_{23}| < 1$. If we neglect $X_1$, $c$ vanishes and the eigenvalue equation for the two nonvanishing roots becomes quadratic, $$\lambda^2-\left(X_2^2+X_3^2 + 2 {\rm Re}(A_{23}^2) X_2 X_3\right)\lambda + X_2^2 X_3^2 (1-|A_{23}|^2)^2=0.$$ Therefore, the heaviest mass squared $n_3^2$ at leading order is given by $$n_3^2=X_3^2.$$ The smaller eigenvalue is $$n_2^2=X_2^2(1-|A_{23}|^2)^2.$$ Finally, using $c=n_1^2 n_2^2 n_3^2$, the mass squared of the lightest state is $$n_1^2=X_1^2 \left(\frac{{\rm det}\, A}{1-|A_{23}|^2}\right)^2.$$
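As a quick numerical check of the hierarchy expansion, one can solve the quadratic obtained after neglecting $X_1$ and compare with the leading-order formulae; the values of $X_2$, $X_3$, $A_{23}$ below are arbitrary test inputs:

```python
import cmath

def light_masses_sq(X2, X3, A23):
    # Roots of lambda^2 - (X2^2 + X3^2 + 2 Re(A23^2) X2 X3) lambda
    #              + X2^2 X3^2 (1 - |A23|^2)^2 = 0.
    s = X2 ** 2 + X3 ** 2 + 2.0 * (A23 ** 2).real * X2 * X3
    p = X2 ** 2 * X3 ** 2 * (1.0 - abs(A23) ** 2) ** 2
    disc = cmath.sqrt(s * s - 4.0 * p)
    return ((s - disc) / 2.0).real, ((s + disc) / 2.0).real

X2, X3, A23 = 1e-3, 1.0, 0.3 + 0.1j
n2_sq, n3_sq = light_masses_sq(X2, X3, A23)
```

For $X_2/X_3 = 10^{-3}$ the exact roots reproduce $n_3^2 \simeq X_3^2$ and $n_2^2 \simeq X_2^2(1-|A_{23}|^2)^2$ to relative accuracy of order $X_2/X_3$.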
Mixing matrix
=============

Now let us turn to the mixing matrix which reproduces the approximate eigenvalues given in the previous section. One may first write $m_{eff}$ as $$m_{eff}=-X_3\, ({u_1, u_2, u_3})\, {\rm Diag}(X_1/X_3,X_2/X_3,1)\, ({ u_1,u_2,u_3})^T.$$ By taking the limit $X_1/X_3 \rightarrow 0$, $m_{eff}$ becomes $$m_{eff}=-X_3 \left(\frac{X_2}{X_3}\, u_2 u_2^T + u_3 u_3^T\right).$$ In this limit $m_{eff}$ has a zero eigenvalue, and the corresponding state can be isolated by using a unitary rotation given as $$\begin{aligned}
V_0&=&(v_1, v_2, v_3),\\
v_1^{\dagger}u_2&=&v_1^{\dagger} u_3=0,\\
v_2^{\dagger} u_3&=&0,\\
v_3 &\parallel& u_3.\end{aligned}$$ One may take, for instance, $$v_2=\frac{u_2-(u_3^{\dagger}u_2)\, u_3}{\sqrt{1-|A_{23}|^2}}, \qquad
v_1= \frac{u_2^{\ast} \times u_3^{\ast}}{|u_2^{\ast} \times u_3^{\ast}|}.$$ Using these definitions, one may show $$\lim_{X_1 \rightarrow 0} V_0^{\dagger}\, m_{eff}\, V_0^{\ast} =-X_3 \left(
\begin{array}{ccc} 0 & 0 & 0\\
0 & \frac{X_2}{X_3}(1-|A_{23}|^2) & \frac{X_2}{X_3}\sqrt{1-|A_{23}|^2}\, A_{32}\\
0 & \frac{X_2}{X_3}\sqrt{1-|A_{23}|^2}\, A_{32} & \frac{X_2}{X_3} A_{32}^2 +1
\end{array}\right).$$ The final step of the diagonalization can be achieved by diagonalizing the two-by-two sector. We write the MNS (Maki, Nakagawa and Sakata) [@MNS] matrix $V$ as $V=V_0 K$, where $K$ is given as $$K=\left(
\begin{array}{ccc} 1 & 0 & 0\\
0 & \cos\theta_N & \sin\theta_N \exp(-i \delta_N)\\
0 & -\sin\theta_N \exp(i \delta_N) & \cos\theta_N
\end{array}\right) P,$$ where $P$ is a diagonal phase matrix whose explicit form is given in Eq.(38) of [@Fujihara]. $\tan 2 \theta_N$ is also obtained by simply replacing $X_1 \rightarrow X_3$ and $u_1 \rightarrow u_3$ in Eq.(38) of [@Fujihara]. When $X_2 \ll X_3$, $$\tan 2 \theta_N = O\!\left(\frac{X_2}{X_3}\right) |A_{23}| \ll 1.$$ Therefore, at leading order of the $X_2/X_3$ expansion, $V=V_0$.
CP violation of neutrino oscillation
====================================
It would be interesting to see what CP violation in neutrino oscillations looks like. One may compute the CP violation of neutrino oscillations [@branco2] at the leading power of the terms $X_1^n X_2^m X_3^{6-m-n}$ [@Fujihara]. One finds that the term proportional to $X_2^2 X_3^4$ is the leading one and obtains $$\begin{aligned}
\Delta &= {\rm Im}\big[(m_{eff}\, m_{eff}^{\dagger})_{e \mu}\, (m_{eff}\, m_{eff}^{\dagger})_{\mu \tau}\, (m_{eff}\, m_{eff}^{\dagger})_{\tau e}\big] \\
&= (1-|A_{23}|^2)\, X_2^2 X_3^4\, \big( {\rm Im}[u_{e 3}^{\ast} u_{e 2}\, u_{\mu 3}\, u_{\mu 2}^{\ast}]\, |u_{\tau 3}|^2
+ {\rm Im}[u_{\mu 3}^{\ast} u_{\mu 2}\, u_{\tau 3}\, u_{\tau 2}^{\ast}]\, |u_{e 3}|^2
+ {\rm Im}[u_{\tau 3}^{\ast} u_{\tau 2}\, u_{e 3}\, u_{e 2}^{\ast}]\, |u_{\mu 3}|^2 \big).\end{aligned}$$ As a check of the result, we may compute the Jarlskog invariant and the mass-squared differences separately and recover $\Delta$: $$\begin{aligned}
\Delta &= J\, (n_1^2-n_2^2)(n_2^2-n_3^2)(n_3^2-n_1^2),\\
(n_1^2-n_2^2)&(n_2^2-n_3^2)(n_3^2-n_1^2) \simeq n_2^2\, n_3^4 \simeq X_3^4\, X_2^2\, (1-|A_{23}|^2)^2,\\
J &= -{\rm Im}\big[V_{e3}\, V_{\mu 3}^{\ast}\, V_{e2}^{\ast}\, V_{\mu 2}\big] \simeq -{\rm Im}\big(u_{e3}\, u_{\mu 3}^{\ast} \cdots \big),\end{aligned}$$ which agrees with $\Delta$ computed previously.
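The consistency check above rests on the standard relation between the rephasing invariant $\Delta$ and the Jarlskog invariant $J$. A short numerical sketch (with a random illustrative mass matrix, not the model's $m_{eff}$) confirms $|\Delta| = |J|\,|(n_1^2-n_2^2)(n_2^2-n_3^2)(n_3^2-n_1^2)|$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random complex matrix standing in for m_eff (illustrative only)
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
h = m @ m.conj().T  # Hermitian combination m_eff m_eff^dagger

# Rephasing-invariant CP quantity: (e,mu)(mu,tau)(tau,e) with indices 0,1,2
delta = np.imag(h[0, 1] * h[1, 2] * h[2, 0])

# Diagonalize: h = V diag(n_sq) V^dagger
n_sq, V = np.linalg.eigh(h)

# One quartet ("plaquette") of the mixing matrix; all nine quartets of a
# 3x3 unitary matrix share the same |Im|, so any one of them gives |J|.
J = np.imag(V[0, 2] * V[1, 2].conj() * V[0, 1].conj() * V[1, 1])
prod = (n_sq[0] - n_sq[1]) * (n_sq[1] - n_sq[2]) * (n_sq[2] - n_sq[0])

print(abs(delta), abs(J * prod))  # equal up to floating-point error
```

Degenerate eigenvalues make the product of mass-squared differences vanish, and indeed $\Delta$ vanishes with it, which is the familiar statement that CP violation in oscillations requires non-degenerate masses.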
Summary
=======
We have studied the eigenvalues and the mixing matrix of the seesaw model with three right-handed neutrinos. The most general eigenvalue equation which determines the light neutrino mass spectrum is given. The equation is solved by assuming the hierarchy $X_1 < X_2 < X_3$ of the three parameters $X_i\ (i=1 \sim 3)$. Under this assumption, the analysis of the model with two right-handed neutrinos [@Fujihara] is useful. The leading-order expressions of the mass spectrum and the MNS matrix are obtained in terms of $X_i$ and the normalized Yukawa vectors $u_i$. We compute the leading term of the CP violation of neutrino oscillations, which is a product of the Jarlskog invariant and the mass-squared differences. All the leading contributions can be extracted in analytical form, which may be useful for further studies of CP violation at low energy and leptogenesis [@FY; @Fujihara; @branco2; @FGY; @EKKMT; @EMOP].
Acknowledgement
---------------
The author thanks Prof. M. N. Rebelo and the organizers of Gustavofest for the opportunity to write this contribution. He thanks T. Fujihara, S. K. Kang, C. S. Kim and D. Kimura for discussions. This work is supported by KAKENHI, Japan, No. 16028213.
[99]{} G.C. Branco, T. Morozumi, P. A. Parada, and M. N. Rebelo, . P. Minkowski, . T. Yanagida, in the proceedings of the Workshop on Unified Theories and Baryon Number in the Universe, edited by O. Sawada and A. Sugamoto, 95 (1979). M. Gell-Mann, P. Ramond and R. Slansky, in Supergravity, P. van Nieuwenhuizen and D.Z. Freedman (eds.), North Holland Publ. Co.,(1979). R. N. Mohapatra and G. Senjanovich, . T. Fujihara, S. Kaneko, S. Kang, D. Kimura, T. Morozumi and M. Tanimoto . Z. Maki, M. Nakagawa, and S. Sakata, . G. C. Branco, T. Morozumi, B. Nobre, and M. N. Rebelo . M. Fukugita and T. Yanagida, . P.H. Frampton, S.L. Glashow, and T. Yanagida, . T. Endoh, S. Kaneko, S. K. Kang, T. Morozumi and M. Tanimoto, . T. Endoh, T. Morozumi, A. Purwanto and T. Onogi, , Erratum-ibid.D64:059904,2001.
[^1]: [email protected]. The paper is prepared as a contribution to the Symposium in Honour of Professor Gustavo C. Branco, CP Violation and Flavour Puzzle.
---
abstract: 'Non–classical interference of photons lies at the heart of optical quantum information processing. This effect is exploited in universal quantum gates as well as in purpose–built quantum computers that solve the BosonSampling problem. Although non–classical interference is often associated with perfectly indistinguishable photons this only represents the degenerate case, hard to achieve under realistic experimental conditions. Here we exploit tunable distinguishability to reveal the full spectrum of multi–photon non–classical interference. This we investigate in theory and experiment by controlling the delay times of three photons injected into an integrated interferometric network. We derive the entire coincidence landscape and identify transition matrix immanants as ideally suited functions to describe the generalized case of input photons with arbitrary distinguishability. We introduce a compact description by utilizing a natural basis which decouples the input state from the interferometric network, thereby providing a useful tool for even larger photon numbers.'
author:
- 'Max Tillmann$^1$, Si-Hui Tan$^2$, Sarah E. Stoeckl$^1$, Barry C. Sanders$^{3,4}$, Hubert de Guise$^5$, Ren[é]{} Heilmann$^6$, Stefan Nolte$^6$, Alexander Szameit$^6$, Philip Walther$^{1}$'
title: 'Generalized multi–photon quantum interference'
---
Introduction
============
Interference is essential to many fields of physics. Remarkably this is not only tied to a wave description in the classical domain but holds also for the quantum regime when dealing with wavefunctions and probability amplitudes. Quantum interference was experimentally confirmed in impressive single–particle interferometry experiments carried out using electrons[@Hasselbach2010], neutrons[@rauch2001neutron], atoms[@Cronin2009] and molecules[@Hornberger2012; @Eibenberger2013]. Quantum physics also allows two objects to interfere with each other. This two–particle interference is characterized by the second–order correlation function $G^{(2)}$ dating back to the pioneering work of Hanbury Brown and Twiss from the 1950s[@Brown1956]. Utilizing the bosonic nature of photons Hong, Ou and Mandel[@Hong1987] (HOM) performed a seminal $G^{(2)}$–measurement using single photons and a $50/50$ beam splitter. Initially intended as a precise measurement of the coherence time of the photons, their experiment is now at the heart of optical quantum metrology[@Giovannetti2011], quantum computing[@OBrien2007; @Aspuru-Guzik2012] and quantum communication[@Gisin2007]. Recently an intermediate model of quantum computing has refocused attention towards the findings of HOM. BosonSampling[@aaronson2011computational] utilizes even higher order correlations through the non–classical interference of a few dozen single–photons.
The recent development of quantum photonics technology[@OBrien2009] allows experiments using a growing number of photons and large, complex interferometric networks. Manipulating such large Hilbert–spaces requires well adapted tools in both theory and experiment. Although non–classical interference is often associated with perfectly indistinguishable photons this only represents the simplest case of photon states fully symmetric under permutation. Experimentally partial distinguishability is ubiquitous because the generation of indistinguishable multi–photon states currently remains a challenge. Moreover partial distinguishability is of fundamental interest highlighted by e.g. the nonmonotonicity of the quantum–to–classical transition[@Tichy2011; @Ra2013]. In the following we present a novel description for the non–classical interference of multiple photons of arbitrary distinguishability propagating through arbitrary interferometers. We introduce a symmetry–adapted and therefore natural basis with basis states acting as the normal coordinates for the description of the non–classical interference of photons. In our perspective a different interferometer just depends on a different set of normal coordinates; the non–classical interference is determined solely by the properties of the photons. Distinguishability, as the central property, is tunable by treating temporal delay as an explicit parameter thereby allowing access to the whole spectrum of non–classical interference.
Results
=======
The quantum interference of two bosons
--------------------------------------
\
In the case of two photons the Hong–Ou–Mandel dip has become a canonical implementation of an optical $G^{(2)}$–measurement. In this experiment two photons are injected into distinct input ports of a beam splitter, which is effectively an $m=2$ interferometer, where $m$ is the number of modes of the interferometer. One element of the output probability distribution corresponding to the case where the two photons exit the beam splitter in different output ports, is recorded via a coincidence measurement. In figure **1a** we show the coincidence probability $P_c$ that depends on the transformation matrix $B$, here defined by the splitting ratio of the beam splitter, and the distinguishability of the photons. In the prominent example of a balanced, i.e. $50/50$, beam splitter and perfectly indistinguishable photons the coincidence rate vanishes. The established technique to calibrate for the point of maximal non–classical interference relies on tuning the relative temporal delay $\Delta\tau$, i.e. the distinguishability between the two photons. This is described by an overlap integral which accounts for the key properties of the photons such as spectral shape, polarization, spatial mode in addition to the relative temporal delay. The coincidence probability $P_c(\Delta\tau)$ in the general case corresponds to $$\begin{aligned}
P_c(\Delta\tau) =&
\int d\omega\int d\omega' |\bra{\psi_{in}}\hat{B}^{\dagger}\hat{a}^{\dagger}_1(\omega)\hat{a}^{\dagger}_2(\omega')\ket{0}|^2\nonumber\\
=& \boldsymbol{v_2}^{\dagger} \big[\hat{R}^{(2)}(\Delta\tau)\big]\boldsymbol{v_2}\nonumber\\
=\begin{pmatrix}
{\rm per}(B) \\
{\rm det}(B)
\end{pmatrix}^{\mathlarger{\dagger}}
&\left[\frac{1}{2}
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}
+\frac{1}{2}\zeta\text{e}^{- \xi\Delta\tau^2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\right]
\begin{pmatrix}
{\rm per}(B) \\
{\rm det}(B)
\end{pmatrix}\label{eq:P112},\end{aligned}$$ where $0 \leq \zeta \leq 1$ is derived from the mode–overlap integral, $\bra{\psi_{in}}=\bra{0}\hat{a}_1(\omega)\hat{a}_2(\omega')$ is the state impinging on the beamsplitter, and $\xi$ is a factor describing the shape of the interference feature (see supplementary information for further details).
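Equation (\[eq:P112\]) can be evaluated directly. The following sketch (with assumed unit mode overlap $\zeta=1$ and an arbitrary shape factor $\xi$, both illustrative) computes $P_c(\Delta\tau)$ in the permanent/determinant basis and reproduces the limiting values discussed in the text: the dip bottom $P_c(0)=|\text{per}(B)|^2$ and the classical plateau $\frac{1}{2}(|\text{per}(B)|^2+|\text{det}(B)|^2)$:

```python
import numpy as np

def coincidence(B, dtau, zeta=1.0, xi=1.0):
    """P_c(dtau) in the permanent/determinant basis.

    B is the 2x2 scattering (sub)matrix; zeta and xi (assumed values here)
    encode the mode overlap and the shape of the interference feature.
    """
    per = B[0, 0] * B[1, 1] + B[0, 1] * B[1, 0]
    det = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]
    env = zeta * np.exp(-xi * dtau**2)
    # v^dag R v  with  v = (per, det)
    return 0.5 * (1 + env) * abs(per)**2 + 0.5 * (1 - env) * abs(det)**2

# 50/50 beam splitter: per(B) = 0, so the HOM dip reaches zero
B5050 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(coincidence(B5050, 0.0))   # 0.0 (full destructive interference)
print(coincidence(B5050, 1e3))   # 0.5 (classical limit)

# 67/33 beam splitter: the dip bottoms out at |per(B)|^2 = 1/9
t, r = np.sqrt(2 / 3), np.sqrt(1 / 3)
B6733 = np.array([[t, r], [r, -t]])
print(coincidence(B6733, 0.0))   # ~1/9
```

For the unbalanced beam splitter the classical plateau sits at $\frac{1}{2}(1/9+1)=5/9$, matching the familiar $T^2+R^2$ coincidence rate for distinguishable photons.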
The natural basis for two photons
---------------------------------
\
In the case of two photons the non–classical interference is a second–order correlation effect and therefore depends on the permutational symmetry of the two interfering particles. We consider a basis accounting for the permutational symmetries as a natural basis for quantum interference. Consequently we introduce a basis vector $\boldsymbol{v}$, whose components encapsulate the unitary network description $B$ in matrix functions having definite permutation properties; the first component is the permanent (per) and is fully symmetric under permutation; the second component is the determinant (det) and is fully antisymmetric under permutation. These are the only two possible symmetries when permuting two objects. By using the basis vector $\boldsymbol{v}$, we obtain an elegant and compact form of the rate matrix $\hat{R}^{(2)}(\Delta\tau)$; $\hat{R}^{(2)}(\Delta\tau)$ is a diagonal matrix and its entries depend only on properties of the input state. The ratio of its two diagonal entries, $\hat{R}^{(2)}_{11}$ and $\hat{R}^{(2)}_{22}$, reveals the nature of the non–classical interference of two photons of arbitrary coherence. For indistinguishable photons at zero temporal delay $\Delta\tau$, $\hat{R}^{(2)}_{22}$ vanishes and the output probability is proportional to the permanent of $B$ only. The permutational symmetry of identical bosonic particles, e.g. photons, is reflected in transition amplitudes determined by a permutationally symmetric function, the permanent. Temporal delays larger than the coherence time of the photons, $\Delta\tau\gg\tau_c$, result in a complete loss of coherence. In this case, often characterized as classical behaviour of the photons, both matrix entries contribute equally, $\hat{R}^{(2)}_{11}=\hat{R}^{(2)}_{22}=0.5$. The state is then an equal mixture of symmetric and antisymmetric parts and does not exhibit any of the indistinguishability features associated with quantum interference.
The analysis above can be generalized to the quantum interference of two photons in larger interferometric networks. In this case the two input ports and the two ports in which the photons exit such a network define $2\times2$ scattering submatrices $B^*$. While the basis vector $\boldsymbol{v}$ now contains matrix functions of $B^*$ the rate matrix $\hat{R}^{(2)}(\Delta\tau)$ stays identical, independent of $B^*$. Figure **1b** highlights how this natural basis cleanly separates effects arising due to distinguishability in the input state from effects of the interferometric network. The advantage becomes increasingly evident for the non–classical interference of more than two photons.
![**Two–photon non–classical interference.** Two photons of temporal coherence $\tau_c$ enter a beam splitter through different input ports. (**a**) The coincidence probability $P_c$ that they leave in two different output ports is plotted with respect to a relative temporal delay $\Delta\tau$. This delay is used to tune the distinguishability of the otherwise identical photons. The blue curve shows the output probability for a $50/50$ beam splitter, and the green one for a $67/33$ beam splitter. (**b**) depicts the contribution of the permanent (per) and determinant (det). It is the same for both beam splitters because this description is independent of the interferometer. In the case for zero delay ($\Delta\tau=0$) only the permanent contributes. By explicitly calculating the permanent, which is zero for a $50/50$ beam splitter, the vanishing output probability (**a**) for zero delay is obtained.[]{data-label="FIG:HOM"}](figure1.pdf){width="45.00000%"}
The quantum interference of three bosons
----------------------------------------
\
Consider a scenario where two photons are nearly indistinguishable and the third is delayed significantly. Adding a third photon leads to situations that can no longer be understood by the weighted sum of the permanent and determinant. In order to describe such a behaviour a more general matrix function, the immanant, is necessary[@Tan2013; @Guise2014]. The immanant[@Littlewood1934] expands the concept of the permanent and determinant to mixed permutation symmetries and is defined as $\text{imm}(M)=\sum_{\sigma} \chi(\sigma)\prod_i M_{i\sigma(i)}$ for $M_{ij}$ matrix elements of $M$, with $\chi(\sigma)$ the character of permutation $\sigma$. The permanent, for which every $\chi(\sigma)=1$, and the determinant, for which $\chi(\sigma)=\text{sgn}(\sigma)$, are special cases of the immanant (for an intuitive explanation of these matrix functions see figure **2**).
![**The permanent, the determinant and the immanant.** The permanent (per) and the determinant (det) are special cases of the immanant (imm), a matrix function. The three functions behave differently under odd permutations of the matrix. The permanent is symmetric under permutation of e.g. columns, depicted in (**a**). Permuting the yellow and green columns and calculating the permanent of this permuted and the original matrix yields the same result, A. Contrarily, the determinant of a permuted matrix will show a sign change compared to the determinant of the original matrix (**b**). Therefore the determinant is antisymmetric under odd permutations. Immanants (**c**) can describe both cases above, but their strength lies in covering mixed permutation symmetry. Calculating the immanant of a matrix and of another one with, e.g. the yellow and green columns flipped will give different results, C and D respectively. Additionally flipping the green and red column results in an even permutation of the original matrix. The immanant of this matrix gives yet another results, E. The overall number of immanants is bounded by the maximal number of unique permutation operations of the corresponding symmetric group $S_n$.[]{data-label="FIG:MatrixFunctions"}](figure2.pdf){width="45.00000%"}
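The definition of the immanant lends itself to a direct implementation. This sketch sums over the permutations of $S_3$ with a supplied character $\chi$; the permanent, the determinant and the mixed-symmetry immanant are recovered as special cases (the example matrix is arbitrary, chosen only for the test):

```python
from itertools import permutations

def immanant(M, chi):
    """imm(M) = sum_sigma chi(sigma) * prod_i M[i][sigma[i]]."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += chi(sigma) * prod
    return total

def sgn(sigma):
    # Parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def chi_mixed(sigma):
    # Character of the two-dimensional (mixed-symmetry) irrep of S_3:
    # 2 on the identity, 0 on transpositions, -1 on 3-cycles,
    # distinguished here by the number of fixed points (3, 1, 0).
    fixed = sum(1 for i, s in enumerate(sigma) if i == s)
    return {3: 2, 1: 0, 0: -1}[fixed]

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(M, lambda s: 1))  # permanent: 463
print(immanant(M, sgn))          # determinant: -3
print(immanant(M, chi_mixed))    # mixed-symmetry immanant: -80
```

Brute-force summation over $n!$ permutations is perfectly adequate for $n=3$; for larger matrices the cost grows factorially, which is precisely the computational hardness discussed later in the text.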
In the smallest instance of a three–photon quantum interference $n=3$ photons are injected into a $m=3$–mode interferometric network and measured as three–fold coincidences at the three output ports. The optical transformation implemented by the interferometer can be any $3\times3$ linear optical transformation and the distinguishability of the three photons is arbitrarily tunable by setting the relative temporal delays; $\Delta\tau_1$ between the first and second photon and $\Delta\tau_2$ between the second and third photon.\
The coincidence probability $P_{111}(\Delta\tau_1,\Delta\tau_2)$ is given in equation (\[eq:P111\]), where $\hat{a}^{\dagger}_1(\omega)$, $\hat{a}^{\dagger}_2(\omega')$ and $\hat{a}^{\dagger}_3(\omega'')$ are the creation operators in modes $1,2,3$ of $T$ for photons with different spectral shape functions dependent on the frequency variables $\omega,\omega',\omega''$. Here $\bra{\psi_{in}}=\bra{0}\hat{a}_1(\omega)\hat{a}_2(\omega')\hat{a}_3(\omega'')$ is the three–photon state impinging on the interferometer. An expansion of equation (\[eq:P111\]) in terms of immanants, determinants and permanents results in a linear superposition of 60 terms. However, utilizing a symmetry–adapted basis allows for the compact representations given in equations (\[eq:P1112\]) and (\[eq:P1113\]) (see methods and supplementary information for further details). Here four immanants, the permanent and the determinant of $T$ constitute the components of a six–dimensional basis vector $\boldsymbol{v_3}$. The basis transformations $\hat{P}$ and $\hat{S}$, the latter a matrix mapping matrix elements to matrix functions, transform between the basis vector of equation (\[eq:P1112\]), $\boldsymbol{a}=\hat{P}\hat{S}\boldsymbol{v_3}$, and the basis vector of equation (\[eq:P1113\]), $\boldsymbol{v_3}$. Analogous to equation (\[eq:P112\]) the $\zeta$ terms are derived from the mode–overlap integral and the $\xi$ terms are factors describing the shape of the interference feature. In this notation the overlap terms weight a sum of six matrices: the identity matrix and five permutation matrices $\rho_{12}$, $\rho_{13}$, $\rho_{23}$, $\rho_{123}$ and $\rho_{132}$, the subscripts of which label the permutation operation.
The natural basis for three photons
-----------------------------------
\
Whereas in equation (\[eq:P111\]) the basis states exhibit no particular permutation symmetry, states of the natural basis introduced to yield the fully block–diagonal form of equation (\[eq:P1112\]) have specific permutation properties: states of one symmetry type transform to states of the same type under permutation, i.e. they are decoupled under permutation. States in the natural basis thus play the role of normal coordinates for the non–classical interference of photons. While equation (\[eq:P1112\]) highlights the six different permutational possibilities for three photons, summing the matrices inside the square brackets yields the $6\times6$ rate matrix $\hat{R}^{(3)}(\Delta\tau_1,\Delta\tau_2)$ of equation (\[eq:P1113\]). $\hat{R}^{(3)}(\Delta\tau_1,\Delta\tau_2)$ contains all the information regarding the input state, i.e. mode–mismatch and temporal delay, needed to specify the non–classical interference of three photons independent of the scattering transformation $T$. Two entries of the block–diagonal rate matrix are sufficient for an interpretation. $F_{per}=\hat{R}^{(3)}_{11}(\Delta\tau_1,\Delta\tau_2)$ quantifies the fraction of the output probability distribution proportional to the permanent; the corresponding basis state is fully symmetric under permutation. $F_{det}=\hat{R}^{(3)}_{66}(\Delta\tau_1,\Delta\tau_2)$ quantifies the fraction of the output probability distribution proportional to the determinant; the corresponding basis state is fully antisymmetric under permutation. The contribution proportional to immanants can also be explicitly calculated; when only its overall size is of interest it is given as $F_{imm}=1-F_{per}-F_{det}$. In the case of perfectly overlapping photons $F_{per}=1$ and therefore only the permanent of the scattering matrix contributes to the output probability distribution. Classical behaviour of the photons can be identified with $F_{per}=F_{det}=\frac{1}{6}$.
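The two distinguished entries of the rate matrix have a simple closed form in this picture: projecting the weighted sum of permutation matrices in equation (\[eq:P1112\]) onto the fully symmetric and fully antisymmetric states replaces each permutation matrix by $+1$ or by its sign, respectively. A minimal sketch, with the six overlap weights $w_\sigma$ (the $\zeta$–$\xi$ factors of equation (\[eq:P1112\])) supplied as inputs; the $1/6$ normalization is inferred from the classical limit $F_{per}=F_{det}=\frac{1}{6}$ quoted above:

```python
def f_per(w_id, w12, w13, w23, w123, w132):
    # Fully symmetric block: every permutation contributes with weight +1
    return (w_id + w12 + w13 + w23 + w123 + w132) / 6

def f_det(w_id, w12, w13, w23, w123, w132):
    # Fully antisymmetric block: transpositions enter with a minus sign,
    # 3-cycles (even permutations) with a plus sign
    return (w_id - w12 - w13 - w23 + w123 + w132) / 6

# Perfectly indistinguishable photons: all overlap weights equal 1
print(f_per(1, 1, 1, 1, 1, 1))  # 1.0 -> permanent only
print(f_det(1, 1, 1, 1, 1, 1))  # 0.0

# Completely distinguishable ("classical") photons: only the identity survives
print(f_per(1, 0, 0, 0, 0, 0))  # 1/6
print(f_det(1, 0, 0, 0, 0, 0))  # 1/6
```

Both limits reproduce the values stated in the text, with $F_{imm}=1-F_{per}-F_{det}$ absorbing everything in between.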
As in the two-photon case, the input state and the interferometer decouple in the natural basis. As a consequence the treatment of the quantum interference of three photons in larger interferometric networks consisting of many modes becomes very efficient. For such a problem it is sufficient to calculate the rate matrix $\hat{R}^{(3)}$ only once. The scattering matrix $T$, necessary to calculate the basis vector $\boldsymbol{v_3}$ for a specific element of an output probability distribution, is just a $3\times3$ submatrix of the larger scattering matrix. It is specified by the input ports of the photons and the ports in which they exit the interferometer. To obtain multiple elements of a probability distribution it is sufficient to determine their respective basis vectors $\boldsymbol{v_3}$.
$$\begin{aligned}
P_{111}(\Delta\tau_1,\Delta\tau_2)=
&\int d\omega\int d\omega' \int d\omega''|\bra{\psi_{in}}\hat{T}^{\dagger}\hat{a}^{\dagger}_1(\omega)\hat{a}^{\dagger}_2(\omega')\hat{a}^{\dagger}_3(\omega'')\ket{0}|^2 \label{eq:P111}\\
= &(\hat{P}\hat{S}\boldsymbol{v_3})^{\dagger} \big[{1\!\!1} + \rho_{12}\zeta_{12}\text{e}^{-\xi_{12}\Delta\tau_1^2}+ \rho_{23}\zeta_{23}\text{e}^{-\xi_{23}\Delta\tau_2^2}
+ \rho_{13}\zeta_{13}\text{e}^{-\xi_{13}(\Delta\tau_1-\Delta\tau_2)^2}\nonumber\\ & + \zeta_{123}\left(\rho_{132}\text{e}^{\xi_{123}^*(\Delta\tau_1,\Delta\tau_2)}+\rho_{123}\text{e}^{\xi_{123}(\Delta\tau_1,\Delta\tau_2)}\right)\big](\hat{P}\hat{S}\boldsymbol{v_3}),\label{eq:P1112}\\
= &\boldsymbol{v_3}^{\dagger}\big[\hat{R}^{(3)}(\Delta\tau_1,\Delta\tau_2)\big]\boldsymbol{v_3}\label{eq:P1113},\end{aligned}$$
The coincidence landscape
-------------------------
\
In the experiment four–photon events generated by higher–order emission from a spontaneous parametric down–converter are distributed to four different spatial modes. Relying on a detection event in the trigger mode and post–selection, the three–photon input state, one photon in each input mode coupled to the interferometer, is heralded. We ensure that all photons are indistinguishable in a polarization basis. The spectral properties of these photons are independently measured using a single–photon spectrometer. Their relative temporal delays $\Delta\tau_1$ and $\Delta\tau_2$ can be set using motorized delay lines. The transformation of the fs–written integrated interferometer, a $5\times5$ unitary matrix, is recovered using the reconstruction method specified in the supplementary material. Injecting the photons in three input ports of the interferometer and detecting them in three separate output ports uniquely selects a $3\times3$ scattering submatrix $T$ (see figure **3a** and **3d**). For each $3\times3$ submatrix, using a precisely tunable delay allows us to reveal the full spectrum and thereby the nature of the non–classical interference. We visualize this as a three–dimensional coincidence landscape as shown in figure **3b** and **3e**. The relief of such a landscape features distinct “landforms” that are in correspondence with distinguishability features of the photons. In the center region, $\Delta\tau_1=\Delta\tau_2=0\pm\tau_c$, a peak or dip arises due to constructive or destructive interference of all three photons. Note that the absolute zero position $\Delta\tau_1=\Delta\tau_2=0$ corresponds to a permanent only in the absence of any spectral distinguishability. Along the three axes $\Delta\tau_1=0$, $\Delta\tau_2=0$ and $\Delta\tau_1=\Delta\tau_2>|\pm\tau_c|$ valleys or ridges form due to the non–classical interference of two indistinguishable photons with the third one being distinguishable.
Each valley or ridge depicts a case where one of the three photons is distinguishable from the other two. Along those ridges and valleys the output probability is largely proportional to immanants of the scattering matrix. “Classical” behaviour, i.e. complete distinguishability of the three photons, is associated with plateaux for $\Delta\tau_1=-\Delta\tau_2>|\pm\tau_c|$. These are the only areas where determinants of the scattering matrix contribute, accounting for the anti–symmetrical part of the input state. Coincidences for six points of pairwise different temporal delays, $P1$ to $P6$, for two different scattering submatrices (see figure **3c** and **3f**) are measured. These six points were selected because they highlight the connection between landscape features, permutation symmetries, and partial distinguishability. Furthermore they provide a sufficient set of experimental data for fitting the coincidence landscapes. A reduced $\chi^2$ of $1.38$ and $1.10$ for the two landscapes quantifies the agreement between our theory and the experiment. The deviations are most likely due to higher–order emissions and frequency correlations of the input state.\
The landscape interpretation can be extended to the interference of larger numbers of photons $n$, which generate $n$-dimensional “hyperlandscapes”. These are spanned by $n-1$ axes of pairwise temporal delays, with the last axis representing the actual coincidence rate. The “landforms” range from complex $n$–dimensional features corresponding to the partial indistinguishability of all $n$ photons to the simple one-dimensional plateaux associated with completely distinguishable photons.
From permanents to immanants
----------------------------
\
Quantum computing leverages quantum resources to efficiently perform certain classically hard computations[@Nielsen2010]. Whereas many quantum algorithms solve a certain decision problem, BosonSampling introduces a new paradigm: it seeks efficient sampling of a distribution of matrix transformations, a task hard to implement efficiently on classical computers. Optical realizations of both approaches, universal quantum computing and BosonSampling, rely intrinsically on the non–classical interference of more than two photons. BosonSampling is singular amongst current proposals because its low requirements on space and time resources bring within current technological reach the realistic possibility of demonstrating the superiority of quantum computing. This promise has led to several BosonSampling experiments[@Broome2013; @Spring2013; @Tillmann2013; @Crespi2013] and follow–up work[@Gogolin2013; @Aaronson2013; @Spagnolo2014; @Carolan2014].\
In order to scale BosonSampling to larger instances two main issues need to be addressed. The first issue is the technology[@Lita2008; @Zhou2014; @Marsili2013] needed to increase the size of the instances implemented. The second issue is handling of possible errors[@Leverrier2013; @Rohde2012; @Rohde2012a]. BosonSampling is a purely passive optical scheme and therefore lacks error correction capabilities[@Rohde2014a]. The success of computation depends crucially on the quality of the experimental apparatus. Only in the ideal case where the interfering photons are indistinguishable in all degrees of freedom is the resulting output probability distribution proportional to the permanent only. Our analysis exposes that this condition is rather fragile and therefore distinguishability must be regarded as the dominant source of error.\
Remarkably, large classes of immanants are known to be in the same complexity class as permanents[@burgisser2000computational; @brylinski2003complexity]. Thus it is an intuitive conjecture that output probability distributions depending largely on immanants rather than just the permanent are also computationally hard. Whether this holds for sampling from these distributions is an active field of research.\
Optical implementations of BosonSampling instances utilize state–of–the–art large–scale random scattering networks and generate huge output probability distributions consisting of many elements. These prerequisites make it a benchmark for multi–photon non–classical interference. Consequently a description of generalized non–classical interference needs to be assessed under these conditions. Our approach decouples the interferometer from the non–classical interference, hence the treatment and conclusions are analogous for e.g. central building blocks of linear optical quantum computing such as ancilla–assisted CNOT–gates. These, however, typically feature more symmetric and simpler networks, rendering the non–classical interference far less rich.
Investigation of a BosonSampling computer
-----------------------------------------
\
We investigate generalized non–classical interference of three photons in a five–moded interferometric network in theory and experiment. This serves a dual purpose: On one hand it emphasizes how distinguishability influences a three–photon BosonSampling instance. On the other hand the full permutational spectrum of a generalized non–classical interference is shown for complex networks exhibiting a random structure. The photons exhibit some spectral mismatch and are additionally rendered fully or partially distinguishable by controlling temporal delays. Figure **4a** illustrates the result for partial distinguishability, whereas in figure **4b** and **4c** this distinguishability is increased by varying the temporal delay along a diagonal axis $\Delta\tau_1\approx\Delta\tau_2$. The extreme case of complete distinguishability and thus classical behaviour is shown in figure **4d**. As reference we include in all figures the ideal case of zero delay and perfect indistinguishability as grey bars. The interferometer independent contribution $F_{per}$, $F_{det}$ and $F_{imm}$ is contained as an inset in the legend of each figure.
The elements of each output probability distribution are recovered by calculating the corresponding matrix functions. Note that for each element the absolute value of these matrix functions, e.g. $|\text{per}(T)|^2$ or $|\text{det}(T)|^2$, can vary largely depending on the scattering submatrix $T$. This is pronounced for the output event $123$, where $|\text{per}(T_{123})|^2\approx\frac{1}{5}|\text{det}(T_{123})|^2$. In general the fraction of the output probability distribution proportional to the permanent drops rapidly with increasing distinguishability. Instead, contributions from immanants become dominant and reflect cases where only two of the three photons interfere non–classically. For large delays along the antidiagonal axis $\Delta\tau_1\approx-\Delta\tau_2$ the three photons’ wavefunctions no longer overlap and the determinant contributes with $F_{det}=\frac{1}{6}$ (see figure **4d**). For comparably large delays along the diagonal axis $\Delta\tau_1\approx\Delta\tau_2$ two photons stay nearly indistinguishable and the contribution from the determinant is suppressed to $F_{det}\approx0$ (see figure **4c**). The classical case (figure **4d**) can always be identified with equal contributions from the permanent and determinant, $F_{per}=F_{det}=\frac{1}{n!}$, which for $n=3$ photons is $F_{per}=F_{det}=\frac{1}{6}$.\
Our theory emphasizes the permutation symmetries of $n$ photons using the representation theory of the symmetric group $S_n$. The theory is thus independent of the number of modes $m$ in the interferometer, a feature that is extremely convenient for large scale networks where $m\gg n$, even though the representations increase with $n!$. In figure **5** we show the applicability of our method for larger $n$ and $m$ with a calculation of a generalized non–classical interference of five photons injected in a network of nine modes. The full spectrum of such a non–classical interference, constituted by permanents, determinants and immanants of the respective scattering submatrices, is revealed by tuning the photons’ distinguishability. Different physical scenarios of partial distinguishability, e.g. the case where four photons are indistinguishable from one another but distinguishable from the fifth photon, are covered by the corresponding partitions of the immanants. Figure **5b** highlights that already partial distinguishability significantly alters the output probability distribution to be primarily proportional to immanants.
Discussion {#discussion .unnumbered}
==========
We present a novel analysis of multi–photon quantum interference revealing the full permutational spectrum of input states with arbitrary distinguishability. A comprehensive physical interpretation is achieved by establishing a correspondence between matrix immanants and these mixed symmetry input states. We introduce a rate–matrix containing all the information on the non–classical interference and basis vectors containing the information on the interferometric network. Output probabilities are recovered as an inner product of these vectors with the rate–matrix serving as a metric. This rate–matrix is block–diagonalized and each block corresponds to a different physical scenario of non–classical interference. This indicates that the block–diagonalization and the consequent interpretation are not only fundamental but also universal features of multi–photon interferometry. We experimentally confirm our theory by recovering the full coincidence landscape of three arbitrarily distinguishable photons and give an analytical example for five photons. Our approach thus provides a deeper understanding of the rich spectrum of multi–photon non–classical interference. Additionally, our method can be used to characterize a broad range of optical interferometers used for example in quantum information processing. While passive schemes like BosonSampling benefit most from this approach, it applies analogously to crucial building blocks of linear optical quantum computing relying on the non–classical interference of more than two photons[@Knill2001; @Pittman2001].\
Methods
=======
**Three–photon coincidence probability** Vector $\hat{P}\hat{S}\boldsymbol{v}$ in equation (\[eq:P1112\]) is defined as:
$$\begin{aligned}
\hat{P}\hat{S}\boldsymbol{v}=\left(\begin{array}{c}
\frac{1}{\sqrt{6}}\text{per}(T) \\
\frac{1}{\sqrt{6}}\text{det}(T) \\
\frac{1}{2\sqrt{3}}\text{imm}(T)+\frac{1}{2\sqrt{3}}\text{imm}(T_{213})\\
\frac{1}{6}\text{imm}(T)-\frac{1}{3}\text{imm}(T_{132})-\frac{1}{6}\text{imm}(T_{213})+\frac{1}{3}\text{imm}(T_{312})\\
\frac{1}{6}\text{imm}(T)+\frac{1}{3}\text{imm}(T_{132})+\frac{1}{6}\text{imm}(T_{213})+\frac{1}{3}\text{imm}(T_{312})\\
-\frac{1}{2\sqrt{3}}\text{imm}(T)+\frac{1}{2\sqrt{3}}\text{imm}(T_{213})
\end{array}\right )\end{aligned}$$
where $T_{ijk}$ is the matrix $T$ in which rows 1,2 and 3 have been rearranged in order $i$, $j$, $k$.\
**Labels of immanants by Young diagrams** The different immanants in the caption of Fig. 5 are indexed with the corresponding Young diagrams. Young diagrams are collections of boxes, used here to distinguish the different physical scenarios of multi–photon non–classical interference. They are a pictorial representation of the partitions labelling the irreducible representations of $S_n$[@Guise2014].\
**State generation** A Ti–Sapphire oscillator emitting pulses at and a repetition rate of is frequency doubled in a $\text{LiB}_3\text{O}_5$ (LBO) crystal (see Fig. \[Fig:Setup\] for a schematic of the experimental setup). The output power of this second harmonic generation can be controlled by a power regulation stage consisting of a half–wave plate (HWP) and a polarizing beam splitter (PBS) placed before the LBO-crystal. The resulting emission at is focused into a thick $\beta\text{--BaB}_2\text{O}_4$ (BBO) crystal cut for degenerate non–collinear type–II down–conversion[@Kwiat1995]. A compensation scheme consisting of HWPs and thick BBO–crystals is applied for countering temporal and spatial walk–off. The two spatial outputs of the down–converter pass through narrowband interference filters ($\lambda_{\text{FWHM}}=$) to achieve a coherence time greater than the birefringent walk–off due to group velocity mismatch in the crystal ($|v_{g_e}-v_{g_o}|$ $\times$ half crystal thickness). Additionally this renders the photons close to spectral indistinguishability. The down–conversion–source is aligned to emit the maximally entangled Bell–state $\ket{\phi^+}=\frac{1}{\sqrt{2}}\left(\ket{HH}+\ket{VV}\right)$ when pumped at cw–equivalent pump power. The state is coupled into single mode fibers (Nufern 780–HP) equipped with pedal–based polarisation controllers to counter any stress–induced rotation of the polarisation inside the fiber. Each of these spatial modes is then coupled to one input of a PBS while its other input is occupied with a vacuum–state. The outputs pass HWPs and are subsequently coupled to four polarisation maintaining fibers (Nufern PM780–HP). Temporal overlap is controlled by two motorized delay lines that exhibit a bidirectional repeatability of $\pm\,$. Temporal alignment precision is limited by other factors in the setup to approximately $\pm\,$ and is therefore within a precision of 2.5% of the coherence length of the photons. 
The polarisation maintaining fibers are mated to a single mode fiber v–groove–array (Nufern PM780–HP) with a pitch of and butt–coupled to the integrated circuit. The coupling is controlled by a manual six–axis flexure stage and stable within 5% of the total single–photon counts over 12 hours. The output fiber array consists of a multimode v–groove–array (GIF–625) and the photons are detected by single–photon avalanche photodiodes which are recorded with a home–built Field Programmable Gate Array logic. The coincidence time window was set to .\
In order to measure the six points of the coincidence landscapes, a three–photon input state was injected into the integrated network (see supplementary information for further details). To this end the BBO was pumped with a cw–equivalent power of and the ratio of the six–photon emission over the desired four–photon emission was measured to be below 5%.\
**Integrated network fabrication.** The integrated photonic networks were fabricated using a femtosecond laser direct–write technology[@Itoh2006; @Marshall2009]. Laser pulses were focused below the surface of a high–purity fused silica wafer by an NA=0.6 objective. The pulses exhibit a pulse duration of at repetition rate and a central wavelength of . In order to write the individual waveguides, the wafer was translated with a speed of . The waveguide modes exhibit a mode field diameter of $\times$ for a wavelength of and a propagation loss of . This results in a coupling loss of with the type of input fibers used in this experiment. Coupling to the output array results in negligible loss due to the use of multimode fibers.\
The authors thank I. Dhand and J. Cotter for helpful discussions, M. Tomandl for assistance with the illustrations and J. Nielsen and J. Kulp for assistance with coding and computing the five–photon non-classical interference. M.T., S.E.S. and P.W. acknowledge support from the European Commission with the project EQuaM -Emulators of Quantum Frustrated Magnetism (No 323714), GRASP - Graphene-Based Single–Photon Nonlinear Optical Devices (No 613024), PICQUE - Photonic Integrated Compound Quantum Encoding (No 608062), QuILMI - Quantum Integrated Light Matter Interface (No 295293) and the ERA-Net CHIST-ERA project QUASAR - Quantum States: Analysis and Realizations, the German Ministry of Education and Research (Center for Innovation Competence program, grant 03Z1HN31), the Vienna Center for Quantum Science and Technology (VCQ), the Austrian Nano-initiative Nanostructures of Atomic Physics (NAP-PLATON), and the Austrian Science Fund (FWF) with the projects PhoQuSi Photonic Quantum Simulators (Y585-N20) and the doctoral programme CoQuS Complex Quantum Systems, the Vienna Science and Technology Fund (WWTF) under grant ICT12-041, and the Air Force Office of Scientific Research, Air Force Material Command, United States Air Force, under grant number FA8655-11-1-3004. B.C.S. acknowledges support from AITF (Alberta Innovates Technology Futures), NSERC (Natural Sciences and Engineering Research Council), and CIFAR (Canadian Institute for Advanced Research). The work of H.dG. is supported in part by NSERC of Canada. S.–H.T.: This material is based on research supported in part by the Singapore National Research Foundation under NRF Award No. NRF-NRFF2013-01. R.H., S.N. and A.S. acknowledge support from the German Ministry of Education and Research (Center for Innovation Competence programme, grant 03Z1HN31), the Deutsche Forschungsgemeinschaft (grant NO462/6-1), the Thuringian Ministry for Education, Science and Culture (Research group Spacetime, grant 11027-514).\
The authors declare that they have no competing financial interests.\
Appendix {#appendix .unnumbered}
========
Two–photon non–classical interference
-------------------------------------
Two photons injected into different inputs of an arbitrary beam splitter or a network built from arbitrary beam splitters and phase shifters will interfere non–classically [@Hong1987; @Fearn1989]. The two–photon input state can be expressed as $$\label{eq:2in}
\ket{11} = (\hat{A}^{\dagger}_1(\alpha_1)e^{i\omega_1\tau_1})(\hat{A}^{\dagger}_2(\alpha_2)e^{i\omega_2\tau_2})\ket{0},$$ with $$\label{eq:Ai}
\hat{A}^{\dagger}_i(\alpha_i)=\int\limits_{0}^\infty d\omega_i \alpha_i(\omega_i)\hat{a}^{\dagger}_i(\omega_i) \ ,$$ for $\hat{A}^{\dagger}_i(\alpha_i)$ a creation operator for a photon with spectral function $$\label{eq:alpha}
\left |\alpha_i(\omega_i)\right|^2= \frac{1}{\sqrt{2\pi}\sigma_i}\exp\left(-\frac{(\omega_i-\omega_{c,i})^2}{2\sigma_i^2}\right)$$ centered at time $\tau_i$. The frequency-mode creation operators on the right-hand side (RHS) of equation (\[eq:Ai\]) satisfy the commutator relation $$\label{comm}
[\hat{a}_i(\omega),\hat{a}^{\dagger}_j(\omega')]=\delta_{ij}\delta(\omega-\omega')\mathds{1}$$ with $\mathds{1}$ the identity operator. This commutator relation also defines the photons’ symmetry under permutation operations. For two photons it is sufficient to define their relative temporal delay as $\Delta\tau=\tau_1-\tau_2$. Only in the case of ideal bosonic particles exhibiting no modal mismatch and perfect temporal overlap, i.e. $\Delta\tau=0$, does the RHS of equation (\[comm\]) become the well–known bosonic commutator relation describing perfect symmetry under exchange. When the two–photon input state (see equation (\[eq:2in\])) is mixed via a transformation matrix $B=U_{2\times2}$ and projected on an output where the two photons exit in different modes, the output probability becomes,
$$\begin{aligned}
P_c(\Delta\tau)&=\int d \omega_1 \int d \omega_2 \left|\langle 11| \hat{B}^\dag\hat{a}^{\dagger}_1(\omega_1)\hat{a}^{\dagger}_2(\omega_2) |0 \rangle \right|^2\\
&=
\begin{pmatrix}
{\rm per}(B) \\
{\rm det}(B)
\end{pmatrix}^{\mathlarger{\dagger}}
\left[\frac{1}{2}
\begin{pmatrix}
1 &\quad\,\, 0 \\
0 &\quad\,\, 1
\end{pmatrix}
+\frac{1}{2}\zeta\text{e}^{- \xi\Delta\tau^2}
\begin{pmatrix}
1 &\quad\, 0 \\
0 &\,\,\, -1
\end{pmatrix}
\right]
\begin{pmatrix}
{\rm per}(B) \\
{\rm det}(B)
\end{pmatrix}\label{eq6}\\
&= \boldsymbol{v_2}^{\dagger} \big[\hat{R}^{(2)}(\Delta\tau)\big]\boldsymbol{v_2}\label{eq7},\end{aligned}$$
with $$\zeta=\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}\exp\left({-\frac{(\omega_{c,1}-\omega_{c,2})^2}{2(\sigma_1^2+\sigma_2^2)}}\right),\;
\xi=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}$$ denoting factors arising from the spectral overlap integral and
$$\boldsymbol{v_2}=\frac{1}{\sqrt{2}}\begin{pmatrix}
{\rm per}(B) \\
{\rm det}(B)
\end{pmatrix}$$
the new basis vector constituted by matrix functions of the transformation matrix $B$. As a second–order correlation effect this non–classical interference depends on the permutational symmetry of the interfering wavefunctions, which is also reflected in the basis vector $\boldsymbol{v_2}$. For the case of indistinguishable photons ($\omega_{c,1}=\omega_{c,2}$, $\sigma_1=\sigma_2$ and $\Delta\tau=0$), the output probability is proportional to the permanent alone. This matrix function is symmetric under permutation of rows of the transformation matrix and arises in photon interferometry due to bosonic exchange symmetry. However, with loss of complete indistinguishability ($\omega_{c,1}\neq\omega_{c,2}$, $\sigma_1\neq\sigma_2$ or $\Delta\tau\neq0$), equation (\[eq6\]) becomes proportional to a combination of the determinant and the permanent. This is a consequence of the input state losing its symmetry under exchange. Equation (\[eq7\]) decouples the influence of the interferometer from the influence of the input state. The latter is contained in the diagonal $2\times2$ rate–matrix $\hat{R}^{(2)}(\Delta\tau)$, whereas the description of the interferometer is absorbed in the new basis vector $\boldsymbol{v_2}$. The two non–zero entries of the rate–matrix, $\hat{R}^{(2)}_{11}$ and $\hat{R}^{(2)}_{22}$, are sufficient to reveal the nature of the non–classical interference of two photons of arbitrary coherence: $\hat{R}^{(2)}_{11}$ quantifies the contribution from the permanent of the scattering submatrix, while $\hat{R}^{(2)}_{22}$ quantifies the contribution from the determinant. The output probability $P_c$ is recovered by calculating those matrix functions.\
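Equations (\[eq6\]) and (\[eq7\]) can be evaluated directly from the permanent and determinant of the $2\times2$ transformation matrix. The following Python sketch (an illustration in simplified units, $\zeta=\xi=1$, and with a balanced beam splitter chosen as an example; it is not the analysis code of this work) reproduces the Hong–Ou–Mandel dip:

```python
import numpy as np

def per2(B):
    """Permanent of a 2x2 matrix: b11*b22 + b12*b21."""
    return B[0, 0] * B[1, 1] + B[0, 1] * B[1, 0]

def coincidence_prob(B, dtau, zeta=1.0, xi=1.0):
    """P_c(dtau) = v2^dag R^(2)(dtau) v2, cf. equations (eq6)-(eq7)."""
    v2 = np.array([per2(B), np.linalg.det(B)]) / np.sqrt(2)
    R2 = np.eye(2) + zeta * np.exp(-xi * dtau**2) * np.diag([1.0, -1.0])
    return float(np.real(v2.conj() @ R2 @ v2))

# Balanced beam splitter: per(B) = 0, det(B) = 1.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
print(coincidence_prob(B, 0.0))   # 0: Hong-Ou-Mandel dip
print(coincidence_prob(B, 1e3))   # ~0.5: distinguishable limit
```

At zero delay only the permanent term survives, which vanishes for a balanced coupler; at large delay the rate–matrix reduces to the identity and the classical value $\frac{1}{2}(|{\rm per}(B)|^2+|{\rm det}(B)|^2)$ is recovered.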
Three–photon non–classical interference
---------------------------------------
Non–classical interference of photons depends on the indistinguishability of the interfering photons and on the transformations mixing the modes. Adding a third photon noticeably increases the complexity. An input state corresponding to three photons in three different transverse spatio–temporal modes can be described as
$$\begin{aligned}
\ket{111}=(\hat{A}^{\dagger}_1(\alpha_1)e^{i\omega\tau_1})(\hat{A}^{\dagger}_2(\alpha_2)e^{i\omega'\tau_2})(\hat{A}^{\dagger}_3(\alpha_3)e^{i\omega''\tau_3})\ket{0}\ .\end{aligned}$$
For three photons it is sufficient to define two relative temporal delays, $\Delta\tau_1=\tau_1-\tau_2$ and $\Delta\tau_2=\tau_3-\tau_2$. When this input state is transformed via a submatrix $T=U_{3\times3}$ and projected on an output where the three photons exit in different modes the fully expanded output probability can be written as
$$\begin{aligned}
P_{111}(\Delta\tau_1,\Delta\tau_2)=&\int d\omega\int d\omega' \int d
\omega''|\bra{111}\hat{T}^\dag a_1^\dag(\omega)a_2^\dag(\omega')a_3^
\dag(\omega'')\ket{0}|^2 \\
=&\label{fullterms}\frac{1}{6}|{{\rm det}}(T)|^2 +\frac{2}{9}|{{\rm imm}}(T_{132})|^2+\frac{1}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{213})+\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{213})\\ \nonumber
&+\frac{2}{9}|{{\rm imm}}(T_{213})|^2+\frac{2}{9}|{{\rm imm}}(T_{231})|^2+\frac{2}{9}|{{\rm imm}}(T)|^2+\frac{1}{9}{{\rm imm}}(T_{231}){{\rm imm}}^*(T)\\
&+\frac{1}{6}|\Per(T)|^2+\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{231})\nonumber\\
& +\zeta_{13}\exp(-2\xi_{13}(\Delta\tau_1-\Delta\tau_2)^2)\Big( -\frac{1}{6}|{{\rm det}}(T)|^2-\frac{2}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{132})-\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{213})\nonumber\\
&-\frac{1}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{231})+\frac{1}{9}{{\rm imm}}^*(T_{213}){{\rm imm}}(T_{231})-\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{231})\nonumber\\
&+\frac{1}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T_{231})-\frac{2}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T)-\frac{1}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T)+\frac{1}{6}|\Per(T)|^2\Big )\nonumber\\
&+\zeta_{12}\exp(-2\xi_{12}\Delta\tau_1^2)\Big(-\frac{1}{6}|{{\rm det}}(T)|^2+\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{132})+\frac{2}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{213})\nonumber\\
&+\frac{2}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{231})+\frac{1}{9}{{\rm imm}}^*(T_{213}){{\rm imm}}(T_{231})+\frac{2}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{231})\nonumber\\
&+\frac{1}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T_{231})+\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T)+\frac{2}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T)+\frac{1}{6}|\Per(T)|^2\Big)\nonumber\\
&+\zeta_{23}\exp(-2\xi_{23}\Delta\tau_2^2)\Big(-\frac{1}{6}|{{\rm det}}(T)|^2+\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{132})-\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{213})\label{tau4}\nonumber\\
&-\frac{1}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{231})-\frac{2}{9}{{\rm imm}}^*(T_{213}){{\rm imm}}(T_{231})-\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{231})\nonumber\\
&-\frac{2}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T_{231})+\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T)-\frac{1}{9}{{\rm imm}}(T_{213}){{\rm imm}}^*(T)+\frac{1}{6}|\Per(T)|^2\Big)\nonumber\\
&+\zeta_{123}\exp(-I_a+iI_s)\Big(\frac{1}{6}|{{\rm det}}(T)|^2-\frac{1}{9}|{{\rm imm}}(T_{132})|^2-\frac{2}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{213})\nonumber\\
&+\frac{1}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{213})-\frac{1}{9}|{{\rm imm}}(T_{213})|^2+\frac{1}{9}{{\rm imm}}(T){{\rm imm}}^*(T_{231})-\frac{1}{9}|{{\rm imm}}(T_{231})|^2\nonumber\\
&-\frac{1}{9}|{{\rm imm}}(T)|^2-\frac{2}{9}{{\rm imm}}(T_{231}){{\rm imm}}^*(T)+\frac{1}{6}|\Per(T)|^2\Big)\nonumber\\
&+\zeta_{123}\exp(-I_a-iI_s)\Big(\frac{1}{6}|{{\rm det}}(T)|^2-\frac{1}{9}|{{\rm imm}}(T_{132})|^2-\frac{2}{9}{{\rm imm}}(T_{132}){{\rm imm}}^*(T_{213})\nonumber\\
&+\frac{1}{9}{{\rm imm}}^*(T_{132}){{\rm imm}}(T_{213})-\frac{1}{9}|{{\rm imm}}(T_{213})|^2+\frac{1}{9}{{\rm imm}}^*(T){{\rm imm}}(T_{231})-\frac{1}{9}|{{\rm imm}}(T_{231})|^2\nonumber\\
&-\frac{1}{9}|{{\rm imm}}(T)|^2\nonumber-\frac{2}{9}{{\rm imm}}^*(T_{231}){{\rm imm}}(T)+\frac{1}{6}|\Per(T)|^2\Big)\ ,\nonumber\end{aligned}$$
with $$\begin{aligned}
\zeta_{123}=&\sqrt{\zeta_{12}\zeta_{23}\zeta_{13}},
\nonumber\\
I_a\equiv & I_a(\Delta\tau_1,\Delta\tau_2)=-(\Delta \tau_1)^2 \frac{\xi_{12}}{2}-(\Delta \tau_1 - \Delta\tau_2)^2 \frac{\xi_{13}}{2}-(\Delta\tau_2)^2 \frac{\xi_{23}}{2},
\nonumber\\
I_s\equiv & I_s(\Delta\tau_1,\Delta\tau_2)=\Delta \tau_1 \nu_{12}-(\Delta\tau_1-\Delta\tau_2)\nu_{13}-\Delta\tau_2\nu_{23},
\nonumber\\
\zeta_{ij}=&\frac{2\sigma_i\sigma_j}{\sigma_i^2+\sigma_j^2}\exp\left(-\frac{(\omega_{c,i}-\omega_{c,j})^2}{2(\sigma_i^2+\sigma_j^2)}\right),\\
\xi_{ij}=&\frac{2\sigma_i^2\sigma_j^2}{\sigma_i^2+\sigma_j^2},\;
\nu_{ij}=\frac{\omega_{c,i}\sigma_j^2+\omega_{c,j}\sigma_i^2}{\sigma_i^2+\sigma_j^2}.\end{aligned}$$ The subscripts denote the mode labels for the submatrix $T$. $T_{ijk}$ is the matrix $T$ with the rows permuted according to $1\rightarrow i$, $2\rightarrow j$, and $3\rightarrow k$.\
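All matrix functions entering equation (\[fullterms\]) follow one pattern: $\mathrm{imm}_\lambda(T)=\sum_{\sigma\in S_3}\chi_\lambda(\sigma)\prod_{i} T_{i\sigma(i)}$, with $\chi_\lambda$ the character of the irreducible representation labelled by the partition $\lambda$; the permanent and determinant are the special cases $\chi\equiv1$ and $\chi=\mathrm{sgn}$. A minimal Python sketch of this definition (illustrative only, hard–coding the character table of $S_3$):

```python
from itertools import permutations

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a tuple of images."""
    seen, cycles = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                n += 1
            cycles.append(n)
    return tuple(sorted(cycles, reverse=True))

# Character table of S_3, cycle type -> (chi_{3}, chi_{2,1}, chi_{1,1,1}):
CHI = {
    (1, 1, 1): (1, 2, 1),    # identity
    (2, 1):    (1, 0, -1),   # transpositions
    (3,):      (1, -1, 1),   # 3-cycles
}

def immanant(T, irrep):
    """irrep: 0 = permanent, 1 = mixed-symmetry {2,1}, 2 = determinant."""
    total = 0
    for perm in permutations(range(3)):
        prod = 1
        for i in range(3):
            prod *= T[i][perm[i]]
        total += CHI[cycle_type(perm)][irrep] * prod
    return total

T = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(T, 0))   # 463: permanent
print(immanant(T, 2))   # -3: determinant
print(immanant(T, 1))   # -80: mixed-symmetry immanant
```

The same sum over $S_n$ with the appropriate character table generalizes to the five–photon case discussed below.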
For a more elegant expression, equation (\[fullterms\]) can be simplified by introducing six matrices, $\openone$, $\rho_{12}$, $\rho_{13}$, $\rho_{23}$, $\rho_{123}$, and $\rho_{132}$: $$\begin{aligned}
\label{eq:allsym}
P_{111}(\Delta\tau_1,\Delta\tau_2)
=&(\hat{P}\hat{S}\boldsymbol{v_3})^{\dagger}\big[\openone + \rho_{12}\zeta_{12}e^{-\xi_{12}\Delta\tau_1^2}+\rho_{23}\zeta_{23}e^{-\xi_{23}\Delta\tau_2^2}\nonumber\\
&+\rho_{13}\zeta_{13}e^{-\xi_{13}(\Delta\tau_1-\Delta\tau_2)^2}
+\zeta_{123}(\rho_{132}e^{\xi^*_{123}(\Delta\tau_1,\Delta\tau_2)}+\rho_{123}\text{e}^{\xi_{123}(\Delta\tau_1,\Delta\tau_2)})\big](\hat{P}\hat{S}\boldsymbol{v_3}) \\
= &\boldsymbol{v_3}^{\dagger}\big[\hat{R}^{(3)}(\Delta\tau_1,\Delta\tau_2)\big]\boldsymbol{v_3}\label{eq16},\end{aligned}$$ where $$\xi_{123}(\Delta\tau_1,\Delta\tau_2)=I_a+iI_s.$$ The vector $\hat{P}\hat{S}\boldsymbol{v_3}$ contains all the immanants and the determinant and permanent of $T$: $$\begin{aligned}
\label{eq:r}
\hat{P}\hat{S}\boldsymbol{v_3}
\equiv \begin{pmatrix}
\frac{1}{\sqrt{6}}{\rm per}(T) \\
\frac{1}{\sqrt{6}}{\rm det}(T)\\
\frac{1}{2\sqrt{3}}{\rm imm}(T)+\frac{1}{2\sqrt{3}}{\rm imm}(T_{213})\\
\frac{1}{6}{\rm imm}(T)-\frac{1}{3}{\rm imm}(T_{132})-\frac{1}{6}{\rm imm}(T_{213})+\frac{1}{3}{\rm imm}(T_{312})\\
\frac{1}{6}{\rm imm}(T)+\frac{1}{3}{\rm imm}(T_{132})+\frac{1}{6}{\rm imm}(T_{213})+\frac{1}{3}{\rm imm}(T_{312})\\
-\frac{1}{2\sqrt{3}}{\rm imm}(T)+\frac{1}{2\sqrt{3}}{\rm imm}(T_{213})
\end{pmatrix}\end{aligned}$$
with
$$\begin{aligned}
\label{eq:r2}
\boldsymbol{v_3}=\begin{pmatrix}
{\rm per}(T) \\
{\rm imm}(T) \\
{\rm imm}(T_{132}) \\
{\rm imm}(T_{213}) \\
{\rm imm}(T_{312}) \\
{\rm det}(T)
\end{pmatrix},\ &
\hat{P}=\begin{pmatrix}
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{3}} & -\frac{1}{2\sqrt{3}} & \frac{1}{\sqrt{3}} & -\frac{1}{2\sqrt{3}} & -\frac{1}{2\sqrt{3}} & -\frac{1}{2\sqrt{3}} \\
0 & -\frac{1}{2} & 0 & -\frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
0 & \frac{1}{2} & 0 & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\
-\frac{1}{\sqrt{3}} & -\frac{1}{2\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{2\sqrt{3}} & \frac{1}{2\sqrt{3}} & -\frac{1}{2\sqrt{3}}
\end{pmatrix}, \nonumber
\hat{S}=\begin{pmatrix}
\frac{1}{6} & \frac{1}{3} & 0 & 0 & 0 & \frac{1}{6} \\
\frac{1}{6} & 0 & \frac{1}{3} & 0 & 0 & -\frac{1}{6}\\
\frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & -\frac{1}{6} \\
\frac{1}{6} & -\frac{1}{3} & 0 & 0 & -\frac{1}{3} & \frac{1}{6} \\
\frac{1}{6} & 0 & 0 & 0 & \frac{1}{3} & \frac{1}{6} \\
\frac{1}{6} & 0 & -\frac{1}{3} & -\frac{1}{3} & 0 & -\frac{1}{6}
\end{pmatrix}.\end{aligned}$$
Here $\hat{P}$ is a basis–transformation and $\hat{S}$ is a matrix mapping matrix–elements to matrix functions. The six matrices $\rho$ are, in fact, permutation matrices reduced to block–diagonal form: $$\begin{aligned}
\openone=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix},\ &
\rho_{12}=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1
\end{pmatrix}, \nonumber \\
\rho_{23}=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & -\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 & 0 \\
0 & 0 & -\frac{\sqrt{3}}{2} & \frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\
0 & 0 & 0 & 0 & -\frac{\sqrt{3}}{2} & \frac{1}{2}
\end{pmatrix},\ & \rho_{13}=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & -\frac{1}{2} & \frac{\sqrt{3}}{2} & 0 & 0 \\
0 & 0 & \frac{\sqrt{3}}{2} & \frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & -\frac{1}{2} & \frac{\sqrt{3}}{2} \\
0 & 0 & 0 & 0 & \frac{\sqrt{3}}{2} & \frac{1}{2}
\end{pmatrix},\nonumber \\
\rho_{123}=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & -\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 & 0 \\
0 & 0 & \frac{\sqrt{3}}{2} & -\frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\
0 & 0 & 0 & 0 & \frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{pmatrix}, \ & \rho_{132}=\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & -\frac{1}{2} & \frac{\sqrt{3}}{2} & 0 & 0 \\
0 & 0 & -\frac{\sqrt{3}}{2} & -\frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & -\frac{1}{2} & \frac{\sqrt{3}}{2} \\
0 & 0 & 0 & 0 & -\frac{\sqrt{3}}{2} & -\frac{1}{2}
\end{pmatrix}.\end{aligned}$$ Equation (\[eq:allsym\]) describes the same features as equation (\[fullterms\]) but highlights the permutational options for three photons. It is given in a maximally decoupled basis, which allows for a compact notation. The terms originating from the overlap integrals ($\zeta$ terms and $\xi$ terms) contain all the information on the physical properties of the interfering photons. The effect of the permutation symmetry of the photons is included in the permutation matrices $\rho$. Equation (\[eq16\]) features an even further compressed notation and allows for an elegant interpretation: while the block–diagonal $6\times6$ rate–matrix $\hat{R}^{(3)}(\Delta\tau_1,\Delta\tau_2)$ contains all the information on the permutational symmetry and the non–classical interference itself, the basis vector $\boldsymbol{v_3}$ contains the information on the interferometer. Two entries of this rate–matrix are sufficient for an interpretation. $F_{per}=\hat{R}_{11}^{(3)}(\Delta\tau_1,\Delta\tau_2)$ quantifies the fraction of the output probability distribution proportional to the permanent and $F_{det}=\hat{R}_{66}^{(3)}(\Delta\tau_1,\Delta\tau_2)$ the fraction proportional to the determinant of the submatrix $T$. The contribution proportional to immanants can also be calculated explicitly; when only the overall contribution is of interest, it is given as $F_{imm}=1-F_{per}-F_{det}$. In the extremal case when all the photons are indistinguishable, i.e., $$\omega_{c,1}=\omega_{c,2}=\omega_{c,3}=\omega_c,\,\,
\sigma_1=\sigma_2=\sigma_3=\sigma,\,\,
\Delta\tau_1=\Delta\tau_2=0,$$ we have $\zeta_{ij}=1$, $\xi_{ij}=\sigma^2$ and $\nu_{ij}=\omega_c$, so the output probability reduces from a superposition of $60$ terms to just $P_{111}\rightarrow |\Per(T)|^2$.
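The matrices above lend themselves to a direct numerical consistency check: $\hat{P}$ is orthogonal, the first two rows of $\hat{P}\hat{S}$ project $\boldsymbol{v_3}$ onto $\frac{1}{\sqrt{6}}{\rm per}(T)$ and $\frac{1}{\sqrt{6}}{\rm det}(T)$ as in equation (\[eq:r\]), and the $\rho$ matrices multiply exactly like the elements of $S_3$ (the transpositions square to the identity and compose into the $3$–cycles). A Python verification sketch (not part of the original analysis):

```python
import numpy as np

s3, s6 = np.sqrt(3), np.sqrt(6)
P = np.array([
    [ 1/s6,      1/s6,  1/s6,      1/s6,      1/s6,      1/s6],
    [ 1/s6,     -1/s6, -1/s6,      1/s6,      1/s6,     -1/s6],
    [ 1/s3, -1/(2*s3),  1/s3, -1/(2*s3), -1/(2*s3), -1/(2*s3)],
    [    0,      -0.5,     0,      -0.5,       0.5,       0.5],
    [    0,       0.5,     0,      -0.5,       0.5,      -0.5],
    [-1/s3, -1/(2*s3),  1/s3,  1/(2*s3),  1/(2*s3), -1/(2*s3)],
])
S = np.array([
    [1/6,  1/3,    0,    0,    0,  1/6],
    [1/6,    0,  1/3,    0,    0, -1/6],
    [1/6,    0,    0,  1/3,    0, -1/6],
    [1/6, -1/3,    0,    0, -1/3,  1/6],
    [1/6,    0,    0,    0,  1/3,  1/6],
    [1/6,    0, -1/3, -1/3,    0, -1/6],
])

def rho(top, imm):
    """6x6 rho matrix: a 2x2 block acting on (per, det) plus two
    copies of a 2x2 block acting on the immanant doublets."""
    out = np.zeros((6, 6))
    out[:2, :2], out[2:4, 2:4], out[4:6, 4:6] = top, imm, imm
    return out

h = np.sqrt(3) / 2
rho12  = rho(np.diag([1, -1]), np.diag([1, -1]))
rho23  = rho(np.diag([1, -1]), np.array([[-0.5, -h], [-h, 0.5]]))
rho123 = rho(np.eye(2),        np.array([[-0.5, -h], [ h, -0.5]]))
rho132 = rho(np.eye(2),        np.array([[-0.5,  h], [-h, -0.5]]))

print(np.allclose(P @ P.T, np.eye(6)))            # True: P is orthogonal
PS = P @ S
print(np.allclose(PS[0], [1/s6, 0, 0, 0, 0, 0]))  # True: row 1 -> per/sqrt(6)
print(np.allclose(PS[1], [0, 0, 0, 0, 0, 1/s6]))  # True: row 2 -> det/sqrt(6)
print(np.allclose(rho23 @ rho23, np.eye(6)))      # True: involution
print(np.allclose(rho12 @ rho23, rho123))         # True: S_3 composition
print(np.allclose(rho23 @ rho12, rho132))         # True
```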
Five–photon non–classical interference
--------------------------------------
The simulated data for a BosonSampling instance of five photons of arbitrary distinguishability injected into an interferometric network of nine modes, shown in figure **5** of the main manuscript, is calculated as outlined in the accompanying Mathematica program. The Mathematica notebook “5 photon rate matrix.nb” contains modules that are necessary to compute the interferometer–independent rate matrix. First the regular representation of elements in $S_5$ is computed. These are $120\times120$ matrices which represent permutations of five objects. In the rate matrix, these representations form a basis, each weighted by the corresponding overlap integral of five photons with arbitrary distinguishability caused by spectral and temporal mode mismatch. From these regular representations and the overlap integrals, the interferometer–independent rate matrix $Rm$ is obtained. The basis vector of this rate matrix is constituted by matrix functions of the $5\times5$ scattering submatrix $T_5$. The first and second entries of the basis vector are chosen to be the permanent and determinant of $T_5$ respectively. The remaining 118 entries of the basis vector need to cover all five partitions of immanants of $S_5$. Each partition is constituted by a number of elements equal to its dimension squared. Those elements are the immanant of the scattering submatrix of this partition and the immanants of non–redundant permutations of the scattering submatrix of this partition. In this decomposition different partitions of immanants do not mix. Therefore the fraction of an output probability proportional to a specific partition of an immanant can be calculated independently. For example the block in the rate matrix corresponding to the {2,2,1} partition is a $25\times25$ matrix, $Rm_{\{2,2,1\}}$, ranging from $Rm_{3,3}$ to $Rm_{27,27}$.
Consequently, the related elements of the basis vector run from row $3$ to row $27$ and form a basis vector, $\boldsymbol{v_{\{2,2,1\}}}$ for this subspace. The output probability of this subspace can be calculated as $P_{\{2,2,1\}}=\boldsymbol{v_{\{2,2,1\}}}^{\dagger} Rm_{\{2,2,1\}}\boldsymbol{v_{\{2,2,1\}}}$. Individual partitions of immanants have a direct mapping to different physical scenarios of non–classical interference. $P_{\{2,2,1\}}$ quantifies the fraction of the output probability that arises due to a case of non–classical interference of two pairs of indistinguishable photons (the two pairs are distinguishable to one another) and the transmission of a completely distinguishable photon.
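The block sizes used above follow from the dimensions of the irreducible representations of $S_5$: the hook–length formula gives dimension $5$ for the $\{2,2,1\}$ partition, hence a $5^2=25$–dimensional block, and the squared dimensions of all seven partitions of $5$ sum to $|S_5|=120$. A short Python sketch of this bookkeeping (illustrative; independent of the Mathematica notebook):

```python
from math import factorial

def irrep_dim(partition):
    """Dimension of the S_n irrep labelled by `partition`,
    via the hook-length formula: n! / prod(hook lengths)."""
    n = sum(partition)
    hooks = 1
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1                                 # cells to the right
            leg = sum(1 for r in partition[i + 1:] if r > j)  # cells below
            hooks *= arm + leg + 1
    return factorial(n) // hooks

partitions_of_5 = [(5,), (4, 1), (3, 2), (3, 1, 1),
                   (2, 2, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]
dims = [irrep_dim(p) for p in partitions_of_5]
print(dims)                          # [1, 4, 5, 6, 5, 4, 1]
print(sum(d**2 for d in dims))       # 120 = |S_5|
print(irrep_dim((2, 2, 1))**2)       # 25: size of the {2,2,1} block
```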
Matrix Reconstruction
---------------------
The fabrication of integrated photonic networks using a femtosecond–laser–direct–writing technology works with high precision and high stability. Discrete unitary operators acting on modes can be realized solely from beam splitters and phase shifters [@Reck1994]. These networks are arranged like cascaded Mach–Zehnder interferometers shown in Fig. \[FIG:interferometer\]. Notably though, even advanced writing precision can introduce small deviations from the initially targeted values of individual elements. In our case this writing precision is limited to around over the whole length of the waveguide (in this experiment ). In a cascaded interferometric arrangement small deviations of individual elements may add up to a noticeable deviation in the overall transformation. The splitting ratio of individual directional couplers is set by their mode separation and coupling length. Both characteristic variables are three orders of magnitude bigger than the positioning precision and therefore unaffected by it. Unfortunately small length fluctuations due to the positioning precision can introduce unintended phase shifts. In the worst case, i.e. a phase shifter spanning the whole length of a waveguide, the resultant phase shifts can even reach $\pi/8$. The layout used for the interferometric networks reported here (see Fig. \[FIG:interferometer\]) circumvents this worst case. Even though the unintended phase shifts are thereby decreased by at least a factor of $3$, their influence needs to be evaluated and the actually implemented unitary needs to be reconstructed. The characterization procedure we use builds on the one introduced in [@Laing2012; @Tillmann2013]. Two–photon states from a down–conversion source are injected into different modes of the optical network to be characterized. This in situ method allows for a characterization with states having the same physical properties, e.g. frequency and spectral shape, as used later in the experiment.
![**Integrated photonic network.** Schematic drawing of the optical network. The circuit consists of eight directional couplers ($\eta_1...\eta_8$), eleven phase shifters ($\phi_1...\phi_{11}$), five input modes (1...5) as well as five output modes (1’...5’). To allow coupling to the waveguide with standard fiber–arrays the input and output modes are separated and the total length of the chip is .[]{data-label="FIG:interferometer"}](Chip.jpg){width="50.00000%"}
### Estimating the visibilities of submatrices {#Esti}
We assume the optical interferometer can be described by a $5\times 5$ unitary matrix and we reconstruct its transformation via visibilities measured by injecting two photons into any combination of two of its five inputs. The visibility for two photons entering input modes $i,j$ and exiting in the output modes $k,l$ can be calculated from the $2\times 2$ submatrix $U_{i,j,k,l}$. For five input and output modes this results in $\binom 5 2 \times \binom 5 2 = 100$ possibilities. Owing to the structure of the interferometer (see Fig. \[FIG:interferometer\]), a photon injected into port 5 cannot exit from output 1’. This leads to a visibility of zero whenever the input pair contains mode 5 and the output pair contains mode 1’, i.e. for the four input pairs $ij=15,25,35,45$ combined with the four output pairs $kl=12,13,14,15$. These $16$ visibilities are omitted from this reconstruction algorithm, so the unitary transformation is reconstructed from 84 non–zero visibilities.
Our interferometric network consists of eight beam splitters and eleven phase shifters. Each beam splitter implements a SU(2) transformation with matrix representation: $$\begin{aligned}
\label{su2}
&\begin{pmatrix}
\cos\frac{\beta}{2}
& i\sin\frac{\beta}{2}\\
i\sin\frac{\beta}{2}
&\cos\frac{\beta}{2}
\end{pmatrix} \ ,\end{aligned}$$ where $\beta$ is the Euler angle associated with the transmittivity $\eta$ via the relationship $\eta=\cos^2(\beta/2)$. Note that in equation (\[su2\]) the beam splitter also implements a relative phase shift of $\pi$ between the first and second mode.\
The eleven phase shifters produce additional phases in their respective modes. Each phase shifter has a matrix representation of $$\begin{aligned}
\label{su2p}
&\begin{pmatrix}
\text{e}^{i\alpha_1}
& \text{0}\\
\text{0}
&\text{e}^{i\alpha_2}
\end{pmatrix} \ ,\end{aligned}$$ with $\alpha_i$ the phase shift in mode $i$.\
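Composing these two elementary blocks reproduces the transfer matrix of one Mach–Zehnder cell; a minimal Python sketch (the parameter values are illustrative placeholders, not the fitted values of the device):

```python
import numpy as np

def beam_splitter(eta):
    """SU(2) matrix of a directional coupler, cf. equation (su2),
    with transmittivity eta = cos^2(beta/2)."""
    beta = 2 * np.arccos(np.sqrt(eta))
    c, s = np.cos(beta / 2), np.sin(beta / 2)
    return np.array([[c, 1j * s], [1j * s, c]])

def phase_shifter(alpha1, alpha2):
    """Phase shifts in the two modes, cf. equation (su2p)."""
    return np.diag([np.exp(1j * alpha1), np.exp(1j * alpha2)])

# e.g. a balanced coupler preceded by a relative phase shift
U = beam_splitter(0.5) @ phase_shifter(0.0, np.pi / 2)
print(np.allclose(U.conj().T @ U, np.eye(2)))              # True: U is unitary
print(np.isclose(abs(beam_splitter(0.5)[0, 0])**2, 0.5))   # True: 50:50 splitting
```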
The spectral shape of the photons is measured with a single–photon spectrometer (Ocean Optics QE6500) and to a good approximation is of Gaussian shape. Such Gaussians are defined by only two parameters, namely their central frequency and the variance, which for the $i^\text{th}$ photon of the input pair is given by equation (\[eq:alpha\]), and expressed here as $$\left|\phi_i(\omega)\right|^2=\frac{1}{\sqrt{2\pi}\sigma_i}\exp\left(-\frac{(\omega-\omega_{c,i})^2}{2\sigma_i^2}\right),\;
i=1,\ 2.$$ Assuming both photons exhibit identical spectral function, i.e. $|\phi_1(\omega)|^2=|\phi_2(\omega)|^2$, and the detectors are modeled by the detection positive–operator valued measure (POVM) with two elements $\{\Pi_0,\Pi_1\}$ satisfying completeness, $\sum_i \Pi_i=\mathbb{I}$, $$\Pi_1=\int d\omega a^\dag(\omega)\ket{0}\bra{0}a(\omega) \ , \Pi_0=\mathbb{I}-\Pi_1 \ ,$$ then the visibility is $$\begin{aligned}
\label{vis}
V=-\frac{h_1 h_2^*+h_1^* h_2 }{|h_1|^2+|h_2|^2} \ ,\end{aligned}$$ with $$h_1=U^{11}_{i,j,k,l}U^{22}_{i,j,k,l},\;h_2=U^{12}_{i,j,k,l}U^{21}_{i,j,k,l} \ ,$$ and $U^{a,b}_{i,j,k,l}$ denotes the element in the $a^{\rm th}$ row and $b^{\rm th}$ column of the matrix $U_{i,j,k,l}$. In an experiment the two photons will always have slightly different spectral functions whose mismatch needs to be accounted for. The central wavelengths and spectral bandwidths of the photons used in this characterization measurement are $\lambda_{c,1}=$, $\Delta \lambda_1=$, and $\lambda_{c,2}=$, $\Delta\lambda_{2}=$ respectively. The coincidence counts $N_{c}$ as a function of time delay $t$ and spectral mode mismatch are
$$\begin{aligned}
\label{coinccount}
N_c(t)=(1+T*t)(Y_0+A\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}\exp\left(-\frac{(\omega_{c,1}-\omega_{c,2})^2+4\sigma_1^2\sigma_2^2(t-t_c)^2}{2(\sigma_1^2+\sigma_2^2)}\right)-(HO_1+HO_2-d)) \ ,\end{aligned}$$
where $Y_0$, $A$, $t_c$ and $T$ are parameters to be fitted to the experimental data. The experimental data for a given input/output combination $i,j,k,l$ is typically recorded for 30 increments with a stepwidth of and integrated over each step. The coincidences are read out by field–programmable gate array (FPGA) logic. As individual delays are set by translating a fiber coupler with a motorized screw (Newport LTA–HL) there can be a small drift in coupling efficiency over the whole delay–range of . Without this drift, the background of the visibility would be a horizontal straight line. For drifts smaller than $5\,\%$ of the two–photon flux the drift is to a good approximation linear and can be modelled with an additional parameter, $T$. The positioning precision of the delay lines is limited to approximately $\pm$ which is within $2.5\,\%$ of the coherence time of the interfering photons. When the two–photon input state is generated via down–conversion pumped by a pulsed laser system, higher order emission can lead to unwanted contributions to the input state. The first higher order, which is a four–fold emission, causes a small contribution of two photons in each input mode during the characterization of a $2\times2$ submatrix. This can add a constant background to the two–fold coincidences in the following scenario: two photons in one input mode are lost and the two photons in the other input mode leave the network in different output ports. We measure such contributions by blocking one of the two input–modes and recording the two–photon coincidences at the output. These signals are labelled $HO_1$ and $HO_2$ respectively and are subtracted from the data. The background coincidence rate $d$ may be interpreted as a contribution to $N_c$ stemming from dark counts due to electrical noise and background light. This rate $d$ is also present in $HO_1$ and $HO_2$. 
Therefore it has to be added to equation (\[coinccount\]) to account for all unwanted coincidences only once. The error for the raw data was verified to be Poissonian. For the data processing the error of the higher order term $(HO_1+HO_2-d)$ and the abscissa–error caused by the limited alignment precision of the delay lines need to be taken into account additionally. These errors provide weighting in the minimization algorithm and influence the standard errors of the fitted parameters. The visibility, $$\begin{aligned}
V=1-\frac{Y_0+A}{Y_0} \ ,\end{aligned}$$ is finally calculated from the parameters $Y_0$ and $A$, whereas the width of the dip or peak is fixed by the spectral function of the two photons.
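Equation (vis) can be evaluated directly from any $2\times2$ submatrix; the helper below is a hypothetical illustration (rows index outputs, columns inputs):

```python
# Sketch of equation (vis): visibility of the two-photon dip/peak from a
# 2x2 submatrix of the network (illustrative helper, not the authors' code).
import numpy as np

def visibility(sub):
    """sub: 2x2 complex submatrix U_{i,j,k,l}."""
    h1 = sub[0, 0] * sub[1, 1]
    h2 = sub[0, 1] * sub[1, 0]
    return -(h1 * np.conj(h2) + np.conj(h1) * h2).real / (abs(h1) ** 2 + abs(h2) ** 2)

# A symmetric 50:50 beam splitter gives the textbook Hong-Ou-Mandel dip, V = 1.
bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
print(visibility(bs))  # 1.0
```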
![**Example for one dataset used for the reconstruction of $U_5$.** []{data-label="FIG:2hvexample"}](In24Out23.pdf){width="80.00000%"}
### Parameter estimation and reconstruction of the unitary matrix {#ParaEsti}
The parameters of the interferometer that give an optimal fit to the experimentally measured visibilities are obtained by numerical minimization. Eight of the $19$ parameters are transmittivities, $\beta_1,\beta_2,\ldots \beta_8$, and eleven are phases, $\phi_1,\phi_2,\ldots \phi_{11}$. To find the best–fit set of parameters, the data was processed with a Matlab program that uses fmincon to minimize the function $V_{\rm opt}$, $$\begin{aligned}
\label{approxchisq}
V_{\rm opt}=\sum_{i=1}^{84}\frac{\left(V_i^{\rm (exp)}-V_i^{\rm (th)}\right)^2}{\sigma_i^2\Gamma}\ ,\end{aligned}$$ where $V_i^{\rm (th)}$ is the theoretical value of the visibility calculated from our special unitary model of the interferometer using equation (\[vis\]) for the $i^{\rm th}$ data set, and $\Gamma$ is a constant value equal to $({\rm number\,of\, data\,sets\,in\,visibilities} - {\rm number\,of\,parameters} - 1)=2522-19-188-1=2314$.\
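A hedged sketch of this minimization, with scipy's `minimize` standing in for Matlab's fmincon and a made-up one-parameter model in place of the full 19-parameter network calculation:

```python
# Hedged sketch of the fit in eq. (approxchisq): minimize the weighted squared
# deviation between measured and modelled visibilities. The model function
# `toy` is a stand-in for the full network calculation, not the authors' code.
import numpy as np
from scipy.optimize import minimize

def v_opt(params, V_exp, sigma, model_visibilities, gamma):
    V_th = model_visibilities(params)
    return np.sum((V_exp - V_th) ** 2 / (sigma ** 2 * gamma))

# Toy model with one "transmittivity": V_th = cos(beta)^2 for every data set.
toy = lambda p: np.full(84, np.cos(p[0]) ** 2)
V_exp = np.full(84, 0.25)            # fake data consistent with beta = pi/3
res = minimize(v_opt, x0=[1.0], args=(V_exp, np.full(84, 0.01), toy, 2314.0))
print(round(np.cos(res.x[0]) ** 2, 2))  # recovered transmittivity ≈ 0.25
```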
The $5\times 5$ matrix $U_5$ reconstructed using the procedure outlined above is
$$\begin{aligned}
U_5
=\begin{pmatrix}
0.0320-0.3370 i & 0.07239+0.8203 i & -0.2780-0.1060 i & 0.1228-0.3220 i & 0\\
0.0114+0.2751 i & -0.3863+0.1860 i & -0.1353+0.2073 i & -0.7842-0.1502 i & 0.0124 - 0.2036 i \\
-0.7757-0.2328 i & -0.2937+0.0018i & -0.2677-0.0162i & 0.0267+0.3517i & -0.2476-0.0151 i\\
0.1444-0.2611 i & -0.1518-0.0840 i & -0.1392+0.0839i & -0.1327-0.0092i & 0.0203+0.8449i\\
0.2225+0.1231i & 0.0715-0.1293i & -0.7929-0.0268i & 0.0871+0.3067i & 0.4123-0.1121i
\end{pmatrix}.\end{aligned}$$
Quality of the reconstructed description
----------------------------------------
Using this matrix, the probability of coincidence counts, $P_{11}^{\rm (th)}$, can be predicted for any two–photon inputs and outputs. For the inputs $i$ and $j$, $i<j$ and outputs $k$ and $l$, $k<l$, this reads as $$\begin{aligned}
P_{11}^{\rm (th)}(t-t_c)=|U^{ki}_{5}U^{lj}_{5}|^2+|U^{li}_{5}U^{kj}_{5}|^2+(U^{li}_{5}U^{kj}_{5}{U^{ki}_{5}}^*{U^{lj}_{5}}^*+{U^{li}_{5}}^* {U^{kj}_{5}}^* U^{ki}_{5}U^{lj}_{5})f(t-t_c) \ ,\end{aligned}$$ where $$\begin{aligned}
f(t)\equiv(2\sigma_1\sigma_2/(\sigma_1^2+\sigma_2^2))\exp\left(-\frac{(\omega_{c,1}-\omega_{c,2})^2+4\sigma_1^2\sigma_2^2t^2}{2(\sigma_1^2+\sigma_2^2)}\right),\end{aligned}$$ and $U_5^{ab}$ is the element in the $a^{\rm th}$ row and $b^{\rm th}$ column of $U_5$. The actual coincidence count is then $$\begin{aligned}
\label{Nc(th)}
N_c^{\rm (th)}(t)=N_0 (1+T t) P_{11}^{\rm (th)}(t-t_c),\end{aligned}$$ where $N_0$, $t_c$ and $T$ are parameters used to find the best fit to the experimental data. The exact $\chi^2_{\rm red}$ is calculated using $$\begin{aligned}
\chi^2_{\rm red}=\sum_{i=1}^m \frac{\left(N_{c,i}^{\rm(exp)}-N_{c,i}^{\rm (th)}\right)^2}{\nu\epsilon_i^2} \ ,\end{aligned}$$ where $m=3030$, $\nu=m-20-100-1=2909$, $\epsilon_i$ is the error for the corresponding datapoint, and $N_{c,i}^{\rm(exp)}$ denotes the experimental data corrected for higher order emissions. The sum runs over the whole data set, with the index $i$ labelling the data points. The obtained $\chi^2_{\rm red}$ between the data and the predicted coincidence counts using $U_5$ is $$\begin{aligned}
\label{result}
\chi^2_{\rm red}=2.086 \ .\end{aligned}$$
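The interpolation between distinguishable and indistinguishable photons in $P_{11}^{\rm (th)}$ can be sketched as follows (illustrative, 0-indexed, rows indexing outputs and columns inputs; `f` plays the role of the overlap function $f(t-t_c)$):

```python
# Sketch of P11^(th): two-photon coincidence probability for inputs i,j and
# outputs k,l, interpolating between distinguishable (f=0) and perfectly
# indistinguishable (f=1) photons. Illustrative helper, not the authors' code.
import numpy as np

def p11(U, i, j, k, l, f):
    a = U[k, i] * U[l, j]          # photon i -> k, photon j -> l
    b = U[l, i] * U[k, j]          # photon i -> l, photon j -> k
    return abs(a) ** 2 + abs(b) ** 2 + 2 * (b * np.conj(a)).real * f

bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
print(p11(bs, 0, 1, 0, 1, 0.0))  # 0.5  distinguishable photons
print(p11(bs, 0, 1, 0, 1, 1.0))  # 0.0  Hong-Ou-Mandel suppression
```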
State Generation
----------------
We use a Ti:Sapphire oscillator emitting pulses at a wavelength of which are frequency doubled via a $LiB_3O_5$ (LBO) crystal. The upconverted beam is focused into a thick $\beta$–$BaB_2O_4$ (BBO) crystal cut for degenerate non–collinear type–II spontaneous parametric down–conversion. To achieve near spectral indistinguishability and enhance the temporal coherence of the down–converted wave packets the photons are filtered by $\lambda_{\mathrm{FWHM}} = $ interference filters. The source is aligned to emit the maximally entangled state $$\ket{\phi^+}=\frac{1}{\sqrt{2}}\left(\ket{H}_a\ket{H}_b+\ket{V}_a\ket{V}_b\right),$$ when pumped with low pump power ( cw–equivalent). H and V denote horizontal and vertical polarization and a and b are the two spatial emission–modes. When pumped with higher pump powers ( cw–equivalent) noticeable higher order emission occurs: $$\ket{\psi}_{a,b}=\frac{1}{\sqrt{3}}(\ket{HH}_a\ket{HH}_b+\ket{HV}_a\ket{HV}_b+\ket{VV}_a\ket{VV}_b).$$ This state is guided to two polarizing beam splitter (PBS) cubes. A detection event in the trigger mode $a''$ heralds the generation of either the state $\ket{V}_{a'}\ket{V}_{b'}\ket{H}_{b''}$ or $\ket{HH}_{b''}$ (see Fig. \[FIG:Chip\]). Only in the first case are the three modes $a'$, $b'$, and $b''$ each occupied with a single photon, whereas in the latter case mode $b''$ is occupied with two photons and mode $b'$ with vacuum. Post–selection on a four–fold coincidence between modes $a''$, $a'$, $b'$, and $b''$ allows for the heralding of the desired input state where exactly one photon enters each input mode. The half–wave plates in modes $a'$ and $b'$ are set to $45^{\circ}$ to render the photons in these modes indistinguishable in polarization from the others. This heralding scheme holds independently of any transformation applied to the photons in modes $a'$, $b'$, and $b''$ as long as that transformation acts on spatial modes only, e.g. consists of beam splitters and phase shifters.
![**State generation.** A pump beam is focused into a $\beta$–$BaB_2O_4$ (BBO) crystal cut for non–collinear, degenerate, type–II down–conversion. The generated state is emitted into the spatial modes $a$ and $b$. A compensation scheme consisting of half–wave plates (HWPs) and thick BBO crystals is applied for countering temporal and spatial walk–off. Narrowband interference filters ($\lambda_{\text{FWHM}}=$) are applied to increase the temporal coherence of the photons and render them close to spectral indistinguishability. The modes $a$ and $b$ are subsequently split by polarizing beam splitter cubes (PBS) and two half–wave plates in their reflected ports are set to $45^{\circ}$ to ensure the same polarization in all four output modes ($a''$, $a'$, $b'$, and $b''$). With this scheme three indistinguishable photons in modes $a'$, $b'$, and $b''$ can be heralded from a four–fold emission by a successful trigger event in mode $a''$.[]{data-label="FIG:Chip"}](SETUP_SI.jpg){width="70.00000%"}
Analysis of the three–fold coincidence data
-------------------------------------------
Three photons are inserted into input modes 1, 2 and 4 of the interferometric network. The spectral characteristics of these photons were measured using a single–photon spectrometer (Ocean Optics QE6500) and are to a good approximation Gaussian. Note that this spectral data differs slightly from that of the characterization measurements (see \[Esti\]).
$\lambda_c$ $\Delta \lambda_{\rm {FWHM}}$
----- ------------- -------------------------------
In1 789.35 nm 2.85 nm
In2 789.52 nm 2.79 nm
In4 789.41 nm 2.72 nm
This spectral data allows us to express the mode overlap integrals as functions of the time delays $\Delta\tau_1$ and $\Delta\tau_2$ between the first and second photon and between the second and third photon, respectively. The theoretical prediction for the output probability in any of the ten three–fold output ports is then calculated using equation (\[eq:allsym\]). Each $3\times3$ submatrix $R$ is formed from the matrix elements selected by the input and output ports. The output probability (see equation (\[eq:allsym\])) of any landscape contains a constant term and four terms proportional to different mode overlap functions. By sampling six points of pairwise temporal delays $(\Delta\tau_1, \Delta\tau_2)$ the contribution of each of these terms can be assessed. These six points are
$\Delta\tau_1$ $\Delta\tau_2$
---- ---------------- ----------------
P1 0fs 130fs
P2 0fs -870fs
P3 -300fs -170fs
P4 -1000fs -870fs
P5 -1000fs 130fs
P6 -1000fs 1130fs
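For fully indistinguishable photons (all delays zero) the three-fold probability for a collision-free output is proportional to $|\mathrm{perm}(R)|^2$ of the selected $3\times3$ submatrix; a brute-force permanent sketch (not the full delay-dependent equation (\[eq:allsym\]) itself):

```python
# Illustrative sketch: for zero delays the three-fold output probability
# reduces to |perm(R)|^2 for the selected 3x3 submatrix R. The brute-force
# sum over permutations is fine at this size (3! = 6 terms).
from itertools import permutations

def permanent(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += prod
    return total

R = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(permanent(R))  # 3! = 6
prob = abs(permanent(R)) ** 2  # unnormalized three-photon probability
```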
An offset of $\Delta\tau_{\rm off}=\text{\SI{130}{\femto\second}}$ is introduced in the temporal delay $\Delta\tau_2$; otherwise the delays are set to combinations of , and $\pm$. The precision of the temporal alignment was estimated to be $\pm$. In one measurement run the points P1 to P6 are recorded consecutively for two hours each. To account for effects of drift this order is reversed in the next measurement run, so that the points are recorded in the order P6 to P1. The four–fold count rates range from to depending on the output combination. In between each measurement run the setup was realigned to optimize for maximal count rates. In order to obtain sufficient statistics, the whole data acquisition is repeated over 19 measurement runs for a total of 228 hours.\
As Poissonian error modeling results in too optimistic error bars in the case of long data acquisitions with multiple sources of error, we adapted the error modeling. The 19 measurements are independent runs; therefore the mean and the standard deviation of the mean provide more useful information. Each individual measurement run is represented as a six–dimensional vector, with the $i^{\rm th}$ entry containing the four–fold counts of delay point $P_i$ integrated over two hours. These vectors are then normalized, thereby obtaining relative output probabilities. The mean and the standard deviation of the mean can now be calculated for each of the six delay points. Ultimately the overlap with the theoretical prediction is obtained by a least–squares minimization weighted with the standard deviations. Here a linear scaling factor is introduced relating the relative experimental probabilities to the absolute theoretical ones. The goodness of fit is calculated using the reduced $\chi^2$. The number of degrees of freedom is in this case $\nu=6-2=4$.\
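The per-run normalization and averaging can be sketched as follows (the count values are made up; the paper uses 19 runs):

```python
# Sketch of the adapted error model: each two-hour run gives a vector of six
# four-fold counts; runs are normalized to relative probabilities, then mean
# and standard error of the mean are taken per delay point. Fake numbers.
import numpy as np

runs = np.array([[40, 22, 35, 33, 25, 31],    # run 1: counts at P1..P6
                 [38, 25, 33, 35, 23, 30],    # run 2
                 [44, 20, 36, 31, 27, 33]])   # run 3 (19 runs in the paper)
probs = runs / runs.sum(axis=1, keepdims=True)        # relative probabilities
mean = probs.mean(axis=0)
sem = probs.std(axis=0, ddof=1) / np.sqrt(len(runs))  # standard error of mean
assert np.isclose(mean.sum(), 1.0)
```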
The experimental data for the four different scenarios of BosonSampling affected by distinguishability, shown in figure **4** of the main manuscript, are recorded using the same method as above. The experimental data and theoretical predictions are contained in table \[figure4data\].
[ |l|l|l|l|l|l|l| ]{}\
& $\tau_1$ & $\tau_2$ & theory & experimental & red. $\chi^2$ & count rate\
& 0 fs & 130 fs &3.41%&$3.17\% \pm $ 0.26%& &\
& 0 fs & -870 fs &1.89%&$2.18\% \pm $ 0.19%&&\
& -300 fs & -170 fs &3.13%&$2.99\% \pm $ 0.25%&$1.38$ & $\approx$ 10 mHz\
& -1000 fs & -870 fs &2.95%&$2.96\% \pm $ 0.26%&&\
& -1000 fs & 130 fs &2.20%&$2.51\% \pm $ 0.21%&&\
& -1000 fs & 1130 fs &2.73%&$2.72\% \pm $ 0.31%&&\
& 0 fs & 130 fs &14.19%&$14.73\% \pm $ 0.93%& &\
& 0 fs & -870 fs &23.69%&$24.01\% \pm $ 0.84%&&\
& -300 fs & -170 fs &17.67%&$19.1\% \pm $ 0.98%&$1.10$ & $\approx$ 80 mHz\
& -1000 fs & -870 fs &25.09%&$24.01\% \pm $ 0.85%&&\
& -1000 fs & 130 fs &21.14%&$21.32\% \pm $ 0.80%&&\
& -1000 fs & 1130 fs &31.40%&$30.85\% \pm $ 1.44%&&\
[ |c|c|l|l|l|l|l| ]{}\
figure & $T_{ijk}$ & exp in %& per in % & imm in %& det in %& theo in %\
& 245 & $1.46 \pm 0.39$ &1.72& 0.13 & 0.00 & 1.86\
& 235 & $10.02 \pm 0.83$ &11.32& 0.44 & 0.00 & 11.76\
& 123 & $46.75 \pm 2.95$ &33.38& 11.97& 0.00 & 45.36\
& 345 & $0.47 \pm 0.20$ &0.03& 0.16& 0.00 & 0.19\
& 234 & $7.24 \pm 0.80$ &7.08& 0.77 & 0.00 & 7.85\
& 134 & $6.69 \pm 0.71$ &6.30& 1.54 & 0.00 & 7.85\
& 125 & $7.96 \pm 0.89$ &5.21& 2.87 & 0.00 & 8.08\
& 145 & $1.69 \pm 0.40$ &1.41& 0.13 & 0.00 & 1.55\
& 135 & $8.01 \pm 0.77$ &3.98& 1.10 & 0.00 & 5.08\
& 124 & $9.71 \pm 0.94$ &8.95& 1.50 & 0.00 & 10.45\
& 245 & $1.35 \pm 0.24$ &0.93& 0.59 & 0.01 & 1.53\
& 235 & $7.93 \pm 0.68$ &6.11& 2.45 & 0.11 & 8.67\
& 123 & $50.86 \pm 2.60$ &18.02& 29.62& 1.19 & 48.82\
& 345 & $0.78 \pm 0.16$ &0.02& 0.64& 0.01 & 0.66\
& 234 & $6.00 \pm 0.41$ &3.82& 3.04 & 0.03 & 6.89\
& 134 & $5.12 \pm 0.58$ &3.40& 1.21 & 0.01 & 4.61\
& 125 & $8.07 \pm 0.74$ &2.81& 3.69 & 0.03 & 6.53\
& 145 & $1.86 \pm 0.26$ &0.76& 1.20 & 0.02 & 1.98\
& 135 & $8.64 \pm 0.68$ &2.15& 8.86 & 0.14 & 11.15\
& 124 & $9.39 \pm 0.59$ &4.83& 4.16 & 0.16 & 9.15\
& 245 & $1.17\pm0.26$ &0.37& 0.77 & 0.05 & 1.19\
& 235 & $6.59\pm0.58$ &2.46& 3.35 & 0.59 & 6.40\
& 123 & $53.64\pm1.90$ &7.26& 40.72& 6.54 & 54.52\
& 345 & $0.92\pm0.20$ &0.01& 0.96& 0.03 & 1.00\
& 234 & $5.29\pm0.43$ &1.54& 3.63 & 0.15 & 5.32\
& 134 & $4.06\pm0.40$ &1.37& 2.43 & 0.04 & 3.84\
& 125 & $8.19\pm0.64$ &1.13& 6.26 & 0.19 & 7.58\
& 145 & $1.57\pm0.22$ &0.31& 1.24 & 0.12& 1.67\
& 135 & $9.91\pm0.74$ &0.87& 8.03 & 0.78 & 9.68\
& 124 & $8.67\pm1.02$ &1.95& 5.97 & 0.89 & 8.80\
& 245 & $0.92\pm0.23$ &0.19& 0.51 & 0.17 & 0.86\
& 235 & $5.12\pm0.58$ &1.21& 1.91 & 1.97 & 5.09\
& 123 & $58.26\pm2.73$ &3.58& 33.36& 21.69 & 58.63\
& 345 & $0.60\pm0.09$ &0.00& 0.56& 0.11 & 0.68\
& 234 & $4.17\pm0.39$ &0.76& 2.85 & 0.50 & 4.11\
& 134 & $4.03\pm0.51$ &0.68& 2.79 & 0.12 & 3.59\
& 125 & $7.38\pm0.62$ &0.56& 6.00 & 0.64 & 7.19\
& 145 & $1.32\pm0.28$ &0.15& 0.91 & 0.39 & 1.45\
& 135 & $10.00\pm0.38$ &0.43& 7.15 & 2.57 & 10.15\
& 124 & $8.18\pm0.71$ &0.96& 4.33 & 2.95 & 8.25\
---
abstract: 'Gamma-Ray Bursts (GRBs), short and intense pulses of low energy $\gamma$-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. During the last decade, several space missions: BATSE (Burst and Transient Source Experiment) on the Compton Gamma-Ray Observatory, BeppoSAX and now HETE II (High-Energy Transient Explorer), together with ground-based optical, infrared and radio observatories have revolutionized our understanding of GRBs, showing that they are cosmological, that they are accompanied by long lasting afterglows and that they are associated with core collapse Supernovae. At the same time a theoretical understanding has emerged in the form of the fireball internal-external shocks model. According to this model GRBs are produced when the kinetic energy of an ultra-relativistic flow is dissipated in internal collisions. The afterglow arises when the flow is slowed down by shocks with the surrounding circum-burst matter. This model has numerous successful predictions, like the prediction of the afterglow itself, of jet breaks in the afterglow light curve and of an optical flash that accompanies the GRBs themselves. In this review I focus on theoretical aspects and on physical processes believed to take place in GRBs.'
author:
- Tsvi Piran
title: 'The Physics of Gamma-Ray Bursts'
---
INTRODUCTION {#sec:intro}
=============
Gamma-Ray Bursts (GRBs) are short and intense pulses of soft $\gamma$-rays. The bursts last from a fraction of a second to several hundred seconds. GRBs arrive from cosmological distances and from random directions in the sky. The overall observed fluences range from $10^{-4}$ergs/cm$^2$ to $10^{-7}$ergs/cm$^2$ (the lower limit depends, of course, on the characteristics of the detectors and not on the bursts themselves). This corresponds to an isotropic luminosity of $10^{51}-10^{52}$ergs/sec, making GRBs the most luminous objects in the sky. However, we know today that most GRBs are narrowly beamed and the corresponding energies are “only" around $10^{51}$ergs [@Frail01; @PanaitescuK01; @Piranetal01], making them comparable to Supernovae in the total energy release.
The GRBs are followed by an afterglow - lower energy, long lasting emission in the X-ray, optical and radio bands. The radio afterglow has been observed in some cases several years after the burst. The accurate afterglow positions enabled the identification of host galaxies in almost all cases in which an afterglow was detected, and this in turn enabled the determination of the corresponding redshifts, which range from 0.16 (or possibly even down to 0.0085) to 4.5. Within the host galaxies there is evidence that (long duration) GRBs arise within star forming regions and that they follow the star formation rate.
While not all observed features are understood there is an overall agreement between the observations and the fireball model. According to the fireball model GRBs are produced when the kinetic energy of an ultra-relativistic flow is dissipated. The GRB itself is produced by internal dissipation within the flow while the afterglow is produced via external shocks with the circum-burst medium. I will focus in this review on this model.
The numerous observations of the GRBs and of their afterglows constrain the fireball model that describes the emitting regions. The evidence on the nature of the inner engine that powers the GRB and produces the ultra-relativistic flow is, however, indirect. The energetic requirements and the time scales suggest that GRBs involve the formation of a black hole via a catastrophic stellar collapse event or possibly a neutron star merger. Additional indirect evidence arises from the fireball model's requirement of a long (several dozen seconds) activity of the inner engine. This hints towards an inner engine built on an accreting black hole. On the other hand, the evidence of the association of GRBs with star forming regions indicates that GRB progenitors are massive stars. Finally, the appearance of Supernova bumps in the afterglow light curve (most notably in GRB 030329) suggests an association with Supernovae and stellar collapse.
I review here the theory of GRBs, focusing as mentioned earlier on the fireball internal-external shocks model. I begin in §\[sec:obs\] with a brief discussion of the observations. I turn in §\[sec:accepted\] to some generally accepted properties of GRB models - such as the essential ultra-relativistic nature of this phenomenon. Before turning to a specific discussion of the fireball model I review in §\[sec:rel\] several relativistic effects and in §\[sec:physical-Processes\] the physical processes, such as synchrotron emission or particle acceleration in relativistic shocks, that are essential ingredients of this model. In §\[sec:PROMPT\] I turn to a discussion of the prompt emission and the GRB. In §\[sec:afterglow\] I discuss modelling of the afterglow emission. I consider other related phenomena - such as TeV emission, high energy neutrinos, ultra high energy cosmic rays and gravitational radiation - in §\[sec:Other\]. Finally, I turn in §\[sec:inner-engine\] to examine different ‘inner engines’ and various aspects related to their activity. I conclude with a discussion of open questions and observational prospects.
While writing this review I realized how large is the scope of this field and how difficult it is to cover all aspects of this interesting phenomenon. Some important aspects had to be left out. I also did not attempt to give a complete historical coverage of the field. I am sure that inadvertently I have missed many important references. I refer the reader to several other recent review papers [@Fishman1995; @P99; @ParadijsARAA00; @P00; @Meszaros01; @Hurleyetal02; @Meszaros02a; @Galama_sari] that discuss these and other aspects of GRB theory and observations from different points of view.
OBSERVATIONS {#sec:obs}
=============
I begin with a short review of the basic observed properties of GRBs. This review is brief, as a complete review requires a whole paper by itself. I refer the reader to several review papers for a detailed summary of the observations [@Fishman1995; @ParadijsARAA00; @Hurleyetal02; @Galama_sari]. I divide this section into three parts. I begin with the prompt emission - the GRB itself. I continue with the properties of the afterglow. I conclude with a discussion of the rates of GRBs, the location of the bursts within their host galaxies and the properties of the host galaxies.
Prompt Emission {#sec:prompt-obs}
----------------
I begin with a discussion of the GRB itself, namely the $\gamma$-rays and any lower-energy emission that occurs simultaneously with them. This includes the X-ray emission that generally accompanies the $\gamma$-ray emission as a low energy tail. In some cases, called X-ray flashes (XRFs), the $\gamma$-ray signal is weak and all that we have is this X-ray signal. Prompt (operationally defined as the time period when the $\gamma$-ray detector detects a signal above background) longer-wavelength emission may also occur in the optical and radio bands but it is harder to detect. So far, however, optical flashes have been observed in three cases [@Akerlof99; @Foxetal03; @LiEtal03] simultaneously with the $\gamma$-ray emission.
### Spectrum {#sec:spec-obs}
The spectrum is non-thermal. The energy flux peaks at a few hundred keV and in many bursts there is a long high energy tail extending in some cases up to GeV. The spectrum varies strongly from one burst to another. An excellent phenomenological fit for the spectrum was introduced by @Band93 using two power laws joined smoothly at a break energy $(\tilde\alpha-\tilde\beta)E_0$: $$N(\nu) = N_0 \cases{ (h\nu)^{\tilde \alpha}\, \exp\left(-{h\nu \over E_0}\right), & for $h\nu < (\tilde\alpha-\tilde\beta)E_0$; \cr
\left[(\tilde\alpha-\tilde\beta)E_0\right]^{\tilde\alpha-\tilde\beta} (h\nu)^{\tilde\beta}\, \exp(\tilde\beta-\tilde\alpha), & for $h\nu \geq (\tilde\alpha-\tilde\beta)E_0$. \cr}$$ I denote the spectral indices here as $\tilde \alpha$ and $\tilde \beta$ to distinguish them from the afterglow parameters ($\alpha$ and $\beta$) discussed later. There is no particular theoretical model that predicts this spectral shape. Still, this function provides an excellent fit to most of the observed spectra. For most observed values of $\tilde\alpha$ and $\tilde\beta$, $\nu
F_\nu \propto \nu^2 N(\nu)$ peaks at $E_p = (\tilde\alpha+2)E_0$. For about 10% of the bursts the upper slope is larger than -2 and there is no peak for $\nu F_\nu$ within the observed spectrum. Another group, the NHE (no high energy) bursts [@Pendleton_NHE97], does not have a hard component (which is reflected by a very negative value of $\tilde \beta$). The “typical” energy of the observed radiation is $E_p$. $E_p$ defined in this way should not be confused with the commonly used hardness ratio, which is the ratio of photons observed in two BATSE [^1] channels: Channel 3 (100-300keV) counts divided by Channel 2 (50-100keV) counts. The break frequency and the peak flux frequencies are on average lower for bursts with lower observed flux [@Mallozi95; @Mallozzi98].
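The Band function and the quoted peak relation $E_p=(\tilde\alpha+2)E_0$ can be checked numerically; the parameter values below are illustrative, with $N_0=1$ and energies in keV:

```python
# Numerical check of the Band spectrum: the two power-law branches join
# smoothly at the break energy (alpha - beta) E0, and nu^2 N(nu) peaks at
# E_p = (alpha + 2) E0. Parameter values are illustrative only.
import numpy as np

def band(E, alpha=-1.0, beta=-2.3, E0=250.0):   # E = h*nu in keV, N0 = 1
    Eb = (alpha - beta) * E0                    # break energy
    if E < Eb:
        return E ** alpha * np.exp(-E / E0)
    return Eb ** (alpha - beta) * E ** beta * np.exp(beta - alpha)

Eb = (-1.0 + 2.3) * 250.0                        # break at 325 keV
assert np.isclose(band(Eb - 1e-9), band(Eb + 1e-9))   # continuity at the break
E = np.linspace(1, 2000, 200000)
Ep = E[np.argmax(E ** 2 * np.vectorize(band)(E))]
assert abs(Ep - (-1.0 + 2) * 250.0) < 1.0        # peak near E_p = 250 keV
```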
@Band93 present a small catalogue of the spectra of 52 bright bursts which they analyze in terms of the Band function. @PreeceEtal00 present a larger catalogue with 156 bursts selected for either high flux or fluence. They consider several spectral shape including the Band function.
Fig. \[fig:spectrum\_distribution\] shows the distribution of observed values of the break energy, $(\tilde\alpha-\tilde\beta)E_0$, in a sample of bright bursts [@PreeceEtal00]. Most of the bursts are in the range $100\,{\rm
keV}<(\tilde\alpha-\tilde\beta)E_0<400\,{\rm keV}$, with a clear maximum in the distribution around $(\tilde\alpha-\tilde\beta)E_0\sim 250$keV. There are not many soft GRBs - that is, GRBs with peak energy in the tens of keV range. However, the discovery [@XRF] of XRFs - flashes with a temporal structure similar to that of GRBs but lower typical energies - shows that the low peak energy cutoff is not real and reflects the lower sensitivity of BATSE in this range [@BATSE_XRF].
The solid line represents the whole sample while the dashed line represents a subset of the data. \[fig:spectrum\_distribution\]
Similarly, it is debatable whether there is a real paucity of hard GRBs and an upper cutoff to the GRB hardness, or whether it just happens that the detection is optimal in this (a few hundred keV) band. BATSE triggers, for example, are based mostly on the count rate between 50keV and 300keV. BATSE is, therefore, less sensitive to harder bursts that emit most of their energy in the MeV range. Using BATSE’s observations alone one cannot rule out the possibility that there is a population of harder GRBs, emitting equal power in total energy, which are not observed because of this selection effect [@PN96; @CohenKatzP98; @Llyod_Petrosian99; @Lingen97]. More generally, a harder burst with the same energy as a soft one emits fewer photons. Furthermore, the spectrum is generally flat in the high energy range and it decays quickly at low energies. Therefore it is intrinsically more difficult to detect a harder burst. A study of the SMM (Solar Maximum Mission) data [@Harris97] suggests that there is a deficiency (by at least a factor of 5) of GRBs with hardness above 3MeV, relative to GRBs peaking at $\sim$0.5MeV, but this data is consistent with a population whose hardness extends up to 2MeV.
Overall the narrowness of the hardness distribution is very puzzling. First, as I stressed earlier, it is not clear whether it is real or a result of an observational artifact. If it is real then on one hand there is no clear explanation of what physical process controls the narrowness of the distribution (see however @Guetta_Spada_Waxman01). On the other hand cosmological redshift effects must broaden this distribution, and it seems likely (but not yet demonstrated) that if the GRB distribution extends to z=10, as some suggest [@ReichartLamb00; @CiardiLoeb00; @BrommLoeb02; @Lloyd-RonningFryerRamirez-Ruiz02], then such a narrow distribution requires an intrinsic correlation between the hardness of a burst and its redshift, namely that the intrinsic hardness increases with the redshift. There is some evidence for such a correlation between $E_p$ and the observed peak flux [@Mallozi95; @Mallozzi98]. More recently @AmatiEtal02 reported on a correlation between $E_p$ and the isotropic equivalent energy seen in 12 BeppoSAX bursts that they have analyzed. They also report on a correlation between $E_p$ and the redshift, as the bursts with higher isotropic equivalent energy are typically more distant. These three different correlations are consistent with each other if the observed peak flux of bursts is determined by their intrinsic luminosity more than by their distance. In such a case (because of the larger volume at larger distances) the observed more distant bursts are on average brighter than nearer ones (see also §\[sec:hosts-distribution\]).
Even though the burst hardness distribution shows a single population, a plot of the hardness vs temporal duration shows that short bursts (see Fig. \[fig:hardness-duration\]) are typically harder [@Dezalay96; @Kouveliotou96]. The correlation is significant. Another interesting sub-group of bursts is the NHE (no high energy) bursts - bursts with no hard component, that is, no emission above 300keV [@Pendleton_NHE97]. This group is characterized by a large negative value of $\tilde\beta$, the high energy spectral slope. The NHE bursts have luminosities about an order of magnitude lower than regular bursts and they exhibit an effectively homogeneous intensity distribution with $\langle V
/V_{max} \rangle= 0.53 \pm 0.029$. As I discuss later in §\[sec:temp-obs\] most GRB light curves are composed of many individual pulses. It is interesting that in many bursts there are NHE pulses combined with regular pulses.
EGRET (the Energetic Gamma Ray Experiment Telescope), the high energy detector on Compton-GRO, detected seven GRBs with photon energies ranging from 100 MeV to 18 GeV [@EGRET_GRB]. In some cases this very high energy emission is delayed more than an hour after the burst [@Hurley94; @Sommer94]. No high-energy cutoff above a few MeV has been observed in any GRB spectrum. Recently, [@Gonzalez03] have combined the BATSE (30keV-2MeV) data with the EGRET data for 26 bursts. In one of these bursts, GRB 941017 (according to the common notation GRBs are numbered by the date), they have discovered a high energy tail that extended up to 200 MeV and looked like a different component. This high energy component appeared 10-20 sec after the beginning of the burst and displayed a roughly constant flux with a relatively hard spectral slope ($F_\nu \propto \nu^0$) up to 200 sec. At late time (150 sec after the trigger) the very high energy (10-200 MeV) tail contained 50 times more energy than the “main" energy (30keV-2MeV) band. The TeV detector, Milagrito, discovered (at a chance probability of $1.5\times 10^{-3}$, namely at $\sim 3\sigma$) a TeV signal coincident with GRB 970417 [@Milagrito_970417; @Atkins03]. If true this would correspond to a TeV fluence that exceeds the low energy fluence. However no further TeV signals were discovered from the other 53 bursts observed by Milagrito [@Milagrito_970417] or from several bursts observed by the more sensitive Milagro [@Milagro_GRB]. One should recall, however, that due to attenuation by the IR background TeV photons cannot be detected from $z>0.1$. Thus even if most GRBs emit TeV photons, those photons won’t be detected on Earth.
Another puzzle is the low energy tail. @CohenKatzP98 analyze several strong bursts and find that their low energy slope is between 1/3 and -1/2. However, @PreeceEtal98 [@Preece02] suggest that about 1/5 of the bursts have a low energy power-law slope, $\alpha$, steeper than 1/3 (the synchrotron slow cooling low energy slope). A larger fraction is steeper than -1/2 (the fast cooling synchrotron low energy slope). However, this is not seen in any of the HETE spectra, whose low energy resolution is somewhat better. All HETE bursts have a low energy spectrum that is within the range 1/3 to -1/2 [@BarraudEtal03]. As both BATSE and HETE use NaI detectors that have a poor low energy resolution [@CohenKatzP98], this problem might be resolved only when a better low energy spectrometer is flown.
### Temporal Structure {#sec:temp-obs}
The duration of the bursts spans five orders of magnitude, ranging from less than 0.01 sec to more than 100 sec. Common measures of the duration are $T_{90}$ ($T_{50}$), which correspond to the time in which 90% (50%) of the counts of the GRB arrive. As I discuss below (see §\[sec:pop\]) the bursts are divided into long and short bursts according to their $T_{90}$. Most GRBs are highly variable, showing 100% variations in the flux on a time scale much shorter than the overall duration of the burst. Fig \[fig:variable\] depicts the light curve of a typical variable GRB (GRB 920627). The variability time scale, $\delta t$, is determined by the width of the peaks. $\delta t$ is much shorter (in some cases by more than a factor of $10^4$) than $T$, the duration of the burst. Variability on a time scale of milliseconds has been observed in some long bursts [@NakarPiran02a; @McBreenEtal01short]. However, only $\sim 80$% of the bursts show substantial substructure in their light curves. The rest are rather smooth, typically with a FRED (Fast Rise Exponential Decay) structure.
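As an illustration of how $T_{90}$ and $T_{50}$ are defined, the following sketch computes them from a background-subtracted counts time series (the function name and the toy FRED-like light curve are purely illustrative, not part of any instrument pipeline):

```python
import numpy as np

def t_duration(times, counts, frac=0.90):
    """Time containing the central `frac` of the counts:
    frac=0.90 gives T90 (5%-95%), frac=0.50 gives T50 (25%-75%)."""
    cum = np.cumsum(counts, dtype=float)
    cum /= cum[-1]                       # normalized cumulative counts
    tail = (1.0 - frac) / 2.0
    t_lo = times[np.searchsorted(cum, tail)]
    t_hi = times[np.searchsorted(cum, 1.0 - tail)]
    return t_hi - t_lo

# toy FRED-like (fast rise, exponential decay) pulse, 10 ms bins
t = np.arange(0.0, 50.0, 0.01)
rate = np.exp(-t / 5.0) * (1.0 - np.exp(-t / 0.5))
t90 = t_duration(t, rate, 0.90)
t50 = t_duration(t, rate, 0.50)
```

For this toy pulse $T_{90} \approx 15$ sec and $T_{50} \approx 5.5$ sec; for a real burst one would first subtract the background rate from the counts.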
@Fenimore_Ramirez-Ruiz01 (see also @Reichartetal01) discovered a correlation between the variability and the luminosity of the bursts. This correlation (as well as the lag-luminosity relation discussed later) allows us to estimate the luminosity of bursts that do not have a known redshift.
The bursts seem to be composed of individual pulses, with pulses being the “building blocks" of the overall light curve. Individual pulses display a hard to soft evolution, with the peak energy decreasing exponentially with the photon fluence [@Liang96; @Norrisetal96; @Ford95]. The pulses have the following temporal and spectral features. (i) The light curve of an individual pulse is a FRED - fast rise exponential decay - with an average rise to decay ratio of 1:3 [@Norrisetal96]. (ii) The low energy emission is delayed compared to the high energy emission[^2] [@Norrisetal96]. @Norris_lags00 have found that these spectral lags are anti-correlated with the luminosity of the bursts: luminous bursts have short lags. This lag-luminosity relation provides another way to estimate the luminosity of a burst from its (multi-spectra) light curve. (iii) The pulses’ low energy light curves are wider than the high energy light curves. The width goes as $\sim E^{-0.4}$ [@Fenimoreetal95]. (iv) There is a Width-Symmetry-Intensity correlation. High intensity pulses are (statistically) more symmetric (lower decay to rise ratio) and have shorter spectral lags [@Norrisetal96]. (v) There is a Hardness-Intensity correlation. The instantaneous spectral hardness of a pulse is correlated with the instantaneous intensity (the pulse becomes softer during its decay) [@Borgonovo01].
Both the pulse widths, $\delta t$, and the pulse separations, $\Delta t$, have rather similar log-normal distributions. However, the pulse-separation distribution has an excess of long intervals [@NakarPiran02a]. These long intervals can be classified as quiescent periods [@Ramirez-Ruiz_Merloni01], relatively long periods of several dozen seconds with no activity. When these quiescent periods are excluded, both distributions are log-normal with comparable parameters [@NakarPiran02a; @QuilliganEtal02]. The average pulse interval, $\bar \Delta t = 1.3$sec, is larger by a factor 1.3 than the average pulse width $\bar \delta t= 1$sec. One also finds that the pulse widths are correlated with the preceding interval [@NakarPiran02a]. @Ramirez-Ruiz_Fenimore00 found that the pulse width does not vary along the burst.
One can also analyze the temporal behavior using the traditional Fourier transform method. The power density spectra (PDS) of the light curves show a power law slope of $\sim -5/3$ and a sharp break at 1 Hz [@Beloborodov_pds_00].
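The PDS computation itself is a standard FFT operation; a minimal sketch is below (the light curve here is synthetic white noise, so it does not show the $-5/3$ slope or the 1 Hz break, which are properties of real GRB data):

```python
import numpy as np

def power_density_spectrum(counts, dt):
    """One-sided power density spectrum of an evenly sampled light curve."""
    counts = counts - counts.mean()            # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs[1:], power[1:]                # drop the zero-frequency bin

# toy light curve: Poisson white noise in 64 ms bins
rng = np.random.default_rng(0)
lc = rng.poisson(100, size=4096).astype(float)
freqs, power = power_density_spectrum(lc, dt=0.064)
```

On real data one would average the PDS of many bursts before fitting the slope and the break frequency, as done by @Beloborodov_pds_00.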
The results described so far are for long bursts. The variability of short ($T<2$sec) bursts is more difficult to analyze, as the duration of these bursts is closer to the limiting resolution of the detectors. Still, most ($\sim 66\%$) short bursts are variable with $\delta t/T < 0.1$ [@NakarPiran02b]. These variable bursts are composed of multiple subpulses.
### Populations {#sec:pop}
[**Long and Short Bursts**]{} The clearest classification of bursts is based on their duration. @Kouveliotou_2pop_93 have shown that GRBs can be divided into two distinct groups: long bursts with $T_{90}>2$sec and short bursts with $T_{90}< 2$sec. Note that it was suggested [@Mukherjee98; @Horvath98] that there is a third, intermediate class with $2.5{\rm sec} <T_{90}< 7$sec. However, it is not clear whether this division into three classes is statistically significant [@Hakkila00].
An interesting question is whether short bursts could arise from single peaks of long bursts in which the rest of the long burst is hidden by noise. @NakarPiran02b have shown that in practically all long bursts the second highest peak is comparable in height to the first one. Thus, if the highest peak is above the noise, so should be the second one. Short bursts are a different entity. This is supported by the observation that short bursts are typically harder [@Dezalay96; @Kouveliotou96]. The duration-hardness distribution (see Fig. \[fig:hardness-duration\]) shows clearly that there are no soft short bursts.
The spatial distribution of the [**observed**]{} short bursts is clearly different from the distribution of the [**observed**]{} long ones. A measure of the spatial distribution is the average ratio $\langle V/V_{max} \rangle \equiv \langle (C/C_{min})^{-3/2} \rangle$, where $C$ is the count rate and $C_{min}$ is the minimal rate required for triggering. In a uniform Euclidean sample this ratio equals $0.5$ regardless of the luminosity function. One of the first signs of a cosmological origin of GRBs was the deviation of this value from 0.5 for the BATSE sample [@Meeganetal92Nat]. The $\langle V/V_{max} \rangle$ of the BATSE short bursts sample [@Mao_Narayan_P94; @P96IAU; @Katz_Canel96] is significantly higher than $\langle V/V_{max} \rangle$ of the long bursts sample. Note that more recently [@Schmidt01] suggested that the two values are similar and that the distributions of long and short bursts are similar. However, @GuettaPiran03 find $\langle V/V_{max} \rangle_{long} = 0.282$ and $\langle V/V_{max} \rangle_{short} = 0.390$ (I discuss this point further in §\[sec:rates\]). This implies that the population of [**observed**]{} short bursts is nearer on average than the population of the observed long ones. This is not necessarily a statement on the location of short vs. long bursts. Instead it simply reflects the fact that it is more difficult to detect a short burst: one has to trigger on a shorter (and hence noisier) window, so the detector (specifically BATSE, which triggers on 64 ms for short bursts and on 1 sec for long ones) is less sensitive to short bursts. I discuss later, in §\[sec:rates\], the question of the rates of long vs. short bursts.
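The statement that a uniform Euclidean sample gives $\langle V/V_{max}\rangle = 0.5$ is easy to verify with a short Monte Carlo. The sketch below uses standard candles for simplicity; rescaling each source's luminosity (and hence the distance at which it reaches $C_{min}$) leaves the mean unchanged, which is why the result is independent of the luminosity function:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# sources uniform in a Euclidean sphere of radius 1: r^3 is uniform
r = np.cbrt(rng.random(n))
counts = 1.0 / r**2            # inverse-square law, unit luminosity
c_min = 1.0                    # threshold = count rate at the sphere's edge
v_over_vmax = (counts / c_min) ** (-1.5)   # = (r/r_max)^3 = V/V_max
print(v_over_vmax.mean())      # close to 0.5
```

A population that is intrinsically nearer than the detection horizon (e.g. truncated at a smaller radius) would instead give a mean above 0.5, which is the sense of the short-burst result quoted above.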
So far afterglow was detected only from long bursts. It is not clear whether this is an observational artifact or a real feature. However, no afterglow was observed from the only well localized short hard burst: GRB 020531 [@Hurley020531]. Chandra observations show an intensity weaker by at least a factor of 100-300 than the intensity of the afterglow from long bursts at a similar time [@Butler020531]. Afterglow was not observed in other wavelengths either [@Klotz03].
As the identification of hosts and redshifts depends on the detection of afterglow, this implies that nothing is known about the distribution, progenitors, environment, etc. of short bursts. These bursts are still waiting for their afterglow revolution.
[**X-ray Flashes**]{} (XRFs) are bursts with a temporal structure similar to GRBs but lower typical energies. @XRF discovered these flashes by comparing the triggers of the GRBM (GRB Monitor), with sensitivity above 40 keV, with those of the WFCs (Wide Field Cameras) on BeppoSAX[^3]. In 39 cases the WFCs were triggered without the GRBM triggering, implying that these flashes do not have any hard component and most of their flux is in X-rays. The duration of 17 of these transients (out of the 39), denoted X-ray flashes (XRFs), is comparable to the duration of the X-ray emission accompanying GRBs. The peak fluxes of the XRFs are similar to the fluxes observed during GRBs in the WFCs ($\sim 10^{-8}$ergs/sec/cm$^2$) but their peak energy is clearly below 40 keV. These findings confirmed the detection by @StrohmayerEtal98 of 7 GRBs with $E_p < 10$keV and 5 additional GRBs with $E_p< 50$keV in the GINGA data.
@BarraudEtal03 analyze 35 bursts detected by HETE II[^4]. They find that XRFs lie on the extension of all the relevant GRB distributions; namely, there is a continuity from GRBs to XRFs. Detailed searches in the BATSE data revealed that some of these bursts have also been detected by BATSE [@BATSE_XRF]. Using a complete search of the 90% of the WFC data available, @Heise03 find that the observed frequency of XRFs is approximately half the GRB frequency: in 6 years of BeppoSAX observations they have observed 32 XRFs above a threshold peak luminosity of $5 \times 10^{-9}$erg/s/cm$^2$ in the 2-25 keV range, compared with 54 GRBs (all GRBs above the BATSE threshold are observed if in the field of view).
By now @Soderberg02 have discovered optical afterglow from XRF 020903 and they suggest that the burst was at $z=0.25$. They also suggest a hint of an underlying SN signal (see §\[sec:obs-SN\]) peaking between 7-24 days after the initial XRF trigger. Afterglow was discovered from XRF 030723 as well [@Fox030723].
### Polarization {#sec:prompt-polarization}
Recently, @CoburnBoggs03 reported the detection of a very high ($80\%\pm 20\%$) linear polarization during the prompt $ \gamma $-ray emission of GRB 021206. This burst was extremely powerful. The observed fluence of GRB 021206 was $1.6 \cdot 10^{-4}ergs/cm^2$ in the energy range 25-100 keV [@Hurley02GCN1727; @Hurley02GCN1728]. This makes GRB 021206 one of the most powerful bursts, and the most powerful one (a factor of 2-3 above GRB 990123) after correcting for the fact that it was observed only in a narrow band (compared to the wide BATSE band of 20-2000 keV). @CoburnBoggs03 analyzed the data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The polarization is measured in this detector via the angular dependence of the number of simultaneous pairs of events, which are most likely caused by scattering of the detected photons within the detector. The data analysis is based on 12 data points which are collected over 5 sec. Each of these points is a sum of several independent observations taken at different times. Thus the data is some kind of convolution of the polarization over the whole duration of the burst.
@CoburnBoggs03 test two hypotheses. First they test the null hypothesis of no polarization. This hypothesis is rejected at a confidence level of $ 5.7\sigma $. Second, they estimate the modulation factor assuming a constant polarization during the whole burst. The best fit to the data is achieved with $ \Pi =(80\pm 20)\% $. However, @CoburnBoggs03 find that the probability that $\chi^2$ is greater than the value obtained with this fit is 5%; namely, the model of constant polarization is consistent with the analysis and observations only at the 5% level.
@Rutledge03Polarization reanalyzed this data and pointed out several inconsistencies within the methodology of @CoburnBoggs03. Their upper limit on the polarization (based on the same data) is $\sim 4\%$. In their rebuttal @BoggsCoburn03 point out that the strong upper limit (obtained by @Rutledge03Polarization) is inconsistent with the low S/N estimated by these authors. However, they do not provide a clear answer to the criticism of the methodology raised by @Rutledge03Polarization. This leaves the situation concerning the prompt polarization from this burst highly uncertain.
### Prompt Optical Flashes {#sec:prompt-optical}
The robotic telescope ROTSE (Robotic Optical Transient Search Experiment) detected a 9th magnitude optical flash that was concurrent with the $ \gamma $-ray emission from GRB 990123 [@Akerlof99]. The six snapshots began 40 sec after the trigger and lasted until three minutes after the burst. The second snapshot, taken 60 sec after the trigger, recorded a 9th magnitude flash. While the six snapshots do not provide a “light curve", it is clear that the peak optical flux does not coincide with the peak $ \gamma $-ray emission, which takes place around the first ROTSE snapshot. This suggests that the optical flux is not the “low energy tail" of the $ \gamma $-ray emission. Recently, @Foxetal03 reported the detection of a 15.45 magnitude optical signal from GRB 021004, 193 sec after the trigger. This is just 93 seconds after the 100 sec long burst stopped being active. Shortly afterwards @LiEtal03 reported the detection of a 14.67 magnitude optical signal from GRB 021211, 105 sec after the trigger. Finally, @Price030329 detected a 12th magnitude prompt flash, albeit more than 1.5 hours after the trigger. A similar prompt signal was not observed from any other burst in spite of extensive searches that provided upper limits. @Kehoe01 searched 5 bright bursts and found single-image upper limits ranging from 13th to 14th magnitude around 10 sec after the initial burst detection and from 14 to 15.8 magnitudes one hour later. These upper limits are consistent with the two recent detections, which are around 15th mag. The recent events of rapid detection suggest that we should expect many more such discoveries in the near future.
### The GRB-Afterglow Transition - Observations {#sec:transition-obs}
There is no direct correlation between the $\gamma$-ray fluxes and the X-ray (or optical) afterglow fluxes. The extrapolation of the afterglow fluxes backwards generally does not fit the $\gamma$-ray fluxes. Instead they fit the late prompt signal. These results are in nice agreement with the predictions of the Internal-External shocks scenario, in which the two phenomena are produced by different effects and one should not expect a simple extrapolation to work.
The expected GRB-afterglow transition has been observed in several cases. The first observation took place (but was not reported until much later) already in 1992 [@BureninEtal99]. BeppoSAX data show a rather sharp transition in the hardness that takes place several dozen seconds after the beginning of the burst. This transition is seen clearly in the light curves of GRB 990123 and GRB 980923 in different energy bands [@GiblinEtal99]. @Connaughton02 has averaged the light curves of many GRBs and discovered long and soft tails: the early afterglow. Additional evidence for the transition from the GRB to the afterglow can be seen in the evolution of the spectrum within the GRB [@Preece02].
The Afterglow {#sec:obs-afterglow}
--------------
Until 1997 there were no known counterparts to GRBs at other wavelengths. On Feb 28 1997 the Italian-Dutch satellite BeppoSAX detected X-ray afterglow from GRB 970228 [@Costa_970228]. The exact position given by BeppoSAX led to the discovery of optical afterglow [@vanParadijs970228]. Radio afterglow was detected in GRB 970508 [@Frail970508]. By now more than forty afterglows have been observed (see http://www.mpe.mpg.de/$\sim$jcg/grb.html for complete up-to-date tables of well localized GRBs with or without afterglow. Another useful page is: http://grad40.as.utexas.edu/grblog.php). About half of these have optical and radio afterglow (see Fig \[fig:Venn\]). The accurate positions given by the afterglow enabled the identification of the host galaxies of many bursts. In twenty or so cases the redshift has been measured. The observed redshifts range from 0.16 for GRB 030329 (or 0.0085 for GRB 980425) to a record of 4.5 (GRB 000131). Even though the afterglow is a single entity, I will follow the astronomical wavelength division and review here the observational properties of the X-ray, optical and radio afterglows.
### The X-ray afterglow {#sec:obs-xr}
The X-ray afterglow is the first and strongest, but also the shortest, signal. In fact it seems to begin already while the GRB is going on (see §\[sec:transition-obs\] for a discussion of the GRB-afterglow transition). The X-ray light curve observed several hours after the burst can usually be extrapolated to the late parts of the prompt emission.
The X-ray afterglow fluxes from GRBs have a power law dependence on $\nu$ and on the observed time $t$ [@Piro01]: $f_\nu(t) \propto \nu^{-\beta} t^{-\alpha}$ with $\alpha \sim 1.4$ and $\beta \sim 0.9$. The flux distribution, when normalized to a fixed time after the burst, is rather narrow. A cancellation between the k corrections and the temporal decay makes this flux, which is proportional to $(1+z)^{\beta-\alpha}$, insensitive to the redshift. Using 21 BeppoSAX bursts [@Piro01], @Piranetal01 find that the 1-10 keV flux 11 hours after the burst is $5 \times 10^{-13}$ergs/cm$^2$/sec. The distribution is log-normal with $\sigma_{f_{x}}\approx 0.43 \pm 0.1$ (see fig. \[fig:x-rays1\]). @Pasquale03 find a similar result for a larger sample. However, they find that the X-ray afterglow of GRBs with optical counterparts is on average 5 times brighter than the X-ray afterglow of dark GRBs (GRBs with no detected optical afterglow). The overall energy emitted in the X-ray afterglow is generally a few percent of the GRB energy. @BergerKulkarniFrail03 find that the X-ray luminosity is indeed correlated with the opening angle, and when taking the beaming correction into account they find that $L_X=f_b L_{X,iso}$ is approximately constant, with a dispersion of only a factor of 2.
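The claimed insensitivity to redshift can be checked numerically. In the sketch below only the $(1+z)^{\beta-\alpha}$ factor is evaluated, with the typical slopes quoted above; cosmological distance factors are deliberately left out, since the point is the near-cancellation of the k correction and the temporal decay:

```python
import numpy as np

alpha, beta = 1.4, 0.9                  # typical temporal and spectral slopes
z = np.array([0.5, 1.0, 2.0, 4.0])
factor = (1.0 + z) ** (beta - alpha)    # exponent beta - alpha = -0.5
# across z = 0.5 .. 4 this factor changes by less than a factor of 2
```

With $\beta - \alpha \approx -0.5$, even moving a burst from $z=0.5$ to $z=4$ changes this factor by only $\sim 80\%$, consistent with the narrow observed flux distribution.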
X-ray lines were seen in 7 GRBs: GRB 970508 [@Piro970508], GRB 970828 [@Yoshida970828], GRB 990705 [@Amati990705], GRB 991216 [@Piro991216], GRB 001025a [@Watson010220], GRB 000214 [@Antonelli000214] and GRB 011211 [@Reeves011211]. The lines were detected using different instruments: BeppoSAX, ASCA (Advanced Satellite for Cosmology and Astrophysics), Chandra and XMM-Newton. The lines were detected around 10 hours after the burst. The typical luminosity in the lines is around $10^{44}-10^{45}$ergs/sec, corresponding to a total fluence of about $10^{49}$ergs. Most of the lines are interpreted as emission lines of Fe K$\alpha$. However, there are also a radiative recombination continuum edge and K$\alpha$ lines of lighter elements like Si, S, Ar and Ca (all seen in the afterglow of GRB 011211 [@Reeves011211]). In one case (GRB 990705, @Amati990705) there is a transient absorption feature within the prompt emission, corresponding also to Fe K$\alpha$. The statistical significance of the detection of these lines is of some concern (2-5 $\sigma$), and even though the later instruments are much more sensitive than the early ones, all detections remain at this low significance level. @Rutledge03 and @Sako03HEAD expressed concern about the statistical analysis of the data showing these lines and claim that none of the observed lines is statistically significant. The theoretical implications are far reaching. Not only do the lines require, in most models, a very large amount of iron at rest (the lines are quite narrow), they most likely require a huge energy supply ($> 10^{52}$ergs), twenty times larger than the typical estimated energy ($\sim 5 \cdot 10^{50}$ergs).
### Optical and IR afterglow {#sec:Obs-opt}
About 50% of well localized GRBs show optical and IR afterglow. The observed optical afterglow is typically around 19-20 mag one day after the burst (see fig \[fig:optical\_one\_day\]). The signal decays, initially, as a power law in time, $t^{-\alpha}$, with a typical value of $\alpha \approx 1.2$ and large variations around this value. In all cases the observed optical spectrum is also a power law, $\nu^{-\beta}$. Generally absorption lines are superimposed on this power law. The absorption lines correspond to absorption on the way from the source to Earth. Typically the highest redshift lines are associated with the host galaxy, providing a measurement of the redshift of the GRB. In a few cases emission lines, presumably from excited gas along the line of sight, were also observed.
Technical difficulties led to a gap of several hours between the burst and the detection of the optical afterglow, which could be found only after an accurate position was available. The rapid localization provided by HETE II helped to close this gap, and an almost complete light curve from 193 sec after the trigger ($\approx 93$ sec after the end of the burst) is available now for GRB 021004 [@Foxetal03].
Many afterglow light curves show an achromatic break to a steeper decline with $\alpha \approx 2$. The classical example of such a break was seen in GRB 990510 [@Harrisonetal99; @Staneketal99] and it is shown here in Fig. \[fig:990510\]. It is common to fit the break with the phenomenological formula: $F_\nu (t) = f_* (t/t_*)^{-\alpha_{1}}\{1-\exp[-(t/t_*)^{(\alpha_{1}-\alpha_{2})}](t/t_*)^{(\alpha_{1}-\alpha_{2})}\}$. This break is commonly interpreted as a jet break that allows us to estimate the opening angle of the jet [@Rhoads99; @SPH99] or the viewing angle within the standard jet model [@Rossi02] (see §\[sec:Energetics\] below).
The optical light curve of the first detected afterglow (from GRB 970228) could be followed for more than half a year [@Fruchteretal98]. In most cases the afterglow fades faster and cannot be followed for more than several weeks. At this stage the afterglow becomes significantly dimmer than its host galaxy and the light curve reaches a plateau corresponding to the emission of the host.
In several cases, e.g. GRB 980326 [@Bloom99], GRB 970228 [@Reichart99] and GRB 011121 [@Bloometal02; @Garnavichetal03], red bumps are seen at late times (several weeks to a month). These bumps are usually interpreted as evidence for an underlying SN. A most remarkable supernova signature was seen recently in GRB 030329 [@Stanek03SN; @Hjorth03SN]. This supernova had the same signature as SN98bw, which was associated with GRB 980425 (see §\[sec:obs-SN\]).
Finally, I note that varying polarization at optical wavelengths has been observed in GRB afterglows at the level of a few to ten percent [@CovinoEtal99; @WijersEtal99; @RolEtal00; @CovinoEtal02; @Bersier03; @Greineretal03]. These observations are in agreement with rough predictions [@Sari99; @GhiselliniLazzati99] of the synchrotron emission model, provided that there is a deviation from spherical symmetry (see §\[sec:pol\_theory\] below).
### Dark GRBs
Only $\sim 50\%$ of well-localized GRBs show optical transients (OTs) following the prompt gamma-ray emission, whereas an X-ray counterpart is present in 90% of cases (see Fig. \[fig:Venn\]). Several possible explanations have been suggested for this situation. It is possible that late and shallow observations could not detect the OTs in some cases; several authors argue that dim and/or rapidly decaying transients could bias the determination of the fraction of truly obscured GRBs [@Fyn01a; @Ber02]. However, recent reanalysis of optical observations [@Rei01; @Ghi00; @Laz00] has shown that GRBs without OT detection (called dark GRBs, FOAs (Failed Optical Afterglows), or GHOSTs (Gamma ray burst Hiding an Optical Source Transient)) have had, on average, optical counterparts at least 2 magnitudes weaker in the R band than GRBs with OTs. Therefore, they appear to constitute a different class of objects, although a fraction could remain undetected owing to poor imaging.
The nature of dark GRBs is not clear. So far three hypotheses have been put forward to explain the behavior of dark GRBs. First, they are similar to the other, bright GRBs, except for the fact that their lines of sight pass through large and dusty molecular clouds that cause high absorption [@ReichartPrice02]. Second, they are more distant than GRBs with OTs, at $ z \ge 5 $ [@Fruchter_970228; @ReichartLamb00], so that the Lyman break is redshifted into the optical band. Nevertheless, the distances of a few dark GRBs have been determined and they do not imply high redshifts [@Djo02; @Ant00; @Pir02]. A third possibility is that the optical afterglow of dark GRBs is intrinsically much fainter (2-3 mag below) than that of other GRBs.
@Pasquale03 find that GRBs with optical transients show a remarkably narrow distribution of flux ratios, which corresponds to an average optical-to-X-ray spectral index of $0.794\pm 0.054$. They find that, while 75% of dark GRBs have flux ratio upper limits still consistent with those of GRBs with optical transients, the remaining 25% are 4-10 times weaker in the optical than in X-rays. This result suggests that the afterglows of most dark GRBs are intrinsically fainter at all wavelengths relative to the afterglows of GRBs with observed optical transients. As for the remaining 25%, their spectrum (optical-to-X-ray ratio) must be different from the spectrum of other afterglows, with a suppression of the optical band.
### Radio afterglow
Radio afterglow was detected in $\sim 50$% of the well localized bursts. Most observations are done at about 8 GHz, since the detection rate falls off drastically at higher and lower frequencies. The observed peak fluxes are at the level of 2 mJy. A turnover is seen around $0.2$ mJy and the undetected bursts have upper limits of the order of 0.1 mJy. As the localization is based on the X-ray afterglow (and as practically all bursts have X-ray afterglow), almost all of these bursts were also detected in X-rays. $\sim 80$% of the radio-afterglow bursts also have optical afterglow. The rest are optically dark. Similarly, $\sim 80$% of the optically observed afterglows also have a radio component (see fig \[fig:Venn\]).
Several bursts (GRBs 980329, 990123, 991216, 000926, 001018, 010222, 011030, 011121) were detected at around one day after the burst. Recent radio observations begin well before that but do not yield a detection until about 24 hrs after the burst. The earliest radio detection took place in GRB 011030, at about 0.8 days after the burst [@TaylorFrailFox01]. In several cases (GRBs 990123, 990506, 991216, 980329 and 020405) the radio afterglow was detected early enough to indicate emission from the reverse shock and a transition from the reverse shock to the forward shock.
The radio light curve of GRB 970508 (see fig \[fig:radio970508\]) depicts early strong fluctuations (of order unity) in the flux [@Frail970508]. @Goodman97 suggested that these fluctuations arise due to scintillations and that the decrease (with time) in the amplitude of the fluctuations arises from a transition from strong to weak scintillation. @Frail970508 used this to infer the size of the emitting region of GRB 970508 at $\sim 4$ weeks after the burst as $\sim 10^{17}$cm. This observation provided the first direct proof of relativistic expansion in GRBs.
The self-absorbed frequencies fall in the centimeter to meter wave radio regime, and hence the low-frequency radio emission is within the self-absorbed part of the spectrum (see §\[Sec:self-abs\] later). In this case the spectrum rises as $\nu^2$ [@KP97]. The spectral shape that arises from the fact that the system is optically thick enables us (using arguments similar to those for simple black-body emission) to determine the size of the emitting region. In GRB 970508 this has led to $\sim 10^{17}$cm, an estimate comparable to the one derived from scintillations.
The long-lived nature of the radio afterglow allows for unambiguous calorimetry of the blast wave once its expansion has become sub-relativistic and quasi-spherical. The light curve evolves on a longer time scale in the radio. Some GRB afterglows have been detected in radio years after the burst, even after the relativistic-Newtonian transition (see §\[sec:Newtonian\]). At this stage the expansion is essentially spherical and this enables a direct “calorimetric” estimate of the total energy within the ejecta [@Waxmanetal98].
Hosts and Distribution {#sec:hosts-distribution}
-----------------------
### Hosts {#sec:hosts}
By now (early 2004) host galaxies have been observed for all but 1 or 2 bursts with optical, radio or X-ray afterglow localization with arcsec precision [@Hurleyetal02]. The no-host problem, which made a lot of noise in the nineties, has disappeared. GRBs are located within host galaxies (see @Djorgovski02a [@Djorgovski02b] and @Hurleyetal02 for detailed reviews). While many researchers believe that the GRB host population seems to be representative of the normal star-forming field galaxy population at comparable redshifts, others argue that GRB host galaxies are significantly bluer than average and that their star formation rate is much higher than average.
The host galaxies are faint with median apparent magnitude $R\approx 25$. Some faint hosts are at $R\approx 29$. Down to $R\approx 25$ the observed distribution is consistent with deep field galaxy counts. @Jimenezetal01 find that the likelihood of finding a GRB in a galaxy is proportional to the galaxy’s luminosity.
The magnitude and redshift distributions of GRB host galaxies are typical for normal, faint field galaxies, as are their morphologies [@Odewahn98; @Holland01; @Bloom02; @Hurleyetal02; @Djorgovski02b]. While some researchers argue that the broad band optical colors of GRB hosts are not distinguishable from those of normal field galaxies at comparable magnitudes and redshifts [@Bloom02; @Sokolov01], others [@Fruchter_970228] assert that the host galaxies are unusually blue and strongly star forming. @LeFlochetal03 argue that the R-K colors of GRB hosts are unusually blue and that the hosts may be of low metallicity and luminosity. This suggests [@LeFloch04] that the hosts of GRBs might be different from the majority of star forming galaxies, which are luminous, reddened and dust-enshrouded infrared starbursts (@ElbazCesarsky03 and references therein). @LeFloch04 also suggests that this difference might arise from an observational bias: GRBs that occur in dust-enshrouded infrared starbursts would be dark GRBs, whose afterglow is not detectable due to obscuration. Whether this is true or not is very relevant to the interesting questions of to what extent GRBs follow the SFR and to what extent they can be used to determine the SFR at high redshifts.
@Totani97, @Wijersetal_SFR98 and @Pac98 suggested that GRBs follow the star formation rate. As early as 1998, @Fruchter_970228 noted that all four early GRBs with spectroscopic identification or deep multicolor broadband imaging of the host (GRB 970228, GRB 970508, GRB 971214, and GRB 980703) lie in rapidly star-forming galaxies. Within the host galaxies the distribution of GRB-host offsets follows the light distribution of the hosts [@Bloom02]. The light is roughly proportional to the density of star formation. Spectroscopic measurements suggest that GRBs are within galaxies with a higher SFR. However, this is typical for the normal field galaxy population at comparable redshifts [@Hurleyetal02]. There are some intriguing hints, in particular the flux ratios of \[Ne III\] 3859 to \[OII\] 3727, which are on average a factor of 4 to 5 higher in GRB hosts than in star forming galaxies at low redshifts [@Djorgovski02b]. This may represent indirect evidence linking GRBs with massive star formation. The link between GRBs and massive stars has been strengthened with the centimeter and submillimeter discoveries of GRB host galaxies [@BergerKF01; @FrailBMetal02] undergoing prodigious star formation (SFR$\sim 10^3$ M$_\odot$ yr$^{-1}$), which remains obscured at optical wavelengths.
Evidence for different characteristics of GRB host galaxies arises from the work of @Fynbo02 [@Fynbo03], who find that GRB host galaxies “always" show Lyman alpha emission in cases where a suitable search has been conducted. This backs up the claim of active star formation and at most moderate metallicity in GRB hosts. It clearly distinguishes GRB hosts from the Lyman break galaxy population, in which only about 1/4 of galaxies show strong Lyman alpha emission.
### The Spatial Distribution {#sec:spatial}
BATSE’s discovery that the bursts are distributed uniformly on the sky [@Meeganetal92Nat] was among the first indications of the cosmological nature of GRBs. The uniform distribution indicated that GRBs are not associated with the Galaxy or with “local" structure in the nearby Universe.
Recently there have been several claims that sub-groups of the whole GRB population show a deviation from a uniform distribution. @MeszarosA00a [@MeszarosA00b], for example, find that the angular distribution of the intermediate sub-group of bursts (more specifically of the weak intermediate sub-group) is not random. @Celottietal02 reported that the two-point angular correlation function of 407 short BATSE GRBs reveals a $\sim 2\sigma$ deviation from isotropy on angular scales $2^o-4^o$. This result is consistent with the possibility that the observed short GRBs are nearer and the angular correlation is induced by large scale structure correlations on this scale. These claims are important as they could arise only if these bursts are relatively nearby. Alternatively, such a signal could indicate repetition of these sources [@Celottietal02]. Any such deviation would imply that these sub-groups are associated with different objects than the main GRB population, or at least that these sub-groups are associated with a specific feature, such as a different viewing angle.
@Clineetal03 studied the shortest GRB population, bursts with typical durations of several dozen ms. They find that there is a significant angular asymmetry and that the $\langle V/V_{max} \rangle$ distribution provides evidence for a homogeneous source distribution. They suggest that these features are best interpreted as arising from sources of galactic origin. However, one has to realize that strong selection effects are involved in the detection of this particular subgroup.
### GRB rates and the [*isotropic*]{} luminosity function {#sec:rates}
There have been many attempts to determine the GRB luminosity function and rate from the BATSE peak flux distribution. This was done by numerous authors using different levels of statistical sophistication and different physical assumptions on the evolution of the rate of GRBs with time and on the shape of the luminosity function.
Roughly speaking the situation is the following. There are now more than 30 measured redshifts. The median redshift is $z\approx 1$ and the redshift range is from 0.16 (or even 0.0085 if the association of GRB 980425 with SN 98bw is also considered) to 4.5 (for GRB 000131). Direct estimates from the sample of GRBs with determined redshifts are contaminated by observational biases and are insufficient to determine the rate and luminosity function. An alternative approach is to estimate these quantities from the BATSE peak flux distribution. However, the observed sample with known redshifts clearly shows that the luminosity function is wide. With a wide luminosity function, the rate of GRBs is only weakly constrained by the peak flux distribution. The analysis is further complicated by the fact that the observed peak luminosity, in a given detector with a given observation energy band, depends also on the intrinsic spectrum. Hence different assumptions on the spectrum yield different results. This situation suggests that there is no point in employing sophisticated statistical tools (see however, [@LoredoWasserman95; @P99] for a discussion of these methods) and a simple analysis is sufficient to obtain an idea of the relevant parameters.
I will not attempt to review the various approaches here. A partial list of calculations includes [@Piran92; @Cohen_Piran95; @FenimoreBloom95; @LoredoWasserman95; @HorackHakkila97; @LoredoWasserman98; @P99; @Schmidt99; @Schmidt01; @Schmidt01a; @SethiBhargavi01]. Instead I will just quote the results of some estimates of the rates and luminosities of GRBs. The simplest approach is to fit $\langle V/V_{max} \rangle$, which is the first moment of the peak flux distribution. @Schmidt99 [@Schmidt01; @Schmidt01a] finds, using $\langle V/V_{max} \rangle$ of the long burst distribution and assuming that the bursts follow the [@PorcianiMadau01] SFR2, that the present local rate of long observed GRBs is $\approx ~0.15 {\rm Gpc}^{-3} {\rm yr}^{-1}$ [@Schmidt01]. Note that this rate from [@Schmidt01] is smaller by a factor of ten than the earlier rate of [@Schmidt99]! This estimate corresponds to a typical (isotropic) peak luminosity of $\sim 10^{51}$ergs/sec. These are the observed rate and the isotropic peak luminosity.
Recently @GuettaPiranWaxman03 have repeated these calculations. They use both the [@Rowan-Robinson99] SFR: $$\label{RR} R_{GRB}(z) = \rho_0 \left\{ \begin{array}{ll}
10^{0.75 z} & z<1 \\
10^{0.75 z_{\rm peak}} & z>1,
\end{array}
\right.$$ and SFR2 from [@PorcianiMadau01]. Their best fit luminosity function (per logarithmic luminosity interval, $d\log L$) is: $$\label{Lfun} \Phi_o(L) =c_o \left\{ \begin{array}{ll}
(L/L^*)^{\alpha}\qquad & L^*/30 < L < L^* \\
(L/L^*)^{\beta} \qquad & L^* < L < 30L^*,
\end{array}
\right.$$ and 0 otherwise, with a typical luminosity $L^*=1.1
\times10^{51}$ergs/sec, $\alpha=-0.6$ and $\beta=-2$; $c_o$ is a normalization constant such that the integral over the luminosity function equals unity. The corresponding local GRB rate is $\rho_0=0.44$Gpc$^{-3}$yr$^{-1}$. There is an uncertainty of a factor of $\sim 2$ in the typical luminosity, $L^*$, and in the local rate. I will use these numbers as the “canonical" values in the rest of this review.
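As a rough consistency check, the normalization constant $c_o$ of this broken power law can be evaluated numerically. The sketch below (Python) assumes the logarithmic interval is in natural log, which the text leaves unspecified, and uses the quoted $\alpha$, $\beta$ and break points:

```python
import math

L_star = 1.1e51      # erg/s, best-fit break luminosity quoted in the text
alpha, beta = -0.6, -2.0

def phi_unnorm(L):
    """Unnormalized broken power law per logarithmic luminosity interval."""
    x = L / L_star
    if 1.0 / 30.0 < x < 1.0:
        return x ** alpha
    if 1.0 <= x < 30.0:
        return x ** beta
    return 0.0

# Midpoint-rule integration over d(ln L); c_o makes the integral unity.
n_steps = 100_000
lo, hi = math.log(L_star / 30.0), math.log(30.0 * L_star)
dlnL = (hi - lo) / n_steps
integral = sum(phi_unnorm(math.exp(lo + (i + 0.5) * dlnL)) * dlnL
               for i in range(n_steps))
c_o = 1.0 / integral
print(f"c_o = {c_o:.4f}")
```

With these assumptions $c_o \approx 0.086$; taking the interval in $\log_{10} L$ instead would scale the constant up by a factor of $\ln 10$.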
The observed (BATSE) rate of short GRBs is smaller by a factor of three than the rate of long ones. However, this is not the ratio of the real rates as: (i) the BATSE detector is less sensitive to short bursts than to long ones; (ii) the true rate depends on the spatial distribution of the short bursts. So far no redshift has been detected for any short burst and hence this distribution is uncertain. For short bursts we can resort only to estimates based on the peak flux distribution. There are indications that $\langle V/V_{max} \rangle$ of short bursts is larger (and close to the Euclidean value of 0.5) than the $\langle V/V_{max} \rangle$ value of long ones (which is around 0.32). This implies that the observed short bursts are nearer to us than the long ones [@Mao_Narayan_P94; @Katz_Canel96; @Tavani98], possibly with all observed short bursts at $z<0.5$. However, @Schmidt01 finds for short bursts $\langle V/V_{max} \rangle= 0.354$, which is rather close to the value for long bursts. Assuming that short GRBs also follow the SFR he obtains a local rate of $0.075 {\rm Gpc}^{-3} {\rm yr}^{-1}$ - a factor of two below the rate of long GRBs! The (isotropic) peak luminosities are comparable. This result differs from a recent calculation of @GuettaPiran03 who find for short bursts $\langle V/V_{max} \rangle= 0.390$ and determine from this a local rate of $1.7 {\rm Gpc}^{-3} {\rm yr}^{-1}$, which is about four times the rate of long bursts. This reflects the fact that the [**observed**]{} short GRBs are significantly nearer than the [**observed**]{} long ones.
These rates and luminosities assume that the bursts are [**isotropic**]{}. Beaming reduces the actual peak luminosity and increases the implied rate by a factor $f_b^{-1}=2 / \theta^2$. By now there is evidence that GRBs are beamed and, moreover, that the total energy is narrowly distributed [@Frail01; @PanaitescuK01]. There is also good evidence that the corrected peak luminosity is much more narrowly distributed than the isotropic peak luminosity [@vanPuttenRegimbau03; @GuettaPiranWaxman03]. The corrected peak luminosity is $L_{peak} (\theta^2/2) \sim const$. @Frail01 suggest that the true rate is larger by a factor of 500 than the observed isotropic estimated rate. However, @GuettaPiranWaxman03 repeated this calculation performing a careful average over the luminosity function and find that the true rate is only a factor of $\sim 75 \pm25 $ times the isotropically estimated one. Overall the true rate is: $ 33 \pm 11
h_{65}^{3} {\rm Gpc}^{-3} {\rm yr}^{-1}$.
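The quoted true rate is just the canonical isotropic rate times the beaming correction; a minimal arithmetic sketch (Python, using the $\rho_0 = 0.44\,{\rm Gpc}^{-3}{\rm yr}^{-1}$ and $75\pm25$ values from the text):

```python
rho_iso = 0.44                 # Gpc^-3 yr^-1: canonical local (isotropic) long-GRB rate
corr, corr_err = 75.0, 25.0    # beaming correction factor and its uncertainty

rho_true = rho_iso * corr
rho_true_err = rho_iso * corr_err
print(f"true rate ~ {rho_true:.0f} +/- {rho_true_err:.0f} Gpc^-3 yr^-1")  # 33 +/- 11
```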
With an increasing number of GRBs with redshifts it may soon be possible to determine the GRB redshift distribution directly from these data. However, it is not clear what observational biases influence this data set, and one needs a homogeneous data set in order to perform this calculation. Alternatively one can try to determine luminosity estimators [@Norris_lags00; @Fenimore_Ramirez-Ruiz01; @SchaeferDengBand01; @Schaefer03a] from the subset with known redshifts and to obtain, using them, a redshift distribution for the whole GRB sample. @Lloyd-RonningFryerRamirez-Ruiz02 find, using the @Fenimore_Ramirez-Ruiz01 sample, that this method implies that: (i) the rate of GRBs increases backwards with time even for $z>10$; (ii) the luminosity of GRBs increases with redshift as $(1+z)^{1.4\pm 0.5}$; (iii) hardness and luminosity are strongly correlated. It is not clear how these features, which clearly depend on the inner engine, could depend strongly on the redshift. Note that in view of the luminosity-angle relation (see §\[sec:Energetics\] below) the luminosity depends mostly on the opening angle. An increase of the luminosity with redshift would imply that GRBs were more narrowly collimated at earlier times.
### Association with Supernovae {#sec:obs-SN}
The association of GRBs with star forming regions and the indications that GRBs follow the star formation rate suggest that GRBs are related to stellar death, namely to Supernovae [@Pac98]. Additionally there is some direct evidence of association of GRBs with Supernovae.
[**GRB 980425 and SN98bw:**]{} The first indication of an association between GRBs and SNe was found when SN 98bw was discovered within the error box of GRB 980425 [@Galama98bw]. This was an unusual type Ic SN which was much brighter than most SNe. Typical ejection velocities in the SN were larger than usual ($ \sim 2\cdot 10^4 km/sec$), corresponding to a kinetic energy of $2-5 \times 10^{52}$ ergs, more than ten times the previously known energy of SNe [@IwamotoEtal98]. Additionally radio observations suggested a component expanding sub-relativistically with $v \sim 0.3 c$ [@Kulkarnietal98]. Thus, 1998bw was an unusual type Ic supernova, significantly more powerful than comparable SNe. This may imply that GRBs are associated with more powerful SNe. Indeed all other observations of SN signatures in GRB afterglow light curves use a SN 98bw template. The accompanying GRB, 980425, was also unusual. GRB 980425 had a smooth FRED light curve and no high energy component in its spectrum. Other bursts like this exist but they are rare. The redshift of SN 98bw was 0.0085, implying an isotropic equivalent energy of $\sim 10^{48}$ergs, weaker by several orders of magnitude than a typical GRB.
The BeppoSAX Wide Field Cameras had localized GRB 980425 with an 8 arcmin radius accuracy. In this circle, the BeppoSAX NFI (Narrow Field Instrument) had detected two sources, S1 and S2. The NFI could associate with each of these two sources an error circle of 1.5 arcmin radius. The radio and optical position of SN 1998bw was consistent only with the NFI error circle of S1, and was outside the NFI error circle of S2. Therefore, @Pianetal00 identified S1 with X-ray emission from SN 1998bw, although this was of course no proof of an association between the SN and the GRB. It was difficult, based only on the BeppoSAX NFI data, to characterize the behavior and variability of S2, and it could not be excluded that S2 was the afterglow of GRB 980425. The XMM observations of March 2002 [@Pianetal03] seem to have brought us closer to the solution. XMM clearly detects S1, and its flux is lower than in 1998: the SN emission has evidently decreased. Concerning the crucial issue, S2: XMM, having a better angular resolution than the BeppoSAX NFIs, seems to resolve S2 into a number of sources. In other words, S2 seems to be not a single source, but a group of small faint sources. Their random variability (typical fluctuations of X-ray sources close to the level of the background) may have caused the flickering detected for S2. This demolishes the case for the afterglow nature of S2, and strengthens in turn the case for an association between GRB 980425 and SN 1998bw.
[**Red Bumps:**]{} Late red bumps (see §\[sec:Obs-opt\]) have been discovered in several GRB light curves [@Bloom99; @Reichart99; @Bloometal02; @Garnavichetal03]. These bumps involve both a brightening (or a flattening) of the afterglow as well as a transition to a much redder spectrum. They have been generally interpreted as due to an underlying SN [@Bloom99]. In all cases the bumps have been fit with a template of SN 1998bw, which was associated with GRB 980425. @EsinBlandford00 proposed that these bumps are produced by light echoes on surrounding dust (but see [@Reichart01]). @Waxman-Draine00 proposed another alternative explanation based on dust sublimation.
For most GRBs there is only an upper limit to the magnitude of the bump in the light curve. A comparison of these upper limits (see Fig. \[fig:SNbump\]) with the maximal magnitudes of type Ibc SNe shows that the faintest GRB-SN non-detection (GRB 010921) only probes the top $\sim$40th-percentile of local Type Ib/Ic SNe. It is clear that the current GRB-SNe population may have only revealed the tip of the iceberg; plausibly, then, SNe could accompany all long-duration GRBs.
[**GRB 030329 and SN 2003dh:**]{} The confirmation of a SN 98bw-like bump and the confirmation of the GRB-SN association was dramatically seen recently [@StanekEtal03; @Hjorth03SN] in the very bright GRB 030329, which is associated with SN 2003dh [@ChornockEtal03]. The bump began to be noticed six days after the burst and the SN 1998bw-like spectrum dominated the optical light curve at later times (see Fig. \[SN\_2003dh\]). The spectral shapes of 2003dh and 1998bw were quite similar, although there are also differences. For example, a somewhat larger expansion velocity was estimated for 2003dh. Additionally the signal was much brighter (but this could be purely afterglow).
For most researchers in the field this discovery provided the final conclusive link between SNe and GRBs (at least long GRBs). As the SN signature coincides with the GRB, this observation also provides evidence against the Supranova interpretation, in which the GRB arises from the collapse of a neutron star that takes place some time after the supernova in which the neutron star was born (see §\[sec:Supranova\]), unless there is a variety of Supranova types, some with a long delay and others with a short delay between the first and the second collapses. It is interesting that while not as weak as GRB 980425, the accompanying GRB 030329 was significantly weaker than average. The implied opening angle reveals that the prompt $\gamma$-ray energy output, $E_\gamma$, and the X-ray luminosity at $10\;$hr, $L_X$, are a factor of $\sim 20$ and $\sim 30$, respectively, below the average values around which most GRBs are narrowly clustered (see §\[sec:Energetics\] below).
It is interesting to compare SN 1998bw and SN 2003dh. Basically, at all epochs @Mathesonetal03 find that the best fit to the spectra of 2003dh is given by 1998bw at about the same age. The light curve comparison is harder, as the afterglow contribution is significant, but using spectral information they find that 2003dh had basically the same light curve as 1998bw. @Mazzalietal03 model the spectra and find again that it was very similar to 1998bw. They find some differences, but some of these might be due to a somewhat different approach to the spectral decomposition, which gives a somewhat fainter supernova.
[**Iron lines:**]{} The appearance of iron lines (see §\[sec:obs-xr\]) has been interpreted as additional evidence for a SN. One has to be careful with this interpretation, as the iron lines appear to be emitted by matter at very low velocities and at rather large distances. This is difficult to achieve if the supernova is simultaneous with the GRB, as the SN bumps imply. These lines might be consistent with the Supranova model [@VietriStella98], in which the SN takes place months before the GRB. However, in this case there won’t be a SN bump in the light curve! @MR00 [@MR01] and @KumarNarayan03 suggest alternative interpretations which do not require a Supranova.
Energetics {#sec:Energetics}
-----------
Before redshift measurements were available the GRB energy was estimated from the BATSE catalogue by fitting an (isotropic) luminosity function to the flux distribution (see e.g. @Cohen_Piran95 [@LoredoWasserman98; @Schmidt99; @Schmidt01; @Schmidt01a; @GuettaPiranWaxman03] and many others). This led to a statistical estimate of the luminosity function of the distribution of bursts.
These estimates were revolutionized with the direct determination of the redshift for individual bursts. Now the energy could be estimated directly for specific bursts. Given an observed $\gamma$-ray fluence and the redshift to a burst one can easily estimate the energy emitted in $\gamma$-rays, $E_{\gamma,iso}$, assuming that the emission is isotropic in all directions (see @Bloom_Frail_Sari01 for a detailed study including k corrections). The energy of the first burst with a determined redshift, GRB 970508, was around $10^{51}$ergs. However, as afterglow observations proceeded, alarmingly large values ([*e.g.*]{} $3.4 \times 10^{54}$ergs for GRB 990123) were measured for $E_{\gamma,iso}$. The variance was around three orders of magnitude.
However, it turned out [@Rhoads99; @SPH99] that GRBs are beamed, and $E_{\gamma,iso}$ is then not a good estimate for the total energy emitted in $\gamma$-rays. Instead: $E_\gamma \equiv
(\theta^2/2)E_{\gamma,iso}$. The angle, $\theta$, is the effective angle of $\gamma$-ray emission. It can be estimated from $t_{b}$, the time of the break in the afterglow light curve [@SPH99]: $$\theta =0.16 (n/E_{k,iso,52})^{1/8} t_{b,days}^{3/8} = 0.07
(n/E_{k,\theta,52})^{1/6} t_{b,days}^{1/2},$$ where $t_{b,days}$ is the break time in days, $E_{k,iso,52}$ is the “isotropic equivalent" kinetic energy, discussed below, in units of $10^{52}$ergs, while $E_{k,\theta,52}$ is the real kinetic energy in the jet, i.e.: $E_{k,\theta,52}=(\theta^2/2)
E_{k,iso,52}$. One has to be careful which of the two energies one discusses. In the following I will usually consider, unless specifically mentioned otherwise, $E_{k,iso,52}$, which is also related to the energy per unit solid angle as: $E_{k,iso,52}/4
\pi$. The jet break is observed both in the optical and in the radio frequencies. Note that the observational signature in the radio differs from that in the optical [@SPH99; @Harrisonetal99] (see Fig. \[fig:990510\_radio\]) and this provides an additional confirmation for this interpretation.
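For orientation, the first form of this relation is easy to evaluate. The sketch below (Python) uses illustrative input values, not measurements of any particular burst:

```python
def jet_angle(t_b_days, n=1.0, E_k_iso_52=1.0):
    """Jet opening angle (radians) from the afterglow break time:
    theta = 0.16 (n / E_k,iso,52)^(1/8) t_b,days^(3/8)."""
    return 0.16 * (n / E_k_iso_52) ** 0.125 * t_b_days ** 0.375

def beamed_energy(theta, E_gamma_iso):
    """Beaming-corrected energy: E_gamma = (theta^2 / 2) E_gamma,iso."""
    return 0.5 * theta ** 2 * E_gamma_iso

# Illustrative numbers: a 2-day break, n = 1 cm^-3, E_k,iso = 10 x 10^52 erg
theta = jet_angle(2.0, n=1.0, E_k_iso_52=10.0)
E_gamma = beamed_energy(theta, 1e53)
print(f"theta ~ {theta:.2f} rad, E_gamma ~ {E_gamma:.1e} erg")
```

Note how an isotropic-equivalent energy of $10^{53}$ ergs is brought down to the $\sim 10^{51}$ ergs scale around which $E_\gamma$ is observed to cluster.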
@Frail01 estimated $E_\gamma$ for 18 bursts, finding typical values around $10^{51}$ergs (see also @PanaitescuK01). @BloomFrailKulkarni03 find ${E}_\gamma = 1.33 \times 10^{51}\,h_{65}^{-2}$ erg and a burst-to-burst variance about this value of $\sim 0.35$ dex, a factor of 2.2. This is three orders of magnitude smaller than the variance in the isotropic equivalent $E_{\gamma,iso}$. A compilation of the beamed energies from [@BloomFrailKulkarni03] is shown in Figs \[fig:energy1\] and \[fig:energy2\]. It nicely demonstrates this phenomenon. The constancy of $E_\gamma$ is remarkable, as it involves a product of a factor inferred from the GRB observation (the flux) with a factor inferred from the afterglow observations (the jet opening angle). However, $E_\gamma$ might not be a good estimate for $E_{tot}$, the total energy emitted by the central engine. First, an unknown conversion efficiency of energy to $\gamma$-rays has to be considered: $E_{tot} = \epsilon^{-1} E_\gamma =\epsilon^{-1} (\theta^2/2)
E_{\gamma,iso}$. Second, the large Lorentz factor during the $\gamma$-ray emission phase makes the observed $E_\gamma$ rather sensitive to angular inhomogeneities of the relativistic ejecta [@KP00b]. The recent early observations of the afterglow of GRB 021004 indicate that indeed a significant angular variability of this kind exists [@NakarPiranGranot03; @NakarPiran03b].
The kinetic energy of the flow during the adiabatic afterglow phase, $E_k$, is yet another energy measure that arises. This energy (per unit solid angle) can be estimated from the afterglow light curve and spectra. Specifically, it is rather closely related to the observed afterglow flux [@Kumar00; @Waxman_Friedman; @Piranetal01]. As this energy is measured when the Lorentz factor is smaller, it is less sensitive than $E_\gamma$ to angular variability. The constancy of the flux [@Piranetal01] suggests that this energy is also constant. Estimates of $E_{k,\theta}$ [@PanaitescuK01] show that $\bar E_\gamma \approx 3 \bar E_{k,\theta}$, namely the observed “beamed" GRB energy is larger than the estimated “beamed" kinetic energy of the afterglow. @Frail01, however, find that $\bar E_\gamma \approx \bar E_{k,\theta}$, namely that the two energies are comparable.
An alternative interpretation of the observed breaks is that we are viewing a “universal" angle dependent, namely “structured", jet from different viewing angles [@Lipunov_Postnov_Pro01; @Rossi02; @Zhang02]. The observed break corresponds in this model to the observing angle $\theta$ and not to the opening angle of the jet. This interpretation means that the GRB beams are wide and hence the rate of GRBs is smaller than the rate implied by the usual beaming factor. On the other hand it implies that GRBs are more energetic. @GuettaPiranWaxman03 estimate this factor (the ratio of the fixed energy of a “structured" jet relative to the energy of a uniform jet) to be $\sim 7$. However, they find that the observing angle distribution is somewhat inconsistent with the simple geometric one that should arise in universal structured jets (see also @Pernaetal03 [@NakarGranotGuetta03]). The energy-angle relation discussed earlier requires (see §\[sec:structured\] below) an angle dependent jet with $E(\theta)
\propto \theta^{-2}$.
Regardless of the nature of the jet (universal structured jet or uniform with an opening angle that differs from one burst to another), at late times it becomes non-relativistic and spherical. With no relativistic beaming every observer detects emission from the whole shell. Radio observations at this stage enable us to obtain a direct calorimetric estimate of the total kinetic energy of the ejecta at late times [@FrailWaxmanKulkarni00]. Estimates performed in several cases yield a comparable value for the total energy.
If GRBs are beamed we should expect orphan afterglows (see §\[sec:orphan\]): events in which we miss the GRB but observe the late afterglow, which is not as beamed. A comparison of the rate of orphan afterglows to GRBs will give us a direct estimate of the beaming of GRBs (and hence of their energy). Unfortunately there are not even good upper limits on the rate of orphan afterglows. @Veerswijk03 consider the observations within the Faint Sky Variability Survey (FSVS) carried out with the Wide Field Camera on the 2.5-m Isaac Newton Telescope on La Palma. This survey mapped 23 square degrees down to a limiting magnitude of about V=24. They found one object which faded and was not detected after a year. However, its colors suggest that it was a supernova and not a GRB. Similarly, @VandenBerketal02 find a single candidate within the Sloan Digital Sky Survey. Here the colors were compatible with an afterglow. However, later it was revealed that this was a variable AGN and not an orphan afterglow. As I discuss later these limits are still too weak to constrain the current beaming estimates (see §\[sec:orphan\]).
One exception is late radio emission, for which there are some limits [@PernaLoeb98; @Levinsonetal02]. @Levinsonetal02 show that the number of orphan radio afterglows associated with GRBs that should be detected by a flux-limited radio survey is smaller for a smaller jet opening angle $\theta$. This might seem at first sight contrary to expectation, as narrower beams imply more GRBs. But, on the other hand, with narrower beams each GRB has a lower energy and hence its radio afterglow is more difficult to detect. Overall the second factor wins. Using the results of the FIRST and NVSS surveys they find nine afterglow candidates. If all candidates are associated with GRBs then there is a lower limit on the beaming factor of $f^{-1}_b \equiv (\theta^2/2)^{-1}> 13$. If none are associated with GRBs they find $f^{-1}_b > 90$. This immediately gives a corresponding upper limit on the average energies of GRBs. @GuettaPiranWaxman03 revise this value, in view of recent estimates of the correction to the rate of GRBs, to: $f^{-1}_b = 40$.
When considering the energy of GRBs one has to remember the possibility, as some models suggest, that additional energy is emitted which is not involved in the GRB itself or in the afterglow. @vanPuttenLevinson01, for example, suggest that a powerful Newtonian wind collimates the less powerful relativistic one. The “standard jet" model also suggests a large amount of energy emitted sideways with a lower energy per solid angle and lower Lorentz factors. It is interesting to note that the calorimetric estimates mentioned earlier limit the total amount of energy ejected regardless of the nature of the flow. More generally, typically during the afterglow matter moving with a lower Lorentz factor emits lower frequencies. Hence by comparing the relative beaming of afterglow emission in different wavelengths one can estimate the relative beaming factors, $f^{-1}_b(E)$, at different wavelengths and hence at different energies. @NakarPiran03a use various searches for orphan afterglows to limit this energy to be at most comparable to the GRB energy. This implies that the total energy of matter moving at a Lorentz factor of $\sim 40$ is at most comparable to the energy of matter moving with a Lorentz factor of a few hundred and producing the GRB itself. At present limits on optical orphan afterglows are insufficient to set significant limits on matter moving at slower rates, while as mentioned earlier radio observations already limit the overall energy output.
These observations won’t limit, of course, the energy emitted in gravitational radiation, neutrinos, cosmic rays or very high energy photons that may be emitted simultaneously by the source and influence the source’s energy budget without influencing the afterglow.
THE GLOBAL PICTURE - GENERALLY ACCEPTED INGREDIENTS {#sec:accepted}
====================================================
There are several generally accepted ingredients in practically all current GRB models.
[**Relativistic Motion:**]{} Practically all current GRB models involve relativistic motion with a Lorentz factor, $\Gamma > 100$. This is essential to overcome the compactness problem (see §\[sec:comp\] below). At first this understanding was based only on theoretical arguments. However, now there are direct observational proofs of this concept: it is now generally accepted that both the radio scintillation [@Goodman97] and the lower frequency self-absorption [@KP97] provide independent estimates of the size of the afterglow, $\sim
10^{17}$cm, two weeks after the burst. These observations imply that the afterglow has indeed expanded relativistically. @SP99a suggested that the optical flash accompanying GRB 990123 provided direct evidence for ultra-relativistic motion with $\Gamma \sim 100$. @SoderbergRamirezRuiz03 find a higher value: $1000 \pm 100$. However, these interpretations are model dependent.
The relativistic motion implies that we observe blue shifted photons which are significantly softer in the moving rest frame. It also implies that when the object has a size $R$ the observed emission arrives on a typical time scale of $R/c \Gamma^2$ (see §\[sec:Temporal\]). Relativistic beaming also implies that we observe only a small fraction ($1/\Gamma$) of the source. As I discussed earlier (see §\[sec:Energetics\] and also §\[sec:patchy-shell\]) this has important implications for our ability to estimate the total energy of GRBs.
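The arrival-time compression is worth a numerical illustration. The sketch below (Python) assumes $\Gamma = 100$ and the $\sim 10^{13}-10^{15}$cm dissipation radii quoted in the summary at the end of this section:

```python
C_LIGHT = 3.0e10  # speed of light, cm/s

def observed_timescale(R_cm, gamma):
    """Observed arrival time scale ~ R / (c Gamma^2) for emission at radius R."""
    return R_cm / (C_LIGHT * gamma ** 2)

# Assumed values: Gamma = 100, internal-dissipation radii of 1e13-1e15 cm
for R in (1e13, 1e15):
    print(f"R = {R:.0e} cm -> t_obs ~ {observed_timescale(R, 100):.2f} s")
```

The resulting time scales (a few hundredths of a second to a few seconds) are comparable to observed GRB variability and durations.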
While all models are based on ultra-relativistic motion, none explains convincingly (this is clearly a subjective statement) how this relativistic motion is attained. There is no agreement even on the nature of the relativistic flow. While in some models the energy is carried out in the form of kinetic energy of baryonic outflow in others it is a Poynting dominated flow or both.
[**Dissipation:**]{} In most models the energy of the relativistic flow is dissipated and this provides the energy needed for the GRB and the subsequent afterglow. The dissipation is in the form of (collisionless) shocks, possibly via plasma instabilities. There is a general agreement that the afterglow is produced via external shocks with the circumburst matter (see §\[sec:afterglow\]). There is convincing evidence (see [*e.g.*]{} @Fenimoreetal96 [@SP97; @Ramirez-Ruiz_Fenimore00; @Piran_Nakar02] and §\[sec:ex-int\] below) that in most bursts the dissipation during the GRB phase takes place via internal shocks, namely shocks within the relativistic flow itself. Some (see e.g. @Dermer_Mitman99 [@Begelman99; @RuffiniEtal01; @Dar03]) disagree with this statement.
[**Synchrotron Radiation:**]{} Most models (both of the GRB and the afterglow) are based on synchrotron emission from relativistic electrons accelerated within the shocks. There is reasonable agreement between the predictions of the synchrotron model and afterglow observations [@Wijers_Galama98; @GPS99a; @PanaitescuK01]. These are also supported by measurements of linear polarization in several optical afterglows (see §\[sec:Obs-opt\]). As for the GRB itself there are various worries about the validity of this model. In particular there are some inconsistencies between the observed spectral slopes and those predicted by the synchrotron model (see [@Preece02] and §\[sec:spec-obs\]). The main alternative to synchrotron emission is synchrotron-self Compton [@Waxman97a; @Ghisellini_Celotti99] or inverse Compton of external light [@Shemi94; @Brainerd94; @ShavivDar95; @Lazzatietal03]. The last model requires, of course, a reasonable source of external light.
[**Jets and Collimation:**]{} Achromatic breaks appear in many afterglow light curves. These breaks are interpreted as “jet breaks" due to the sideways beaming of the relativistic emission [@PanaitescuMeszaros99; @Rhoads99; @SPH99] (when the Lorentz factor drops below $1/\theta_0$ the radiation is beamed outside of the original jet, reducing the observed flux) and due to the sideways spreading of a beamed flow [@Rhoads99; @SPH99]. An alternative interpretation is of different viewing angles of a “universal structured jet" [@Lipunov_Postnov_Pro01; @Rossi02; @Zhang02] whose energy varies with the angle. Both interpretations suggest that GRBs are beamed. However, they give different estimates of the overall rate and the energies of GRBs (see §\[sec:structured\] below). In either case the energy involved in GRBs is smaller than the naively interpreted isotropic energy and the rate is higher than the observed rate.
[**A (Newborn) Compact Object:**]{} If one accepts the beaming interpretation of the breaks in the optical light curve, the total energy release in GRBs is $\sim 10^{51}$ergs [@Frail01; @PanaitescuK01]. It is higher if, as some models suggest, the beaming interpretation is wrong or if a significant amount of additional energy (which does not contribute to the GRB or to the afterglow) is emitted from the source. This energy, $\sim 10^{51}$ergs, is comparable to the energy released in a supernova. It indicates that the process must involve a compact object. No other known source can release so much energy within such a short time scale. The process requires a dissipation of $\sim 0.1 m_\odot$ within the central engine over a period of a few seconds. The sudden appearance of so much matter in the vicinity of the compact object suggests a violent process, one that most likely involves the birth of the compact object itself.
[**Association with Star Formation and SNe:**]{} Afterglow observations, which exist for a subset of relatively bright long bursts, show that GRBs arise within galaxies with a high star formation rate (see [@Djorgovski01b] and §\[sec:hosts\]). Within the galaxies the burst distribution follows the light distribution [@Bloom02]. This has led to the understanding that (long) GRBs arise from the collapse of massive stars (see §\[sec:Collapsar\]). This understanding has been confirmed by the appearance of SN bumps in the afterglow light curves (see §\[sec:obs-SN\] earlier) and in particular by the associations of SN 1998bw with GRB 980425 and of SN 2003dh with GRB 030329.
[**Summary:**]{} Based on these generally accepted ideas one can sketch the following generic GRB model: GRBs are a rare phenomenon observed within star forming regions and associated with the death of massive stars and the birth of compact objects. The emission arises from internal dissipation within a relativistic flow. This takes place at distances of $\sim 10^{13}-10^{15}$cm from the central source that produces the relativistic outflow. Subsequent dissipation of the remaining energy due to interaction with the surrounding circumburst matter produces the afterglow. The nature of the “inner engine" is not resolved yet; however, the association with SNe (like 1998bw and 2003dh) shows that long GRBs involve a collapsing star. Much less is known about the origin of short GRBs.
RELATIVISTIC EFFECTS {#sec:rel}
=====================
Compactness and relativistic motion {#sec:comp}
------------------------------------
The first theoretical clues to the necessity of relativistic motion in GRBs arose from the Compactness problem [@Ruderman75]. The conceptual argument is simple. GRBs show a non thermal spectrum with a significant high energy tail (see §\[sec:spec-obs\]). On the other hand a naive calculation implies that the source is optically thick. The fluctuations on a time scale $\delta t$ imply that the source is smaller than $c
\delta t$. Given an observed flux $F$, a duration $T$, and a distance $d$ we can estimate the energy $E$ at the source. For a typical photon’s energy $\bar E_\gamma$ this yields a photon density $ \approx 4 \pi d^2 F / \bar E_\gamma c^3 \delta t^2$. Now, two photons can annihilate and produce e$^+$e$^-$ pairs if the energy in their CM frame is larger than $2 m_e c^2$. The optical depth for pair creation is: $$\tau_{\gamma\gamma} \approx { f_{e^\pm} \sigma_T 4 \pi d^2 F
\over \bar E_\gamma c^2 \delta t}$$ where $f_{e^\pm}$ is a numerical factor denoting the average probability that a photon will collide with another photon whose energy is sufficient for pair creation. For typical values and cosmological distances, the resulting optical depth is extremely large, $\tau_{\gamma\gamma} \sim 10^{15}$ [@Piran95]. This is, of course, inconsistent with the observed non-thermal spectrum.
The compactness problem can be resolved if the emitting matter is moving relativistically towards the observer. I denote the Lorentz factor of the motion by $\G$. Two corrections appear in this case. First, the observed photons are blue shifted and therefore their energy at the source frame is lower by a factor $\G$. Second, the implied size of a source moving towards us with a Lorentz factor $\G$ is $c \delta t \G^2$ (see §\[sec:Temporal\] below). The first effect modifies $f_{e^\pm}$ by a factor $\G^{-2 \alpha}$, where $\alpha$ is the photon index of the observed spectrum (namely, the number of observed photons per unit energy is proportional to $E^{-\alpha}$). The second effect modifies the density estimate by a factor $\G^{-4}$ and it influences the optical depth as $\G^{-2}$. Together one finds that for $\alpha \sim 2$ one needs $\G \gtrsim 100$ to obtain an optically thin source.
The requirement that the source be optically thin can be used to obtain direct limits from specific bursts on the minimal Lorentz factor within those bursts [@Krolik_Pier91; @FenimoreEpsteinHo93; @Woods_Loeb95; @Piran95; @Baring_Harding97; @P99; @Lithwick_Sari01]. A complete calculation requires a detailed integration over angular integrals and over the energy dependent pair production cross section. The minimal Lorentz factor depends also on the maximal photon energy, $E_{\rm max}$, the upper energy cutoff of the spectrum. @Lithwick_Sari01 provide a detailed comparison of the different calculations and point out various flaws in some of the previous estimates. They find that: $$\tau_{\gamma\gamma} = {11\over 180} {\sigma_T d^2 (m_e
c^2)^{-\alpha+1}{\cal F}\over c^2 \delta T (\alpha-1)} (
{E_{\rm max}\over m_ec^2})^{\alpha-1} \G^{-(2\alpha + 2)}
(1+z)^{2\alpha-2} \ ,
\label{opt}$$ where the high end of the observed photon flux is given by ${\cal
F} E^{-\alpha}$ (photons per cm$^2$ per sec per unit photon energy). A lower limit on $\G$ is obtained by equating Eq. \[opt\] to unity.
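The scaling derived in the previous paragraph (the optical depth is reduced by $\G^{-2\alpha}$ from the blueshift and by $\G^{-2}$ from the implied source size) can be turned into a one-line estimate of the minimal Lorentz factor. A minimal sketch, assuming the stationary-source optical depth $\sim 10^{15}$ quoted above:

```python
def gamma_min(tau_static=1e15, alpha=2.0):
    """Minimal Lorentz factor for an optically thin source.

    Relativistic motion reduces the pair-production optical depth by
    Gamma^(-2*alpha) (blueshift of the photon energies) times Gamma^(-2)
    (larger implied source size), so tau(Gamma) = tau_static * Gamma**-(2*alpha + 2).
    Setting tau = 1 gives the lower limit on Gamma.
    """
    return tau_static ** (1.0 / (2.0 * alpha + 2.0))

print(gamma_min())  # a few hundred for tau_static ~ 1e15 and alpha = 2
```

For $\alpha = 2$ this gives $\G_{\rm min} = (10^{15})^{1/6} \approx 300$, consistent with the $\G \gtrsim 100$ estimate of §\[sec:comp\].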
Relativistic time effects {#sec:Temporal}
--------------------------
Consider first a source moving relativistically with a constant velocity along a line towards the observer and two photons emitted at $R_1$ and $R_2$. The first photon (emitted at $R_1$) will reach the observer a time $(R_2-R_1)/v-(R_2-R_1)/c$ before the second photon (emitted at $R_2$). For $\G \gg 1$ this equals $\approx (R_2-R_1)/2 c \G^2$. This allows us to associate an “observer time" $R/2 c \G^2$ with the distance $R$ and for this reason I have associated a scale $c \delta t \G^{2}$ with fluctuations on a time scale $\delta t$ in the optical depth equation earlier (see §\[sec:comp\]). This last relation should be modified if the source moves with a varying velocity ($v=v(R)$). Now $$\delta t_{12} \approx \int_{R_1}^{R_2}\frac{ dR}{ 2 c \G^2(R)} \ ,$$ which reduces to $$T_R \approx R/ 2 c \G^2 \ , \label{Rt}$$ for motion with a constant velocity. The difference between a constant velocity source and a decelerating source introduces a numerical factor of order eight which is important during the afterglow phase [@Sari97].
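The integral above is easy to check numerically. A sketch in cgs units; the decelerating profile $\G \propto R^{-3/2}$ is an illustrative choice (appropriate for an adiabatic blast wave), not taken from the text:

```python
c = 3e10  # speed of light, cm/s

def observer_time(R, gamma_of_R, n=100000):
    """Numerically integrate dt = dR / (2 c Gamma(R)^2) from 0 to R
    using the midpoint rule."""
    dr = R / n
    return sum(dr / (2.0 * c * gamma_of_R((i + 0.5) * dr) ** 2)
               for i in range(n))

R, G = 1e16, 100.0

# Constant velocity: recovers T_R = R / (2 c Gamma^2).
t_const = observer_time(R, lambda r: G)

# Decelerating flow with Gamma ∝ R^(-3/2) (illustrative): the integral
# gives R / (8 c Gamma(R)^2), an order-unity factor below the constant case.
t_dec = observer_time(R, lambda r: G * (r / R) ** -1.5)
```

For $R = 10^{16}$cm and $\G = 100$ the constant-velocity case gives $\approx 17$s, and the decelerating case is a factor of four shorter, illustrating the order-unity correction mentioned above.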
Consider now a relativistically expanding spherical shell, or at least a shell that is locally spherical (on a scale larger than $1/\G$). Emission from parts of the shell moving at angle $\theta$ relative to the line of sight to the observer will arrive later with a time delay $R(1-\cos \theta)/c$. For small angles this time delay equals $R \theta^2/2 c$. As the radiation is beamed with an effective beaming angle $\approx 1/\G$ most of the radiation will arrive within a typical angular time scale: $$T_{ang} \equiv R /2 c \G^2 \ . \label{tang}$$ The combination of time delay and blueshift implies that if the emitted spectrum is a power law spectrum with a spectral index $\alpha$ then the observed signal from the instantaneous emission of a thin shell will decay at late times as a power law with $t^{-(2 - \alpha)}$ [@Fenimoreetal96; @NakarPiran03a]. The observed pulse from an instantaneous flash from a thin shell is shown in Fig. \[fig:thinshell\].
As I discuss later (see §\[sec:ex-int\]) the similarity between the angular time scale and the radial time scale plays a crucial role in GRB models.
Relativistic Beaming and the Patchy Shell Model {#sec:patchy-shell}
------------------------------------------------
The radiation from a relativistic source is beamed with a typical beaming angle $1/\Gamma$. This implies that if the source is expanding radially with an ultra-relativistic speed, a given observer “sees" radiation only from a region that is within $\Gamma^{-1}$ of its line of sight to the source. If the radius of the emitting region is $R$ the observer will see radiation from a region of size $R/\Gamma$. Since $\Gamma$ is extremely large during the GRB we observe emission only from a small fraction of the emitting shell. It is possible, and even likely, that the conditions within the small region that we observe will be different from the average ones across the shell. This means that the conditions that we infer won’t reflect the true average conditions within this particular GRB.
An interesting point related to the internal shocks model (discussed later) in this context is the following. According to the internal shocks model individual pulses are obtained by collisions between individual shells. Here the inhomogeneity of individual shells could be wiped out when the contributions of different hot spots from different shells are added. Alternatively, the “inner engine" may produce a consistent angular pattern in which the hot spot is in the same position in all shells; in this case averaging won’t lead to a cancellation of the patchy shell structure.
Within the internal-external model the GRB is produced by internal shocks in which only the relative motion within the flow is dissipated. The bulk Lorentz factor remains unchanged. During the afterglow the shell is slowed down by external shocks. As the Lorentz factor decreases with time (see Eq. \[RGamma\]) we observe a larger and larger fraction of the emitting region, until $\Gamma \approx \theta^{-1}$, where $\theta$ is the angular size of the whole emitting region - the GRB jet, see §\[sec:jets\]. This has several inevitable implications. If the initial relativistic flow is inhomogeneous on a small angular scale then different observers looking at the same GRB (from different viewing angles) will see different light curves. A strong burst to one observer might look weak to another one if it is located at an angle larger than $1/\Gamma$ from the first. The two observers will see similar conditions later on, during the afterglow, as then they will observe the same angular regions. This has the following implications: (i) Given that the GRB population originates from some ‘typical’ distribution we expect that fluctuations between different bursts at early times during the GRB will be larger than fluctuations observed at late times during the afterglow [@KP00b]. A direct consequence of this behaviour is the appearance of a bias in the observations of GRBs. As we are more likely to detect stronger events we will tend to identify bursts in which a ‘hot spot’ was pointing towards us during the GRB phase. If the original GRB shells are inhomogeneous this would inevitably lead to a bias in the estimates of the GRB emission as compared to the kinetic energy during the afterglow. (ii) As the afterglow slows down we observe a larger and larger region. The angular structure would produce a variability in the light curve with a typical time scale of $t$, the observed time.
These fluctuations will decay later as the Lorentz factor decreases and the observations are averaged over a larger viewing angle. @NakarPiranGranot03 have suggested that this is the source of the early fluctuations in the light curve of GRB 021004. @NakarOren03 modelled this process with a numerical simulation. They find that the fluctuating light curve of GRB 021004 can be nicely fitted by this model and that it also explains the correlated fluctuations in the polarization (see also [@Granot03]).
PHYSICAL PROCESSES {#sec:physical-Processes}
===================
The observed prompt emission must be generated by energetic particles that have been accelerated within the collisionless shocks. The most likely process is synchrotron emission, even though there is some evidence that a simple synchrotron spectrum does not fit all bursts [@Preece02] (but see, however, [@BarraudEtal03] who find consistency with the synchrotron model). I consider here the different physical ingredients that determine the emission process: particle acceleration, magnetic field amplification, synchrotron emission and inverse Compton emission that could be relevant in some cases.
Relativistic Shocks {#sec:shocks}
-------------------
Shocks involve sharp jumps in the physical conditions. Conservation of mass, energy and momentum determine the Hugoniot shock jump conditions across the relativistic shocks for the case when the upstream matter is cold (see e.g. @BLmc1): $$\begin{aligned}
n_2 = 4 \Gamma n_1 \\
\nonumber e_2 = 4 \Gamma^2 n_1 m_p c^2 \\
\Gamma_{sh}^2 = 2 \Gamma^2 \nonumber \label{jump}\end{aligned}$$ where $n_{1,2}$, $e_{1,2}$ are the number density and the energy density (measured in the local rest frame) of the matter upstream (region 1) and downstream (region 2). I have assumed that the energy density in region 1 is very small compared to the rest mass density. $\Ga$ is the Lorentz factor of the fluid just behind the shock and $\Ga_{sh}$ is the Lorentz factor of the shock front (both measured in the rest frame of the upstream fluid). The matter is compressed by a factor $\Gamma$ across a relativistic shock. The pressure, or the internal energy density, behind the shock is of order $\Gamma^2 n_1 m_p c^2$. Thus, in the shock’s rest frame the random “thermal" energy per particle downstream is of the same order as the kinetic energy per particle upstream (ahead of the shock). Put differently, the shock converts the ‘ordered’ kinetic energy to a comparable random kinetic energy. In an ultra-relativistic shock the downstream random velocities are ultra-relativistic.
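The jump conditions can be packaged as a small function. A minimal sketch in cgs units, using $e_2 \approx 4\Gamma^2 n_1 m_p c^2$ (the $4\Gamma n_1$ downstream particles each carry a random energy of order $\Gamma m_p c^2$, consistent with the internal energy density estimate in the text):

```python
M_P_C2 = 1.503e-3  # proton rest energy, erg

def shock_jump(gamma, n1):
    """Jump conditions across an ultra-relativistic shock with a cold
    upstream medium: returns the downstream number density n2 (cm^-3),
    the downstream energy density e2 (erg/cm^3) and the shock Lorentz
    factor Gamma_sh, for a fluid Lorentz factor gamma and an upstream
    number density n1 (cm^-3)."""
    n2 = 4.0 * gamma * n1
    e2 = 4.0 * gamma**2 * n1 * M_P_C2
    gamma_sh = (2.0 * gamma**2) ** 0.5
    return n2, e2, gamma_sh
```

For example, `shock_jump(100.0, 1.0)` gives a compression factor of $4\Gamma = 400$ and $\Gamma_{sh} = \sqrt 2\,\Gamma \approx 141$.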
Similar jump conditions can be derived for the magnetic fields across the shock. The parallel magnetic field (parallel to the shock front) $B_{||}$ is compressed and amplified: $$B_{||2} = \Gamma B_{||1}$$ The perpendicular magnetic field, $B_\bot$, remains unchanged.
The energy distribution of the (relativistic) electrons and the magnetic field behind the shock are needed to calculate the Synchrotron spectrum. In principle these parameters should be determined from the microscopic physical processes that take place in the shocks. However, it is difficult to estimate them from first principles. Instead I define two dimensionless parameters, $\epsilon_B$ and $\epsilon_e$, that incorporate our ignorance and uncertainties [@PacRho93; @Pi94; @SNP96]. It is commonly assumed that these energies are a constant fraction of the internal energy behind the shock (see however, @DaigneMochkovitch03). I denote by $\epsilon_e$ and by $\epsilon_B$ the ratio between these energies and the total internal energy: $$\begin{aligned}
e_e \equiv \epsilon_e e = 4 \Gamma^2_{sh} \epsilon_e n_1 m_p c^2 \\
\nonumber e_B = B^2 /8 \pi \equiv \epsilon_B e = 4 \Gamma^2_{sh}
\epsilon_B n_1 m_p c^2 \label{epsilons}\end{aligned}$$
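Given $\epsilon_B$, Eq. \[epsilons\] fixes the magnetic field strength behind the shock. A minimal sketch in cgs units (the parameter values in the comment are illustrative):

```python
import math

M_P_C2 = 1.503e-3  # proton rest energy, erg

def equipartition_B(eps_B, gamma_sh, n1):
    """Magnetic field (Gauss) implied by Eq. [epsilons]:
    B^2 / 8 pi = eps_B * 4 * Gamma_sh^2 * n1 * m_p c^2."""
    return math.sqrt(32.0 * math.pi * eps_B * gamma_sh**2 * n1 * M_P_C2)

# e.g. eps_B = 0.01, Gamma_sh = 100, n1 = 1 cm^-3  ->  B ~ 4 G
```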
One usually assumes that these factors, $\epsilon_{e,B}$, are constant throughout the burst evolution. One may even expect that they should be constant from one burst to another (as they reflect similar underlying physical processes). However, it seems that a simple model that assumes that these parameters are constant during the prompt burst cannot reproduce the observed spectrum [@DaigneMochkovitch03]. This leads to explorations of models in which the equipartition parameters $\epsilon_{e,B}$ depend on the physical conditions within the matter.
In GRBs, as well as in SNRs, the shocks are collisionless. The densities are so small that the mean free path of the particles for collisions is larger than the typical size of the system. However, one expects that ordered or random magnetic fields, or alternatively plasma waves, will replace in these shocks the role of particle collisions. One can generally use in these cases the Larmor radius as a typical shock width. A remarkable feature of the above shock jump conditions is that, as they arise from general conservation laws, they are independent of the detailed conditions within the shocks and hence are expected to hold within collisionless shocks as well. See however [@Mitra96] for a discussion of the conditions for collisionless shocks in GRBs.
Particle Acceleration {#sec:acc}
----------------------
It is now generally accepted that cosmic rays (more specifically the lower energy component, below $10^{15}$eV) are accelerated within shocks in SNRs in the Galaxy (see e.g. @Gaisser91). A beautiful demonstration of this effect arises in the observation of synchrotron emission from supernova remnants, which shows emission from these accelerated particles within the shocks.
The common model for particle shock acceleration is the Diffuse Shock Acceleration (DSA) model. According to this model the particles are accelerated when they repeatedly cross a shock. Magnetic field irregularities keep scattering the particles back so that they keep crossing the same shock. The competition [@Fermi49] between the average energy gain, $\langle E_f/E_i \rangle$, per shock crossing cycle (upstream-downstream and back) and the escape probability per cycle, $P_{esc}$, leads to a power-law spectrum $N(E) dE \propto E^{-p} dE $ with $$p = 1+ \ln[1/(1-P_{esc})]/\ln[\langle E_f /E_i \rangle].
\label{acce}$$ Note that within the particle acceleration literature this index $p$ is usually denoted as $s$. Our notation follows the common notation within the GRB literature.
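Eq. \[acce\] can be evaluated directly. A sketch; the escape probability used below is an illustrative value chosen to reproduce $p \approx 2.2$ with $\langle E_f/E_i \rangle \approx 2$, not a number quoted in the text:

```python
import math

def dsa_index(P_esc, gain):
    """Power-law index p of N(E) ~ E^-p from the DSA competition
    between the escape probability per cycle, P_esc, and the mean
    energy gain per cycle, gain = <E_f / E_i> (Eq. [acce])."""
    return 1.0 + math.log(1.0 / (1.0 - P_esc)) / math.log(gain)

# An escape probability of ~0.56 per cycle together with a gain of ~2
# reproduces p ~ 2.2, the value quoted for ultra-relativistic shocks.
print(dsa_index(0.565, 2.0))
```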
@Blandford_Eichler87 review the theory of DSA in non-relativistic shocks. However, in GRBs the shocks are relativistic (mildly relativistic in internal shocks and extremely relativistic in external shocks). Acceleration in ultra-relativistic shocks has been discussed by several groups [@Heavens_Drury88; @Bednarz98; @Gallant99; @Achterberg_etal01; @Kirk00; @Vietri02]. In relativistic shocks the considerations are quite different from those in non-relativistic ones. Using the relativistic shock jump conditions (Eq. \[jump\]) and kinematic considerations one can find (see @Vietri95 [@Gallant_99; @Achterberg_etal01]) that the energy gain in the first shock crossing is of the order $\Ga^2_{sh}$. However, subsequent shock crossings are not as efficient and the energy gain is of order unity, $\langle E_f /E_i
\rangle \approx 2$ [@Gallant_99; @Achterberg_etal01].
The deflection process in the upstream region is due to a large scale smooth background magnetic field perturbed by MHD fluctuations. A tiny change of the particle’s momentum in the upstream region is sufficient for the shock to overtake the particle. Within the downstream region the momentum must change by a large angle before the particle overtakes the shock and reaches the upstream region. As the shock moves with a sub-relativistic velocity ($\approx c/\sqrt 3$) relative to this frame it is easy for a relativistic particle to overtake the shock. A finite fraction of the particles reach the upstream region. Repeated cycles of this type (in each one the particles gain a factor of $\sim 2$ in energy) lead to a power-law spectrum with $p \approx 2.2-2.3$ (for $\Gamma_{sh} \gg 1$). As in non-relativistic shocks this result is fairly robust and does not depend on specific assumptions about the scattering process. It was obtained by several groups using different approaches, including both numerical simulations and analytic considerations. The insensitivity of this result arises naturally from the logarithmic dependence in equation \[acce\] and from the fact that both the denominator and the numerator are of order unity. This result agrees nicely with what was inferred from the GRB spectrum [@Sari_Piran_97MNRAS] or with the afterglow spectrum [@PanaitescuK01]. @Ostrowski02 point out, however, that this result requires highly turbulent conditions downstream of the shock. If the turbulence is weaker the resulting energy spectrum could be much steeper. Additionally, as internal shocks are only mildly relativistic, the conditions in these shocks might be different.
The maximal energy of the shock-accelerated particles can be obtained by comparing the age of the shock $R/c$ (in the upstream frame) with the duration of an acceleration cycle. For a simple magnetic deflection, this latter time is just half of the Larmor time, $ E/Z q_{e} B$ (in the same frame). The combination yields: $$E_{max} \approx Z q_{e} B R = 10^{20} {\rm eV} B_3 R_{15} \ ,
\label{emax_acc}$$ where the values that I have used in the last equality reflect the conditions within the reverse external shocks where UHECRs (Ultra High Energy Cosmic Rays) can be accelerated (see §\[sec:UHECRs\] below). For particle diffusion in a random upstream field (with a diffusion length $l$) one finds that $R$ in the above equation is replaced by $\sqrt{R l /3}$.
The acceleration process has to compete with radiation losses of the accelerated particles. Synchrotron losses are inevitable as they occur within the same magnetic field that is essential for deflecting the particles. Comparing the energy loss rate with the energy gain one obtains a maximal energy of: $$E_{max} \approx m c^2 \left( {4 \pi q_{e} \Ga_{sh} \over \sigma_T
B } \right )^{1/2} \approx 5 \cdot 10^{17} {\rm eV} (m/m_p)
\G_{100}^{1/2} B^{-1/2} \label{Emax_syn} .$$ The corresponding Lorentz factor is of the order of $10^8$ for $\Ga_{sh}=100$ and $B=1$ Gauss. Note that this formula assumes that the acceleration time is the Larmor time and hence that the synchrotron cooling time is equal to the Larmor time. Obviously it should be modified by a numerical factor which is most likely of order unity.
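Both limits can be evaluated with cgs constants. A sketch (order-of-magnitude only; numerical prefactors of order unity are dropped, as in the text):

```python
Q_E = 4.803e-10       # elementary charge, esu
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_P_C2 = 1.503e-3     # proton rest energy, erg
ERG_PER_EV = 1.602e-12

def e_max_confinement(B, R, Z=1):
    """Eq. [emax_acc]: acceleration limited by the shock age,
    E_max ~ Z q_e B R, in eV (B in Gauss, R in cm)."""
    return Z * Q_E * B * R / ERG_PER_EV

def e_max_synchrotron(B, gamma_sh, mc2=M_P_C2):
    """Eq. [Emax_syn]: acceleration limited by synchrotron losses,
    in eV; defaults to protons. Order-of-magnitude estimate only."""
    return mc2 * (4 * 3.1416 * Q_E * gamma_sh / (SIGMA_T * B)) ** 0.5 / ERG_PER_EV

print(e_max_confinement(1e3, 1e15))   # ~1e20 eV for B_3 = R_15 = 1
print(e_max_synchrotron(1.0, 100.0))  # ~1e18 eV for protons, Gamma_sh = 100, B = 1 G
```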
Synchrotron {#sec:sync}
------------
Synchrotron radiation plays, most likely, an important role in both the GRB and its afterglow. An important feature of synchrotron emission is its polarization (see §\[sec:pol\_theory\]). Observations of polarization in GRB afterglows and in one case in the prompt emission support the idea that synchrotron emission is indeed taking place there (note, however, that IC also produces polarized emission). I review here the basic features of synchrotron emission, focusing on aspects relevant to GRBs. I refer the reader to @Rybicki_Lightman79 for a more detailed discussion.
### Frequency and Power
The typical energy of synchrotron photons as well as the synchrotron cooling time depend on the Lorentz factor $\gamma_e$ of the relativistic electron under consideration and on the strength of the magnetic field. If the emitting material moves with a Lorentz factor $\Gamma$ the photons are blue shifted. The characteristic photon energy in the observer frame is given by: $$\label{syn_obs} (h\nu_{syn})_{obs}=\frac{\hbar q_eB}{m_ec}\gamma
_e^2\Ga ,$$ where $q_e$ is the electron’s charge.
The power emitted, in the local frame, by a single electron due to synchrotron radiation is: $$P_{syn} = \frac 4 3 \sigma_T c U_B \gamma_e^2 \ ,
\label{syn_power}$$ where $U_B \equiv B^2/8 \pi \equiv \epsilon_B e$ is the magnetic energy density and $\sigma_T$ is the Thomson cross section. The cooling time of the electron in the fluid frame is then $\gamma_e
m_e c^2/P$. The observed cooling time $t_{syn}$ is shorter by a factor of $\Ga$: $$\label{cooling} t_{syn}(\gamma_e) =\frac{3 m_e c}{4\sigma _T
U_B\gamma_e\Ga}.$$
Substituting the value of $ \gamma_e$ from equation \[syn\_obs\] into the cooling rate Eq. \[cooling\] one obtains the cooling time scale as a function of the observed photon energy: $$\label{tausyn2} t_{syn} (\nu) = \frac{3}{\sigma_T} \sqrt{\frac{ 2
\pi c m_e q_e} {B^{3} \Ga}} \nu^{-1/2}$$
Since $\ga_e$ does not appear explicitly in this equation $t_{syn}$ at a given observed frequency is independent of the electrons’ energy distribution within the shock. This is provided, of course, that there are electrons with the required $\ga_e$ so that there will be emission in the frequency considered. As long as there is such an electron the cooling time is “universal”. This equation shows a characteristic scaling of $t_{syn} (\nu) \propto \nu^{-1/2}$. This is not very different from the observed relation $\delta T \propto
\nu^{-0.4}$ [@Fenimore95]. However, it is unlikely that cooling, rather than an intrinsic physical process, determines the temporal profile.
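The substitution leading to Eq. \[tausyn2\] can be verified numerically: inverting Eq. \[syn\_obs\] for $\gamma_e$ and inserting the result into Eq. \[cooling\] should reproduce the closed form. A sketch in cgs units:

```python
import math

Q_E = 4.803e-10       # elementary charge, esu
M_E = 9.109e-28       # electron mass, g
C = 2.998e10          # speed of light, cm/s
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2

def t_syn_from_cooling(nu, B, Gamma):
    """Observed cooling time at observed frequency nu (Hz): invert
    Eq. [syn_obs] for gamma_e, then apply Eq. [cooling]."""
    gamma_e = math.sqrt(2 * math.pi * nu * M_E * C / (Q_E * B * Gamma))
    U_B = B**2 / (8 * math.pi)
    return 3 * M_E * C / (4 * SIGMA_T * U_B * gamma_e * Gamma)

def t_syn_closed_form(nu, B, Gamma):
    """Eq. [tausyn2] evaluated directly."""
    return (3 / SIGMA_T) * math.sqrt(
        2 * math.pi * C * M_E * Q_E / (B**3 * Gamma)) / math.sqrt(nu)

# The two expressions agree for any (nu, B, Gamma), confirming the
# substitution in the text and the t_syn ∝ nu^(-1/2) scaling.
```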
The cooling time calculated above sets a lower limit to the variability time scale of a GRB since the burst cannot possibly contain spikes that are shorter than its cooling time. Observations of GRBs typically show asymmetric spikes in the intensity variation, where a peak generally has a fast rise and a slower decay. A plausible explanation of this observation is that the shock heating of the electrons happens rapidly (though episodically), and that the rise time of a spike is related to the heating time. The decay time is then set by the cooling, so that the widths of spikes directly measure the cooling time. However, it seems that there are problems with this simple explanation. First, when plugging in reasonable parameters one finds that the decay time implied by this equation is too short. Second, if the cooling time is long the shocked region would suffer adiabatic losses and this would reduce the efficiency of the process. Thus it is unlikely that the pulse shapes can be explained by synchrotron physics alone.
### The Optically thin Synchrotron Spectrum {#sec:synch-spec}
The instantaneous synchrotron spectrum of a single relativistic electron with an initial energy $\ga_e m_e c^2$ is approximately a power law with $F_\nu \propto \nu^{1/3}$ up to $\nu_{syn}(\ga_e)$ and an exponential decay above it. The peak power occurs at $\nu_{syn}(\gamma_e)$, where it has the approximate value $$\label{flux}
P_{\nu,max}\approx\frac{P(\gamma_e)}{\nu_{syn}(\gamma_e)}= \frac
{m_e c^2 \sigma_T} {3 q_e} \Gamma B .$$ Note that $P_{\nu,max}$ does not depend on $\gamma_e$, whereas the position of the peak does.
If the electron is energetic it will cool rapidly until it will reach $\ga_{e,c}$, the Lorentz factor of an electron that cools on a hydrodynamic time scale. For a rapidly cooling electron we have to consider the time integrated spectrum. For an initial Lorentz factor $\ga_e$: $F_\nu \propto \nu^{-1/2}$ for $\nu_{syn}(\ga_{e,c}) < \nu < \nu_{syn}(\ga_e)$.
To calculate the overall spectrum due to the electrons one needs to integrate over the electron’s Lorentz factor distribution. I consider first, following [@SPN98a], a power-law distribution with a power index $p$ and a minimal Lorentz factor $\ga_{e,min}$. This is, of course, the simplest distribution and as discussed in §\[sec:acc\] this is the expected distribution of shock accelerated particles: $$N(\gamma_e )\sim \gamma_e^{-p}\ \ \ {\rm for}\ \gamma_e
>\gamma _{e,min}\;.
\label{e_distribution}$$ The condition $p>2$ is required so that the energy does not diverge at large $\gamma_e$ (@DaiCheng01 [@Bhattacharya01] consider also distributions with $2>p>1$ with a maximal energy cutoff, see below). The minimum Lorentz factor, $\gamma _{e,min}$, of the distribution is related to the electron’s energy density $e_e$ and the electron’s number density $n_e$ as: $$\gamma_{e,min}= {p-2\over p-1}{e_e \over n_e m_e c^2}=
{p-2\over p-1}
\langle\gamma_e\rangle. \label{gemin}$$ The minimal Lorentz factor plays an important role as it characterizes the ‘typical’ electron’s Lorentz factor and the corresponding ‘typical’ synchrotron frequency, $\nu_m \equiv
\nu_{syn}(\ga_{e,min})$. Interestingly the upper energy cutoff (which must exist somewhere) does not play a critical role in the spectrum for $p>2$. Of course it will lead to a high frequency cutoff of the spectrum around the $\nu_{syn}$ that corresponds to this energy. However, quite generally, this happens at the high energy tail far from where the peak flux or the peak energy are emitted.
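The relation between $\ga_{e,min}$ and the mean Lorentz factor in Eq. \[gemin\] can be checked by direct integration over the distribution of Eq. \[e\_distribution\]. A sketch (the cutoff $\gamma_{max} = 10^7$ is an arbitrary numerical choice; for $p>2$ the result is insensitive to it):

```python
import math

def mean_gamma(p, gamma_min, gamma_max=1e7, n=200000):
    """Mean Lorentz factor of N(gamma) ~ gamma^-p above gamma_min,
    by midpoint integration on a logarithmic grid."""
    lg_min, lg_max = math.log(gamma_min), math.log(gamma_max)
    dlg = (lg_max - lg_min) / n
    num = den = 0.0
    for i in range(n):
        g = math.exp(lg_min + (i + 0.5) * dlg)
        w = g ** (-p) * g * dlg      # N(gamma) dgamma on a log grid
        num += w * g
        den += w
    return num / den

# Eq. [gemin] predicts <gamma> = [(p-1)/(p-2)] * gamma_min for p > 2;
# e.g. p = 2.5 gives <gamma> ~ 3 * gamma_min.
```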
A simple modification of the above idea arises if only a fraction, $\xi_e$, of the electrons is accelerated to high energies and the rest of the electrons remain cold [@BykovMeszaros96; @Guetta_Spada_Waxman01]. If a small fraction of electrons shares the energy $e_e$ then the typical Lorentz factor would be $\xi_e^{-1} \ga_{e,min}$, where $\ga_{e,min}$ is calculated from Eq. \[gemin\] above. All the corresponding places where $\ga_{e,min}$ is used should be modified according to this factor. At the same time fewer electrons will be radiating. This will introduce a factor $\xi_e$ that should multiply the total emitted flux. In the following discussion I will not add these factors into the analysis. Similarly in situations when multiple pairs are formed [@Ghisellini_Celotti99] the electron’s energy is shared by a larger number of electrons. In this case $\xi_e$ is larger than unity and similar modifications of the spectrum apply.
The lowest part of the spectrum (strictly speaking the lowest part of the optically thin spectrum, as at very low frequencies self absorption sets in, see §\[Sec:self-abs\] below) is always the sum of the contributions of the tails of all the electrons’ emission: $F_\nu \propto \nu^{1/3}$. This is typical of synchrotron [@Meszaros-Rees93; @Katz94; @CohenKatzP98] and is independent of the exact shape of the electron distribution. @Tavani96a [@Tavani96b], for example, obtain such a low energy spectrum both for a Gaussian distribution and for a Gaussian with a high energy power-law tail. The observation of bursts (about 1/5 of the bursts) with a steeper spectrum at the lower energy part, i.e. below the “synchrotron line of death" [@PreeceEtal98; @Preece02], is one of the problems that this model faces. The problem is even more severe: in order for the GRB to radiate efficiently it must be in the fast cooling regime (otherwise the efficiency would be very low), and the relevant low energy spectrum will then be $\propto \nu^{-1/2}$ [@CohenKatzP98; @GhiselliniCelottiLazzati00]. However, as stressed earlier (see §\[sec:spec-obs\]) this problem is not seen in any of the HETE spectra, whose low energy tails always have slopes within the proper synchrotron range [@BarraudEtal03], and it might be an artifact of the low energy resolution of BATSE in this energy range [@CohenKatzP98].
On the other hand the most energetic electrons will always be cooling rapidly (independently of the behavior of the “typical electron”). These electrons emit practically all their energy $m_e c^2 \gamma$, at their synchrotron frequency. The number of electrons with Lorentz factors $\sim\gamma$ is $\propto\gamma^{1-p}$ and their energy $\propto\gamma^{2-p}$. As these electrons cool, they deposit most of their energy into a frequency range $\sim\nu_{syn}(\gamma)\propto\gamma^2$ and therefore $F_{\nu}\propto\gamma^{-p}\propto\nu^{-p/2}$. Thus the uppermost part of the spectrum will satisfy: $$F_\nu = N[\gamma(\nu)] m_e c^2 \gamma(\nu) d\gamma /d\nu \propto
\nu^{-p/2}. \label{fastfnu}$$
In the intermediate frequency region the spectrum differs between a ‘slow cooling’ if the ‘typical’ electrons with $\ga_{e,min}$ do not cool on a hydrodynamic time scale and ‘fast cooling’ if they do. The critical parameter that determines if the electrons are cooling fast or slow is $\ga_{e,c}$, the Lorentz factor of an electron that cools on a hydrodynamic time scale. To estimate $\ga_{e,c}$ compare $t_{syn}$ (Eq. \[cooling\]) with $t_{hyd}$, the hydrodynamic time scale (in the observer’s rest frame): $$\ga_{e,c} = {{3m_{e} c }\over{4 \sigma _{T} U_{B}\Ga t_{hyd}}}
\label{ga_c}$$ For fast cooling $\ga_{e,min} > \ga_{e,c}$, while $\ga_{e,min} <
\ga_{e,c}$ for slow cooling. In the following discussion two important frequencies play a dominant role: $$\begin{aligned}
\nu_{m} \equiv \nu_{syn}(\ga_{e,min}) \ ; \\ \nonumber
\nu_{c} \equiv \nu_{syn}(\ga_{e,c}) \ . \label{numc}\end{aligned}$$ These are the synchrotron frequencies of electrons with $\ga_{e,min}$ and with $\ga_{e,c}$.
[**Fast cooling ($\ga_{e,c} < \ga_{e,min}$):**]{} The typical electron is cooling rapidly hence $\nu_c < \nu_m$. The low frequency spectrum $F_\nu \propto \nu^{1/3}$ extends up to $\nu_c$. In the intermediate range, between $\nu_c$ and $\nu_m$, we observe the energy of all the cooling electrons. Since the energy of an electron is $\propto\gamma$ while its typical frequency is $\propto\gamma^2$, the flux per unit frequency is $\propto\gamma^{-1}\propto \nu^{-1/2}$. Overall the observed flux, $F_\nu$, is given by: $$\label{spectrumfast}
F_\nu \propto \cases{ ( \nu / \nu_c )^{1/3} F_{\nu,max}, &
$\nu<\nu_c$, \cr ( \nu / \nu_c )^{-1/2} F_{\nu,max}, &
$\nu_c<\nu<\nu_m$, \cr ( \nu_m / \nu_c )^{-1/2} ( \nu /
\nu_m)^{-p/2} F_{\nu,max}, & $\nu_m<\nu$, \cr }$$ where $\nu_m \equiv \nu_{syn}(\gamma_{e,min}), \nu_{c} \equiv
\nu_{syn}(\gamma_{e,c})$ and $F_{\nu,max}$ is the observed peak flux. The peak flux, attained at $\nu_c$, is $F_{\nu,max}\equiv N_e
P_{\nu,max}/4\pi D^2$ (where $D$ is the distance to the source and I ignore cosmological corrections). The power emitted is simply the power given to the electrons, that is $\epsilon_e $ times the power generated by the shock, $dE/dt$: $$P_{fast} = \epsilon_e {dE \over dt} . \label{Pfast-cool}$$ The peak energy emitted (which corresponds to the peak of $\nu
F_\nu$) is at $\nu_m$. The resulting spectrum is shown in Fig. \[fig:full\_spectrum\].
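The fast cooling broken power law of Eq. \[spectrumfast\] is straightforward to encode; the sketch below is a direct transcription (the normalization $F_{\nu,max}$ and the index $p$ are placeholders):

```python
def f_nu_fast(nu, nu_c, nu_m, f_max=1.0, p=2.5):
    """Fast-cooling synchrotron spectrum, Eq. [spectrumfast] (nu_c < nu_m)."""
    if nu < nu_c:
        return (nu / nu_c) ** (1.0 / 3.0) * f_max          # low-frequency tail
    if nu < nu_m:
        return (nu / nu_c) ** (-0.5) * f_max               # cooling segment
    return (nu_m / nu_c) ** (-0.5) * (nu / nu_m) ** (-p / 2.0) * f_max
```

The three segments join continuously at the breaks: $F_\nu$ peaks at $\nu_c$, while $\nu F_\nu$ (for $p>2$) peaks at $\nu_m$, as stated above.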
[**Slow cooling ($\ga_{e,c} > \ga_{e,min}$):**]{} Now only the high energy tail of the distribution (those electrons above $\ga_{e,c}$) cools efficiently. The electrons with $\gamma_e\sim\gamma_{e,min}$, which form the bulk of the population, do not cool. Now $F_\nu \propto \nu^{1/3}$ up to $\nu_m$, and $F_\nu \propto \nu^{-p/2}$ above $\nu_c$. In the intermediate region between these two frequencies: $$F_\nu = N[\gamma(\nu)]\, P[\gamma(\nu)]\, d\gamma /d\nu \propto
\nu^{-(p-1)/2} ,$$ where $\gamma(\nu)$ is the Lorentz factor for which the synchrotron frequency equals $\nu$, $N[\ga]$ is the number of electrons with a Lorentz factor $\ga$ and $P[\ga]$ the power emitted by an electron with $\ga$. Overall one finds: $$\label{spectrumslow} F_\nu \propto \cases{ (\nu/\nu_m)^{1/3}
F_{\nu,max},
& $\nu<\nu_m$, \cr
(\nu/\nu_m)^{-(p-1)/2} F_{\nu,max},
& $\nu_m<\nu<\nu_c$, \cr
\left( \nu_c/\nu_m \right)^{-(p-1)/2} \left( \nu/\nu_c
\right)^{-p/2} F_{\nu,max},
& $\nu_c<\nu$. \cr
}$$ The peak flux is at $\nu_m$ while the peak energy emitted is at $\nu_c$. The emitted power is determined by the ability of the electrons to radiate their energy: $$P_{slow} = N_e P_{syn} (\ga_{e,min}) \label{Pslow-cool}$$ where, $N_e$ is the number of electrons in the emitting region and $P_{syn} (\ga_{e,min}) $, the synchrotron power of an electron with $\ga_{e,min}$, is given by Eq. \[syn\_power\].
Typical spectra corresponding to fast and slow cooling are shown in Fig. \[fig:full\_spectrum\]. The light curve depends on the hydrodynamic evolution, which in turn determines the time dependence of $\nu_m,\nu_c$ and $F_{\nu,max}$. The spectra presented here are composed of broken power laws. @GranotSari02 present more accurate spectra in which the asymptotic power law segments are connected by smooth curves. They fit the transitions by $ [(\nu/\nu_b)^{-n\beta_1}
+(\nu/\nu_b)^{-n \beta_2}]^{-1/n}$. The parameter $n$ characterizes the smoothness of the transition, with $n \approx 1$ for all transitions.
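This smoothly broken power law can be sketched as follows. Here $\beta_1,\beta_2$ are the asymptotic spectral slopes ($F_\nu\propto\nu^\beta$) below and above the break $\nu_b$; the helper `log_slope` and the numerical values in the comments are illustrative, not from the source:

```python
import math

def smooth_bpl(nu, nu_b, beta1, beta2, n=1.0):
    """[(nu/nu_b)^(-n b1) + (nu/nu_b)^(-n b2)]^(-1/n); larger n = sharper break."""
    x = nu / nu_b
    return (x ** (-n * beta1) + x ** (-n * beta2)) ** (-1.0 / n)

def log_slope(f, nu, eps=1e-4):
    """Numerical spectral slope d ln F / d ln nu."""
    return (math.log(f(nu * (1 + eps))) - math.log(f(nu * (1 - eps)))) \
        / math.log((1 + eps) / (1 - eps))

# slow-cooling intermediate/low break as an example: beta1 = 1/3, beta2 = -(p-1)/2
f = lambda nu: smooth_bpl(nu, 1.0e14, 1.0 / 3.0, -0.75)
# far below the break the slope approaches 1/3; far above, -(p-1)/2
```

Far from the break the function recovers the two pure power laws, and $n$ only controls how wide the transition region is.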
Fast cooling must take place during the GRB itself: the relativistic shocks must emit their energy effectively, otherwise there will be a serious inefficiency problem. Additionally, the burst won’t be variable if the cooling time is too long. The electrons must cool rapidly and release all their energy. It is most likely that during the early stages of an external shock (that is within the afterglow phase - provided that it arises due to external shocks) there will be a transition from fast to slow cooling [@MR97; @Waxman97a; @MesReesWei97; @Waxman97b; @KP97].
@Tavani96a [@Tavani96b] discusses the synchrotron spectrum from a Gaussian electron distribution and from a Gaussian electron distribution with a high energy tail. As mentioned earlier the Gaussian (thermal) distribution has a typical low frequency $\nu^{1/3}$ spectrum. However, as expected, there is a sharp exponential cutoff at high frequencies. Without a high energy tail this spectrum does not fit the observed spectra of most GRBs (see §\[sec:spec-obs\]). Note, however, that it may fit a small subgroup with a NHE [@Pendleton_NHE97]. With an electron distribution composed of a Gaussian and an added high energy tail the resulting spectrum has the typical $\nu^{1/3}$ component and an additional high energy tail which depends on the electrons’ power law index. Such spectra fit several observed GRB spectra [@Tavani96a; @Tavani96b].
Another variant is the synchrotron spectrum from a power-law electron distribution with $1<p<2$ [@Bhattacharya01; @DaiCheng01]. In this case there must be a high energy cutoff $\gamma_{e,max}$ and the ‘typical’ electron’s energy corresponds to this upper cutoff. A possible cutoff can arise from synchrotron losses at the energy where the acceleration time equals the energy loss time (see e.g. @deJagerEtal96 and the discussion in §\[sec:acc\]): $$\gamma_{e,Max} \approx 4 \times 10^7 B^{-1/2} \ .$$ The resulting “typical" Lorentz factor $\gamma_{e,min}$ differs now from the one given by Eq. \[gemin\]. @DaiCheng01 [@Bhattacharya01] find that it is replaced with: $$\gamma_{e,min}=
\left[\left(\frac{2-p}{p-1}\right)\left(\frac{m_p}{m_e}\right)\epsilon_e
\Gamma\gamma_{e,Max}^{p-2}\right]^{1/(p-1)} \ . \label{gemin1}$$ The resulting spectrum is now similar to the one obtained for fast or slow cooling, with the new critical frequency $\nu_m$ given by plugging the result of Eq. \[gemin1\] into Eq. \[numc\].
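Eq. \[gemin1\] can be sketched numerically as below. All parameter values ($p$, $\epsilon_e$, $\Gamma$, $B$) are illustrative placeholders, and $\Gamma$ here stands for whatever Lorentz factor enters the original Eq. \[gemin\]:

```python
M_P_OVER_M_E = 1836.15   # proton-to-electron mass ratio

def gamma_max(B):
    """High-energy cutoff from synchrotron losses: ~ 4e7 B^(-1/2)."""
    return 4.0e7 * B ** -0.5

def gamma_min_flat(p, eps_e, Gamma, B):
    """Eq. [gemin1]: 'typical' Lorentz factor for a flat distribution, 1 < p < 2."""
    g_max = gamma_max(B)
    base = ((2.0 - p) / (p - 1.0)) * M_P_OVER_M_E * eps_e * Gamma \
        * g_max ** (p - 2.0)
    return base ** (1.0 / (p - 1.0))
```

For example, with the (arbitrary) choice $p=1.5$, $\epsilon_e=0.1$, $\Gamma=100$ and $B=1$ G one finds $\gamma_{e,min}$ of order ten, well below $\gamma_{e,Max}$ as the construction requires.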
### Synchrotron Self-Absorption {#Sec:self-abs}
At low frequencies synchrotron self-absorption may take place. It leads to a steep cutoff of the low energy spectrum, either as the commonly known $\nu^{5/2}$ or as $\nu^2$. To estimate the self absorption frequency one needs the optical depth along the line of sight. A simple approximation is: $\alpha'_{\nu'}R/\Gamma$ where $\alpha'_{\nu'}$ is the absorption coefficient [@Rybicki_Lightman79]: $$\label{alpha_nu} \alpha'_{\nu'} = {(p+2) \over 8 \pi m_e
\nu'^2}\int^{\infty}_{\gamma_{min}} d\gamma_e
P'_{\nu',e}(\gamma_e){n(\gamma_e) \over \gamma_e} \ \ .$$ The self absorption frequency $\nu_a$ satisfies: $\alpha'_{\nu'_0} R/\Gamma=1$. It can be estimated only once we have a model for the hydrodynamics that determines how $R$ and $\Gamma$ vary with time [@Wijers_Galama98; @GPS99b].
The spectrum below the self-absorption frequency depends on the electron distribution. One obtains the well known [@Rybicki_Lightman79] $\nu^{5/2}$ when the synchrotron frequency of the electron emitting the self absorbed radiation is inside the self absorption range. One obtains $\nu^2$ if the radiation within the self-absorption frequency range is due to the low energy tail of electrons that are radiating effectively at higher energies. For this latter case, which is more appropriate for GRB afterglow (for slow cooling with $\nu_m < \nu_c$) [@PacRho93; @Meszaros-Rees93; @Katz94; @KP97]: $$F_\nu \propto \nu^2 [k_B T_e / (\Gamma m_p c^2)] {R^2},
\label{sa_spec}$$ where $R$ is the radius of the radiating shell and the factor $k_B T_e / (\Gamma m_p c^2)$ describes the degree of electron equipartition in the plasma, shock-heated to an internal energy per particle $m_p c^2$ and moving with Lorentz factor $\Gamma$.
The situation is slightly different for a shock-heated, fast cooling plasma, i.e. if $\nu_c < \nu_m$ [@GPS00]. In this case we expect the electron distribution to be inhomogeneous, as electrons near the shock have not yet cooled while electrons further downstream have. This leads to a new spectral range $\nu_{sa}
< \nu < \nu_{sa'}$ with $F_\nu \propto \nu^{11/8}$ (see Fig. \[fig:full\_spectrum\]).
Synchrotron self-absorption is probably irrelevant during the GRB itself. Note, however, that under extreme conditions the self absorption frequency might reach the observed range, and this may explain the steep low energy spectra seen in some bursts. These extreme conditions are needed in order to make the system optically thick to synchrotron radiation while keeping it optically thin to Thomson scattering and pair creation [@GPS00]. Self absorption appears regularly during the afterglow and is observed typically in the radio emission [@Katz94; @Waxman97b; @KP97; @Wijers_Galama98; @GPS99b]. The expected fast cooling self-absorbed spectrum may arise in the early radio afterglow; so far it has not been observed.
Inverse Compton {#sec:IC}
---------------
Inverse Compton (IC) scattering may modify our analysis in several ways. IC can influence the spectrum even if the system is optically thin (as it must be) to Compton scattering (see e.g. @Rybicki_Lightman79). In view of the high energies involved a photon is IC scattered only once. After a single IC scattering the photon’s energy is so high that in the electron’s rest frame it is above the Klein-Nishina energy ($m_e c^2 \sim
0.5$ MeV), and the decrease in the Compton cross section in this energy range makes a second scattering unlikely. Note that in some cases ([*e.g.*]{} in forward external shocks) even the first scattering may suffer from this problem. The effect of IC depends on the Comptonization parameter $Y=\gamma^2 \tau_e$. For fast cooling one can show [@SNP96] that $Y$ satisfies: $$\begin{aligned}
Y= {U_e/U _B}~~~ & \ \ {\rm if }\ \ & U_e \ll U _B\\
\nonumber Y= \sqrt{U _e/U_B} & \ \ {\rm if }\ \ & U_e \gg U _B ,
\nonumber\end{aligned}$$ where $U_e$ and $U_B$ are the energy densities of the electrons and of the magnetic field respectively. IC is unimportant if $Y<1$ and in this case it can be ignored.
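A minimal transcription of this piecewise Compton parameter for the fast cooling case:

```python
def compton_y(U_e, U_B):
    """Fast-cooling Compton parameter: Y = U_e/U_B when U_e << U_B,
    and Y = sqrt(U_e/U_B) when U_e >> U_B (matched at U_e = U_B)."""
    r = U_e / U_B
    return r if r <= 1.0 else r ** 0.5
```

IC can be ignored exactly when the returned $Y<1$, i.e. when $U_e < U_B$; in the opposite regime $Y=\sqrt{U_e/U_B}>1$ and IC carries a significant fraction of the energy.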
If $Y>1$, which corresponds to $U_e > U_B$ (or to $\epsilon_e >
\epsilon_{B}$) and to $Y=\sqrt{U_e/U_B}$, then a large fraction of the low energy synchrotron radiation will be up scattered by IC and a large fraction of the energy will be emitted via the IC processes. Those IC photons might be too energetic, that is their energy may be far beyond the observed energy range. In this case IC will not influence the observed spectra directly. However, as IC will take a significant fraction of the energy of the cooling electrons it will influence the observations in two ways. First, it will shorten the cooling time (the emitting electrons will be cooled by both the synchrotron and the IC processes). Second, assuming that the observed $\ga$-ray photons result from synchrotron emission, IC will influence the overall energy budget and reduce the efficiency of the production of the observed radiation. I turn now to each of these cases.
An IC scattering boosts the energy of the photon by a factor $\gamma^2_e$. A typical synchrotron photon that has been scattered once by IC will be observed at the energy: $$\label{IC_obs} (h\nu_{IC})_{obs}=\frac{\hbar q_eB}{m_ec}\gamma
_e^4 \Ga .$$ The electrons are cooled both by synchrotron and by IC. The latter is more efficient and the cooling is enhanced by the Compton parameter $Y$. The cooling time scale is: $$t_{IC}={\frac{6 \pi c^{3/4}
\sqrt{U_B/U_e} \hbar^{1/4} m_e^{3/4}{q_e}^{1/4}} {B^{7/4}
(h \nu)^{1/4} \Ga^{3/4} \sigma_T}}
\label{tIC}$$
The conditions needed to produce the observed emission using IC are probably not fulfilled in either external or internal shocks (see however @Ghisellini_Celotti99 and the discussion in §\[sec:QIC\] below). However even if IC does not produce the observed $\ga$-ray photons it still influences the process if $Y>
1$. First it will add an ultra high energy component to the GRB spectrum. This component will typically be at around $\ga_e^2$ times the observed $\sim 100$ keV photons, namely in the GeV-TeV range (see e.g. @Vietri97 [@Bottcher_Dermer98] and the discussion in §\[sec:TeV\]). This component might have already been observed in some GRBs during the early afterglow (see §\[sec:spec-obs\]). Inverse Compton will also speed up the cooling of the emitting regions and shorten the cooling time, $t_{syn}$, estimated earlier (Eq. \[tausyn2\]), by a factor of $Y$. At the same time this also reduces the efficiency (for producing the observed $\gamma$-rays) by the same factor.
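The quoted GeV-TeV range follows from simple arithmetic: a photon of energy $E_{syn}$ scattered once gains a factor $\ga_e^2$. A sketch ($\ga_e=10^3$ is an illustrative value, not from the source):

```python
def ic_boosted_energy_GeV(E_syn_keV, gamma_e):
    """Single IC scattering boosts the photon energy by gamma_e**2.
    Input in keV, output in GeV (1 GeV = 1e6 keV)."""
    return E_syn_keV * gamma_e**2 / 1.0e6

# a ~100 keV synchrotron photon scattered by a gamma_e ~ 1000 electron
# lands at ~100 GeV, i.e. in the GeV-TeV band quoted in the text
```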
Quasi-Thermal Comptonization {#sec:QIC}
----------------------------
@Ghisellini_Celotti99 suggested that the prompt GRB emission arises in a quasi-thermal Comptonization process. In their model the optical depth within the emitting region (of internal shocks) is of order unity, leading to a copious pair production. The system is optically thick to synchrotron emission. The self-absorbed synchrotron emission is the seed for an Inverse Compton emission produced by the pairs. The effective Compton parameter in the new system, $\tilde Y$, is: $$\tilde Y \equiv 4 \tau ({ kT' \over m_e c^2}) (1 + \tau ) [ 1 + 4
( {kT' \over m_e c^2})] \label{tildeY},$$ where $T'$ is the effective temperature of the pairs and $\tau$ is the total optical depth for scattering. The pairs act as a thermostat, controlling the effective temperature within the emitting region to 30-300 keV [@Svensson82; @Svensson84]. The resulting spectrum in this model is a flat spectrum $F_\nu
\propto \nu^0$ between $h \nu_{sa} \Ga $ and $k T' \Ga$ [@Ghisellini_Celotti99]. The spectrum will evolve rapidly during the burst while the pairs are being created and the effective temperature decreases.
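Eq. \[tildeY\] is straightforward to evaluate; the sketch below uses $\tau=1$ and $kT'=100$ keV purely as illustrative values within the 30-300 keV thermostat range:

```python
M_E_C2_KEV = 511.0   # electron rest energy in keV

def y_tilde(tau, kT_keV):
    """Effective Compton parameter of Eq. [tildeY]."""
    t = kT_keV / M_E_C2_KEV
    return 4.0 * tau * t * (1.0 + tau) * (1.0 + 4.0 * t)

# tau = 1, kT' = 100 keV gives y_tilde ~ 3: efficient Comptonization,
# while a thin, cool region (tau = 0.1, kT' = 30 keV) gives y_tilde << 1
```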
Polarization from Relativistically Moving Sources {#sec:pol_theory}
-------------------------------------------------
Polarization can provide information on both the emission process and on the geometry of the emitting regions. Usually the observed polarization is obtained by first integrating the Stokes parameters of the radiation emitted by the individual electrons over the electron distribution. This yields the local polarization. Then we integrate over the emitting region to obtain the global polarization. In GRBs (both in the prompt emission and in the afterglow) the emitting regions move relativistically towards the observer. The implied Lorentz transformations play a very important role in the second integration, as they change the direction of propagation of the photons and hence the direction of the local polarization. The final results are sometimes surprising and counter intuitive. For example, even if the intrinsic (local) emission is 100% polarized in the same direction, the integration over the emitting region reduces this to 70% polarization. I consider polarization from synchrotron emission here, but the results can be easily applied to IC as well. I apply the results derived in this section to the possible polarization from the prompt emission and from the afterglow in the corresponding sections §\[sec:pol\_prompt\] and \[sec:pol\_after\].
As an example I consider synchrotron emission. Synchrotron emission is polarized, and the intrinsic local polarization level depends on the spectral index, $p$, of the energy distribution of the emitting electrons [@Rybicki_Lightman79]. For typical values ($2<p<3$) it can reach 75%. The polarization vector is perpendicular to the magnetic field and, of course, to the direction of the emitted radiation. The formalism can be easily adapted also to Inverse Compton, for which the intrinsic local polarization is higher and can reach 100% when the photons are scattered at 90$^o$.
Consider first a case where the magnetic field is uniform locally (over a region of angular size $\Gamma^{-1}$). This could happen, for example, if we have an ordered magnetic field along the $\phi$ direction and the observer is more than $\Gamma^{-1}$ away from the symmetry axis. This would be the case within internal shocks if the magnetic field is dragged from the source, or within several Poynting flux dominated models. The locally emitted polarization is uniform, in the plane of the sky and perpendicular to the direction of the magnetic field. In a Newtonian system it would combine so that the observed polarization equals the emitted one. However, the Lorentz transformations induce their own signature on the observed polarization [@GranotKonigl03; @Granot03]. This is depicted in Fig. \[fig:pol\_uniform\]. It is clear from this figure that the polarization vector varies along the observed region (whose angular size is $1/\Gamma$). Consequently the observed global polarization will be smaller than the local polarization.
The observed Stokes parameters are weighted averages of the local Stokes parameters at different regions of the shell. The instantaneous polarization is calculated using the instantaneous observed flux $ F_{\nu }(y,T)\propto (1+y)^{-(3+\alpha )} $, with $\alpha$ the relevant spectral index at this segment, as the weights, where $ y \equiv (\Gamma \theta )^{2} $ and $ T $ is the observer time. The time integrated polarization is calculated using the fluences as weights: $ \int ^{\infty }_{0}F_{\nu
}(y,T)dT\propto (1+y)^{-(2+\alpha)} $.
The fluxes depend on how the intensity varies with the magnetic field. For $ I_\nu \propto B^0 $, which is relevant for fast cooling[^5] (and the prompt GRB), the time integrated Stokes parameters (note that $ V=0 $ as the polarization is linear) and the polarization are given by: $$\label{Eq QU ordered} \frac{\left\{ \begin{array}{c}Q \\U \\
\end{array}\right\}}{I}=
\Pi _{synch} \frac{\int _{0}^{2\pi }\int _{0}^{\infty }(1+y)^{-(2+\alpha
)}\left\{ \begin{array}{c} \cos(2\theta _{p}) \\ \sin(2\theta _{p}) \\
\end{array}\right\}dyd\phi }{\int _{0}^{2\pi }\int _{0}^{\infty
}(1+y)
^{-(2+\alpha )}dyd\phi } ,$$ and the relative polarization is given by $$\label{Eq Pi} \Pi
=\frac{\sqrt{U^{2}+Q^{2}}}{I},$$ where $ \theta _{p}=\phi +\arctan(\frac{1-y}{1+y}\cot\phi ) $ [@GranotKonigl03] (see also [@LyutikovPB03]). For $
\alpha =1 $ Eqs. \[Eq QU ordered\]-\[Eq Pi\] yield a polarization level of $ \Pi /\Pi _{synch}\approx 60\% $. I.e. 60% of the maximal synchrotron polarization, or an overall polarization of $\sim 45\%$. Taking the exact values of $\alpha$ and the dependence of $I_\nu$ on $B$ for fast cooling and $p=2.5$ results in an overall polarization of $\sim 50\%$ [@GranotKonigl03; @NakarPiranWaxman03].
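The $\approx 60\%$ figure can be checked by direct numerical integration of Eqs. \[Eq QU ordered\]-\[Eq Pi\] (with $\Pi_{synch}$ set to 1, so the result is $\Pi/\Pi_{synch}$). The substitution $s=1/(1+y)$ maps the $y$ integral onto $(0,1)$; the midpoint grid below is a crude but sufficient quadrature:

```python
import math

def ordered_field_polarization(alpha=1.0, n_s=400, n_phi=360):
    """Pi/Pi_synch for a locally uniform field, Eqs. [Eq QU ordered]-[Eq Pi]."""
    q = u = norm = 0.0
    for i in range(n_s):
        s = (i + 0.5) / n_s                    # s = 1/(1+y), y in (0, inf)
        y = 1.0 / s - 1.0
        w = s ** alpha                         # (1+y)^-(2+alpha) dy -> s^alpha ds
        for j in range(n_phi):
            phi = (j + 0.5) * 2.0 * math.pi / n_phi
            theta_p = phi + math.atan((1.0 - y) / (1.0 + y) / math.tan(phi))
            q += w * math.cos(2.0 * theta_p)
            u += w * math.sin(2.0 * theta_p)
            norm += w
    return math.hypot(q, u) / norm             # Eq. [Eq Pi]

# for alpha = 1 this evaluates to ~0.61, i.e. the quoted ~60% of Pi_synch
```

($U$ vanishes by the $\phi\rightarrow 2\pi-\phi$ symmetry of $\theta_p$, so the net polarization comes entirely from $Q$.)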
It turns out that one can get polarized emission even from a random magnetic field [@GruzinovWaxman99; @MedvedevLoeb99]. This happens if the system has a non spherical geometry. Consider a two dimensional random magnetic field that lies in the plane of the shock, and assume that the correlation length of this magnetic field is very short compared to all other length scales in the system. The Lorentz transformations induce in this case a radial polarization pattern going out from the center (where the velocity of the matter is towards the observer and the polarization vanishes). This polarization pattern is shown in Fig. \[fig:pol\_random\]. It is clear that a simple integration over this pattern will lead to a vanishing polarization.
However, a net polarization can arise in several cases if the overall symmetry is broken. Polarization will arise if (see Fig. \[fig:pol\_random\]):
- We observe a jet in an angle so that only a part of the jet is within an angle of $\Gamma^{-1}$.
- If the emission is nonuniform and there are stronger patches with angular size smaller than $\Gamma^{-1}$ from which most of the emission arises.
- We observe a standard jet whose emission is angle dependent, with the dependence varying on an angular scale of order $\Gamma^{-1}$.
@Gruzinov99 [@GhiselliniLazzati99; @Sari99; @Waxman03] suggested that polarization can arise from a jet even if the magnetic field is random. @NakarPiranWaxman03 considered a random magnetic field that remains planar in the plane of the shock (for a three dimensional random magnetic field the polarization essentially vanishes). For $I_\nu \propto B^{0} $ the degree of observed polarization of the emission emitted from a small region at angle $y$ is: $ \Pi (y)/\Pi _{synch}=min(y,1/y) $. The overall time integrated Stokes parameters are: $$\label{Eq QU rand} \frac{\left\{ \begin{array}{c}Q \\U \\
\end{array}\right\}}{I}=\Pi _{synch}\frac{\int _{0}^{2\pi
}\int_{0}^{\infty}P'_{\nu',m}(1+y)^{-(2+\alpha)}\min(y,1/y)\left\{
\begin{array}{c}
\cos(2\phi) \\ \sin(2\phi)\\
\end{array}\right\}dyd\phi }{\int _{0}^{2\pi }\int_{0}^{\infty }P'_{\nu
',m}(1+y)^{-(2+\alpha )}dyd\phi},$$ where $ P'_{\nu ',m}=P'_{\nu ',m}(y,\phi ) $ is the emitted power at the synchrotron frequency in the fluid rest frame. For a top-hat jet with sharp edges $ P'_{\nu ',m} $ is constant for any $ y $ and $ \phi $ within the jet and zero otherwise. For a structured jet $ P'_{\nu ',m} $ depends on the angle from the jet axis.
The maximal polarization is observed when one sees the edge of the jet. The probability to see the edge of a top-hat jet with sharp edges and an opening angle $ \theta _{j} \Gamma \gg 1 $ is negligible. On the other hand a jet with $ \theta _{j} \Gamma \ll
1 $ is not expected. Thus the only physical cases in which we can expect a large polarization are $ 1 \lesssim \theta _{j}\Gamma
<{\textrm{a few }} $.
Fig. \[fig:pol\_ran2\] depicts the time integrated polarization and the efficiency from sharp edged jets with different opening angles as a function of the angle between the jet axis and the line of sight, $ \theta _{obs} $. The efficiency, $ e_{ff} $, is defined to be the ratio between the observed fluence at $
\theta_{obs} $ and the maximal possible observed fluence at $
\theta _{obs}=0 $. In all these cases the polarization peaks above 40%; however, the efficiency decreases sharply as the polarization increases. Thus the probability to see high polarization grows when $ \theta _{j} $ decreases. The probability that $ \theta _{obs} $ is such that the polarization is larger than $ 30\% $ ($\cdot \Pi_{synch}$) while $ e_{ff}>0.1 $ is 0.68, 0.41, 0.2 & 0.08 for $ \theta _{j} \Gamma =0.5,1,2,4 $ respectively. In reality this probability will be smaller, as the chance to observe a burst increases with its observed flux.
These later calculations also apply for IC emission [@Lazzatietal03; @DarDeRujula03]. However, in this case the intrinsic local polarization is around 100% and hence one can reach a maximal polarization of $\sim 70$%.
Polarization could also arise if the magnetic field is uniform over random patches within a region of size $\Gamma^{-1}$. Here it is difficult, of course, to estimate the total polarization without a detailed model of the structure of the jet [@GruzinovWaxman99].
THE GRB AND THE PROMPT EMISSION {#sec:PROMPT}
===============================
I turn now to a discussion of the theory of the GRB and the prompt emission. It is generally accepted that both the GRB and the afterglow arise due to dissipation of the kinetic energy of the relativistic flow. The relativistic motion can be dissipated by either external [@MR92; @RM92; @Katz94] or internal shocks [@NPP92; @PaczynskiXu94; @MR94b]. The former involves slowing down by the external medium surrounding the burst. This would be the analogue of a supernova remnant, in which the ejecta is slowed down by the surrounding ISM. Like in SNRs, external shocks can dissipate all the kinetic energy of the relativistic flow. On the other hand, internal shocks are shocks within the flow itself. These take place when faster moving matter takes over a slower moving shell.
@SP97 have shown that external shocks cannot produce variable bursts (see also @Fenimoreetal96). By variable I mean here, following [@SP97], that $\delta t \ll T $, where $T$ is the overall duration of the burst (e.g. $T_{90}$) and $\delta t$ is the duration of a typical pulse (see §\[sec:temp-obs\]). As most GRBs are variable, @SP97 concluded that most GRBs are produced by internal shocks [@MR94b]. Internal shocks can dissipate only a fraction of the kinetic energy. Therefore, they must be accompanied by external shocks that follow and dissipate the remaining energy. This leads to the internal-external shocks scenario [@PiranSari98]: GRBs are produced by internal shocks within a relativistic flow, and subsequent external shocks between the flow and the circum-burst medium produce a smooth long lasting emission - the afterglow. Various observations (see §\[sec:transition-obs\]) support this picture. I begin the discussion with a comparison of internal vs. external shocks. I then review the prompt emission from internal shocks, followed by the prompt emission from external shocks (which includes contributions to the late part of long GRBs and the prompt optical flash). I also discuss the transition from the observations of one shock to the other.
Internal vs. External Shocks {#sec:ex-int}
-----------------------------
### General Considerations
Consider a “quasi” spherical relativistic emitting shell with a radius $R$, a width $\Delta$ and a Lorentz factor $\Gamma$. This can be a whole spherical shell or a spherical like section of a jet whose opening angle $\theta$ is larger than $\Gamma^{-1}$. Because of relativistic beaming an observer would observe radiation only from a region of angular size $\sim \Gamma^{-1}$. Consider now photons emitted at different points along the shock (see Fig. \[fig:times\]). Photons emitted by matter moving directly towards the observer (point A in Fig. \[fig:times\]) will arrive first. Photons emitted by matter moving at an angle $\Gamma^{-1}$ (point D in Fig. \[fig:times\]) would arrive after $t_{ang} = R/2c\Gamma^2$. This is also, $t_{R}$, the time of arrival of photons emitted by matter moving directly towards the observer but emitted at $2R$ (point C in Fig. \[fig:times\]). Thus, $t_{R} \approx t_{ang}$ [@SP97; @Fenimoreetal96]. This coincidence is the first part of the argument that rules out external shocks in variable GRBs.
At a given point particles are continuously accelerated and emit radiation as long as the shell with a width $\Delta$ is crossing this point. The photons emitted at the front of this shell will reach the observer a time $t_\Delta = \Delta /c$ before those emitted from the rear (point B in Fig. \[fig:times\]). In fact photons are emitted over a slightly longer time, as it takes some time for the accelerated electrons to cool. However, for most reasonable parameters the cooling time is much shorter than the other time scales [@SNP96] and I ignore it hereafter.
The emission from different angular points smoothes the signal on a time scale $t_{ang}$. If $t_\Delta \le t_{ang}\approx t_{R}$ the resulting burst will be smooth with a width $t_{ang}\approx
t_{R}$. The second part of this argument follows from the hydrodynamics of external shocks. I show later in §\[sec:Ex-shocks\] (see also @SP97) that for external shocks $\Delta/c \le R/c \Gamma^2 \approx t_{R} \approx t_{ang}$ and for a spreading shell $\Delta \approx R/c \Gamma^2$. Therefore external shocks can produce only smooth bursts!
As we find only two time scales and as the emission is smoothed over a time scale $t_{ang}$, a necessary condition for the production of a variable light curve is that $t_\Delta = \Delta/c
> t_{ang}$. In this case $t_\Delta$ would be the duration of the burst and $t_{ang}$ the variability time scale. This can be easily satisfied within internal shocks (see Fig \[fig:internal\_shocks\]). Consider an “inner engine" emitting a relativistic wind active over a time $t_\Delta =\Delta/c$ ($\Delta$ is the overall width of the flow in the observer frame). The source is variable on a scale $L /c$. Internal shocks will take place at $R_s \approx L \Gamma^2$. At this place the angular time and the radial time satisfy: $t_{ang} \approx t_{R} \approx
L/c $. Internal shocks continue as long as the source is active, thus the overall observed duration $T = t_\Delta$ reflects the time that the “inner engine" is active. Note that now $t_{ang}
\approx L/c < t_\Delta$ is trivially satisfied. The observed variability time scale in the light curve, $\delta t$, reflects the variability of the source, $L/c$, while the overall duration of the burst reflects the overall duration of the activity of the “inner engine".
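These scalings are easy to check numerically. The sketch below uses an illustrative engine with variability scale $L=c\,\delta t$ for $\delta t = 10$ ms, $\Gamma=100$ and 20 s of activity (all values fiducial):

```python
C_LIGHT = 2.998e10   # speed of light, cm/s

def internal_shock_scales(delta_t_source, Gamma, t_engine):
    """Radius and observed time scales for internal shocks."""
    L = C_LIGHT * delta_t_source               # source variability scale (cm)
    R_s = L * Gamma**2                         # collision radius, R_s ~ L Gamma^2
    t_ang = R_s / (2.0 * C_LIGHT * Gamma**2)   # = delta_t/2, of order L/c
    T = t_engine                               # burst duration tracks the engine
    return R_s, t_ang, T

R_s, t_ang, T = internal_shock_scales(0.01, 100.0, 20.0)
# t_ang ~ 5 ms sets the pulse width; T = 20 s is the burst duration,
# so t_ang << T and a highly variable burst is naturally produced
```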
Numerical simulations [@KPS97] have shown that not only the time scales are preserved but the source’s temporal behavior is reproduced on an almost one to one basis in the observed light curve. This can be explained now [@NakarPiran02c] by a simple toy model (see §\[sec:toy\] below).
### Caveats and Complications
Clearly the way to get around the previous argument is if $t_{ang}
< t_{R}$. In this case one can identify $t_{R}$ with the duration of the burst and $t_{ang}$ as the variability time scale. The observed variability would require in this case that: $t_{ang}/t_{R} = \delta t /T$. For this the emitting regions must be smaller than $R/\Gamma$.
One can imagine an inhomogeneous external medium which is clumpy on a scale $d \ll R/\Gamma$ (see Fig \[fig:clumps\]). Consider such a clump located at an angle $\theta \sim \Gamma^{-1}$ to the direction of motion of the matter towards the observer. The resulting angular time, which is the difference in arrival time between the first and the last photons emitted from this clump, would be $\sim d/c \Gamma $. Now $t_{ang} \sim {d/c\Gamma}< t_{R}$ and it seems that one can get around the argument presented before.
However, Sari and Piran [@SP97] have shown that such a configuration would be extremely inefficient. This, the third part of the argument, rules out this caveat. The observations limit the size of the clumps to $d \approx c \Gamma \delta t$ and the location of the shock to $R \approx c T \Gamma^2 $. The number of clumps within the observed angular cone with an opening angle $\Gamma^{-1}$ equals the number of pulses, which is of the order $T/\delta t$. The covering factor of the clumps can be directly estimated in terms of the observed parameters by multiplying the number of clumps ($T/\delta t$) times their area $d^2= (\delta t
\Gamma)^2$ and dividing by the cross section of the cone $(R/\Gamma)^2$. The resulting covering factor equals $ \delta t
/T \ll 1$. The efficiency of conversion of kinetic energy to $\gamma$-rays in this scenario is smaller than this covering factor which for a typical variable burst could be smaller than $10^{-2}$.
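The covering-factor estimate can be transcribed directly; the algebra collapses to $\delta t/T$, independent of $\Gamma$:

```python
C_LIGHT = 2.998e10   # speed of light, cm/s

def clump_covering_factor(T, delta_t, Gamma):
    """Sari & Piran (1997) efficiency bound for external shocks on clumps."""
    d = C_LIGHT * Gamma * delta_t        # clump size limited by pulse width
    R = C_LIGHT * T * Gamma**2           # shock radius from burst duration
    n_clumps = T / delta_t               # one clump per observed pulse
    return n_clumps * d**2 / (R / Gamma)**2    # algebraically = delta_t / T

# e.g. T = 50 s, delta_t = 0.5 s gives a covering factor of 0.01,
# regardless of the value of Gamma
```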
I turn now to several attempts to find a way around this result. I will not discuss here the feasibility of the suggested models (namely is it likely that the surrounding matter will be clumpy on the needed length scale [@Dermer_Mitman99], or can an inner engine eject “bullets" [@Begelman99] with an angular width of $\sim 10^{-2}$ degrees and what keeps these bullets so small even when they are shocked and heated). I examine only the question whether the observed temporal structure can arise within these models.
### External Shocks on a Clumpy Medium {#sec:Clumpy}
@Dermer_Mitman99 claim that the simple efficiency argument of @SP97 was flawed. They point out that if the direction of motion of a specific blob is almost exactly towards the observer the corresponding angular time will be of order $d^2/cR$ and not $d/{c\Gamma}$ used for a “generic” blob. This is narrower by a factor $d\Gamma/R$ than the angular time across the same blob that is located at a typical angle of $\Gamma^{-1}$. These special blobs would produce strong narrow peaks and will form a small region along a narrow cone with a larger covering factor. @Dermer_Mitman99 present a numerical simulation of light curves produced by external shocks on a clumpy inhomogeneous medium with $\delta t/ T \sim 10^{-2} $ and efficiency of up to $\sim 10$%.
A detailed analysis of the light curve poses, however, several problems for this model. While this result is marginal for bursts with $\delta t/T \sim 10^{-2}$ and a modulation of 50%, it is insufficient for bursts with $\delta t /T \sim 10^{-3}$ or if the modulation is $\sim 100\%$. Variability on a time scale of milliseconds has been observed [@NakarPiran02b] in many long GRBs (namely, $\delta t / T $ can be as small as $10^{-4}$). Moreover, in this case one would expect that earlier pulses (that arise from blobs along the direction of motion) would be narrower than later pulses. This is not seen in the observed bursts [@Ramirez-Ruiz_Fenimore00].
Finally, the arrival time of individual pulses depends on the positions of the emitting clumps relative to the observer. Two subsequent pulses arise from two different clumps that may be rather distant from each other, so there is no reason why the pulses and intervals should be correlated in any way. Recall (§\[sec:temp-obs\]) that the duration of a pulse and the subsequent interval are in fact correlated [@NakarPiran02a].
### The Shot-Gun Model {#sec:shot-gun}
@Begelman99 suggested that the “inner engine" operates as a shot-gun, emitting multiple narrow bullets with an angular size much smaller than $\Gamma^{-1}$ (see Fig \[fig:bullets\]). These bullets do not spread while propagating and they are slowed down rapidly by external shocks with a very dense circumburst medium. The pulse widths are given by $t_{ang}$ or by the slowing-down time, while the duration of the burst is determined by the time during which the “inner engine" emits the bullets.
This model can produce the observed variability and, as in the internal shocks model, the observed light curve here also reflects the temporal activity of the source. However, in this model the width of the pulses is determined by the angular time, the hydrodynamic time, or the cooling time of the shocked material, while the intervals between the pulses depend only on the activity of the inner engine. Again, there is no reason why the two distributions should be similar or why there should be a correlation between them (see §\[sec:temp-obs\] and [@NakarPiran02a]).
### Relativistic Turbulence
An interesting alternative to shocks as a way to dissipate kinetic energy is within plasma turbulence [@SmolskyUsov96; @SmolskyUsov00; @LyutikovBlandford02; @LyutikovBlandford03]. It has been suggested that in this case the kinetic energy of the shock is dissipated downstream to a combination of macroscopic (relativistic) random motion of plasma blobs with a Lorentz factor $\Gamma_b$. Within these blobs the particles have also a (relativistic) random velocity with a Lorentz factor $\Gamma_p$, such that: $\Gamma_s \approx \Gamma_b \Gamma_p$.
Relativistic turbulence may be the only way to produce variability in a situation in which the matter is slowed down by the external medium and not by internal interactions. I stress that in this case the process is not described by regular shocks and hence some of the previous arguments do not hold. Two crucial open questions are: i) whether one can produce the observed correlations between pulses and intervals; ii) why there is no spreading of pulses at later times, as would be expected if the emitting region is slowing down and increasing its radius.
Internal Shocks {#sec:In-shocks}
----------------
### Hydrodynamics of Internal Shocks {#sec:Int-hydro}
Internal shocks take place when a faster shell catches a slower one, namely at: $$R_{int} \approx c \delta t \Gamma^2 = 3 \times 10^{14} {\rm cm}
\Ga_{100}^2 \tilde \delta t \label{Rint}$$ where $\Ga_{100}$ is the typical Lorentz factor in units of $10^{2}$ and $\tilde \delta t$ is the time difference between the emission of the two shells. I show later that $\tilde \delta t$ defined here is roughly equal to the observed fluctuation time scale in the light curve of the burst, $\delta t$. Clearly $R_{int}<R_{ext}$ must hold, otherwise internal shocks won’t take place; $R_{ext}$ is defined as the location of efficient extraction of energy by external shocks (see §\[sec:Ex-shocks\]). It follows from the discussion in §\[sec:Ex-shocks\] that the condition $R_{int}<R_{ext}$ implies: $$\delta \Gamma^2 < {\rm max}( {l \over \Gamma^{2/3}}, l^{3/4}
\Delta^{1/4})$$ where $l$ is defined by Eq. \[Sedov\] and is typically of the order of $10^{18}$cm, while $\Delta$ is the width of the shell, of order $10^{12}$cm. Both conditions set upper limits on $\Gamma$ (of the order of a few thousand) for internal shocks: if the initial Lorentz factor is too large, internal shocks would take place at large radii and external shocks would take place before them. It is possible that this fact plays an important role in limiting the relevant Lorentz factors and hence the range of variability of $E_p$, the peak energy observed in GRBs.
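A quick numerical sanity check of Eq. \[Rint\] (a minimal Python sketch with cgs constants; the prefactor is the order-of-magnitude one used in the text) reproduces the quoted $3\times 10^{14}$ cm:

```python
C = 3e10  # speed of light [cm/s]

def r_int(gamma, dt):
    """Internal-shock radius R_int ~ c * dt * Gamma^2 for a shell pair
    ejected a time dt [s] apart, with typical Lorentz factor gamma."""
    return C * dt * gamma**2

# canonical values: Gamma = 100, dt ~ 1 s
print(f"R_int = {r_int(100.0, 1.0):.1e} cm")
```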
Internal shocks are characterized by a relative Lorentz factor of order a few ($1 < \Ga < 10$), reflecting the relative motion of the shells, and by comparable densities $n$ in both shells. In this case, for an adiabatic index $4/3$, the Lorentz factor of the shocked region, $\hat \Gamma$, satisfies: $$\label{internal_conditions} \hat\Ga =\sqrt{(\Ga^2+1)/2}\ \ .$$
The shocked density $\hat n$ and energy $\hat e$ are:
$${\hat n} = (4 {\hat \Ga} +3 ) n \approx 4 {\hat \Ga} n \ \ ; \ \
{\hat e} = {\hat \Ga} {\hat n} m_p c^2 \ .$$
Both shocks are mildly relativistic and their strength depends on the relative Lorentz factors of the two shells.
### The Efficiency of Internal Shocks {#sec:efficiency}
Consider a collision between two shells with masses $m_{r}$ and $m_{s}$ that are moving at different relativistic velocities: $\Gamma_r \gtrsim \Gamma_s \gg 1$. The resulting bulk Lorentz factor, $\Gamma_{m}$, in an inelastic collision is: $$\Ga_{m}\simeq
\sqrt{\frac{m_{r}\Ga_{r}+m_{s}\Ga_{s}}{m_{r}/\Ga_{r}+m_{s}/
\Ga_{s}}}. \label{gammam}$$ The internal energy of the merged shell, ${\cal E}_{int}$ in the local frame and $E_{int} =\Gamma_m{\cal E}_{int}$ in the frame of an external observer, is the difference between the kinetic energies before and after the collision: $$E_{int}=m_{r}c^{2}(\Ga_{r}-\Ga_{m})+m_{s}c^{2}(\Ga_{s}-\Ga_{m}).$$ The conversion efficiency of kinetic energy into internal energy is [@KPS97]: $$\epsilon =1-{(m_{r}+m_{s})\Gamma _{m} \over (m_{r}\Gamma
_{r}+m_{s} \Gamma _{s})} . \label{two-shell-efficiency}$$ As can be expected, conversion of a significant fraction of the initial kinetic energy to internal energy requires that the difference in velocities between the shells be significant, $\Ga_r \gg \Ga_s$, and that the two masses be comparable, $m_r
\approx m_s$ [@KPS97; @DaigneMochkovitch98].
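The two-shell kinematics are simple enough to evaluate directly. The sketch below (plain Python; the masses and Lorentz factors are illustrative values of my own choosing) implements Eqs. \[gammam\] and \[two-shell-efficiency\] and shows that comparable masses with a large Lorentz-factor contrast dissipate tens of percent of the kinetic energy, while very unequal masses dissipate only a few percent:

```python
import math

def gamma_merged(m_r, g_r, m_s, g_s):
    """Bulk Lorentz factor of the merged shell, Eq. (gammam)."""
    return math.sqrt((m_r * g_r + m_s * g_s) / (m_r / g_r + m_s / g_s))

def efficiency(m_r, g_r, m_s, g_s):
    """Fraction of kinetic energy converted to internal energy,
    Eq. (two-shell-efficiency)."""
    g_m = gamma_merged(m_r, g_r, m_s, g_s)
    return 1.0 - (m_r + m_s) * g_m / (m_r * g_r + m_s * g_s)

# equal masses with a large Lorentz-factor contrast: tens of percent dissipated
print(round(efficiency(1.0, 1000.0, 1.0, 100.0), 3))
# very unequal masses: only a few percent
print(round(efficiency(0.01, 1000.0, 1.0, 100.0), 3))
```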
@Beloborodov_efficiency_00 considered internal shocks between shells with a lognormal distribution of $(\Ga-1)/(\Ga_0-1)$, where $\Gamma_{0}$ is the average Lorentz factor. The dimensionless parameter, $A$, measures the width of the distribution. He shows that the efficiency increases and reaches unity when $A$ is of order unity, that is, when the typical fluctuations in $\Ga$ are a factor of 10 relative to the average. Similarly, numerical simulations of @Guetta_Spada_Waxman01 show that a significant fraction of the wind kinetic energy, on the order of 20%, can be converted to radiation, provided the distribution of Lorentz factors within the wind has a large variance and the minimum Lorentz factor is greater than $\approx 10^{2.5}L^{2/9}_{52}$, where $L_{52}$ is the (isotropic) wind luminosity in units of $10^{52}$ergs/sec.
Another problem involving the efficiency of GRBs is that not all the internal energy generated is emitted. This depends further on $\epsilon_e$, the fraction of the energy given to the electrons. If this fraction is small, and if there is no strong coupling between the electrons and the protons, the thermal energy of the shocked particles (which is stored in this case mostly in the protons) will not be radiated away. Instead it will be converted back to kinetic energy by adiabatic cooling. @KobayashiSari01 consider a more elaborate model in which colliding shells that do not emit all their internal energy are reflected from each other, causing subsequent collisions and thereby allowing more energy to be emitted than in the first collision alone. They obtain about 60% overall efficiency even if the fraction of energy that goes to electrons is small, $\epsilon_e=0.1$, provided that the shells’ Lorentz factors vary between 10 and 10$^4$.
### Light Curves from Internal Shocks {#sec:toy}
Both the similarity between the pulse width and the pulse separation distributions and the correlation between intervals and the subsequent pulses [@NakarPiran02a; @QuilliganEtal02] arise naturally within the internal shocks model [@NakarPiran02c]. In this model both the pulse duration and the separation between the pulses are determined by the same parameter - the interval between the emitted shells. I outline here the main argument (see @NakarPiran02c for details). Consider two shells with a separation $ L $. The Lorentz factor of the slower outer shell is $\Gamma_{S}=\Gamma $ and that of the faster inner shell is $ \Gamma_{F}=a\Gamma $ ($ a>2 $ for an efficient collision). Both are measured in the observer frame. The shells are ejected at $ t_{1} $ and $ t_{2}\approx t_{1}+L/c$. The collision takes place at a radius $ R_s\approx 2\Gamma ^{2}L $ (note that $ R_s $ does not depend on $ \Gamma _{F} $). The emitted photons from the collision will reach the observer at time (omitting the photons’ flight time, and assuming transparent shells):
$$\label{to} t_{o} \approx t_{1}+R_s/( 2c\Gamma ^{2})\approx
t_{1}+L/c \approx t_{2} \ .$$
The photons from this pulse are observed almost simultaneously with a (hypothetical) photon that was emitted from the “inner engine” together with the second shell (at $ t_{2} $). This explains why various numerical simulations [@KPS97; @DaigneMochkovitch98; @PanaitescuSpadaMeszaros99] find that for internal shocks the observed light curve replicates the temporal activity of the source.
In order to determine the time between the pulses we should consider multiple collisions. It turns out that there are just three types of collisions, (i), (ii) and (iii), that characterize the system, and all combinations of multiple collisions can be divided into these three types. Consider four shells emitted at times $ t_{i} $ ($ i=1,2,3,4 $) with a separation of the order of $ L $ between them. In type (i) there are two collisions - between the first and the second shells and between the third and the fourth shells. The first collision will be observed at $ t_{2}
$ while the second one will be observed at $ t_{4} $. Therefore, $
\Delta t\approx t_{4}-t_{2}\approx 2L/c $. A different collision scenario (ii) occurs if the second and the first shells collide, and afterward the third shell takes over and collides with them (the fourth shell does not play any role in this case). The first collision will be observed at $ t_{2} $ while the second one will be observed at $ t_{3} $. Therefore, $ \Delta t\approx
t_{3}-t_{2}\approx L/c. $ Numerical simulations [@NakarPiran02c] show that more than 80% of the efficient collisions follow one of these two scenarios ((i) or (ii)). Therefore one can estimate: $$\Delta t\approx L/c \ . \label{separation}$$ Note that this result is independent of the shells’ masses.
A third type of multiple collision (iii) arises if the third shell collides first with the second shell. Then the merged shell collides with the first one (again the fourth shell does not participate in this scenario). In this case the two pulses merge and arrive almost simultaneously, at the same time as a (hypothetical) photon that would have been emitted from the inner engine together with the third (fastest) shell: $t \sim t_3$. Only $\sim$20% of the efficient collisions are of this type.
The pulse width is determined by the angular time (ignoring the cooling time): $ \delta t=R_s/(2c\Gamma ^{2}_{s}) $ where $
\Gamma _{s} $ is the Lorentz factor of the shocked emitting region. If the shells have an equal mass ($ m_{1}=m_{2} $) then $
\Gamma _{s}=\sqrt{a}\Gamma $ while if they have equal energy ($
m_{1}=am_{2} $) then $ \Gamma _{s}\approx \Gamma $. Therefore: $$\delta t \approx
\left\{ \begin{array}{r@{\quad\quad}l}
R_s/2a\Gamma^{2}c\approx L/ac & \rm{equal \ mass}, \\
R_s/2\Gamma ^{2}c \approx L/c & \rm {equal \ energy}.
\end{array} \right .
\label{width}$$ The ratio of the Lorentz factors, $ a $, determines the collision’s efficiency. For an efficient collision the variations in the shells’ Lorentz factors (and therefore $ a $) must be large.
It follows from Eqs. \[separation\] and \[width\] that for equal energy shells the $ \Delta t $-$ \delta t $ similarity and correlation arise naturally from the reflection of the shells’ initial separation in both variables. However, for equal mass shells $ \delta t $ is shorter by a factor of $a$ than $ \Delta t
$. This shortens the pulses relative to the intervals. Additionally, the large variance of $a$ would wipe out the $\Delta
t $-$ \delta t$ correlation. This suggests that equal energy shells are more likely to produce the observed light curves.
External Shocks {#sec:Ex-shocks}
----------------
### Hydrodynamics {#sec:Ex-hydro}
Consider the situation when a cold relativistic shell (whose internal energy is negligible compared to the rest mass) moves into the cold ISM. Generally, two shocks form: an outgoing shock that propagates into the ISM or into the external shell, and a reverse shock that propagates into the inner shell, with a contact discontinuity between the shocked material (see Fig. \[shock\_profile\]).
This double-shock system comprises four distinct regions (see Fig. \[shock\_profile\]): the ambient matter at rest (denoted by the subscript 1), the shocked ambient matter that has passed through the forward shock (subscript 2 or f), the shocked shell material that has passed through the reverse shock (subscript 3 or r), and the unshocked material of the shell (subscript 4). The nature of the emitted radiation and the efficiency of the cooling processes depend on the conditions in the shocked regions 2 and 3. Both regions have the same energy density $e$. The particle densities $n_2$ and $n_3$ are, however, different, and hence the effective “temperatures,” i.e. the mean Lorentz factors of the random motions of the shocked protons and electrons, are different.
Two quantities determine the shocks’ structure: $\Ga$, the Lorentz factor of the motion of the inner expanding matter (denoted 4) relative to the outer matter (the ISM, or the outer shell in the case of internal collisions - denoted 1), and the ratio between the particle number densities in these regions, $n_4/n_1$. Initially the density contrast between the spherically expanding shell and the ISM is large, specifically $n_4/n_1 > \Ga^2$. This happens during the early phase of an external shock, when the shell is small and dense. This configuration is denoted “Newtonian” because the reverse shock is at most mildly relativistic. In this case all the energy conversion takes place in the forward shock; only a negligible fraction of the energy is converted to thermal energy in the reverse shock [@SaP95]. Let $\Gamma_2$ be the Lorentz factor of the motion of the shocked fluid relative to the rest frame of the fluid at 1 and let $\bar \Gamma_3$ be the Lorentz factor of the motion of this fluid relative to the rest frame of the relativistic shell (4): $$\Ga_2 \approx \Ga \ \ \ ; \ \ \ \bar \Gamma_{3} \approx 1 .
\label{nr1}$$ The particle and energy densities $(n, e)$ in the shocked regions satisfy: $$n_2 \approx 4 \Ga n_1, \ \ ; \ \ e \equiv e_2 = 4 \Ga^2 n_1
m_p c^2 \ \ ; \ \ n_3 = 7 n_4, \ \ ; \ \ e_3 = e . \label{nr3}$$
Later, the shell expands and the density ratio decreases (like $R^{-2}$ if the width of the shell is constant and like $R^{-3}$ if the shell is spreading) and $n_4/n_1 < \Ga^2$ (but $n_4/n_1>1$). In this case both the forward and the reverse shocks are relativistic. The shock equations between regions 1 and 2 combined with the contact discontinuity between 3 and 2 yield [@BLmc1; @BLmc2; @Pi94]: $$\Gamma_2 = (n_4/n_1)^{1/4} \Ga^{1/2} /\sqrt 2 \ \ ; \ \ n_2 = 4
\Gamma_2 n_1 \ \ ; \ \ e \equiv e_2 = 4 \Gamma^2_2 n_1 m_p c^2 ,
\label{cond12}$$ Similar relations hold for the reverse shock: $$\bar \Gamma_3 = (n_4/n_1)^{-1/4} \Ga^{1/2} /\sqrt 2 \ \ ; \ \ n_3
= 4 \bar \Gamma_{3} n_4. \label{cond34}$$ Additionally, $$e_3=e \ \ ; \ \ \bar \Gamma_3\cong (\Ga/\Ga _2+\Ga_2/\Ga)/2 \
,$$ which follow from the equality of pressures and velocity on the contact discontinuity. Comparable amounts of energy are converted to thermal energy in both shocks when both shocks are relativistic.
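A minimal numerical sketch of the relativistic two-shock conditions, Eqs. \[cond12\]-\[cond34\] (the values $\Gamma = 100$ and $n_4/n_1 = 100$ are illustrative), also verifies the approximate closure relation $\bar\Gamma_3 \cong (\Ga/\Ga_2+\Ga_2/\Ga)/2$ to within a few percent:

```python
import math

def shocked_lorentz_factors(gamma, f):
    """Relativistic forward/reverse shock pair (Eqs. cond12 and cond34):
    gamma - relative Lorentz factor between regions 4 and 1,
    f     - density ratio n4/n1, with 1 < f < gamma**2."""
    g2 = f**0.25 * math.sqrt(gamma / 2.0)      # Gamma_2 = (n4/n1)^(1/4) Gamma^(1/2) / sqrt(2)
    g3bar = f**-0.25 * math.sqrt(gamma / 2.0)  # reverse shock, measured in the shell frame
    return g2, g3bar

g2, g3bar = shocked_lorentz_factors(100.0, 100.0)
print(round(g2, 2), round(g3bar, 2))
# approximate closure from pressure/velocity matching at the contact discontinuity:
print(round((100.0 / g2 + g2 / 100.0) / 2.0, 2))
```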
The interaction between a relativistic flow and an external medium depends on the Sedov length that is defined generally as: $$E = m_p c^2 \int_0^l 4 \pi n(r) r^2 dr \ .$$ The rest mass energy within the Sedov sphere equals the energy of the explosion. For a homogeneous ISM: $$l \equiv ( { E \over
(4 \pi /3) n_{ism} m_p c^2 } )^{1/3} \approx 10^{18} {\rm cm} E_{52}^{1/3}n_1^{-1/3} \ .
\label{Sedov}$$ Note that in this section $E$ stands for the isotropic equivalent energy. Because of the very large Lorentz factor, angular structure on a scale larger than $\Gamma^{-1}$ does not influence the evolution of the system, and it behaves as if it were part of a spherical system. A second length scale that appears in the problem is $\Delta$, the width of the relativistic shell in the observer’s rest frame.
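For a homogeneous ISM, Eq. \[Sedov\] is a one-liner; the sketch below (cgs constants) reproduces $l \approx 10^{18}$ cm for $E_{52}=n_1=1$:

```python
import math

C, M_P = 3e10, 1.67e-24  # speed of light [cm/s], proton mass [g]

def sedov_length(E, n_ism):
    """Radius within which the ISM rest-mass energy equals the explosion
    energy E [erg], for a homogeneous medium of density n_ism [cm^-3]."""
    return (E / ((4.0 * math.pi / 3.0) * n_ism * M_P * C**2)) ** (1.0 / 3.0)

print(f"l = {sedov_length(1e52, 1.0):.2e} cm")   # ~1.2e18 cm
```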
Initially the reverse shock is Newtonian and only a negligible amount of energy is extracted from the shell. At this stage the whole shell acts “together". Half of the shell’s kinetic energy is converted to thermal energy when the collected external mass is $M/\Gamma$, where $M$ is the shell’s mass [@RM92; @Katz94]. This takes place at a distance: $$R_\Ga = {l \over \Gamma^{2/3}} = \bigg({E \over n_{ism} m_p c^2
\Gamma^2} \bigg)^{1/3} = 5.4 \times 10^{16}~{\rm cm }~
E_{52}^{1/3} n_{1}^{-1/3} \Ga_{100}^{-2/3} , \label{rm}$$ where $E_{52}$ is the equivalent isotropic energy in $10^{52}$ergs, $n_1= n_{ism}/ 1~ {\rm particle/ cm}^3$.
However, the reverse shock might become relativistic before $R_\Ga$. In this case energy extraction from the shell is efficient, and a single passage of the reverse shock across the shell suffices for complete conversion of the shell’s kinetic energy to thermal energy. Using the expression for the velocity of the reverse shock into the shell (Eq. \[cond34\]) one finds that the reverse shock reaches the inner edge of the shell at $R_\Delta$ [@SaP95]: $$R_\Delta = l^{3/4} \Delta^{1/4} \approx 3 \times 10^{16}{ \rm cm}
l_{18}^{3/4} \Delta_{12}^{1/4}\ . \label{Rdelta}$$ The reverse shock becomes relativistic at $R_N$, where $n_4/n_1 =
\Ga^2$: $$R_N = l^{3/2} /(\Delta^{1/2} \Gamma^2)$$ Clearly, if $R_N > R_\Ga$ then the energy of the shell is dissipated while the shocks are still “Newtonian". If $R_N<R_\Ga$ the reverse shock becomes relativistic; in this case $R_\Ga$ loses its meaning as the radius where the energy is dissipated, and the energy of the shell is dissipated instead at $R_\Delta$. The question of which of the two conditions is relevant depends on the parameter $\xi$ [@SaP95]: $$\label{xi} \xi \equiv (l/ \Delta )^{1/2} \Ga^{-4/3} = 2
(l_{18}/\Delta_{12})^{1/2}\Ga_{100}^{-4/3} .$$ I have used a canonical value for $\Delta$ as $10^{12}$cm. It will be shown later that within the internal-external scenario $\Delta/c$ corresponds to the duration of the bursts and $10^{12}$cm corresponds to a typical burst of $30$sec.
Using $\xi$ one can express the different radii as: $$\label{order0} R_{int}/\zeta = R_\Delta /\xi^{3/2}= R_\Ga
/\xi^{2} = R_N /\xi^{3} \ .$$ For completeness I have added to this equation $R_{int}$, where internal shocks take place (see Eq. \[Rint\]); the dimensionless quantity $\zeta \equiv \delta /\Delta$, where $\delta = c \tilde \delta t$ is the initial separation between the shells. Thus: $$\cases {R_\Delta < {\bf R_\Ga} < R_N
& $\xi>1$ { (Newtonian~reverse~shock)}\cr
R_N < R_\Ga < {\bf R_\Delta}
& $\xi<1$ { (Relativistic~reverse~shock)}.
}$$ I have marked in bold face the location where the effective energy extraction takes place. With typical values for $l$, $\Delta$ and $\Gamma$, $\xi$ is around unity.
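The radii and their $\xi$ scalings can be verified numerically. The sketch below (canonical $l = 10^{18}$ cm, $\Delta = 10^{12}$ cm, $\Gamma = 100$; order-of-magnitude prefactors omitted, as in the text) gives $\xi \approx 2$ and confirms the identities $R_\Delta/\xi^{3/2} = R_\Ga/\xi^{2} = R_N/\xi^{3}$:

```python
def radii(l, Delta, gamma):
    """Characteristic radii and the dimensionless parameter xi
    (lengths in cm; order-of-magnitude prefactors omitted)."""
    xi = (l / Delta)**0.5 * gamma**(-4.0 / 3.0)
    R_Gamma = l / gamma**(2.0 / 3.0)
    R_Delta = l**0.75 * Delta**0.25
    R_N = l**1.5 / (Delta**0.5 * gamma**2)
    return xi, R_Gamma, R_Delta, R_N

xi, Rg, Rd, Rn = radii(1e18, 1e12, 100.0)
print(round(xi, 2))                        # ~2.15: marginally Newtonian reverse shock
print(Rd / xi**1.5, Rg / xi**2, Rn / xi**3)  # the three ratios coincide
```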
[**Expanding shell:**]{} A physical shell is expected to expand as it propagates, with $\Delta = \Delta_0 + R/\Gamma^2$ [@PShN93]. This leads to a monotonically decreasing $\xi$. As the value of $R_\Gamma$ is independent of $\Delta$ it does not vary, but $R_\Delta$ and $R_N$ decrease from their initial values. If $\Delta_0 < R_\Gamma /\Gamma^2$ (corresponding to $\xi_0
> 1$) then $\xi =1$ at $R_\Delta =R_\Gamma = R_N$ and all three radii coincide. Given that with typical parameters $\xi$ is of order unity, this seems to be the “typical" case: the reverse shock becomes mildly relativistic just when the energy extraction becomes efficient. However, if $\xi_0 \ll 1$ then the shell won’t expand enough and there will still be a relativistic reverse shock operating at $R_\Delta$. It is useful to note that in both cases the effective energy extraction takes place at $R_\Delta$. In the following I denote by $\tilde \xi$ the value of $\xi$ at $R_\Delta$: $\tilde
\xi \approx \xi_0$ if $\xi_0 < 1$ and otherwise $\tilde \xi
\approx 1$.
Overall the external shocks take place at: $$R_{ext}= \cases{ {\rm max}( {l /\Gamma^{2/3}}, l^{3/4}
\Delta^{1/4}),
& Non spreading shell, \cr
l / \Gamma^{2/3} \approx l^{3/4}
\Delta^{1/4} \approx 5 \times 10^{16} {\rm cm}
E_{52}^{1/3}n_1^{-1/3} \Ga^{-2/3}_{100},
& Spreading shell. \cr
} \label{Rext}$$ Usually I will use the second relation (the spreading-shell one) in the following discussion. Note that in the case of a non-spreading shell one uses the maximum of the two possible radii. For example, in the Newtonian case, where the extraction is at $l/\Gamma^{2/3}$, the shocks pass through the shell many times and hence $l/\Gamma^{2/3}>l^{3/4}\Delta^{1/4}$.
### Synchrotron Spectrum from External Shocks {#sec:Ex-spec}
The bulk of the kinetic energy of the shell is converted to thermal energy via the two shocks at around the time the shell has expanded to the radius $R_\Delta$ (this would be the case either for a thick shell with $\xi< 1$, or for an expanding shell that begins with $\xi_0 > 1$ but, due to the expansion of the shell, reaches $\xi \approx 1$ around the time when $R_\Ga = R_\Delta$ and efficient dissipation takes place). At this radius, the conditions at the forward shock are: $$\label{hydroforward} \Gamma _2 = \Gamma \xi ^{3/4}, \ \ \
n_2 = 4\Gamma _2n_{1}, \ \ \ e_2 = 4\Gamma _2^2n_{1}m_pc^2,$$ while at the reverse shock: $$\label{hydroreverse} \bar \Gamma _3 = \xi^{-3/4}, \ \ \
\Gamma_3 = \Gamma\xi^{3/4}, \ \ \ n_3 = 4\xi
^{9/4}\Gamma ^2n_{1}, \ \ \ e_3 = e_2.$$
Substitution of $\Ga_{sh}=\Ga_2 = \Ga \xi^{3/4}$ in Eq. \[epsilons\] yields, for the equipartition magnetic field: $$B= \sqrt{32 \pi} c \epsilon_B^{1/2} \Ga \xi^{3/4} m_p^{1/2}
n_{1}^{1/2} =(40~{\rm G})~\epsilon_B^{1/2}\xi^{3/4} {\G_{100}}
n_{1}^{1/2}.$$ If the magnetic field in region 2 behind the forward shock is obtained purely by shock compression of the ISM field, the field would be very weak, with $\epsilon_B \ll 1$. Such low fields are incompatible with observations of GRBs. I consider, therefore, the possibility that there may be some kind of a turbulent instability which brings the magnetic field to approximate equipartition [@Medvedevetal03; @Frederiksenetal03]. In the case of the reverse shock, i.e. in region 3, magnetic fields of considerable strength might be present in the pre-shock shell material if the original exploding fireball was magnetic. The exact nature of magnetic field evolution during fireball expansion depends on several assumptions. @Thompson94 found that the magnetic field will remain in equipartition if it started off originally in equipartition. Mészáros, Laguna & Rees [@MLR] on the other hand, estimated that if the magnetic field was initially in equipartition then it would be below equipartition by a factor of $10^{-5}$ by the time the shell expands to $R_\Delta$. It is uncertain which, if either one, of the estimates is right. As in the forward shock, an instability could boost the field back to equipartition. Thus, while both shocks may have $\epsilon_B\ll 1$ with pure flux freezing, both could achieve $\epsilon_B\rightarrow1$ in the presence of instabilities. In principle, $\epsilon_B$ could be different for the two shocks. For simplicity I will consider the same value in the following discussions.
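The equipartition estimate can be checked in a couple of lines. The sketch below evaluates $B^2/8\pi = \epsilon_B\, e$ with $e = 4\Gamma_{sh}^2 n_1 m_p c^2$, reproducing the $\sim 40$ G figure for $\epsilon_B = 1$, $\Gamma_{sh} = 100$, $n_1 = 1$:

```python
import math

C, M_P = 3e10, 1.67e-24  # speed of light [cm/s], proton mass [g]

def b_equipartition(eps_B, gamma_sh, n1):
    """Magnetic field from B^2/(8 pi) = eps_B * e, with shocked energy
    density e = 4 gamma_sh^2 n1 m_p c^2 (gauss)."""
    return math.sqrt(32.0 * math.pi * eps_B * gamma_sh**2 * n1 * M_P) * C

print(round(b_equipartition(1.0, 100.0, 1.0)))   # ~39 G
```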
Following the discussion in §\[sec:acc\], I assume that in both regions 2 and 3 the electrons have a power law distribution with a minimal Lorentz factor $\gamma_{e,min}$ given by Eq. \[gemin\] with the corresponding Lorentz factors for the forward and the reverse shocks.
[**Forward shock:**]{} The typical energy of synchrotron photons as well as the synchrotron cooling time depend on the Lorentz factor $\gamma_e$ of the relativistic electrons under consideration and on the strength of the magnetic field. Using Eq. \[gemin\] for $\gamma_{e,min} $ and Eq. \[syn\_obs\] for the characteristic synchrotron energy for the forward shock: $$\label{hnu_gemin}(h\nu_{syn})_{obs|\ga_{e,min}}= 160~ {\rm keV}~
\epsilon_B^{1/2} \epsilon_e^2 \Ga_{2,100}^4 n_1^{1/2} = 0.5~{\rm
keV}~ (\epsilon_B/0.1)^{1/2} (\epsilon_e/0.1)^2 \tilde \xi_0^{3}
\Ga_{100}^4
n_1^{1/2} ,$$ and $$\label{cooling_gemin} t_{syn|\ga_{e,min}}= 0.085~ {\rm sec}
~\epsilon_B^{-1} \epsilon_e^{-1} \Ga_{2,100}^{-4} n_1^{-1} =
0.085~ {\rm sec} ~\epsilon_B^{-1} \epsilon_e^{-1} \tilde \xi^{-3}
\Ga_{100}^{-4} n_1^{-1}$$ The characteristic frequency and the corresponding cooling time for the “typical” electron are larger and shorter, respectively, by a factor of $[(p-2)/(p-1)]^2$.
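The scaling of Eq. \[hnu\_gemin\] is easy to tabulate; the sketch below (with the 160 keV normalization taken from the text) recovers the $\sim 0.5$ keV value for $\epsilon_B = \epsilon_e = 0.1$:

```python
def hnu_syn_keV(eps_B, eps_e, gamma2_100, n1):
    """Observed synchrotron energy [keV] at gamma_{e,min} for the forward
    shock, following the scaling of Eq. (hnu_gemin); 160 keV is the
    normalization at eps_B = eps_e = n1 = Gamma_2/100 = 1."""
    return 160.0 * eps_B**0.5 * eps_e**2 * gamma2_100**4 * n1**0.5

print(round(hnu_syn_keV(0.1, 0.1, 1.0, 1.0), 2))   # ~0.51 keV
```

The steep $\Gamma_2^4$ dependence discussed in the text is immediate: doubling `gamma2_100` raises the peak energy by a factor of 16.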
The typical cooling frequency of the electrons at the forward shock is [@SP99b]: $$\nu_c = 6~{\rm keV}~ (\epsilon_B/0.1)^{-3/2} (\Ga_2/100)^{-4} n_1^{-3/2}
t_s^{-2} \ ,$$ where $t_s$ is the time in seconds. The photons from the early forward shock are in the X-ray to soft $\gamma$-ray range, but this depends strongly on the various parameters (note the strong $\Ga_2^4$ dependence in equation \[hnu\_gemin\]). For this set of canonical parameters $\nu_m < \nu_c$. However, the ratio of these two frequencies depends on $\Ga^8$! For $\Ga$ slightly larger than 100 the inequality will reverse and the system will be in the fast cooling regime.
[**Reverse Shock:**]{} The Lorentz factor of the reverse shock, $\bar \Gamma_3$ is smaller by a factor of $\xi^{3/2}\Ga $ than the Lorentz factor of the forward shock $\Ga_2$. Similarly the Lorentz factor of a “typical electron” in the reverse shock is lower by a factor $\xi^{3/2}\Ga $. Therefore the observed energy is lower by a factor $\xi^3 \Ga^2$. The typical synchrotron frequency of the reverse shock is $$\nu_{m|reverse~shock} =1.3 \times 10^{13} {\rm Hz} ~
(\epsilon_B/0.1)^{1/2} (\epsilon_e/0.1)^2 \Ga_{100}^2
\label{nu_syn_r} \ .$$ This is in the IR regions but note again the strong dependence on the Lorentz factor and on $\epsilon_e$, which could easily bring this frequency up to the optical regime. The cooling frequency in the reverse shock region is the same as the cooling frequency of the forward shock (if both regions have the same $\epsilon_B$) [@SP99b] hence: $$\begin{aligned}
\nonumber \nu_{c|reverse~shock} = 8 \times 10^{18}{\rm Hz}
(\epsilon_{B}/0.1)^{-3/2}(\Gamma_2/{100})^{-4}n_{1}^{-3/2}t_{s}^{-2} = \\
8.8 \times 10^{15} {\rm Hz} (\epsilon_B/0.1)^{-3/2} E_{52}^{-1/2}
n_1^{-1} t_s^{-1/2} \ . \label{nu_c_R}\end{aligned}$$
In the forward shock $\nu_m$ is comparable to or larger than $\nu_c$. In the reverse shock $\nu_m < \nu_c$ and it is usually in the slow cooling regime. The reverse shock exists for a short time, until it reaches the back of the relativistic shell. Then it turns into a rarefaction wave that propagates forwards. After a few back and forth bounces of this wave, all the matter behind the forward shock organizes itself in the form of the Blandford-McKee self similar solution discussed later in §\[sec:Blast\]. The above estimates suggest [@MeszarosRees97; @SP99c; @SP99a; @SP99b] that during the short phase in which the reverse shock exists it should produce a powerful optical flash. This flash should coincide with the late part of the GRB. @Kobayashi00 calculates the light curves and typical frequencies of the reverse shock for a variety of conditions.
The Transition from Internal Shocks to External Shocks {#sec:transition}
-------------------------------------------------------
The internal shocks take place at a distance $R_{int} \sim c
\delta t \Gamma^2 \sim (\delta t/0.3~{\rm sec})\ \Gamma_{100}^2\ 10^{14}$cm. These shocks last as long as the inner engine is active. The typical observed time scale for this activity is $\sim 50 $sec (for long bursts) and $\sim 0.5$sec (for short ones). External shocks begin at $R_{ext} \sim 10^{16}$cm. If $R_{ext} /\Gamma^2 \le
T=\Delta/c $, namely if the burst is long, the afterglow begins while internal shocks are still going on and the initial part of the afterglow overlaps the late part of the GRB [@Sari97]. At early times the afterglow emission (from the forward shock) peaks in the hard X-rays, contributing also to the observed $\gamma$-ray flux. One can expect, therefore, a transition within the GRB from a hard (pure GRB) to a softer and smoother (GRB and afterglow) signal. Some observational evidence for this transition was presented in §\[sec:transition-obs\].
Prompt Polarization {#sec:pol_prompt}
-------------------
In §\[sec:prompt-polarization\] I discussed the detection of very high linear polarization from GRB 021206 [@CoburnBoggs03]. While the data analysis is uncertain, several papers have claimed that this detection has strong implications. First, @CoburnBoggs03 suggest that this polarization indicates that the emission mechanism is synchrotron. @LyutikovPB03 and @Granot03 suggest further that it implies uniform magnetic fields within the emitting regions, and the former even conclude that this implies that the relativistic flow is Poynting flux dominated and that the dissipation is in the form of an external plasma instability. @Waxman03 and @NakarPiranWaxman03 show, however, that: (i) A random magnetic field in the shock’s plane could produce almost as high a polarization as a uniform field (provided that the emitting jet is narrow and one is looking along the edge of the jet). (ii) Even if the magnetic field is uniform the flow does not have to be Poynting flux dominated. They also stress that while in the uniform field case we expect high polarization in almost every burst, in the random field case we can expect high polarization only in very few bursts. The different time dependence of the polarization [@NakarPiranWaxman03] could also enable us to distinguish between the two possibilities.
@Lazzatietal03 and @DarDeRujula03 suggest that this polarization implies IC (which can in principle have a higher intrinsic polarization). This shows that even the simplest conclusion (that the polarization confirms synchrotron as the emission mechanism) is uncertain. My overall conclusion is that without further data on other bursts (which is, unfortunately, quite unlikely in the near future) not much can be learnt from this tentative detection.
THE AFTERGLOW {#sec:afterglow}
==============
It is generally accepted that the afterglow is produced when the relativistic ejecta is slowed down by the surrounding matter [@MeszarosRees97]. The afterglow begins at $R_{ext}$, where most of the energy of the ejecta is transferred to the shocked external medium. For a long burst this takes place while the burst is still going on (see @Sari97 and §\[sec:transition\]). Initially the process might be radiative, namely a significant fraction of the kinetic energy is dissipated and the radiation process affects the hydrodynamics of the shock. I discuss this phase in §\[sec:rad-synch\]. Later the radiation processes become less efficient and an adiabatic phase begins, during which the radiation losses are minor and do not influence the hydrodynamics. If the ejecta is in the form of a jet with an opening angle $\theta$ then a jet transition takes place when $\Gamma$ reaches $\theta^{-1}$. A transition into the Newtonian regime takes place when $\Gamma -1 \approx 0.5$. I begin the discussion of the afterglow with the hydrodynamics of the adiabatic phase and with the resulting synchrotron light curve. I continue with a discussion of the possible early radiative evolution. Then I turn to the jet break and to the Newtonian transition, and conclude with various complications and variations on these themes.
Relativistic Blast Waves and the Blandford-McKee solution {#sec:Blast}
----------------------------------------------------------
The theory of relativistic blast waves has been worked out in a classical paper by Blandford & McKee (BM) already in 1976. The BM model is a self-similar spherical solution describing an adiabatic ultra relativistic blast wave in the limit $\Gamma \gg
1$. This solution is the relativistic analogue of the well known Newtonian Sedov-Taylor solution. Blandford and McKee also describe in the same paper a generalization to a varying ambient mass density, $\rho =\rho_0 (R/R_0)^{-k}$, $R$ being the distance from the center. The latter case is particularly relevant for $k=2$, as expected for a wind from the progenitor prior to the GRB explosion.
The BM solution describes a narrow shell of width $\sim
R/\Gamma^2$, in which the shocked material is concentrated. For simplicity I approximate the solution with a thin homogeneous shell. Then the adiabatic energy conservation yields: $$E = {\Omega\over 3-k} (\rho_0 R_0^k) R^{3-k} \Gamma^2 c^2 \ ,
\label{ad}$$ where $E$ is the energy of the blast wave and $\Omega$ is the solid angle of the afterglow. For a full sphere $\Omega= 4\pi$, but it can be smaller if the expansion is conical with an opening angle $\theta$: $\Omega = 4 \pi (1-\cos \theta) \approx 2 \pi
\theta^2$ (assuming a double-sided jet). This expression can be simplified using a generalized Sedov scale: $$l=\left[(3-k)E/ \rho_0 R_0^k c^2\right]^{1/(3-k)} . \label{lSedov}$$ If $\Omega$ does not change with time then the blast wave collects ambient rest mass that equals its initial energy at $R=l$. If we take into account sideways expansion (after the jet break) we find that $\Gamma \approx 1$ and the blast wave becomes Newtonian at: $$R = l (\Omega/4 \pi)^{1/(3-k)} .$$ Using the approximate time-radius relation Eq. \[Rt\] (the numerical factor in that relation assumes that the shell moves at a constant velocity) one can invert Eq. \[ad\] (using the definition of $l$, Eq. \[lSedov\]) and obtain $R$ and $\Gamma$ as functions of time: $$\begin{aligned}
R &=& [ { 2 l^{3-k} \over \Omega} ]^{1/(4-k)} t^{1/(4-k)} \\ \nonumber
\Gamma &=& [ { l^{3-k} \over 2^{3-k} \Omega } ]^{1/2(4-k)}
t^{-(3-k)/2(4-k)}
\label{RGamma}
\end{aligned}$$ The time in these expressions is the observer time, namely the time at which photons emitted at $R$ arrive at the observer (relative to the time at which photons emitted at $R=0$ arrive). For spherical (or spherical-like) evolution $\Omega$ in these expressions is a constant. In general it is possible that $\Omega$ varies with $R$ or with $\Gamma$ (as is the case in a sideways expansion of a jet). This will produce, of course, a different dependence of $R$ and $\Gamma$ on $t$.
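The scalings in Eq. \[RGamma\] are straightforward to evaluate numerically. The sketch below is an illustration, not part of the original derivation: it reads the observer time $t$ as $ct$ for dimensional consistency, and the fiducial parameters are my own assumptions.

```python
import math

# Illustrative sketch of Eq. [RGamma]: R(t) and Gamma(t) for an adiabatic
# blast wave. The observer time t is combined with c (i.e. t -> c*t) for
# dimensional consistency; all fiducial numbers are assumptions.
M_P = 1.67e-24   # proton mass [g]
C = 3.0e10       # speed of light [cm/s]

def sedov_length(E, n0, R0=1.0, k=0):
    """Generalized Sedov scale l = [(3-k) E / (rho_0 R_0^k c^2)]^{1/(3-k)}."""
    rho0 = n0 * M_P
    return ((3 - k) * E / (rho0 * R0**k * C**2))**(1.0 / (3 - k))

def R_Gamma(t, E, n0, Omega=4 * math.pi, k=0):
    """Shock radius R [cm] and Lorentz factor Gamma at observer time t [s]."""
    l = sedov_length(E, n0, k=k)
    R = (2 * l**(3 - k) * C * t / Omega)**(1.0 / (4 - k))
    Gamma = (l**(3 - k) / (2**(3 - k) * Omega))**(1.0 / (2 * (4 - k))) \
        * (C * t)**(-(3.0 - k) / (2 * (4 - k)))
    return R, Gamma
```

For $k=0$ this reproduces the expected slopes $R\propto t^{1/4}$ and $\Gamma\propto t^{-3/8}$, and substituting $R$ and $\Gamma$ back into Eq. \[ad\] recovers the input energy.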
The values of $R$ and $\Gamma$ from Eq. \[RGamma\] can be plugged now into the typical frequencies $\nu_c$, $\nu_m$ and $\nu_{sa}$ as well into the different expression for $F_{\nu,max}$ to obtain the light curve of the afterglow.
Alternatively, one can calculate the light curve using a more detailed integration over the BM density and energy profiles. To perform such an integration recall that the radius of the front of the shock is (in units where $c=1$): $$R=\hat t \{1-[2(4-k)\Gamma ^{2}]^{-1} \},$$ where $\Gamma(t)$ is the shock’s Lorentz factor and $\hat t $ is the time since the explosion in its rest frame. The different hydrodynamic parameters behind the shock can be expressed as functions of a dimensionless parameter $\chi $: $$\chi \equiv [1+2(4-k)\Gamma ^{2}](1-R/\hat t) ,$$ as: $$\begin{aligned}
n&=&2\sqrt{2}n_{1}\Gamma {\chi^{-(10-3k)/[2(4-k)]}}, \cr
\gamma^{2}&=&\frac{1}{2}\Gamma ^{2}\chi ^{-1} \cr
p&=&\frac{2}{3}w_{1}\Gamma ^{2}\chi^{-(17-4k)/(12-3k)},\end{aligned}$$ where $ n_{1} $ and $ w_{1} $ are the number density and enthalpy density of the undisturbed circumburst material and $n$ and $p$ are measured in the fluid’s rest frame.
The BM solution is self-similar and assumes $\Gamma \gg 1$. Obviously, it breaks down when $R\sim l$. This Relativistic-Newtonian transition should take place around $$t_{NR}=l/c \approx 1.2 \, {\rm yr} (E_{\rm iso,52}/n_1)^{1/3} \ ,
\label{tNR}$$ where the scaling is for $k=0$, $E_{\rm iso,52}$ is the isotropic equivalent energy, $E_{\rm iso}=4\pi E/\Omega$, in units of $10^{52} {\rm ergs}$ and $n_1$ is the external density in ${\rm
cm}^{-3}$. After this transition the solution will turn into the Newtonian Sedov-Taylor solution with: $$\begin{aligned}
R & = & R_{NR} (t/t_{NR})^{2/5} , \\ \nonumber
v & = & v_{NR}(t/t_{NR})^{-3/5} , \\
e & = & e_{NR} (t/t_{NR})^{-6/5} . \label{SedovNR}\end{aligned}$$
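As a quick numerical check of Eq. \[tNR\], the following sketch (an illustration of mine; it assumes that the quoted 1.2 yr scaling refers to the isotropic-equivalent energy per unit solid angle) reproduces the coefficient:

```python
import math

# Sketch: Newtonian-transition time t_NR = l/c for k = 0 (Eq. [tNR]).
# Assumption: l is evaluated with the isotropic-equivalent energy,
# l^3 = 3 E_iso / (4 pi n m_p c^2).
M_P = 1.67e-24   # proton mass [g]
C = 3.0e10       # speed of light [cm/s]
YR = 3.15e7      # one year [s]

def t_NR_years(E_iso, n1):
    l = (3 * E_iso / (4 * math.pi * n1 * M_P * C**2))**(1.0 / 3)
    return l / C / YR

# t_NR_years(1e52, 1.0) is about 1.2 yr, matching Eq. [tNR]
```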
The adiabatic approximation is valid for most of the duration of the afterglow. However, during the first hour or so (or even for the first day, for $k = 2$), the system could be radiative (provided that $\epsilon_e \approx 1$) or partially radiative. During a radiative phase the evolution can be approximated as: $$E = {\Omega\over 3-k} A R^{3-k} \Gamma \Gamma_0 c^2 \ ,
\label{rad}$$ where $\Gamma_0$ is the initial Lorentz factor. @CPS98 derived an analytic self-similar solution describing this phase.
@CP99 describe a solution for the case when energy is continuously added to the blast wave by the central engine, even during the afterglow phase. A self-similar solution arises if the additional energy deposition behaves like a power law. This would arise naturally in some models, e.g. in the pulsar like model [@Usov94].
Light Curves for the “Standard” Adiabatic Synchrotron Model {#sec:ad-synch}
------------------------------------------------------------
In §\[sec:synch-spec\] I discussed the instantaneous synchrotron spectrum. The light curve that corresponds to this spectrum depends simply on the variation of $F_{\nu,max}$ and of the break frequencies as functions of the observer time [@MeszarosRees97; @SPN98]. This in turn depends on the variation of the physical quantities along the shock front. For simplicity I approximate here the BM solution as a spherical homogeneous shell in which the physical conditions are determined by the shock jump between the shell and the surrounding matter. As in §\[sec:synch-spec\] the calculation is divided into two cases: fast cooling and slow cooling.
@SPN98 estimate the observed emission as a series of power law segments in time and in frequency[^6]: $$F_\nu \propto t^{\alpha} \nu^{\beta} \ ,$$ that are separated by break frequencies, across which the exponents of these power laws change: the cooling frequency, $\nu_c$, the typical synchrotron frequency $\nu_m$ and the self absorption frequency $\nu_{sa}$. To estimate the rates one plugs the expressions for $\Gamma$ and $R$ as functions of the observer time (Eq. \[RGamma\]), for a homogeneous external medium ($k=0$): $$\begin{aligned}
R(t) \cong (17Et/4\pi m_p n c)^{1/4},
\cr \Gamma(t) \cong
(17E/1024\pi n m_p c^5 t^3)^{1/8} , \label{RGammaISM}\end{aligned}$$ into the expressions for the cooling frequency, $\nu_c$, the typical synchrotron frequency $\nu_m$ and the self absorption frequency $\nu_{sa}$ (Eqs. \[numc\]) and into the expression for the maximal flux (Eq. \[spectrumslow\] for slow cooling and Eq. \[spectrumfast\] for fast cooling). Note that the numerical factors in the above expressions arise from an exact integration over the BM profile. This procedure results in: $$\begin{aligned}
\label{abreaks} \nu_c & = & 0.85 \times 10^{14}{\rm \ Hz}
(\epsilon_B/0.1)^{-3/2} E_{52}^{-1/2} n_1^{-1} t_d^{-1/2} , \cr
\nu_m & = & 1.8 \times 10^{12} {\rm \ Hz}(\epsilon_B/0.1)^{1/2}
(\epsilon_e/0.1)^2 E_{52}^{1/2} t_d^{-3/2}, \cr
F_{\nu,max}& = & 0.35 \times 10^5 \mu {\rm J}
(\epsilon_B/0.1)^{1/2} E_{52} n_1^{1/2} D_{28}^{-2} \ \ .\end{aligned}$$ A nice feature of this light curve is that the peak flux is constant and does not vary with time [@MR97] as it moves to lower and lower frequencies.
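The coefficients of Eqs. \[abreaks\] can be checked for internal consistency: setting $\nu_c=\nu_m$ must reproduce $t_0$ of Eq. \[tfc\]. A minimal sketch (the fiducial default parameters are assumptions):

```python
# Sketch: break frequencies of Eq. [abreaks] (t_d is the observer time in
# days) and the fast-to-slow cooling transition where nu_c = nu_m.

def nu_c(t_d, eps_B=0.1, E52=1.0, n1=1.0):
    return 0.85e14 * (eps_B / 0.1)**-1.5 * E52**-0.5 / n1 * t_d**-0.5

def nu_m(t_d, eps_B=0.1, eps_e=0.1, E52=1.0):
    return 1.8e12 * (eps_B / 0.1)**0.5 * (eps_e / 0.1)**2 * E52**0.5 * t_d**-1.5

def t0_hours(eps_B=0.1, eps_e=0.1, E52=1.0, n1=1.0):
    """Time (in hours) when nu_c = nu_m, solved from the two scalings above."""
    t_d = (1.8e12 / 0.85e14) * (eps_B / 0.1)**2 * (eps_e / 0.1)**2 * E52 * n1
    return 24.0 * t_d

# t0_hours() is about 0.5 hours, in agreement with Eq. [tfc]
```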
At sufficiently early times $\nu_c<\nu_m$, i.e. fast cooling, while at late times $\nu_c>\nu_m$, i.e., slow cooling. The transition between the two occurs when $\nu_c=\nu_m$. This corresponds (for adiabatic evolution) to: $$\label{tfc} t_0= 0.5 ~{\rm hours} (\epsilon_B/0.1)^2
(\epsilon_e/0.1)^2 E_{52} n_1 ~.$$
Additionally one can translate Eqs. \[abreaks\] to the time in which a given break frequency passes a given band. Consider a fixed frequency $\nu=\nu_{15}10^{15}$Hz. There are two critical times, $t_c$ and $t_m$, when the break frequencies, $\nu_c$ and $\nu_m$, cross the observed frequency $\nu$: $$\begin{aligned}
t_c= 0.2 ~{\rm hours} (\epsilon_B/0.1)^{-3} E_{52}^{-1} n_1^{-2}
\nu_{15}^{-2} ~, \cr
t_m= 0.2 ~{\rm hours} (\epsilon_B/0.1)^{1/3} (\epsilon_e/0.1)^{4/3}
E_{52}^{1/3} \nu_{15}^{-2/3} \ .\end{aligned}$$
In the Rayleigh-Jeans part of the black body radiation $I_\nu=kT(2\nu^2/c^2)$ so that $F_\nu \propto kT\nu^2$. Therefore, in the part of the synchrotron spectrum that is optically thick to synchrotron self absorption, we have $F_\nu\propto kT_{eff}\nu^2$. For slow cooling $kT_{eff}
\sim \gamma_m m_e c^2 = const.$ throughout the whole shell of shocked fluid behind the shock, and therefore $F_\nu \propto
\nu^2$ below $\nu_{sa}$ where the optical depth to synchrotron self absorption equals one, $\tau_{\nu_{sa}}=1$. For fast cooling, as we go down in frequency, the optical depth to synchrotron self absorption first equals unity due to absorption over the whole shell of shocked fluid behind the shock, most of which is at the back of the shell and has $kT_{eff}\sim\gamma_c m_e c^2$. The observer is located in front of the shock, and the radiation that escapes and reaches the observer is from $\tau_\nu \sim 1$. As $\nu$ decreases below $\nu_{sa}$ the location where $\tau_\nu \sim 1$ moves from the back of the shell toward the front of the shell, where the electrons suffered less cooling so that $kT_{eff}(\tau_\nu=1)
\propto \nu^{-5/8}$. Consequently $F_\nu\propto\nu^{11/8}$. At a certain frequency $\tau_\nu \sim 1$ at the location behind the shock where electrons with $\gamma_m$ start to cool significantly. Below this frequency, $(\nu_{ac})$, even though $\tau_\nu \sim 1$ closer and closer to the shock with decreasing $\nu$, the effective temperature at that location is constant: $kT_{eff} \sim \gamma_m m_e c^2 = const.$, and therefore $F_\nu \propto \nu^2$ for $\nu<\nu_{ac}$, while $F_\nu \propto \nu^{11/8}$ for $\nu_{ac} < \nu < \nu_{sa}$. Overall the expression for the self absorption frequency depends on the cooling regime. It divides into two cases, denoted $\nu_{sa}$ and $\nu_{ac}$ for fast cooling, and both expressions differ from the slow cooling one [@GPS00]. For fast cooling: $$\nu _{ac} = 1.7\times 10^{9}\ {\rm Hz}\ (\epsilon_B/0.1)^{-2/5}
(\epsilon_e/0.1)^{-8/5}E_{52}^{-1/10}n_{1}^{3/10}(t/100{\rm sec})^{3/10}\ ,$$ $$\label{ES_ISM} \nu _{sa} = 1.8\times 10^{10}\ {\rm Hz}\
(\epsilon_B/0.1)^{6/5} E_{52}^{7/10}n_{1}^{11/10}(t/100{\rm
sec})^{-1/2}\ .$$ For slow cooling: $$\nu _{sa}= 1.24 \times 10^{9} \ {\rm Hz}
{(p-1)^{3/5}\over(3p+2)^{3/5}} (1+z)^{-1}\bar\epsilon_{e}^{\;
-1}\epsilon_B^{1/5} n_0^{3/5} E_{52}^{1/5} \ .$$
For a given frequency either $t_0>t_m>t_c$ (which is typical for high frequencies) or $t_0<t_m<t_c$ (which is typical for low frequencies). The results are summarized in two tables \[table:Fast\_ISM\] and \[table:Slow-ISM\] describing $\alpha$ and $\beta$ for fast and slow cooling. The different light curves are depicted in Fig. \[fig:full\_spectrum\].
$ $ $\alpha$ $\beta$
------------------------ ----------- ----------------------
$\nu < \nu_a$ 1 2
$\nu_a < \nu < \nu_c $ 1/6 1/3
$\nu_c< \nu < \nu_m $ -1/4 -1/2
$\nu_m < \nu $ -(3p-2)/4 $-p/2=(2\alpha-1)/3$
: $\alpha$ and $\beta$ for fast cooling ($\nu_a< \nu_c <
\nu_m$) into a constant density ISM[]{data-label="table:Fast_ISM"}
$ $ $\alpha$ $\beta$
------------------------ ----------- ----------------------
$\nu < \nu_a$ 1/2 2
$\nu_a < \nu < \nu_m $ 1/2 1/3
$\nu_m < \nu < \nu_c $ -3(p-1)/4 $-(p-1)/2=2\alpha/3$
$\nu_c < \nu $ -(3p-2)/4 $-p/2=(2\alpha-1)/3$
: $\alpha$ and $\beta$ for slow cooling ($\nu_a< \nu_m <
\nu_c $) into a constant density ISM[]{data-label="table:Slow-ISM"}
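The closure relations quoted in these tables (e.g. $\beta=2\alpha/3$ between $\nu_m$ and $\nu_c$) can be encoded directly. Below is a sketch with the convention $F_\nu\propto t^{\alpha}\nu^{\beta}$; the regime labels are my own:

```python
# Sketch: (alpha, beta) of Table [table:Slow-ISM] for slow cooling into a
# constant-density ISM, convention F_nu ~ t^alpha * nu^beta, valid for p > 2.

def slow_cooling_slopes(p, regime):
    assert p > 2, "the table applies only for p > 2"
    table = {
        "nu<nu_a":      (0.5, 2.0),
        "nu_a<nu<nu_m": (0.5, 1.0 / 3.0),
        "nu_m<nu<nu_c": (-3.0 * (p - 1) / 4, -(p - 1) / 2.0),
        "nu_c<nu":      (-(3.0 * p - 2) / 4, -p / 2.0),
    }
    return table[regime]
```

The entries satisfy $\beta=2\alpha/3$ between $\nu_m$ and $\nu_c$ and $\beta=(2\alpha-1)/3$ above $\nu_c$, which is the standard way to test the model against an observed light curve.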
These results are valid only for $p>2$ (and for $\gamma_{max}$, the maximal electron energy, much higher than $\gamma_{min}$). If $p<2$ then $\gamma_{max}$ plays a critical role. The resulting temporal and spectral indices for slow cooling with $1<p<2$ are given by @DaiCheng01 and by @Bhattacharya01 and summarized in table \[table:Slow\_lowP\] below. For completeness I include in this table also the cases of propagation into a wind (see §\[sec:wind\]) and a jet break (see §\[sec:jets\]).
ISM wind Jet
------------------------ ------------------ ------------------ ---------------
$\nu < \nu_a$ (17p-26)/16(p-1) (13p-18)/18(p-1) 3(p-2)/4(p-1)
$\nu_a < \nu < \nu_m $ (p+1)/8(p-1) 5(2-p)/12(p-1) (8-5p)/6(p-1)
$\nu_m < \nu < \nu_c $ -3(p+2)/16 -(p+8)/8 -(p+6)/4
$\nu_c < \nu $ -(3p+10)/16 -(p+6)/8 -(p+6)/4
: $\alpha$ for slow cooling ($\nu_a< \nu_m < \nu_c $) into a constant density ISM, wind and jet for electron distribution with $1<p<2$.[]{data-label="table:Slow_lowP"}
The simple solution, which is based on a homogeneous shell approximation, can be modified by using the full BM solution and integrating over the entire volume of shocked fluid [@GPS99a]. Following [@NakarPiran03b] I discuss in §\[sec:BMlight\] a simple way to perform this integration. The detailed integration yields a smoother spectrum and light curve near the break frequencies, but the asymptotic slopes, away from the break frequencies and the transition times, remain the same as in the simpler theory. @GranotSari02 describe a detailed numerical analysis of the smooth afterglow spectrum including a smooth approximation for the spectrum over the transition regions (see also [@GruzinovWaxman99]). They also describe additional cases of ordering of the typical frequencies which were not considered earlier.
A final note on this “standard" model is that it assumes adiabaticity. However, in reality a fraction of the energy is lost and over the long run this influences the hydrodynamic behavior. This can easily be corrected by integrating the energy losses and adding a variable energy to Eq. \[ad\], followed by the rest of the procedure described above [@KumarPanaitescu00c].
Light Curves for the early radiative phase {#sec:rad-synch}
-------------------------------------------
If the electrons’ energy is large (namely if $\epsilon_e$ is not far from unity), then early on during the first few hours of the afterglow there will be a radiative phase in which a significant fraction of the kinetic energy is lost via the radiative processes. One can generalize the BM solution to this radiative stage (see @CPS98 and §\[sec:Blast\]). The essence of the radiative phase is that in this case the energy varies as $E\propto\Gamma$, where $\Gamma \cong (R/L)^{-3}$. Note that $L$ is calculated in terms of $M$ and the initial energy of the explosion, $E_0$, via $M=E_0/\Gamma_0 c^2$, where $\Gamma_0$ is the initial Lorentz factor of the ejecta: $$\begin{aligned}
R(t) \cong (4ct/L)^{1/7} L, \cr
\Gamma(t) \cong (4ct/L)^{-3/7}\end{aligned}$$ The transition time from the radiative to the adiabatic phase takes place when the radiation losses become negligible. This happens at: $$\label{trad} t_{rad}= 0.17 ~{\rm hours} ~(\epsilon_B/0.1)^{7/5}
(\epsilon_e/0.1)^{7/5} E_{52}^{4/5}(\Gamma/100)^{-4/5} n_1^{3/5}
~.$$
Following @SPN98 one can use the above expressions to express the different typical frequencies and fluxes as: $$\begin{aligned}
\label{rbreaks} \nu_c & = & 4.1 \times 10^{14} ~{\rm \ Hz}~
(\epsilon_B/0.1)^{-3/2}
E_{52}^{-4/7} (\Gamma/100)^{4/7} n_1^{-13/14} t_d^{-2/7} , \cr
\nu_m & = & 3.8 \times 10^{11} ~{\rm \
Hz}~(\epsilon_B/0.1)^{1/2} (\epsilon_e/0.1)^2
E_{52}^{4/7} (\Gamma/100)^{-4/7} n_1^{-1/14} t_d^{-12/7}, \cr
F_{\nu,max}& = & 1.4 \times 10^3 ~\mu {\rm J}~ \epsilon_B^{1/2}
E_{52}^{8/7}(\Gamma/100)^{-8/7} n_1^{5/14} D_{28}^{-2} t_d^{-3/7}
\ .\end{aligned}$$ Like in the adiabatic case this can be translated to the times of passage of the break frequencies at a given observed frequency: $$\begin{aligned}
t_c= 0.05 \times 10^{-7} ~{\rm days} (\epsilon_B/0.1)^{-21/4}
E_{52}^{-2} \Gamma_2^{2} n_1^{-13/4} \nu_{15}^{-7/2} ,
\cr
t_m= 0.01 ~{\rm days} (\epsilon_B/0.1)^{7/24}
(\epsilon_e/0.1)^{7/6} E_{52}^{1/3} \Gamma_2^{-1/3}
\nu_{15}^{-7/12} n_1^{-1/24} ~.$$ Unlike the adiabatic case, here $\nu_c$ must be below $\nu_m$. Otherwise the bulk of the electrons do not cool and the system will not be radiative. Indeed at $t_{rad}$ (given by Eq. \[trad\] above) $\nu_c=\nu_m$.
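As in the adiabatic case, one can check that the coefficients of Eqs. \[rbreaks\] are consistent with Eq. \[trad\]: solving $\nu_c=\nu_m$ with the quoted coefficients gives $\approx 0.18$ hours at the fiducial parameters, close to the quoted 0.17 hours. A sketch (all default values are assumptions):

```python
# Sketch: radiative-phase break frequencies (Eqs. [rbreaks], t_d in days)
# and the time where nu_c = nu_m, to compare with t_rad of Eq. [trad].

def nu_c_rad(t_d, eps_B=0.1, E52=1.0, G2=1.0, n1=1.0):
    return (4.1e14 * (eps_B / 0.1)**-1.5 * E52**(-4 / 7) * G2**(4 / 7)
            * n1**(-13 / 14) * t_d**(-2 / 7))

def nu_m_rad(t_d, eps_B=0.1, eps_e=0.1, E52=1.0, G2=1.0, n1=1.0):
    return (3.8e11 * (eps_B / 0.1)**0.5 * (eps_e / 0.1)**2 * E52**(4 / 7)
            * G2**(-4 / 7) * n1**(-1 / 14) * t_d**(-12 / 7))

def t_rad_hours(eps_B=0.1, eps_e=0.1, E52=1.0, G2=1.0, n1=1.0):
    # nu_c = nu_m  =>  t_d^(10/7) = (3.8e11 / 4.1e14) * (eps_B/0.1)^2
    #                  * (eps_e/0.1)^2 * E52^(8/7) * G2^(-8/7) * n1^(6/7)
    t_d = ((3.8e11 / 4.1e14) * (eps_B / 0.1)**2 * (eps_e / 0.1)**2
           * E52**(8 / 7) * G2**(-8 / 7) * n1**(6 / 7))**(7 / 10)
    return 24.0 * t_d
```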
Light Curve During the Newtonian transition {#sec:Newtonian}
-------------------------------------------
At $t \approx t_{NR}$ (see Eq. \[tNR\]) the afterglow reaches the Newtonian Sedov-Taylor phase. During this phase the adiabatic hydrodynamics is described by Eq. \[SedovNR\]. @FrailWaxmanKulkarni00 calculate the synchrotron spectrum and light curve of the afterglow in this stage. The energy scaling implies that $B \propto t^{-3/5}$ and $\gamma_{e,min}
\propto t^{-6/5}$. Combined together this yields $\nu_m \propto
t^{-3}$. Using the standard assumptions of equipartition and of a power-law electron distribution they find: $$\begin{aligned}
\label{Sedov_light}
\nu_c & = & 10^{13} {\rm
\ Hz} (\epsilon_B/0.3)^{-3/2} E_{51}^{-2/3} n_1^{-5/6}
(t/t_{NR})^{-1/5} , \cr
\nu_m & = & 1 {\rm GHz} (\epsilon_B/0.3)^{1/2} (\epsilon_e/0.3)^2
n_1^{-1/2}, \cr
F_{\nu_m<\nu<\nu_c}& = & 1 {\rm mJ} (\epsilon_B/0.3)^{3/4}
(\epsilon_e/0.3) n_1^{3/4} E_{51} D_{28}^{-2} \nu^{-(p-1)/2}_{GHz}
(t/t_{NR})^{-3(p-1)/2+3/5} \ .\end{aligned}$$ This late time light curve provides a simple “calorimetric" estimate of the afterglow energy at this stage (see §\[sec:Energetics\]). Additionally, as the radio flux is rather large and as it varies on a scale of several months, it can be used to search for orphan radio afterglows (see @Levinsonetal02 and §\[sec:orphan\_radio\]).
Generalizations: I. Winds {#sec:wind}
--------------------------
The simplest generalization of the previous models is to allow a variable circumburst density with $n(R) \propto R^{-k}$. The hydrodynamic evolution of a relativistic blast wave in such a medium was considered already in the original paper of @BLmc1. The synchrotron light curve was considered first by @MeszarosReesWijers98 and by @DaiLu99.
@ChevalierLi99 [@ChevalierLi00] stressed the importance of the $n(R) \propto R^{-2}$ case which arises whenever there is a stellar wind ejected by the GRB’s progenitor prior to the burst. This arises naturally in the Collapsar model that is based on the collapse of a massive star. The calculations follow those outlined in the previous sections, with the only difference that the relations determining $R(t)$ and $\Gamma(t)$ for a homogeneous circumburst medium, Eqs. \[RGammaISM\], should be replaced by Eqs. \[RGamma\] with $k=2$.
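The effect of the wind profile on the dynamics amounts to a change of the temporal exponents in Eqs. \[RGamma\]; a one-line sketch of mine makes the comparison explicit:

```python
# Sketch: temporal exponents of R ~ t^a and Gamma ~ t^b from Eqs. [RGamma],
# for a homogeneous medium (k = 0) and a stellar wind (k = 2).

def rg_exponents(k):
    return 1.0 / (4 - k), -(3.0 - k) / (2 * (4 - k))

print(rg_exponents(0))  # (0.25, -0.375) for the ISM
print(rg_exponents(2))  # (0.5, -0.25) for a wind
```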
The high initial densities in a wind density profile imply a low initial cooling frequency. Unlike the constant density case, the cooling frequency here increases with time [@ChevalierLi99]. This leads to different temporal relations between the different frequencies and cooling regimes. For example, it is possible that the cooling frequency will initially be below the synchrotron self absorption frequency. @ChevalierLi00 consider five different evolutions of the light curve for different conditions and observed frequencies. We list below the two most relevant cases: the first fits the X-ray and optical afterglows while the second is typical for the lower radio frequencies.
$\alpha$ $\beta$
------------------------ ----------- --------------------------
$\nu_c < \nu < \nu_m$ -1/4 -1/2
$\nu_m,\nu_c < \nu $ -(3p-2)/4 $-p/2=(2\alpha-1)/3$
$\nu_m < \nu < \nu_c $ -(3p-1)/4 $-(p-1)/2=(2\alpha+1)/3$
: $\alpha$ and $\beta$ for X-ray and optical frequencies from a blast wave into a wind profile when $\nu_a < \nu_c,\nu_m,\nu$ [@ChevalierLi00]. Note that the order of the table is according to the evolution of the light curve at a fixed high observed frequency.[]{data-label="table:XR_wind1"}
Note that for $\nu_m,\nu_c < \nu $ both the spectral slope and the temporal evolution are similar for a wind and for a constant density profile. This poses, of course, a problem for the interpretation of afterglow light curves.
$\alpha$ $\beta$
--------------------------------- ----------- --------------------------
$\nu_c < \nu < \nu_a < \nu_m$ 7/4 5/2
$\nu < \nu_c < \nu_a < \nu_m$ 2 2
$\nu < \nu_a < \nu_m < \nu_c$ 1 2
$\nu_a < \nu < \nu_m < \nu_c$ 0 1/3
$\nu_a < \nu_m < \nu < \nu_c $ -(3p-1)/4 $-(p-1)/2=(2\alpha+1)/3$
: $\alpha$ and $\beta$ for radio frequencies from a blast wave into a wind profile [@ChevalierLi00]. Note that the order of the table is according to the evolution of the light curve at a fixed low observed frequency.[]{data-label="table:XR_wind2"}
Generalizations: II. Energy injection and refreshed shocks {#sec:energy}
-----------------------------------------------------------
The simple adiabatic model assumes that the energy of the GRB is constant. However, the energy could change if additional slower material is ejected behind the initial matter. This would be expected generically in the internal shock model. In this model the burst is produced by a series of collisions between shells moving at different velocities. One naturally expects here also slower moving matter that does not catch up initially with the faster moving matter. However, as the initially faster moving matter is slowed down by the circum-burst matter, this slower matter eventually catches up and produces refreshed shocks [@ReesMeszaros98; @KP00a; @SariMeszaros00].
Refreshed shocks have two implications. First, the additional energy injection will influence the dynamics of the blast wave [@ReesMeszaros98; @SariMeszaros00]. This effect can be modelled by modifying $E$ in Eq. \[ad\], although in some cases the effect of the additional mass that carries this energy must also be included. This would change the decay slope from the canonical one and produce a slower decay in the light curve. In the following section, §\[sec:density\], I describe a scheme for calculating the light curve resulting from a variable blast wave energy. If the additional matter is emitted sporadically then the shell collisions could produce temporal variability in the early afterglow signal. @Foxetal03_021004, for example, suggest that refreshed shocks are the origin of the variability in the early afterglow of GRB 021004.
A second effect is the production of a reverse shock propagating into the slower material when it catches up with the faster one [@KP00a]. This is of course in addition to the forward shock that propagates into the outer shell. This reverse shock could be episodal or long lasting depending on the profile of the additional matter. @KP00a consider two shells with energies $E_1$ and $E_2$ in the outer and the inner shells respectively. The outer shell is moving with a bulk Lorentz factor $\Gamma_{0c}\sim 5 (t/day)^{3/8}$ at the (observed) time, t, of the collision. As the inner shell catches up with the outer one when both shells have comparable Lorentz factors the reverse shocks is always mildly relativistic. The calculation of the shock is slightly different than the calculation of a shell propagating into a cold material (another shell or the ISM) discussed earlier. Here the outer shell has already collided with the ISM. Hence it is hot with internal energy exceeding the rest mass energy. The reverse shock produces emission at a characteristic frequency that is typically much lower than the peak of the emission from the outer shell by a factor of $\sim 7 \Gamma_{0c}^2 (E_2/E_1)^{1.1}$, and the observed flux at this frequency from the reverse shock is larger compared to the flux from the outer shell by a factor of $\sim 8 (\Gamma_{0c} E_2/E_1)^{5/3}$. This emission is typically in the radio or the FIR range.
@KP00a suggest that due to angular spreading the refreshed shocks produce an enhancement with a typical time scale $\delta t \sim t$. @GranotNakarPiran03 stress that, since the energy necessarily increases in refreshed shocks, the overall light curve must have a step-wise shape (above the continuous power-law decline) with a step at each shock. This behavior was seen in GRB 030329. However, there the transitions are fast, with $\delta t < t$. @GranotNakarPiran03 point out that if the refreshed shocks take place [*after*]{} the jet break (as is likely the case in GRB 030329), and if the later shells remain cold and do not spread sideways, then $\delta t \sim t_{jet} < t$. This explains nicely the fast transitions seen in this burst.
Generalizations: III. Inhomogeneous density profiles {#sec:density}
-----------------------------------------------------
An interesting possibility that arose with the observation of the variable light curve of the afterglow of GRB 021004 is that the ejecta encounters surrounding matter with an irregular density profile [@Lazzati02; @NakarPiranGranot03; @HeylPerna03]. To explore this situation one can resort to numerical simulations of the propagation of the blast wave into a selected density profile [@Lazzati02]. Alternatively, one can attempt to model this analytically or almost analytically [@NakarPiran03b]. The key to this analytic model is the approximation of the light curve from an inhomogeneous density profile as a series of emissions from instantaneous BM solutions, each with its own external density.
### The light curve of a BM solution {#sec:BMlight}
The observed flux, at an observer time $ t $, from an arbitrary spherically symmetric emitting region is given by [@GPS99a]: $$\label{eq Fnu1}
F_{\nu }(t)=\frac{1}{2D^{2}}\int _{0}^{\infty }dt' \int _{0}^{\infty }r^{2}dr\int _{-1}^{1}d(\cos \theta )\frac{n'(r)P'_{\nu }(\nu \Lambda ,r)}{\Lambda ^{2}}\delta (t'-t-\frac{r \cos \theta }{c}),$$ where $ n' $ is the emitters' density and $ P'_{\nu } $ is the emitted spectral power per emitter, both measured in the fluid frame; $ \theta $ is the angle relative to the line of sight, and $ \Lambda = \gamma (1-v \cos \theta /c) $ (with $ v $ the bulk velocity of the emitting matter) is the blue-shift factor.
@NakarPiran03b show[^7] that using the self-similar nature of the BM profile (with an external density $\propto r^{-k}$) one can reduce Eq. \[eq Fnu1\] to: $$\label{eq Fnu general}
F_{\nu }(t)=\frac{1}{D^{2}}\int _{0}^{R_{max}(t)}A_{\nu
}(R)g_{\beta }(\widetilde{t},k)dR \ .$$ The integration over $R$ is over the shock front of the BM solution. The upper limit $ R_{max} $ corresponds to the shock position from where photons leaving along the line of sight reach the observer at $t$. The factor $ D $ is the distance to the source (neglecting cosmological factors). $ \beta $ is the local spectral index.
The factor $ g_{\beta } $ is a dimensionless factor that describes the observed pulse shape of an instantaneous emission from a BM profile. The instantaneous emission from a thin shell produces a finite pulse (see §\[sec:Temporal\] and Fig. \[fig:thinshell\]). This is generalized now to a pulse from an instantaneous emission from a BM profile. Note that even though the BM profile extends from $0$ to $R$, most of the emission arises from a narrow region of width $\sim R/\Gamma^2$ behind the shock front. $ g_{\beta } $ is obtained by integrating Eq. \[eq Fnu1\] over $ \cos\theta $ and $ r $, i.e. over the volume of the BM profile. It depends only on the radial and angular structure of the shell. The self-similar profile of the shell enables us to express $ g_{\beta } $ as a general function that depends only on the dimensionless parameter $ \widetilde{t}\equiv
{(t-t_{los}(R))}/{t_{ang}(R)}$, where $t_{los}(R)$ is the time at which a photon emitted at $R$ along the line of sight to the center reaches the observer and $t_{ang} \equiv R/2c\Gamma^{2}$. The second function, $ A_{\nu }$, depends only on the conditions of the shock front along the line-of-sight. It includes only numerical parameters that remain after the integration over the volume of the shell.
When all the significant emission from the shell at radius $ R $ is within the same power-law segment, $ \beta $, (i.e $ \nu $ is far from the break frequencies) then $ A_{\nu } $ and $
g_{\beta } $ are given by: $$\label{eq Anu
general} A_{\nu }(R)=H_{\nu }\left\{ \begin{array}{c} R^{2}\,
n^{4/3}_{ext,0}\, E_{52}^{1/3}\, M_{29}^{-1/3}\quad \nu <\nu
_{m}\\
R^{2}\, n^{(5+p)/4}_{ext,0}\, E_{52}^{p}\, M_{29}^{-p}\quad \nu
_{m}<\nu <\nu _{c}\\
R\, n^{(2+p)/4}_{ext,0}\, E_{52}^{p}\, M_{29}^{-p}\quad \nu >\nu
_{c}
\end{array}\right. \frac{erg}{sec\cdot cm\cdot Hz},$$ where $ R $ is the radius of the shock front, $ n_{ext}(R) $ is the external density, $ E $ is the energy in the blast-wave, $
M(R) $ the total collected mass up to radius $ R $ and $ H_{\nu }
$ is a numerical factor which depends on the observed power law segment (see [@NakarPiran03b] for the numerical values).
$$\label{eq general pulse} g(\widetilde{t},\beta ,k)=\left\{
\begin{array}{c} \frac{2}{(4-k)}\int
^{1+2(4-k)\widetilde{t}}_{1}\chi ^{-\mu (\beta ,k)}\left(
1-\frac{1}{2(4-k)}+\frac{2(4-k)\widetilde{t}+1}{2(4-k)\chi
}\right) ^{-(2-\beta )}d\chi \quad \nu <\nu _{c}\\
(1+\widetilde{t})^{-(2-\beta )}\quad \nu >\nu _{c}
\end{array}\right. ,$$
where $$\label{eq mu} \mu (\beta ,k)\equiv 3\cdot (71-17k)/(72-18k)-\beta
\cdot (37+k)/(24-6k).$$ This set of equations is completed with the relevant relations between the different variables of the blast wave, the observer time and the break frequencies.
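For concreteness, the exponent $\mu(\beta,k)$ of Eq. \[eq mu\] and the pulse-shape factor $g$ of Eq. \[eq general pulse\] can be evaluated numerically. Below is a sketch of mine using a plain trapezoid rule; the grid resolution is an arbitrary choice.

```python
# Sketch: mu(beta, k) of Eq. [eq mu] and the pulse-shape factor
# g(t~, beta, k) of Eq. [eq general pulse], via a simple trapezoid rule.

def mu(beta, k):
    return 3.0 * (71 - 17 * k) / (72 - 18 * k) - beta * (37 + k) / (24 - 6 * k)

def g(t_tilde, beta, k=0, above_nu_c=False, n_steps=2000):
    if above_nu_c:                      # the nu > nu_c branch is closed form
        return (1 + t_tilde)**-(2 - beta)
    lo, hi = 1.0, 1.0 + 2 * (4 - k) * t_tilde
    m = mu(beta, k)
    h = (hi - lo) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        chi = lo + i * h
        f = chi**-m * (1 - 1 / (2.0 * (4 - k))
                       + (2 * (4 - k) * t_tilde + 1)
                       / (2.0 * (4 - k) * chi))**-(2 - beta)
        total += 0.5 * f if i in (0, n_steps) else f
    return 2.0 / (4 - k) * h * total
```

At $\widetilde{t}=0$ the integration range shrinks to a point and $g$ vanishes, i.e. the contribution of each shock radius turns on at the line-of-sight arrival time and then decays, as expected for a pulse.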
These equations describe the light curve within one power law segment of the light curve. Matching between different power laws can be easily done [@NakarPiran03b]. The overall formalism can be used to calculate the complete light curve of a BM blast wave.
### The light curve with a variable density or energy {#sec:variable_light}
The results of the previous section can be applied to study the effect of variations in the external density or in the energy of the blast-wave by approximating the solution as a series of instantaneous BM solutions whose parameters are determined by the instantaneous external density and the energy. Both can vary with time. This would be valid, of course, if the variations are not too rapid. The light curve can be expressed as an integral over the emission from a series of instantaneous BM solutions.
When a blast wave at radius $ R $ propagates into the circumburst medium, the emitting matter behind the shock is replenished within $ \Delta R\approx R(2^{1/(4-k)}-1) $. This is the length scale over which an external density variation relaxes to the BM solution. This approximation is valid as long as the density variations are on length scales larger than $ \Delta R $. It fails when there is a sharp density increase over a range of $
\Delta R $. However, the contribution to the integral from the region on which the solution breaks is small ($ \Delta R/R\ll 1 $) and the overall light curve approximation is acceptable. Additionally the density variation must be mild enough so that it does not give rise to a strong reverse shock that destroys the BM profile.
A sharp density decrease is more complicated. Here the length scale over which the emitting matter behind the shock is replenished could be of the order of $ R $. As an example we consider a sharp drop at some radius $ R_{d} $ and a constant density for $ R>R_{d} $. In this case the external density is negligible at first, and the hot shell cools by adiabatic expansion. Later the forward shock becomes dominant again. @KumarPanaitescu00a show that immediately after the drop the light curve is dominated by the emission during the adiabatic cooling. Later the observed flux is dominated by emission from $ R\approx R_{d} $, and at the end the new forward shock becomes dominant. Our approximation includes the emission before the density drop and the new forward shock after the drop, but it ignores the emission during the adiabatic cooling phase.
As an example of this method, Fig. \[fig:Gaussian over-density\] depicts the $ \nu _{m}<\nu <\nu _{c} $ light curve for a Gaussian ($\Delta R/R=0.1$) over-dense region in the ISM. Such a density profile may occur in a clumpy environment. The emission from a clump is similar to the emission from a spherically over-dense region as long as the clump’s angular size is much larger than $ 1/\Gamma $. Even a mild, short length-scale over-dense region (with a maximal over-density of 2) influences the light curve for a long duration (mainly due to the angular spreading). This duration depends strongly on the magnitude of the over-density.
The calculations presented so far do not account, however, for the reverse shock resulting from a density enhancement and its effect on the blast wave. Thus the above models are limited to slowly varying and low contrast density profiles. Now, the observed flux depends on the external density, $n$, roughly as $n^{1/2}$. Thus, a large contrast is needed to produce a significant re-brightening. Such a large contrast will, however, produce a strong reverse shock which will sharply decrease the Lorentz factor of the emitting matter behind the shock, $\Gamma_{sh}$, causing a sharp drop in the emission below $\nu_c$ and a long delay in the arrival time of the emitted photons (the observer time is $\propto \Gamma_{sh}^{-2}$). Both factors combine to suppress the flux and to set a strong limit on the steepness of re-brightening events caused by density variations.
The method can also be applied to variations in the blast wave’s energy. Spherically symmetric energy variations are most likely to occur due to refreshed shocks, when new inner shells arrive from the source and refresh the blast wave [@ReesMeszaros98; @KP00a; @SariMeszaros00]. Once more, this approximation misses the effect of the reverse shock that arises in this case [@KP00a]. However, it enables a simple calculation of the observed light curve for a given energy profile.
Generalizations: IV. Jets {#sec:jets}
--------------------------
The afterglow theory becomes much more complicated if the relativistic ejecta is not spherical. The structures commonly called “jets" correspond to relativistic matter ejected into a cone of opening angle $\theta$. I stress that, unlike other astrophysical jets, this ejecta is not in a steady state, and generally its width (in the direction parallel to the motion) is orders of magnitude smaller than its radius. A “flying pancake" is a better description of these jets.
The simplest implication of a jet geometry, which exists regardless of the hydrodynamic evolution, is that once $\Gamma \sim
\theta^{-1}$ relativistic beaming of light becomes less effective. The radiation, which was initially beamed locally into a cone with an opening angle $\Gamma^{-1}$, remained inside the cone of the original jet. Once $\Gamma^{-1}> \theta$, the emission is radiated outside of the initial jet. This has two effects: (i) an “on axis" observer, one that sees the original jet, will detect a jet break due to the faster spreading of the emitted radiation; (ii) an “off axis" observer, who could not detect the original emission, will now be able to see an “orphan afterglow", an afterglow without a preceding GRB (see §\[sec:orphan\]). The time of this transition is always given by Eq. \[tjet\] below with $C_2=1$.
Additionally, the hydrodynamic evolution of the source changes when $\Gamma \sim \theta^{-1}$. Initially, as long as $\Gamma \gg
\theta^{-1}$ [@Pi94], the motion is almost conical. There isn’t enough time, in the blast wave’s rest frame, for the matter to be affected by the non-spherical geometry, and the blast wave behaves as if it were part of a sphere. When $\Gamma = C_2
\theta^{-1}$, namely at[^8]: $$t_{\rm jet} = {1\over C_1}\left({l\over c}\right)
\left({\theta\over C_2}\right)^{2(4-k)\over (3-k)}$$ sideways propagation begins. The constant $C_1$ expresses the uncertainty in the relation between the Lorentz factor and the observing time, and it depends on the history of the evolution of the fireball. The constant $C_2$ reflects the uncertainty in the value of $\Gamma$ at which the jet break begins, relative to the inverse opening angle of the jet, $\theta^{-1}$. For the important case of a constant external density ($k=0$) this transition takes place at: $$t_{\rm jet}= {1 \, {\rm day} \over C_1 C_2^{8/3}} \left({E_{\rm
iso,52}\over n_1}\right)^{1/3} \left({\theta\over
0.1}\right)^{8/3} \ . \label{tjet}$$
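As a sanity check on Eq. \[tjet\], the $k=0$ scaling can be evaluated numerically. The sketch below (with the order-unity constants $C_1$ and $C_2$ set to 1, and a function name chosen here purely for illustration) highlights the strong $\theta^{8/3}$ dependence:

```python
# Sketch of Eq. [tjet] for k = 0; the order-unity constants C_1, C_2 are set to 1.
def t_jet_days(E_iso_52=1.0, n_1=1.0, theta=0.1, C1=1.0, C2=1.0):
    """Jet break time in days for a constant density external medium (k = 0)."""
    return (1.0 / (C1 * C2 ** (8.0 / 3.0))) \
        * (E_iso_52 / n_1) ** (1.0 / 3.0) * (theta / 0.1) ** (8.0 / 3.0)

print(t_jet_days())             # canonical parameters give 1 day
print(t_jet_days(theta=0.2))    # doubling the opening angle: 2^{8/3} ≈ 6.35 days
```

Note how the jet break time is far more sensitive to the opening angle than to the energy or density, which enter only through the cube root of their ratio.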
The sideways expansion continues with $\theta \sim \Gamma^{-1}$. Plugging this relation into Eq. \[RGamma\] and letting $\Omega$ vary like $\Ga^{-2}$ one finds that: $$\begin{aligned}
R \approx {\rm const} \ ; \\ \nonumber \Ga \approx (R/2 t)^{1/2}
\ . \label{Rgammajet}\end{aligned}$$ A more detailed analysis [@Rhoads97; @Rhoads99; @P00; @KumarPanaitescu00a] reveals that, according to the simple one-dimensional analytic models, $\Gamma$ decreases exponentially with $R$ on a very short length scale.[^9]
Table \[table:Slowjet\] describes the parameters $\alpha$ and $\beta$ for the post jet-break evolution [@SPH99]. The jet break usually takes place rather late, after the radiative transition; therefore, I include in this table only the slow cooling parameters.
\[table:Slowjet\]
$\alpha$ $\beta$
------------------------ ---------- -------------------------
$\nu < \nu_a$ 0 2
$\nu_a < \nu < \nu_m $ -1/3 1/3
$\nu_m < \nu < \nu_c $ -p $-(p-1)/2=(\alpha+1)/2$
$\nu_c < \nu $ -p $-p/2=\alpha/2$
: $\alpha$ and $\beta$ for slow cooling ($\nu_a< \nu_m <
\nu_c $) after a jet break.
An important feature of the post jet-break evolution is that the cooling frequency, $\nu_c$, becomes constant in time. This means that the high frequency (optical and X-ray) spectrum does not vary after the jet break has taken place. On the other hand the radio spectrum varies (see Fig. \[fig:990510\_radio\]), giving an additional structure that confirms the interpretation of the break as arising from the sideways expansion of a jet (see e.g. [@Harrisonetal99]).
@KumarPanaitescu00c find that the jet break transition in a wind profile will be very long (up to four decades in time) and thus it will be hard to observe a jet break in such a case. On the other hand it is interesting to note that for typical values of $\alpha$ seen after a jet break ($\alpha \approx -2$) the high frequency spectral index, $\beta=\alpha/2 \approx -1$, is similar to the one inferred from a spherically symmetric wind, $\beta =(2
\alpha + 1) /3 \approx -1$ [@Halpernetal99]. Note, however, that the wind interpretation requires a high ($\approx 3$) value of $p$ (which may or may not be reasonable). Still, from the optical observations alone it is difficult to distinguish between these two interpretations. Here the radio observations play a crucial role, as the radio behavior is very different [@Frail00].
The sideways expansion causes a change in the hydrodynamic behavior and hence a break in the light curve. The beaming outside of the original jet opening angle also causes a break. If the sideways expansion is at the speed of light then both transitions take place at the same time [@SPH99]. If the sideways expansion is at the sound speed then the beaming transition takes place first and only later does the hydrodynamic transition occur [@PanaitescuMeszaros99]. This would cause a slower and wider transition with two distinct breaks: first a steep break when the edge of the jet becomes visible, and later a shallower break when the sideways expansion becomes important.
The analytic or semi-analytic calculations of synchrotron radiation from jetted afterglows [@Rhoads99; @SPH99; @PanaitescuMeszaros99; @ModerskiEtal00; @KumarPanaitescu00a] have led to different estimates of the jet break time $t_{\rm
jet}$ and of the duration of the transition. @Rhoads99 calculated the light curves assuming emission from one representative point, and obtained a smooth ‘jet break’, extending $\sim 3$-$4$ decades in time, after which $F_{\nu>\nu_m}\propto
t^{-p}$. @SPH99 assume that the sideways expansion is at the speed of light, rather than at the speed of sound ($c/\sqrt{3}$) as others assume, and find a smaller value for $t_{\rm jet}$. @PanaitescuMeszaros99 included the effects of geometrical curvature and of the finite width of the emitting shell, along with electron cooling, and obtained a relatively sharp break, extending $\sim 1$-$2$ decades in time, in the optical light curve. @ModerskiEtal00 used a slightly different dynamical model and a different formalism for the evolution of the electron distribution, and found that the change in the temporal index $\alpha$ ($F_{\nu}\propto t^{-\alpha}$) across the break is smaller than in the analytic estimates ($\alpha=2$ after the break for $\nu>\nu_m$, $p=2.4$), while the break extends over two decades in time.
![A relativistic jet at the last time step of the simulation [@Granot01]. ([**left**]{}) A 3D view of the jet. The outer surface represents the shock front while the two inner faces show the proper number density ([*lower face*]{}) and proper emissivity ([*upper face*]{}) in a logarithmic color scale. ([**right**]{}) A 2D ’slice’ along the jet axis, showing the velocity field on top of a linear color-map of the lab frame density.[]{data-label="3Djet"}](piran_fig27.eps "fig:"){width="4.97cm"} ![A relativistic jet at the last time step of the simulation [@Granot01]. ([**left**]{}) A 3D view of the jet. The outer surface represents the shock front while the two inner faces show the proper number density ([*lower face*]{}) and proper emissivity ([*upper face*]{}) in a logarithmic color scale. ([**right**]{}) A 2D ’slice’ along the jet axis, showing the velocity field on top of a linear color-map of the lab frame density.[]{data-label="3Djet"}](piran_fig28.eps "fig:"){width="7.0cm"}
The different analytic and semi-analytic models make different predictions for the sharpness of the ‘jet break’, for the change in the temporal decay index $\alpha$ across the break and its asymptotic value after the break, and even for the very existence of a ‘jet break’ [@HDL00]. All these models rely on some common basic assumptions, which have a significant effect on the dynamics of the jet: (i) the shocked matter is homogeneous; (ii) the shock front is spherical (within a finite opening angle) even at $t>t_{\rm jet}$; (iii) the velocity vector is almost radial even after the jet break.
However, recent 2D hydrodynamic simulations [@Granot01] show that these assumptions are not a good approximation for a realistic jet. Using a very different approach, @Cannizzoetal04 find a similar result in another set of numerical simulations: the jet does not spread sideways as much. Figure \[3Djet\] shows the jet at the last time step of the simulation of @Granot01. The matter at the sides of the jet is propagating sideways (rather than in the radial direction) and is slower and much less luminous compared to the front of the jet. The shock front is egg-shaped, and quite far from being spherical. Figure \[averages\] shows the radius $R$, Lorentz factor $\Gamma$, and opening angle $\theta$ of the jet, as a function of the lab frame time. The rate of increase of $\theta$ with $R\approx ct_{\rm lab}$ is much lower than the exponential behavior predicted by simple models [@Rhoads97; @Rhoads99; @P00; @KumarPanaitescu00a]. The value of $\theta$ averaged over the emissivity is practically constant, and most of the radiation is emitted within the initial opening angle of the jet. The radius $R$ weighted over the emissivity is very close to the maximal value of $R$ within the jet, indicating that most of the emission originates at the front of the jet[^10], where the radius is largest, while $R$ averaged over the density is significantly lower, indicating that a large fraction of the shocked matter resides at the sides of the jet, where the radius is smaller. The Lorentz factor $\Gamma$ averaged over the emissivity is close to its maximal value (again, since most of the emission occurs near the jet axis where $\Gamma$ is largest), while $\Gamma$ averaged over the density is significantly lower, since the matter at the sides of the jet has a much lower $\Gamma$ than at the front of the jet.
The large differences between the assumptions of simple dynamical models of a jet and the results of 2D simulations, suggest that great care should be taken when using these models for predicting the light curves of jetted afterglows. Since the light curves depend strongly on the hydrodynamics of the jet, it is very important to use a realistic hydrodynamic model when calculating the light curves.
![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](piran_fig29.eps "fig:"){width="3.97cm"} ![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](piran_fig30.eps "fig:"){width="4.05cm"} ![The radius $R$ ([*left frame*]{}), Lorentz factor $\Gamma-1$ ([*middle frame*]{}) and opening angle $\theta$ of the jet ([ *right frame*]{}), as a function of the lab frame time in days [@Granot01].[]{data-label="averages"}](piran_fig31.eps "fig:"){width="3.97cm"}
@Granot01 used 2D numerical simulations of a jet running into a constant density medium to calculate the resulting light curves, taking into account the emission from the volume of the shocked fluid with the appropriate time delay in the arrival of photons to different observers. They obtained an achromatic jet break for $\nu>\nu_m(t_{\rm jet})$ (which typically includes the optical and near IR), while at lower frequencies (which typically include the radio) there is a more moderate and gradual increase in the temporal index $\alpha$ at $t_{\rm jet}$, and a much more prominent steepening in the light curve at a later time, when $\nu_m$ sweeps past the observed frequency. The jet break appears sharper and occurs at a slightly earlier time for an observer along the jet axis, compared to an observer off the jet axis (but within the initial opening angle of the jet). The value of $\alpha$ after the jet break, for $\nu>\nu_m$, is found to be slightly larger than $p$ ($\alpha=2.85$ for $p=2.5$). Because a significant fraction of the jet break occurs due to the relativistic beaming effect (which does not depend on the hydrodynamics), the numerical simulations show a jet break at roughly the same time as the analytic estimates, in spite of the different hydrodynamic behavior.
Generalizations: V. Angular Dependent Jets and the Structured Jet Model {#sec:structured}
------------------------------------------------------------------------
In a realistic jet one can expect either a random or a regular angular dependent structure. Here there are two dominant effects. As the ejecta slows down and its Lorentz factor decreases, an observer detects radiation from an angular region of size $\Gamma^{-1}$ (see §\[sec:patchy-shell\]). At the same time the mixing within the ejecta leads to an intrinsic averaging of the angular structure. Thus, both effects lead to an averaging over the angular structure at later times.
Several authors [@Lipunov_Postnov_Pro01; @Rossi02; @Zhang02] independently suggested a different interpretation of the observed achromatic breaks in the afterglow light curves. This interpretation is based on a jet with a regular angular structure. According to this model all GRBs are produced by jets with a fixed angular structure, and the break corresponds to the viewing angle. @Lipunov_Postnov_Pro01 considered a “universal" jet with a 3-step profile: a spherical one, a $20^o$ one and a $3^o$ one. @Rossi02 and @Zhang02 considered a specific profile in which the energy per solid angle, $\varepsilon(\theta)$, and the Lorentz factor, $\Gamma(t=0,\theta)$, are: $$\varepsilon=\left\{\begin{array}{ll}
\varepsilon_{c} & \;\;\;0 \leq \theta \leq \theta_{c}\\
\varepsilon_{c}\left(\frac {\theta}{\theta_{c}}\right)^{-a} &
\;\;\;\theta_{c}
\leq \theta \leq \theta_{j}
\end{array}\right.
\label{eq:Etheta}$$ and $$\Gamma=\left\{\begin{array}{lll}
\Gamma_{c}& &\;\;\;0 \leq \theta \leq \theta_{c}\\
\Gamma_{c}\left(\frac
{\theta}{\theta_{c}}\right)^{-b},&b>0 & \;\;\; \theta_{c} \leq \theta
\leq \theta_{j},
\end{array}\right.
\label{eq:Gtheta}$$ where $\theta_j$ is the maximal angle and the core angle, $\theta_c$, is introduced to avoid a divergence at $\theta=0$; the parameters $a$ and $b$ define the angular dependence of the energy and of the Lorentz factor. The core angle can be taken to be smaller than any other angle of interest. The power law index of $\Gamma$, $b$, is not important for the dynamics of the fireball and for the computation of the light curve as long as $\Gamma(t=0,\theta)\equiv\Gamma_{0}(\theta)>\theta^{-1}$ and $\Gamma_{0}(\theta) \gg 1$.
To fit the constant energy result [@Frail01; @PanaitescuK01; @Piranetal01] @Rossi02 consider a specific angular structure with $a=2$. @Rossi02 approximate the evolution by assuming that at every angle the matter behaves as if it were part of a regular BM profile (with the local $\varepsilon$ and $\Gamma(t,\theta)$) until $\Gamma(t,\theta) =\theta^{-1}$; then the matter begins to expand sideways. The resulting light curve is calculated by averaging the detected light arriving from the different angles. They find that an observer at an angle $\theta_o$ will detect a break in the light curve around the time that $\Gamma(t,\theta_o)=\theta_o^{-1}$ (see Fig. \[fig:Rossi\]). A simple explanation of the break is the following: as the evolution proceeds and the Lorentz factor decreases, the observer detects emission from larger and larger angular regions. Initially the higher energy at small angles, $\theta<\theta_o$, compensates for the lower energy at larger angles, $\theta>
\theta_o$. Hence the observer detects a roughly constant energy per solid angle, and the resulting light curve is comparable to the regular pre-jet-break light curve. This goes on until $\Gamma^{-1}=\theta_{o}$. After this stage any further increase in the viewing angle $\Gamma^{-1}$ results in a decrease of the energy per unit solid angle within the viewing cone, and leads to a break in the light curve.
This interpretation of the breaks in the light curves in terms of the viewing angles of a standard structured jet implies a different understanding of the total energy within GRB jets and of the rate of GRBs. The total energy in this model is also a constant, but it is now larger, as it is the integral of Eq. \[eq:Etheta\] over all viewing angles. The distribution of GRB luminosities, which in the uniform jet interpretation is understood as a distribution of jet opening angles, is interpreted here as a distribution of viewing angles. As such this distribution is fixed by geometrical reasoning, with $P(\theta_o) d\theta_o \propto
\sin\theta_o d\theta_o$ (up to the maximal observing angle $\theta_j$). This leads to an implied isotropic energy distribution of $$P(\log(E_{iso})) \propto E^{-1}_{iso} \ .$$ @GuettaPiranWaxman03 and @NakarGranotGuetta03 find that these two distributions are somewhat inconsistent with current observations. However, the present data, which suffers from numerous observational biases, is insufficient to reach a definite conclusion.
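The quoted $E_{iso}^{-1}$ distribution follows from combining $P(\theta_o)\propto\sin\theta_o$ with the $a=2$ structure, $E_{iso}\propto\theta_o^{-2}$. A quick Monte Carlo (a sketch; the values of $\theta_c$ and $\theta_j$ below are illustrative only) recovers the $-1$ slope numerically:

```python
import numpy as np

# Monte Carlo sketch: viewing angles drawn with P(theta) ∝ sin(theta), and an
# a = 2 structured jet, E_iso ∝ (theta_o / theta_c)^{-2}.  theta_c and theta_j
# are illustrative values, not fits to data.
rng = np.random.default_rng(0)
theta_c, theta_j = 0.01, 0.5
n = 2_000_000

u = rng.uniform(np.cos(theta_j), 1.0, n)      # uniform cos(theta) <=> P ∝ sin(theta)
theta_o = np.arccos(u)
theta_o = np.clip(theta_o, theta_c, theta_j)  # inside the core E_iso saturates

E_iso = (theta_o / theta_c) ** -2.0

# counts in equal logarithmic bins estimate P(log E_iso)
bins = np.logspace(-3.0, -0.5, 26)
counts, edges = np.histogram(E_iso, bins=bins)
centers = np.sqrt(edges[1:] * edges[:-1])
slope = np.polyfit(np.log10(centers), np.log10(counts), 1)[0]
print(f"P(log E_iso) slope: {slope:.2f}")     # close to -1
```

The slope follows analytically as well: $dN/dE \propto \theta\,|d\theta/dE| \propto E^{-2}$, so $P(\log E) = E\,dN/dE \propto E^{-1}$.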
In order to better estimate the role of the hydrodynamics on the light curves of a structured jet, @GranotKumar02 [@GranotKumarP03] considered two simple models for the hydrodynamics. In the first (model 1) there is no mixing among matter moving at different angles, i.e. $\varepsilon(\theta,t)=\varepsilon(\theta,t_0)$. In the second (model 2) $\varepsilon$ is a function of time and is averaged over the region to which a sound wave can propagate (this simulates the maximal lateral energy transfer consistent with causality). They consider various energy and Lorentz factor profiles and calculate the resulting light curves (see Fig. \[fig:GranotKumar\]).
@GranotKumar02 find that the light curves of models 1 and 2 are rather similar in spite of the different physical assumptions. This suggests that the widening of the viewing angle has a more dominant effect than the physical averaging. For models with a constant energy and a variable Lorentz factor ($(a,b)=(0,2)$) the light curve initially rises and there is no jet break, which is quite different from the observations of most afterglows. For $(a,b)=(2,2),\,(2,0)$ they find a jet break at $t_j$ when $\Gamma(\theta_o)\sim\theta_o^{-1}$. For $(a,b)=(2,2)$ the value, $\alpha_1$, of the temporal decay slope at $t < t_{j}$ increases with $\theta_o$, while $\alpha_2=\alpha(t>t_j)$ decreases with $\theta_o$. This effect is more prominent in model 1, and appears to a lesser extent in model 2. This suggests that $\delta\alpha=\alpha_1-\alpha_2$ should increase with $t_j$, which is not supported by the observations. For $(a,b)=(2,0)$ there is a flattening of the light curve just before the jet break (also noticed by @Rossi02) for $\theta_o > 3\theta_c$. Again, this effect is larger in model 1 than in model 2, and again this flattening is not seen in the observed data.
Clearly a full solution of an angular dependent jet requires full numerical simulations. @KumarGranot03 present a simple 1-D model for the hydrodynamics, obtained by assuming axial symmetry and integrating over the radial profile of the flow, thus considerably reducing the computation time. The light curves that they find resemble those of models 1 and 2 above, indicating that these crude approximations are useful. Furthermore, they find relatively little change in $\epsilon(\theta)$ within the first few days, suggesting that model 1 is an especially useful approximation for the jet dynamics at early times, while model 2 provides a better approximation at late times.
Afterglow Polarization - a tool that distinguishes between the different jet models {#sec:pol_after}
-----------------------------------------------------------------------------------
Synchrotron emission from a jet (in which the spherical symmetry is broken) would naturally produce polarized emission [@Gruzinov99; @GhiselliniLazzati99; @Sari99]. Moreover, the level and the direction of the polarization are expected to vary with time and to give observational clues on the geometrical structure of the emitting jet and our observing angle with respect to it.
The key feature in the determination of the polarization during the afterglow is the varying Lorentz factor and, after the jet break, the varying jet width. These change the overall geometry (see Fig. \[fig:pol\_random\]) and hence the observer sees different geometries at different times [@Sari99; @Hurleyetal02]. Initially, the relativistic beaming angle $1/\Gamma$ is narrower than the physical size of the jet $\theta_0$; the observer sees a full ring and the radial polarization averages out (the first frame, with $\Gamma\theta_0=4$, of the left plot in Fig. \[fig:polfig\]). As the flow decelerates, the relativistic beaming angle $1/\Gamma$ becomes comparable to $\theta_0$ and only a fraction of the ring is visible; net polarization is then observed. Assuming, for simplicity, that the magnetic field is along the shock, the synchrotron polarization will point radially outwards. Due to the radial direction of the polarization from each fluid element, the total polarization is maximal when a quarter ($\Gamma\theta_0=2$ in Fig. \[fig:polfig\]) or three quarters ($\Gamma\theta_0=1$ in Fig. \[fig:polfig\]) of the ring is missing (or radiates less efficiently), and vanishes for a full and a half ring. The polarization when more than half of the ring is missing is perpendicular to the polarization direction when less than half of it is missing.
At late stages the jet expands sideways and, since the offset of the observer from the physical center of the jet is constant, spherical symmetry is regained. The vanishing and re-occurrence of significant parts of the ring results in a unique prediction: there should be three peaks of polarization, with the polarization position angle during the central peak rotated by $90^{\circ }$ with respect to the other two peaks. If the observer is very close to the center, more than half of the ring is always observed, and therefore only a single direction of polarization is expected. A few possible polarization light curves are presented in Fig. \[fig:polfig\].
The predicted polarization from a structured jet is drastically different from that of a uniform jet, providing an excellent test between the two models [@Rossietalpolarization02]. Within the structured jet model the polarization arises due to the gradient in the emissivity. This gradient has a clear orientation: the emissivity is maximal at the center of the jet and decreases monotonically outwards. The polarization is maximal when the variations in the emissivity within the emitting beam are maximal. This happens around the jet break, when $\theta_{obs} \sim \Gamma^{-1}$ and the observed beam just reaches the center. The polarization expected in this case is around 20% [@Rossietalpolarization02], slightly larger than the maximal polarization from a uniform jet. As the direction of the gradient is always the same (relative to a given observer) there should be no jumps in the direction of the polarization.
According to the patchy shell model [@KP00b] the jet can include variable emitting hot spots. These could lead to fluctuations in the light curve (as hot spots enter the observed beam) and also to corresponding fluctuations in the polarization [@Granot03; @NakarOren03]. There is a clear prediction [@NakarPiranGranot03; @NakarOren03] that if the angular fluctuations have a typical angular scale $\theta_f$, then the first bump in the light curve should take place at the time when $\Gamma^{-1} \sim \theta_f$ (when the whole hot spot is within the observed beam). The following bumps in the light curve should decrease in amplitude (due to statistical fluctuations). @NakarOren03 show analytically and numerically that the jumps in the polarization direction should be random and sharp, and accompanied by jumps in the amount of polarization.
Orphan Afterglows {#sec:orphan}
-----------------
Orphan afterglows arise as a natural prediction of GRB jets. The realization that GRBs are collimated with rather narrow opening angles, while the following afterglow can be observed over a wider angular range, led immediately to the search for orphan afterglows: afterglows which are not associated with an observed prompt GRB emission. While the GRB and the early afterglow are collimated to within the original opening angle, $\theta_j$, the afterglow can be observed, after the jet break, from a viewing angle of $\Ga^{-1}$. The Lorentz factor, $\Ga$, is a rapidly decreasing function of time. This means that an observer at $\theta_{obs}>\theta_j$ cannot see the burst but can detect an afterglow once $\Ga^{-1}=\theta_{obs}$. As the typical emission frequency and the flux decrease with time while the jet opening angle $\theta$ increases, observers at larger viewing angles will detect weaker and softer afterglows. X-ray orphan afterglows can be observed several hours or at most a few days after the burst (depending of course on the sensitivity of the detector). Optical afterglows (brighter than 25th mag) can be detected in the R band for a week from small ($\sim 10^\circ$) angles away from the GRB jet axis. On the other hand, at very late times, after the Newtonian break, radio afterglows could be detected by observers at all viewing angles.
The search for orphan afterglows is an observational challenge. One has to search for a $\sim 10^{-12}$ergs/sec/cm$^2$ signal in the X-ray, a 23rd or higher magnitude transient in the optical, or a mJy transient in the radio (at GHz frequencies). Unlike afterglow searches, which are triggered by a well-located GRB, for an orphan afterglow there is no information on where to search, and confusion with other transients is rather easy. So far no orphan afterglow has been detected at any wavelength.
@Rhoads97 was the first to suggest that observations of orphan afterglows would enable us to estimate the opening angles and the true rate of GRBs. @DalalGriestPruet02 have pointed out that as the post jet-break afterglow light curves decay quickly, most orphan afterglows will be dim and hence undetectable. They point out that if the maximal observing angle, $\theta_{max}$, of an orphan afterglow is a constant factor times $\theta_j$, the ratio of observed orphan afterglows, $R_{orph}^{obs}$, to that of GRBs, $R_{GRB}^{obs}$, will not tell us much about the opening angles of GRBs and the true rate of GRBs, $R_{GRB}^{true} \equiv f_b R_{GRB}^{obs}$. However, as we see below, this assumption is inconsistent with the constant energy of GRBs, which suggests that all GRBs will be detected up to a fixed angle that is independent of their jet opening angle.
### Optical Orphan Afterglow {#sec:orphan-optical}
Optical orphan afterglows are emitted at a stage when the outflow is still relativistic. The observation that GRBs have a roughly constant total energy [@Frail01; @PanaitescuK01; @Piranetal01], and that the observed variability in the apparent luminosity arises mostly from variations in the jet opening angles, leads to a remarkable result: the post jet-break afterglow light curve is universal [@GranotEtal02]. Fig. \[fig:orphan\_schmatic\] depicts this universal light curve. It implies that for a given redshift, $z$, and a given limiting magnitude, $m$, there is a fixed $\theta_{max}(z,m)$ (independent of $\theta_j$, for $\theta_j < \theta_{max}$) from within which an orphan afterglow can be detected.
This universal post jet-break light curve can be estimated from the observations [@TotaniPanaitescu02] or alternatively from first principles [@Nakar_P_Granot02]. An observer at $\theta_{obs}> \theta_j$ will (practically) observe the afterglow emission only from $t_\theta$, when $\Gamma = \theta_{obs}^{-1}$. Using Eq. \[tjet\] and the fact that $\Gamma \propto t^{-1/2}$ after the jet break (Eq. \[Rgammajet\]) one can estimate the time, $t_{\theta}$, when the emission from a jet is first detected at $\theta_{obs}$: $$t_\theta= A (\theta_{obs}/\theta_j)^2 t_{jet} \ , \label{ttheta}$$ where $A$ is a factor of order unity and $t_{jet}$ is the time of the jet break (given by Eq. \[tjet\]). The flux at this time is estimated by substituting this value into the post jet-break light curve (see @Nakar_P_Granot02 for details): $$F(\theta_{obs}) = F_0 f(z) \theta_{obs}^{-2p} \ , \label{Fnumax2a}$$ where $F_0$ is a constant and $f(z)=(1+z)^{1+\beta}D_{L28}^{-2}$ includes all the cosmological effects, with $D_{L28}$ the luminosity distance in units of $10^{28}$cm. One notices here a very strong dependence on $\theta_{obs}$: the peak flux drops quickly as the observer moves away from the axis. Note also that this maximal flux is independent of the opening angle of the jet, $\theta_j$. The observations of afterglows with a clear jet break (GRB990510 [@Harrisonetal99; @Staneketal99] and GRB000926 [@Harrisonetal01]) can be used to calibrate $F_0$.
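The scalings of Eqs. \[ttheta\] and \[Fnumax2a\] can be illustrated with a minimal numerical sketch (the order-unity factor $A$ and the normalization $F_0 f(z)$ are set to 1; the function names are chosen here purely for illustration):

```python
# Illustrative scalings: t_theta from Eq. [ttheta] (A = 1) and the peak flux
# from Eq. [Fnumax2a] in units of F_0 f(z).
def t_theta(theta_obs, theta_j, t_jet):
    """Time at which an observer at theta_obs > theta_j first sees the afterglow."""
    return (theta_obs / theta_j) ** 2 * t_jet

def peak_flux(theta_obs, p=2.5):
    """Peak flux seen at theta_obs, in units of F_0 f(z)."""
    return theta_obs ** (-2.0 * p)

# Doubling the viewing angle delays the peak by 4x and dims it by 2^{2p} ≈ 32x:
print(t_theta(0.2, 0.1, t_jet=1.0))      # 4.0
print(peak_flux(0.1) / peak_flux(0.2))   # ≈ 32 for p = 2.5
```

This steep $\theta_{obs}^{-2p}$ dimming is what makes off-axis orphan afterglows so hard to detect.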
Now, using Eq. \[Fnumax2a\], one can estimate $\theta_{max}(z,m)$ and, more generally, the time, $t_{obs}(z,\theta,m)$, during which a burst at a redshift $z$ can be seen from an angle $\theta$ above a limiting magnitude $m$: $$t_{obs}(z,\theta,m) \approx {A t_{jet} \over \theta_j^2}
(\theta_{max}^2 - \theta_{obs}^2) \ . \label{tobs}$$ One can then proceed and integrate over the cosmological distribution of bursts (assuming that this follows the star formation rate) and obtain an estimate of the number of orphan afterglows that would appear in a single snapshot of a given survey with a limiting sensitivity $F_{lim}$: $$N_{orph}= \int_0^{\infty} {n(z)\over (1+z)} {dV(z) \over dz} dz
\times\int_{\theta_j}^{\theta_{max}(z,m)} t_{obs}(z,\theta,m)
\theta d\theta \propto (F_0/F_{lim})^{2/p} \ , \label{Rate2}$$ where $n(z)$ is the rate of GRBs per unit volume and unit proper time and $dV(z)/dz$ is the differential volume element at redshift $z$. Note that modifications of this simple model may arise with more refined models of the jet propagation [@GranotEtal02; @Nakar_P_Granot02].
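The scaling in Eq. \[Rate2\] can be checked numerically. The sketch below is not from the original analysis; the values of $p$, $\theta_j$ and the flux ratios are illustrative, and it uses $\theta_{max}=\theta_j (F_0/F_{lim})^{1/2p}$, which follows from Eq. \[Fnumax2a\], together with the closed form of the angular integral, to verify that the snapshot number scales as $(F_0/F_{lim})^{2/p}$ once $\theta_{max}\gg\theta_j$:

```python
# Sketch (illustrative parameters): the angular integral
#   int_{theta_j}^{theta_max} (theta_max^2 - theta^2) theta dtheta
# scales as (F_0/F_lim)^{2/p} once theta_max >> theta_j, with
# theta_max = theta_j (F_0/F_lim)^{1/(2p)} from F ~ theta^{-2p}.
import math

p = 2.5          # electron power-law index (illustrative value)
theta_j = 0.1    # jet opening angle in radians (assumed)

def theta_max(flux_ratio):
    """Viewing angle at which the peak flux equals F_lim (flux_ratio = F_0/F_lim)."""
    return theta_j * flux_ratio ** (1.0 / (2.0 * p))

def angular_integral(flux_ratio):
    """Closed form of int theta*(theta_max^2 - theta^2) dtheta from theta_j to theta_max."""
    tm, tj = theta_max(flux_ratio), theta_j
    return (tm**2 - tj**2) ** 2 / 4.0

# Effective scaling exponent between two deep limiting fluxes
# (pure scaling check; the large theta_max values are not meant to be physical):
r1, r2 = 1e4, 1e6
slope = math.log(angular_integral(r2) / angular_integral(r1)) / math.log(r2 / r1)
print(round(slope, 3))  # close to 2/p = 0.8
```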
The results of the integration of Eq. \[Rate2\] are depicted in Fig. \[fig:orphan\_rate\].
Clearly the number of orphan afterglows detected in a single snapshot increases with a fainter limiting magnitude. However, one should ask what will be the optimal strategy for a given observational facility: short and shallow exposures that cover a larger solid angle, or long and deep ones over a smaller area. The exposure time that is required in order to reach a given limiting flux, $F_{lim}$, is proportional to $F_{lim}^{-2}$. Dividing the number density of observed orphan afterglows (shown in Fig. \[fig:orphan\_rate\]) by this time factor gives the rate per square degree per hour of observational facility. This rate increases for shallow surveys that cover a large area. This result can be understood as follows. Multiplying Eq. \[Rate2\] by $F_{lim}^2$ shows that the rate per square degree per hour of observational facility is $\propto F_{lim}^{2-2/p}$. For $p>1$ the exponent is positive and a shallow survey is preferred. The limiting magnitude should not, however, be fainter than $\sim 23$rd, as in that case more transients from on-axis GRBs than orphan afterglows would be discovered.
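The shallow-versus-deep argument above amounts to comparing two power laws; a minimal sketch, with an illustrative $p$ and arbitrary flux units:

```python
# Sketch: snapshot density N ∝ F_lim^{-2/p} (Eq. Rate2) while the
# exposure time per field t_exp ∝ F_lim^{-2}, so the detection rate per
# unit telescope time scales as F_lim^{2 - 2/p}. For p > 1 the exponent
# is positive: a shallower (larger F_lim) survey finds more orphans per hour.
p = 2.5  # illustrative electron power-law index

def rate_per_hour(F_lim):
    """Relative orphan detections per square degree per telescope hour."""
    snapshot_density = F_lim ** (-2.0 / p)   # Eq. (Rate2) scaling
    exposure_time = F_lim ** (-2.0)          # time needed to reach F_lim
    return snapshot_density / exposure_time  # ∝ F_lim^(2 - 2/p)

shallow, deep = rate_per_hour(10.0), rate_per_hour(1.0)
print(shallow > deep)  # True: the shallow survey is more efficient for p > 1
```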
Using these estimates @Nakar_P_Granot02 find that with their most optimistic parameters 15 orphan afterglows will be recorded in the Sloan Digital Sky Survey (SDSS) (which covers 10$^4$ square degrees at 23rd mag) and 35 transients will be recorded by a dedicated 2m class telescope operating full time for a year in an orphan afterglow search. @TotaniPanaitescu02 find a somewhat higher rate (a factor $\sim 10$ above the optimistic rate). About 15% of the transients could be discovered with a second exposure of the same area, provided that it follows after 3, 4 and 8 days for $m_{lim}=23$, 25 and 27, respectively. This estimate does not tackle the challenging problem of identifying the afterglows within the collected data. @Rhoads01 suggested identifying afterglow candidates by comparing the multi-color SDSS data to an afterglow template. One orphan afterglow candidate was indeed identified using this technique [@VandenBerketal02]. However, it later turned out to be a variable AGN [@Gal-Yam02]. This event demonstrates the remarkable observational challenge involved in this project.
### Radio Orphan Afterglow {#sec:orphan_radio}
After the Newtonian transition the afterglow expands spherically. The velocities are at most mildly relativistic, so there are no relativistic beaming effects and the afterglow will be observed from all viewing angles. This implies that observations of the rate of orphan GRB afterglows at this stage will give a direct measure of the beaming factor of GRBs. Upper limits on the rate of orphan afterglows will provide a limit on the beaming of GRBs [@PernaLoeb98]. However, as I discuss shortly, somewhat surprisingly, upper limits on the rate of orphan radio afterglows (i.e. non-detection of orphan radio afterglows) provide a lower (and not upper) limit on GRB beaming [@Levinsonetal02].
@FrailWaxmanKulkarni00 estimate the radio emission at this stage using the Sedov-Taylor solution for the hydrodynamics (see §\[sec:Newtonian\]). They find that the radio emission at GHz will be around 1 mJy at the time of the Newtonian transition (typically three months after the burst) and it will decrease like $t^{-3(p-1)/2+3/5}$ (see Eq. \[Sedov\_light\]). Using this limit one can estimate the rate of observed orphan radio afterglows within a given limiting flux. The beaming factor $f^{-1}_b$ arises in two places in this calculation. First, the overall rate of GRBs: $R_{GRB}^{true} \equiv f_b R_{GRB}^{obs}$, increases with $f_b$. Second, the total energy is proportional to $f_b^{-1}$, hence the flux will decrease when $f_b$ increases. The first factor implies that the rate of orphan radio afterglows will increase like $f_b$. To estimate the effect of the second factor @Levinsonetal02 use the fact that (for a fixed observed energy) the time that a radio afterglow is above a given flux is proportional to $E^{10/9}$ in units of the NR transition time, which itself is proportional to $E^{1/3}$. Overall this is proportional to $E^{13/9}$ and hence to $f_b^{-13/9}$. To obtain the overall effect of $f_b$ @Levinsonetal02 integrate over the redshift distribution and obtain the total number of orphan radio afterglows as a function of $f_b$. For the simple limit of a shallow survey (which is applicable to current surveys) typical distances are rather “small", i.e. less than 1Gpc, and cosmological corrections can be neglected. In this case it is straightforward to carry out the integration analytically and obtain the number of radio orphan afterglows in the sky at any given moment [@Levinsonetal02]: $$N_R\simeq 10^{4}f_b^{5/6} (R/0.5)\left(\frac{f_{\nu min}}{5 mJy}
\right)^{-3/2}\left(\frac{\epsilon_e}{0.3}
\right)^{3/2}\left(\frac{\epsilon_B}{0.03}\right)^{9/8}n_{-1}^
{19/24}E_{\rm iso,54}^{11/6}\nu_9^{-3/4}(t_i/3 t_{NR})^{-7/20}.
\label{NR}$$ where $R$ is the observed rate of GRBs per Gpc$^3$ per year, and $t_i$ is the time in which the radio afterglow becomes isotropic.
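As a rough numerical illustration of Eq. \[NR\], the sketch below keeps only the explicit $f_b^{5/6}$ dependence, setting every bracketed ratio to its fiducial value of 1; the helper names and the inversion into a bound on $f_b$ are my own, not from @Levinsonetal02:

```python
# Sketch: with all ratio factors in Eq. (NR) set to 1 the expression
# reduces to N_R ≈ 1e4 * f_b^{5/6} (in the convention R_true = f_b R_obs).
# An observed upper limit N_max on the all-sky orphan count then bounds
# the beaming factor in that convention: f_b < (N_max / 1e4)^{6/5}.
def n_radio_orphans(f_b):
    """All-sky radio orphans at one moment, fiducial Eq. (NR) scaling."""
    return 1.0e4 * f_b ** (5.0 / 6.0)

def f_b_limit(n_max):
    """Beaming-factor bound implied by at most n_max orphans on the sky."""
    return (n_max / 1.0e4) ** (6.0 / 5.0)

print(n_radio_orphans(10.0) > n_radio_orphans(1.0))  # more beaming -> more orphans
```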
@Levinsonetal02 search the FIRST and NVSS surveys for point-like radio transients with flux densities greater than 6 mJy. They find 9 orphan candidates. However, they argue that the possibility that most of these candidates are radio loud AGNs cannot be ruled out without further observations. This analysis sets an upper limit on the all-sky number of radio orphans, which corresponds to a lower limit $f_b^{-1}>10$ on the beaming factor. Rejection of all the candidates found in this search would imply $f_b^{-1}>40$ [@GuettaPiranWaxman03].
Generalizations: VI. Additional Physical Processes
---------------------------------------------------
With the development of the theory of GRB afterglow it was realized that several additional physical ingredients may influence the observed afterglow emission. In this section I review two such processes: (i) pre-acceleration of the surrounding matter by the prompt emission and (ii) decay of neutrons within the outflow.
### Pre-acceleration {#sec:pre-acceleration}
The surrounding regular ISM or even stellar wind is optically thin to the initial pulse. Still, the interaction of the pulse and the surrounding matter may not be trivial. @ThompsonMadau00 pointed out that a small fraction of the radiation will be Compton scattered on the surrounding electrons. The backscattered photons can then interact with the outward-going flux and produce pairs. The pairs increase the rate of backscattering and this could lead to an instability. When a sufficient number of pairs is produced, the surrounding matter feels a significant drag from the flux and is accelerated outwards [@MadauThompson00]. This pre-acceleration of the ambient medium could have several implications for the early afterglow [@Meszaros-Ramirez-Rees01; @Beloborodov_front_02].
The key issue is that while the optical depth of the surrounding medium (as “seen" by the photons) is very small, the mean free path of an ambient electron within the photon flow is short (at small enough radius) and each electron scatters many photons. While the medium absorbs only a small fraction of the prompt energy, the effect of this energy can be significant. @Beloborodov02a characterizes the interaction of the radiation front with the surrounding medium by a dimensionless parameter[^11]: $$\eta = \frac{\sigma_T E_{iso}}{4 \pi R^2 m_ec^2}=6.5
E_{52}R_{16}^{-2} \ ,$$ the energy that a single electron scatters relative to its rest mass energy. @Beloborodov_front_02 calculates the Lorentz factor of the ambient medium and the number of pairs per initial electron as functions of $\eta$. If $\eta<\eta_{load} \approx 20-30$ (depending on the spectrum of the gamma-rays), the medium remains static and $e^\pm$-free. When the front has $\eta>\eta_{load}$, a runaway $e^\pm$ loading occurs. The number of loaded pairs grows exponentially with $\eta$ as long as $\eta<\eta_{acc} =5\eta_{load}=100-150$, reaching a pair-loading factor $f_{acc}=[\exp(\eta_{acc}/\eta_{load})+\exp(-\eta_{acc}/\eta_{load})]/2\approx 74$ at $\eta_{acc}$. The medium is accelerated if $\eta>\eta_{acc}$. $\eta_{acc}$ is around 100 because the electrons are coupled to the ambient ions while, on the other hand, the loaded $e^\pm$ increase the number of scatterers per ion. At $\eta=\eta_{gap}\approx 3\times 10^3$ the matter is accelerated to a Lorentz factor $\Gamma_{ambient}$ that exceeds the Lorentz factor of the ejecta. This implies that the radiation front pushes the medium away from the ejecta and opens a gap.
As the GRB radiation front expands, the energy flux and hence $\eta$ decrease $\propto R^{-2}$. $\eta$ passes through $\eta_{gap}$, $\eta_{acc}$, and $\eta_{load}$ at $R_{gap}$, $R_{acc}$, and $R_{load}$, respectively. These three characteristic radii define four stages:
[**I.**]{} $R<R_{gap}\approx R_{acc}/3$: The ejecta moves in a cavity produced by the radiation front, with $\Gamma_{ambient}>\Gamma_{ejecta}$.
[**II.**]{} $R_{gap}<R<R_{acc} \approx 3 \times 10^{15}E_{52}^{1/2} {\rm ~cm}$: The ejecta sweeps the $e^\pm$-rich medium that has been preaccelerated to $1\ll\Gamma_{ambient}<\Gamma_{ejecta}$.
[**III.**]{} $R_{acc}<R<R_{load}\approx 2.3 R_{acc}$. The ejecta sweeps the “static” medium ($\Gamma_{ambient}\approx 1$) which is still dominated by loaded $e^\pm$.
[**IV.**]{} $R>R_{load}$. The ejecta sweeps the static pair-free medium.
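The four stages can be summarized in a small sketch; the threshold values $\eta_{load}\approx 25$, $\eta_{acc}\approx 125$ and $\eta_{gap}\approx 3\times 10^3$ are representative mid-range choices from the text (the exact values depend on the $\gamma$-ray spectrum):

```python
# Sketch of the four radiation-front stages as a function of
# eta = 6.5 E_52 R_16^{-2}. Thresholds are mid-range values of the
# quoted eta_load ~ 20-30, eta_acc ~ 100-150 and eta_gap ~ 3e3.
ETA_LOAD, ETA_ACC, ETA_GAP = 25.0, 125.0, 3.0e3

def eta(E_52, R_16):
    """Energy scattered per ambient electron, in units of m_e c^2."""
    return 6.5 * E_52 / R_16**2

def stage(E_52, R_16):
    e = eta(E_52, R_16)
    if e > ETA_GAP:
        return "I: gap opened, ejecta moves in a cavity"
    if e > ETA_ACC:
        return "II: preaccelerated e+- rich medium"
    if e > ETA_LOAD:
        return "III: static, pair-loaded medium"
    return "IV: static pair-free medium"

print(stage(1.0, 0.01))  # small radius: eta = 6.5e4, stage I
print(stage(1.0, 1.0))   # eta = 6.5 < eta_load, stage IV
```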
This influence of the radiation front on the surrounding matter may modify the standard picture of the interaction of external shocks with the surrounding medium (see §\[sec:Ex-hydro\]). This depends mostly on the relation between $R_{ext}$ and $R_{gap} \approx 10^{15}E_{52}^{1/2}$ cm. If $R_{ext} > R_{gap}$ this effect won’t be important. However, if $R_{ext} < R_{gap}$ then effective deceleration will begin only at $R_{gap}$. At $R<R_{gap}$ the ejecta freely moves in a cavity cleared by the radiation front and only at $R=R_{gap}$ the blast wave gently begins to sweep the preaccelerated medium with a small relative Lorentz factor. With increasing $R>R_{gap}$, $\Gamma_{ambient}$ falls off quickly, and it approaches $\Gamma_{ambient}=1$ at $R=R_{acc} \approx 3 R_{gap}$ as $\Gamma_{ambient}=(R/R_{acc})^{-6}$. Thus, after a delay, the ejecta suddenly “learns” that there is a substantial amount of ambient material on its way. This resembles a collision with a wall and results in a sharp pulse (see Fig. \[fig:flash\]).
While $R_{gap}$ does not depend on the external density, $R_{ext}$ does (see Eq. \[Rext\]). The condition $R_{ext}<R_{gap}$ implies: $$E_{52}^{1/6}n_1^{1/3} \Gamma^{2/3}_{100} > 0.02 \ .$$ Thus it requires a dense external medium and a large initial Lorentz factor. Otherwise $R_{gap}$ is too large and the deceleration takes place after the gap is closed. Hence the conditions for pre-acceleration will generally occur if the burst takes place in a dense circumburst region, like the wind of a Wolf-Rayet progenitor [@Beloborodov_front_02]. @KumarPanatescu03 elaborate on this model and find that the observational limits by LOTIS and ROTSE on prompt optical emission from various bursts limit the ambient ISM density (within $10^{16}$ cm) to less than $10^{3}~{\rm cm}^{-3}$. Similarly they find that in the case of a wind the ratio of the progenitor’s mass loss rate to the wind’s velocity is below $10^{-6}M_\odot\,{\rm yr}^{-1}/(10^3\,{\rm km~s}^{-1})$.
### Neutron decoupling and decay {#sec:neutrons}
@DerishevKocharovskyKocharovsky01a [@DerishevKocharovskyKocharovsky99b] pointed out that neutrons that are included initially in the fireball will change its dynamics and modify the standard afterglow evolution. While the protons slow down due to the interaction with the surrounding matter, the neutrons coast freely after they decouple, with a Lorentz factor $\Gamma_n$ equal to the Lorentz factor of the flow at the time of decoupling.
At $$R_{decay} \approx 0.3 \times 10^{16} {\rm cm}~ (\Gamma_n/100)$$ the neutrons decay. A new baryonic shell forms ahead of the original fireball shell, with energy comparable to the initial energy of the protons’ shell (this depends, of course, on the initial ratio of neutrons to protons). At this stage the neutron front, which is not slowed down like the rest of the fireball, is at a distance: $$\Delta R= R \left({1 \over 2\Gamma^2} - {1 \over 2\Gamma^2_n}\right) \ ,$$ from the fireball front, where $\Gamma$ is the current Lorentz factor of the fireball.
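The two scales above are easy to evaluate; a minimal sketch with illustrative numbers (note that $R_{decay}\approx c\tau_n\Gamma_n$ with $\tau_n\approx 900$ s the neutron rest-frame lifetime, which reproduces the quoted coefficient):

```python
# Sketch: neutron decay radius R_decay ≈ 0.3e16 cm (Gamma_n/100)
# (comoving lifetime ~900 s boosted by Gamma_n), and the neutron-front
# lead Delta R = R (1/(2 Gamma^2) - 1/(2 Gamma_n^2)), positive once the
# protons have decelerated to Gamma < Gamma_n. Numbers are illustrative.
def r_decay(gamma_n):
    """Neutron decay radius in cm."""
    return 0.3e16 * (gamma_n / 100.0)

def delta_r(R, gamma, gamma_n):
    """Separation (cm) between the coasting neutron front and the fireball."""
    return R * (1.0 / (2.0 * gamma**2) - 1.0 / (2.0 * gamma_n**2))

print(r_decay(300.0))                     # ~1e16 cm for Gamma_n = 300
print(delta_r(1.0e16, 100.0, 300.0) > 0)  # decelerated fireball lags behind
```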
Once more the situation depends on whether $R_{decay}$ is smaller or larger than $R_{ext}$, the original deceleration radius. If $R_{decay}< R_{ext}$: $$E_{52}^{1/3}n_1^{-1/3} \Gamma^{-2/3}_{100} (\Gamma_n/300)^{-1} < 0.06 \ ,$$ the decaying neutron products will mix with the original protons and won’t influence the evolution significantly (apart from adding their energy to the adiabatic fireball energy). Otherwise, the situation depends on $\Gamma_n$, the Lorentz factor at decoupling.
@PruetDalal02 consider a situation in which the neutrons decouple with a low $\Gamma_n$. In this case one gets a delayed shock scenario in which the neutron decay products eventually catch up with the slowing-down protons (when the protons’ Lorentz factor is of order $\Gamma_n$). Along the same line of thought @DalalGriestPruet02 suggest that a large neutron component that may exist within the initial fireball material may help to eliminate the baryon load problem [@SP90].
@Beloboradov03neutron considers a situation when $\Gamma_n \approx \Gamma_0$, the initial Lorentz factor of the protons. In this case the decaying neutrons’ products will be ahead of the shell of the protons. The decay products will interact with the surrounding matter and will begin to slow down. There will be a triple interaction between the two shells and the surrounding ambient medium (resembling to some extent the pre-acceleration scenario described earlier). This will take place at radii of a few times $R_{decay}$ and at an observed time of a few $\times R_{decay}/2 c \Gamma^2 \approx {\rm a~few~seconds} /(\Gamma_n/300)$, i.e. extremely early. This will produce a brightening when the fronts pass $R_{decay}$.
The neutrons could also influence the behavior of the relativistic flow during the prompt (internal shocks) phase. Specifically, inelastic collisions between differentially streaming protons and neutrons can produce pions and eventually $\nu_\mu$’s of 10 GeV as well as $\nu_e$’s of 5 GeV [@BahcallMeszaros00; @MeszarosRees00]. These neutrino fluxes could produce $\sim 7$ events/year in km$^3$ neutrino detectors. GeV photons will also be produced, but it is unlikely that they could be detected.
\[sec:Afterglow-IC\]
Additional Emission from GRBs {#sec:Other}
==============================
TeV {#sec:TeV}
----
@Hurley94 reported the detection of 18 GeV photons from GRB 940217. Milagrito, a TeV detector, discovered a possible TeV signal coincident with GRB 970417 [@Milagrito_970417]. @Gonzalez03 discovered a high energy tail that extended up to 200 MeV from GRB 941017.
A natural source of high energy emission is the SSC (synchrotron self-Compton) component produced by IC from the burst itself or from the afterglow [@MR94a; @Meszaros-Rees-Papathanassiou94]. The SSC photon energy should be $\gamma_e^2$ higher than that of the synchrotron photons. Typical random Lorentz factors of electrons, $\gamma_e$, within internal shocks are of order a thousand (in the fluid’s rest frame). This implies that if the observed emission is produced by synchrotron in internal shocks then the IC emission would produce a second peak around a few hundred GeV. This would be the analogue of the high energy component observed in Blazars. Note that emission above $\sim 10-100$GeV might be self-absorbed by pair production within the source [@Papathanassiou-Meszaros96; @Pilla-Loeb98; @GuettaGranot03z].
The SSC component would be even higher from the early afterglow. The synchrotron emission from the forward shock is expected to be around 10 keV (if the observed early afterglow is indeed produced by the external shocks). With a Lorentz factor of a typical electron around $10^5$ the expected SSC component should be around $100$TeV. Finally the reverse shock emission is expected to produce 100 eV photons [@SP99b]. With typical electron Lorentz factors of a few thousand this should correspond to SSC photons with a typical energy of 100 MeV. Depending on the relevant Y parameter the fluxes of these high energy components should be comparable to or even larger than the prompt GRB fluxes. This emission should be simultaneous with the GRB emission. It is also possible that the forward shock electrons will inverse-Compton scatter reverse shock photons. It is likely that this is the cause of the high energy emission seen in GRB 941017 [@PeerWaxman04; @PiranNakarGranot03].
Other mechanisms can produce high energy emission as well. @Vietri97 suggested that GRBs can accelerate protons up to $10^{20}$eV (see §\[sec:UHECRs\] below). These protons can emit 0.01 of the GRB energy as high energy photons with energies up to 300 GeV. @Bottcher_Dermer98 considered the synchrotron spectrum resulting from high energy protons and leptons produced in a cascade initiated by photo-pion production. They predict a significant flux of 10 MeV-100 GeV photons.
While the high energy photon flux could be significant these photons might not be detectable on earth. The high energy photon flux above 1 TeV would be attenuated significantly due to pair production of such high energy photons with the intergalactic NIR flux [@GouldSchreder67]. @DaiLu02 suggest that secondary emission produced via these interactions (upscattering of the CMB by the produced pairs) would still point towards the initial direction and hence might be detectable as a delayed GeV emission. However, even a tiny intergalactic magnetic field ($>10^{-22}$G) would be sufficient to deflect the electrons and dilute this signal [@GuettaGranot03z].
Neutrinos {#sec:neutrinos}
----------
Neutrinos can be produced in several regions within GRB sources. First, some models, like the Collapsar model or the neutron star merger model, predict ample ($\sim 10^{53}$ ergs) production of low energy (MeV) neutrinos. However, no existing or planned detector could see these from cosmological distances. Furthermore, this signal would be swamped in rate by the much more frequent SN neutrino signals, which would typically come from closer sources.
However, GRBs could be detectable sources of high energy neutrinos, with energies ranging from $10^{14}$eV to $10^{17}$eV. These neutrinos are produced by internal or external shocks of the GRB process itself and hence are independent of the nature of the progenitor.
To understand the process of neutrino emission recall that neutrinos are “best" produced in nature following pion production in proton-photon or proton-proton collisions. The proton-photon process requires that the photon’s energy is around the $\Delta$ resonance in the proton’s rest frame: namely at $\sim 200$MeV. The resulting pion decays, emitting neutrinos with a typical energy of $\sim 50$ MeV in the proton’s rest frame. If the proton is moving relativistically, with a Lorentz factor $\gamma_p$ within the laboratory frame, the required photon energy in the lab frame is smaller by a factor of $\gamma_p$ and the resulting neutrino energy is larger by a factor of $\gamma_p$. Depending on the surrounding environment very energetic pions may lose some of their energy before decaying, producing a “cooling break" in the neutrino spectrum. In this case the resulting neutrinos’ energy will be lower than this naive upper limit.
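The kinematics described above reduces to two one-line scalings; in the sketch below the choice $\gamma_p\sim 2\times 10^6$ is only an example (it reproduces the $\sim 10^{14}$ eV internal-shock neutrinos quoted later in this section):

```python
# Sketch of the Delta-resonance kinematics: in the proton rest frame the
# target photon must have ~200 MeV and the decay neutrino carries
# ~50 MeV; boosting by gamma_p divides the required lab-frame photon
# energy and multiplies the lab-frame neutrino energy by gamma_p.
E_PHOTON_REST_MEV = 200.0   # photon energy at the Delta resonance
E_NU_REST_MEV = 50.0        # typical decay-neutrino energy

def target_photon_energy_mev(gamma_p):
    """Lab-frame photon energy (MeV) needed to hit the resonance."""
    return E_PHOTON_REST_MEV / gamma_p

def neutrino_energy_ev(gamma_p):
    """Lab-frame neutrino energy (eV), ignoring pion cooling."""
    return E_NU_REST_MEV * 1.0e6 * gamma_p

# A proton with gamma_p ~ 2e6 interacting with ~100 eV photons yields
# ~1e14 eV neutrinos.
print(f"{neutrino_energy_ev(2.0e6):.1e}")
```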
Within GRBs protons are accelerated up to $10^{20}$eV [@Waxman95; @Vietri95]. The relevant Lorentz factors of these protons range from $\Gamma$ up to $10^{11}$ (at the very high energy tail of the proton distribution). Thus we expect neutrinos up to $10^{19}$eV, provided that there is a sufficient flux of photons at the relevant energies so that pions can be produced and there are no energy losses to the pions.
@PaczynskiXu94 and @WaxmanBahcall97 calculated the flux of VHE neutrinos from internal shocks. They found that a significant flux of $\sim 10^{14}$eV neutrinos can be produced by interaction of the accelerated internal shocks protons with the GRB photons. @GuettaSpadaWaxman01 estimate that on average each GRB produces a flux of $\sim 10^{-9}$ GeV/cm$^2$ sec sr corresponding to 0.01 events in a km$^3$ detector. Calculations of specific fluxes from individual bursts (that factor in the observed spectrum) were performed by @Guettaetal03. @WaxmanBahcall00 suggest that protons accelerated at the reverse shock (that arises at the beginning of the afterglow) would interact with the optical - uv flux of the afterglow and produce $10^{18}$eV neutrinos.
Within the Collapsar model @WaxmanMeszaros00 [@Razzaque-Meszaros-Waxman03] suggested that as the jet punches through the stellar shell it can produce a flux of TeV neutrinos. Within the Supranova model the internal shock protons [@GuettaGranot03] or external shock protons [@Dermer03] can also interact with external, pulsar wind bubble, photons producing $10^{16}$eV neutrinos with a detection rate comparable to the one obtained from the interaction of the internal shock protons with photons. If the external magnetic field is sufficiently large (as in the pulsar wind bubble) external shocks can also accelerate protons to high energy [@VietriDeMarcoGuetta03]. In this case the protons can interact with afterglow photons and can produce neutrinos up to $10^{17}$eV [@LiDaiLu02].
Cosmic Rays and Ultra High Energy Cosmic Rays {#sec:UHECRs}
----------------------------------------------
Already in 1990 @SP90 noticed that a fireball may produce cosmic rays. However the flux of “low" energy (up to $10^{14}$ eV) that they considered was smaller by several orders of magnitude than the observed flux of cosmic rays that are accelerated in SNRs. Hence this component isn’t important.
@Waxman95 and independently @Vietri95 noticed that protons can be accelerated up to $10^{20}$eV within the relativistic shocks that take place in GRBs. Namely, internal shocks or the reverse shock in GRBs are among the few locations in the Universe where the shock acceleration condition (Eq. \[emax\_acc\]) needed to accelerate protons up to $10^{20}$eV, the Hillas criterion, can be satisfied. Moreover, to within an order of magnitude the $\gamma$-ray energy flux reaching earth from GRBs is comparable to the observed flux of UHECRs (Ultra High Energy Cosmic Rays) [@Waxman95]. Thus, if GRBs produce comparable energies in $\gamma$-rays and in UHECRs they could be the source of the highest energy cosmic rays.
@Greisen66 and @ZK66 (GZK) pointed out that the highest energy CRs (above $10^{19.5}$ eV) are attenuated as they propagate through the Cosmic Microwave Background (CMBR). This happens because at these high energies the protons can interact with the CMBR photons and produce pions. The mean free path of an ultra high energy proton in the CMBR decreases rapidly with energy, and for a $10^{20}$eV proton it is only several tens of Mpc. Thus, the observed UHECRs at energies above the GZK energy ($\sim 10^{19.5}$eV) must arrive from relatively nearby (on a cosmological scale) sources. However, there are no known steady state sources within this distance (see however @FarrarPiran00). GRBs, as a transient phenomenon, could be a “hidden" source of UHECRs. There won’t be a direct association between GRBs and the arrival of UHECRs as the latter are deflected by the intergalactic magnetic field. This leads to an angular deflection as well as a long time delay. If GRBs are the sources of UHECRs then we expect a break in the UHECR spectrum at the GZK energy - below the GZK energy we will detect UHECRs from the whole universe, while above the GZK energy we will detect only “local" UHECRs from within the nearest several dozen Mpc. @BahcallWaxman03 suggested that recent observations imply that such a break has been seen. However, the observational situation is not clear as yet and a final resolution will most likely require the Auger UHECR detector.
Gravitational Radiation {#sec:Gravitational-rad}
------------------------
Like GRBs, typical sources of gravitational radiation involve the formation of compact objects. Hence it is reasonable to expect that gravitational waves will accompany GRBs. This association is indirect: the gravitational waves are not directly related to the GRB. Additionally, GRBs have their own, albeit weak, gravitational radiation pulse which arises during the acceleration of the jets to relativistic velocities. Unfortunately this signal is weak and, moreover, it is emitted mostly perpendicular to the GRB beam.
To estimate the rates of observed gravitational radiation events associated with GRBs we use the rate of long GRBs. The nearest (long) GRB detected within a year would be at 1Gpc. As GRBs are beamed the nearest (long) event would be much nearer, at $135 \theta_{0.1}^{2/3}$Mpc. However, this burst would be directed away from us. Still, a GRB that is beamed away from us is expected to produce an “orphan" afterglow.
The rate of short bursts is less certain. @Schmidt01a estimates that the rate of short GRBs is smaller by a factor of two than the rate of long ones. In this case the distances mentioned above should be revised up by a factor of 1.25. However, if the rate of short GRBs is larger by a factor of 10 than the rate of long ones then the corresponding distances should be revised downwards by a factor of $10^{-1/3}$. This would put one event per year at $\sim 80\theta_{0.1}^{2/3}$Mpc, but once again this burst won’t be pointing towards us. The nearest event with a burst in our direction would be at $\sim 450$Mpc.
### Gravitational Radiation from Neutron Star Mergers {#sec:GRBNS}
Binary neutron star mergers are the “canonical” sources of gravitational radiation. LIGO and VIRGO both aim at detecting these sources. Specifically, the goal of these detectors is to detect the characteristic “chirping" signals arising from the in-spiraling phase of these events. The possibility of detecting such signals has been extensively discussed (see e.g. [@LIGO-merger]). Such events could be detected up to a distance of several tens of Mpc with LIGO I and up to $\sim 100$ Mpc with LIGO II.
Comparing with GRB rates we find that if, as some expect, neutron star mergers are associated with short GRBs and if the rate of short GRBs is indeed large, then we have one event per year within the sensitivity of LIGO II and marginally detectable by LIGO I. However, this burst will be pointing away from us.
The detection of the chirping merger signal is based on fitting the gravitational radiation signal to pre-calculated templates. @Kochaneck_Piran93 suggested that the detection of a merger gravitational radiation signal would require a lower S/N ratio if this signal coincides with a GRB. This would increase somewhat the effective sensitivity of LIGO and VIRGO to such events. @Finnetal99 suggest using the association of GRBs with sources of gravitational waves in a statistical manner and propose to search for enhanced gravitational radiation activity towards the direction of a GRB during the short periods when GRBs are detected. Given the huge distances of observed GRBs it is not clear whether any of these techniques will be useful.
### Gravitational Radiation from Collapsars {#sec:GRCol}
The Collapsar model [@Woosley93; @Pac98; @MacFadyen_W99] is based on the collapse of the core of a massive star to a black hole surrounded by a thick massive accretion disk. As far as gravitational radiation is concerned this system is very similar to a regular supernova. Rotating gravitational collapse has been analyzed by @Stark_Piran85. They find that the gravitational radiation emitted in a rotating collapse to a black hole is dominated by the black hole’s lowest normal modes, with a typical frequency of $\sim 20c^3/GM$. The total energy emitted is: $${\Delta E_{GW}} = \epsilon M c^2 = {\rm min}(1.4 \cdot 10^{-3}
a^4, \epsilon_{max}) M c^2 \ ,$$ where $a$ is the dimensionless specific angular momentum and $\epsilon_{max}$ is a maximal efficiency which is of the order of a few $\times 10^{-4}$. The expected amplitude of the gravitational radiation signal, $h$, would be of the order of $\sqrt{\epsilon} GM/c^2 d$ where $d$ is the distance to the source. Even LIGO II won’t be sensitive enough to detect such a signal from a distance of 1Gpc or even from 100 Mpc.
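The efficiency formula above saturates quickly; a minimal sketch (the specific $\epsilon_{max}=3\times 10^{-4}$ is one illustrative choice within the quoted "a few $\times 10^{-4}$"):

```python
# Sketch of the Stark & Piran efficiency quoted above:
# Delta E_GW = min(1.4e-3 a^4, eps_max) M c^2. With eps_max of order a
# few times 1e-4 the a^4 growth saturates near a ~ 0.7.
EPS_MAX = 3.0e-4   # illustrative choice within "a few x 10^-4"

def gw_efficiency(a):
    """Fraction of M c^2 radiated in a rotating collapse with spin a."""
    return min(1.4e-3 * a**4, EPS_MAX)

print(gw_efficiency(0.5))  # 1.4e-3 * 0.5^4 = 8.75e-5, below saturation
print(gw_efficiency(1.0))  # capped at EPS_MAX
```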
### Gravitational Radiation from Supranova {#sec:GRSUP}
According to the Supranova model a GRB arises when a neutron star collapses to a black hole. This collapse takes place several weeks or months after the supernova that formed the neutron star (see §\[sec:Supranova\]). The expected gravitational wave signal from a Supranova [@VietriStella98] includes two components. First, the signal from the initial supernova is similar to the gravitational wave signal from the collapsar model. However, here the first collapse (the supernova) takes place several weeks or months before the GRB. Thus, there won’t be any correlation between the gravitational waves emitted by the first collapse and the GRB. A second component may arise from the second collapse, from the supramassive neutron star to a black hole. This signal should coincide with the GRB.
### Gravitational Radiation from the GRB {#sec:GRGRB}
The most efficient generation of gravitational radiation could take place here during the acceleration phase, in which the mass is accelerated to a Lorentz factor $\Gamma$. To estimate this emission I follow the analysis of @Weinberg73 of gravitational radiation emitted in a relativistic collision between two particles. Consider the following simple toy model: two particles at rest with a mass $M$ that are accelerated instantly at $t=0$ to a Lorentz factor $\Gamma$ and energy $E$. Conservation of energy requires that some (actually most) of the rest mass is converted to kinetic energy during the acceleration, and the rest mass of each accelerated particle is $m = E/\Gamma = M/\Gamma$ (in units with $c=1$). The energy emitted per unit frequency per unit solid angle in the direction at an angle $\alpha$ relative to $\vec \beta$ is: $${d E \over d \Omega d \omega} = {G M^2 \beta^2 \over c \pi^2}
\big[ {\Gamma^2 (\beta^2 - \cos^2\alpha) \over (1 - \beta^2
\cos^2\alpha)^2} + { \cos^2\alpha \over \Gamma^2 (1 - \beta^2
\cos^2\alpha)^2} \big]\ . \label{fluxgrav}$$ The result is independent of the frequency, implying that the integral over all frequencies will diverge. This nonphysical divergence arises from the nonphysical assumption that the acceleration is instantaneous. In reality this acceleration takes place over a time $\delta t$, which is of order 0.01 sec. This would produce a cutoff $\omega_{max} \sim 2 \pi / \delta t$ above which Eq. \[fluxgrav\] is not valid. The angular distribution found in Eq. \[fluxgrav\] is disappointing. The EM emission from the ultrarelativistic source is beamed forwards into a small angle $1/\Gamma$, enhancing the emission in the forward direction by a large factor ($\Gamma^2$). The gravitational radiation from this relativistic ejecta is spread rather uniformly in almost all $4\pi$ steradians. Instead of beaming there is “anti-beaming" with no radiation at all emitted within the forward angle $1/\Gamma$ along the direction of the relativistic motion.
Integration of the energy flux over different directions yields: $${d E \over d \omega} = {G M^2 \over c \pi^2} [ 2 \Gamma^2 + 1+ {(
1 - 4 \Gamma^2) \over \Gamma^2 \beta} \arctan(\beta)] \
.\label{energy_flux}$$ As expected the total energy emitted is proportional to $m^2
\Gamma^2$. Further integration over frequencies up to the cutoff $2 \pi / \delta t$ yields: $$E \approx { 2 G M^2 \Gamma^2 \over c \pi \delta t } \ .$$
In reality the situation is much more complicated than the one presented here. First, the angular width of the emitted blobs is larger than $1/\Gamma$. The superposition of emission from different directions washes out the no-emission effect in the forward direction. Additionally, according to the internal shocks model, the acceleration of different blobs goes on independently, and the emission from different blobs should be combined to obtain the actual emission. Both effects [*reduce*]{} the effective emission of gravitational radiation and make the above estimate an upper limit on the actual emission.
The gravitational signal is spread in all directions (apart from a narrow beam along the direction of the relativistic motion of the GRB). It ranges in frequency from $0$ to $f_{max} \approx 100$Hz. The amplitude of the gravitational radiation signal at the maximal frequency, $f_{max} \approx 100$Hz, would be: $ h \approx (GM\Gamma^2 /c^2 d) $. For typical values of $E=M\Gamma = 10^{51}$ ergs, $\delta t = 0.01$ sec and a distance of $500$ Mpc, $ h \approx .5 \cdot 10^{-25}$, far below the sensitivity of planned gravitational radiation detectors. Even if a burst were ten times nearer, this “direct” gravitational radiation signal would still be undetectable.
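As a back-of-the-envelope check of the quoted amplitude: grouping the estimate as $h \sim GE/c^4 d$ (this grouping, with $E = 10^{51}$ ergs the total energy, is my assumption) reproduces the quoted $h \approx .5 \cdot 10^{-25}$ with the text's numbers:

```python
# Order-of-magnitude check of the quoted strain amplitude, in cgs units.
# Grouping the estimate as h ~ G*E/(c^4*d) is an assumption; E and d are
# the values quoted in the text.
G   = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
c   = 2.998e10    # speed of light, cm/s
Mpc = 3.086e24    # cm

E = 1e51          # erg, total energy of the relativistic ejecta
d = 500 * Mpc     # distance to the burst

h = G * E / (c**4 * d)   # dimensionless strain, ~0.5e-25 as quoted
print(f"h ~ {h:.1e}")
```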
Some specific models for GRBs’ inner engine predict an additional amount of energy emitted as gravitational radiation. For example @vanPutten01 [@vanPuttenLevinson01] suggest a model of a black hole - accretion torus in which a large fraction of the energy emitted by the black hole - accretion torus system escapes as gravitational radiation. The radiation arises due to instabilities within the torus that break down the axial symmetry. They estimate that as much as $10^{53}$ergs would be emitted as gravitational radiation, which would have a characteristic signature corresponding to the normal modes of the black hole - accretion torus system, with typical frequencies around a few hundred Hz, conveniently within the frequency range of LIGO/VIRGO. If correct, then GRBs are the most powerful burst-sources of gravitational waves in the Universe [@vanPutten01].
MODELS OF INNER ENGINES {#sec:inner-engine}
========================
The Fireball model tells us how GRBs operate. However, it does not answer the most interesting astrophysical question: what produces them? Which astrophysical process generates the energetic ultrarelativistic flows needed for the Fireball model? Several observational clues help us answer these questions. The total energy involved is large, $\sim 10^{51}$ergs, a significant fraction of the binding energy of a stellar compact object. [*The “inner engine" must be able to generate this energy and accelerate $\sim 10^{-5}M_\odot$ (or the equivalent in terms of Poynting flux) to relativistic velocities.*]{} Most GRBs are collimated, with typical opening angles $1^o<\theta<20^o$. [*The “inner engine" must be able to collimate the relativistic flow.*]{} The bursts are divided into two groups according to their overall duration: long bursts with $T>2$sec and short ones with $T<2$sec. As the duration is determined by the inner engine, this may imply that there are two different inner engines. GRBs take place once per $3 \cdot 10^5$ yr per galaxy. [*GRBs are very rare, at about 1/3000 the rate of supernovae.*]{} The variability time scale, $\delta t$, is at times as short as 1ms. The overall duration (of long GRBs), $T$, is of the order of 50sec. According to the internal shocks model these time scales are determined by the activity of the “inner engine". [*$\delta t \sim 1$ ms suggests a compact object. $T \sim 50$sec is much longer than the dynamical time scale, suggesting prolonged activity.[^12] This requires two (or possibly three [@Ramirez-Ruiz_Merloni01; @NakarPiran02a]) different time scales operating within the “inner engine", and rules out any “explosive" model that releases the energy in a single explosion.*]{}
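The two rate statements above are mutually consistent, as a one-line check shows (the canonical value of roughly one supernova per century per galaxy is my assumption, not stated in the text):

```python
# Cross-check: a GRB once per 3e5 yr per galaxy vs. ~1/3000 of the SN rate.
grb_rate = 1.0 / 3e5    # GRBs per year per galaxy (from the text)
sn_rate  = 1.0 / 100    # SNe per year per galaxy (assumed canonical value)

# ratio of rates -> ~1/3000, matching the quoted figure
print(f"GRB rate / SN rate = 1/{sn_rate / grb_rate:.0f}")
```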
These clues, most specifically the last one, suggest that GRBs arise due to the accretion of a massive ($\sim 0.1 M_\odot$) disk onto a compact object, most likely a newborn black hole. A compact object is required because of the short time scales. Accretion is needed to produce the two different time scales, and in particular the prolonged activity. A massive ($\sim 0.1 M_\odot$) disk is required because of the energetics. Such a massive disk can form only simultaneously with the formation of the compact object. This leads to the conclusion that GRBs accompany the formation of black holes. This model is supported by the observations of relativistic (though not as relativistic as in GRBs) jets in AGNs, which are powered by accretion onto black holes.
An important alternative to accretion is Usov’s model [@Usov92; @Usov94] in which the relativistic flow is mostly Poynting flux and it is driven by the magnetic and rotational energies of a newborn rapidly rotating neutron star.
Black hole accretion {#sec:accretion}
--------------------
Several scenarios could lead to a black hole - massive accretion disk system. These include mergers (NS-NS binaries [@Eichler_LPS89; @NPP92], NS-BH binaries [@Pac91], WD-BH binaries [@Fryer_WHD99], BH-He-star binaries [@Fryer_Woosley98]) and models based on “failed supernovae” or “Collapsars” [@Woosley93; @Pac98; @MacFadyen_W99]. @Narayan_P_Kumar01 have recently shown that accretion theory suggests that, of all the above scenarios, only Collapsars could produce long bursts and only NS-NS (or NS-BH) mergers could produce short bursts. The basic idea is that the duration of the accretion depends on the size of the disk. So short bursts must be produced by small disks, and those are naturally produced in mergers. On the other hand, long bursts require large disks. However, large disks are inefficient. One can overcome this with a small disk that is fed continuously; in this case the efficiency can be large and the duration long. This happens naturally within the collapsar model.
The Pulsar Model {#sec:Pulsar}
----------------
Several “inner engine" models involve pulsar-like activity of the inner engine, which is directly connected to a Poynting flux dominated relativistic flow (in contrast to a baryonic flux dominated flow). Energy considerations require extremely large magnetic fields, of the order of $10^{15}$G, within such sources.
@Usov92 suggested that GRBs arise during the formation of rapidly rotating highly magnetized neutron stars. Such objects could form by the gravitational collapse of accreting white dwarfs with anomalously high magnetic fields in binaries, as in magnetic cataclysmic binaries. The rapidly rotating and strongly magnetized neutron stars would lose their rotational kinetic energy on a timescale of seconds or less in a pulsar-like mechanism. The energy available in this case is the rotational and magnetic energy of the neutron star, which is of the order of a few $\times 10^{51}$ergs for a neutron star rotating near breakup. The rotation of the magnetic field creates a strong electric field and an electron-positron plasma which is initially optically thick and in quasi-thermodynamic equilibrium. Additionally a very strong magnetic field forms. The pulsar produces a relativistic Poynting flux dominated flow.
While a Poynting flux dominated flow may be dissipated in regular internal shocks, @Usov94 and @Thompson94 discuss a scheme in which the energy is transferred from the magnetic field to the plasma, and then via a plasma instability to the observed radiation, outside the photosphere, which is at around $10^{13}$cm. At this distance the MHD approximation of the pulsar wind breaks down and intense electromagnetic waves are generated. The particles are accelerated by these electromagnetic waves to Lorentz factors of $10^6$ and produce the non-thermal spectrum. @SmolskyUsov96 [@SmolskyUsov00] and @SpruitDaigneDrenkhahn01 [@DrenkhahnSpruit02] discuss various aspects of the conversion of the Poynting flux energy to the energy of the emitting particles, but these issues are more related to the nature of the emitting regions and only indirectly to the nature of the inner engine.
Usov’s model is based on a rotating highly magnetized neutron star, and from this point of view it indeed resembles to a large extent a regular pulsar. Other authors consider pulsar-like activity in other contexts. @Katz97, for example, considers a black hole - thick disk model in which electromagnetic processes turn rotational energy into particle energy in a pulsar-like mechanism. @MR97b discuss a related idea on the formation of a Poynting flux dominated flow within a black hole - accretion disk model.
Rotating black holes and the Blandford Znajek mechanism
-------------------------------------------------------
It is possible and even likely that the process of energy extraction involves the Blandford-Znajek mechanism [@BlandfordZnajek], in which the black hole - torus system is engulfed in a magnetic field and the rotational energy of the black hole is extracted via this magnetic field. The exploration of the Blandford-Znajek mechanism involves relativistic MHD considerations which are beyond the scope of this review. I refer the reader to several recent extensive reviews on this subject (see e.g. @LeeWijersBrown00).
The Collapsar Model {#sec:Collapsar}
--------------------
The evidence for the association of (long) GRBs with supernovae (see @Bloom02 and §\[sec:obs-SN\]) provides strong support for the Collapsar model. @Woosley93 proposed that GRBs arise from the collapse of a single Wolf-Rayet star endowed with fast rotation (a ’failed’ Type Ib supernova). @Pac98 pointed out that there is tentative evidence that GRBs 970228, 970508, and 970828 were close to star-forming regions and that this suggests that GRBs are linked to cataclysmic deaths of massive stars. @MacFadyen_W99 began a series of calculations [@AloyEtal00; @MacFadyenWoosleyHeger01M; @ZhangWoosleyMacFadyen03] of a relativistic jet propagating through the stellar envelope of the collapsing star, which is the most important ingredient unique to this model (other features, like the accretion process onto the black hole, the corresponding particle acceleration and, to some extent, the collimation process, are common to other models). The collimation of a jet by the stellar mantle was demonstrated analytically by @Meszaros-Rees01. @ZhangWoosleyMacFadyen03 numerically confirmed and extended the basic features of this collimation process.
According to the Collapsar model the massive iron core of a rapidly rotating massive star, of mass $M>30M_\odot$, collapses to a black hole (either directly or during the accretion phase that follows the core collapse). An accretion disk forms around this black hole, and a funnel forms along the rotation axis, where the stellar material has relatively little rotational support. The mass of the accretion disk is around 0.1 $M_\odot$. Accretion of this disk onto the black hole takes place over several dozen seconds and powers the GRB. Energy can be extracted via neutrino annihilation [@MacFadyen_W99] or via the Blandford-Znajek mechanism. The energy deposited in the surrounding matter will preferentially leak out along the rotation axis, producing jets with opening angles of $<10^o$. If the jets are powerful enough they penetrate the stellar envelope and produce the GRB.
@ZhangWoosleyMacFadyen03 find that relativistic jets are collimated by their passage through the stellar mantle. Starting with an initial half-angle of up to $20^o$, the jet emerges with half-angles that, though variable with time, are around $5^o$. The jet becomes very hot in this phase and it has only a moderate Lorentz factor, modulated by mixing, and a very large internal energy (more than $80\%$ of the total energy). As the jet escapes, conversion of the remaining internal energy into kinetic energy gives terminal Lorentz factors along the axis of $\sim 150$ (depending, of course, on the initial conditions considered). Because of the large ratio of internal to kinetic energy in both the jet and its cocoon, the opening angle of the final jet is significantly greater than at breakout. A small amount of material emerges at large angles, but with a Lorentz factor still sufficiently large to make a weak GRB. When the jet breaks out from the star it may produce a thermal precursor (seen in several GRBs) [@Pac98; @Ramirez-RuizMacFadyenLazzati02; @WaxmanMeszaros03]. Instabilities in the accretion process, or in the passage of the jet through the stellar envelope [@AloyEtal02; @ZhangWoosleyMacFadyen03] can produce the required variability in the Lorentz factor that is needed to produce internal shocks.
The processes of core collapse, accretion along the polar column (which is essential in order to create the funnel) and the jet propagation through the stellar envelope take together $\sim
10$sec [@MacFadyen_W99]. The duration of the accretion onto the black hole is expected to take several dozen seconds. These arguments imply that Collapsars are expected to produce long GRBs (see, however, @ZhangWoosleyMacFadyen03 for a suggestion that the breakout of a relativistic jet and its collision with the stellar wind will produce a brief transient with properties similar to the class of “short-hard” GRBs).
The Supranova Model {#sec:Supranova}
-------------------
@VietriStella98 suggested that GRBs take place when a “supermassive" (or supramassive, as @VietriStella98 call it) neutron star (namely a neutron star that is above the maximal cold nonrotating neutron star mass) collapses to a black hole. The collapse can take place because the neutron star loses angular momentum via a pulsar wind, and with it the extra support of the centrifugal force. Alternatively, the supramassive neutron star can simply cool and become unstable if rotation alone is not enough to support it. The neutron star could also become overmassive and collapse if it slowly accretes matter from a surrounding accretion disk [@VietriStella99]. In this latter case the time delay from the SN could be very large and the SNR will not play any role in the GRB or its afterglow.
The Supranova model is a two step event. First, there is a supernova, which may be more energetic than an average one, in which the supermassive neutron star forms. Then, a few weeks or months later, this neutron star collapses, producing the GRB. While both the Supranova and the Collapsar (or hypernova) events are associated with supernovae or supernova-like events, the details of the models are very different. First, while in the Collapsar model one expects a supernova bump on the afterglow light curve, such a bump is not expected in the Supranova model unless the time delay is a few days. On the other hand, while it is not clear in the Collapsar model how the Fe needed for the Fe lines reaches the implied large distances from the center, this is obvious in the Supranova model, as the supernova shell was ejected to space several months before the GRB. As mentioned earlier (see §\[sec:obs-SN\]) the association of GRB 030329 with SN 2003dh [@Stanek03SN; @Hjorth03SN] is incompatible with the Supranova model. Proponents of this model argue, however, that there might be a distribution of delay times between the first and second collapses.
The models are also very different in their physical content. First, in the Supranova model the GRB jet does not have to punch a hole through the stellar envelope. Instead the ejecta propagates in almost free space, polluted possibly by a pulsar wind [@GranotKonigl; @GuettaGranot03]. In both models, like in many other models, the GRB is powered by accretion of a massive accretion disk surrounding the newborn black hole. This accretion disk forms, from the debris of the collapsing neutron star, at the same time that the black hole is formed. Again, the time scale of the burst is determined by the accretion time of this disk. @Narayan_P_Kumar01 (see also §\[sec:accretion\]) point out, however, that long lived (50 sec) accretion disks must be large and hence extremely inefficient. This may pose a problem for this model.
@GranotKonigl, @GuettaGranot03 and @GuettaInue considered the effects of a strong pulsar wind (that may exist after the SN and before the second collapse) on this scenario. The pulsar wind can have several effects. First, it would produce a denser, highly magnetized medium into which the GRB jet propagates. The strong magnetic field will be amplified by the afterglow shock. This resolves the problem of the source of the strong magnetic field needed for the synchrotron afterglow model. It can also explain the high energy emission detected by EGRET in GRB 940217 (@Hurley94 and §\[sec:spec-obs\]) by Inverse Compton scattering on the pulsar wind bubble photons. On the other hand, the density of this wind matter ($\sim 10^3$cm$^{-3}$) might be too high for the spherical model. Note, however, that this density depends on the time delay as $t^{-3}$. Moreover, the pulsar wind won’t be spherical, and one would expect it to form an elongated supernova-shell cavity within which the pulsar wind is bounded. If, as expected, the pulsar jet coincides with the GRB jet, then the relativistic ejecta will move along the elongated direction of this shell.
Merging neutron stars {#sec:NSmergers}
---------------------
Neutron star binary mergers [@Eichler_LPS89; @NPP92] or neutron star - black hole binary mergers [@Pac91] (hereafter called mergers) also produce a black hole - accretion disk system and are candidates for the inner engines of GRBs, specifically of short GRBs. These mergers take place because of the decay of the binary orbits due to gravitational radiation emission as was beautifully demonstrated in the famous binary pulsar PSR 1913+16 [@TaylorWeisberg82].
These mergers take place at a rate of $\approx 10^{-6}$ events per year per galaxy [@NPS91; @Phinney91; @vandenHeuvelLorimer96]. This is the rate of mergers of binaries of the type of PSR 1913+16, whose lifetime is of order several $10^8$ years. Various population synthesis calculations suggest that there is also another population of short-lived binaries [@TutukovYungelson93; @TutukovYungelson94; @BelczynskiBulikKalogera02; @PernaBelczynski02]. These binaries form with very close orbits and hence with short lifetimes, of the order of $10^5$yrs. Even though the overall rate of such mergers could be comparable to that of the PSR 1913+16 type, one cannot expect to catch a binary in our galaxy in such a stage. Similarly, unlike the long-lived binaries, which may be kicked out of their host galaxy during their long lifetime [@NPP92; @BulikBelczynskiZbijewski99], this short-lived population remains within the galaxy when they merge [@BelczynskiBulikKalogera02].
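The statement that one cannot expect to catch a short-lived binary in the Galaxy follows from a steady-state estimate (equal birthrates for the two populations, set to the quoted merger rate, and a $3\cdot 10^8$ yr lifetime for the long-lived systems are assumptions made for illustration):

```python
# Steady-state estimate: expected number of binaries present per galaxy
# = birthrate x lifetime.  The equal birthrates and the specific
# lifetimes used here are illustrative assumptions.
birthrate = 1e-6     # binaries formed per year per galaxy (~ merger rate)
tau_long  = 3e8      # yr, lifetime of PSR 1913+16-like binaries
tau_short = 1e5      # yr, lifetime of the close-orbit population

n_long  = birthrate * tau_long    # ~300 systems: some observable as pulsars
n_short = birthrate * tau_short   # ~0.1 systems: none expected at any moment
print(n_long, n_short)
```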
Earlier simulations of mergers focused on the gravitational radiation from the system. @DaviesEtal94 began a series of numerical simulations of neutron star mergers that focused on GRB-related aspects [@RosswogEtal99; @RosswogEtal00; @AyalEtal01; @RosswogDavies02]. Using an SPH scheme they followed NS mergers under different assumptions (Newtonian with an ad hoc addition of gravitational radiation back reaction, or post-Newtonian), with different equations of state (adiabatic or realistic), with different initial spin axes and mass ratios, and with different estimates of the effects of neutrino cooling. A parallel set of simulations was carried out by @RuffertJankaSchafer95 [@JankaRuffert96; @RuffertJanka98; @RuffertJanka99; @RuffertJanka01], who used particle-in-cell methods. Both kinds of simulations yield comparable results. The merger results in a black hole - accretion disk system. The mass of the accretion disk is of order 0.1$M_\odot$ and it depends, of course, somewhat on the orientation of the spins and the relative masses of the two neutron stars.
A merger releases $\sim 5 \times 10^{53}$ergs, but most of this energy is in the form of low energy neutrinos and gravitational waves. Still, there is enough energy available to power a GRB, but it is not clear how the GRB is produced. A central question is, of course, how does a merger generate the relativistic wind required to power a GRB. @Eichler_LPS89 suggested that about one thousandth of these neutrinos annihilate and produce pairs that in turn produce gamma-rays via $\nu \bar \nu \rightarrow e^+
e^- \rightarrow \gamma\gamma$. This idea was criticized on several grounds by different authors; the main problem is that it does not produce enough energy. For example, @jaroszynksi96 pointed out that a large fraction of the neutrinos will be swallowed by the black hole that forms. An alternative source of energy within the merger model is the accretion power of a disk that forms around the black hole. This brings us back to the canonical black hole - accretion disk scenario.
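The energetics objection can be sketched with the numbers quoted above (treating the “one thousandth" annihilation fraction as an overall energy efficiency, and using the $10^{51}$ ergs figure quoted earlier as the requirement, is a simplification on my part):

```python
# Energy budget of the nu nu-bar annihilation mechanism.
e_merger   = 5e53    # erg, total energy released in a merger (from the text)
efficiency = 1e-3    # "about one thousandth" of the neutrinos annihilate
e_required = 1e51    # erg, typical GRB energy (from the text)

e_pairs = e_merger * efficiency       # 5e50 erg available in pairs
print(e_pairs, e_pairs / e_required)  # marginal relative to the requirement
```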
Open Questions and Future Prospects
===================================
I believe that overall we have a basic understanding of the GRB phenomenon. As usual some aspects are understood better than others.
There is a very good understanding of the afterglow. Here there are numerous observations over a wide range of wavelengths with which the theory can be confronted. The overall picture, of a slowing down relativistic flow and of synchrotron emission, fits the data to a large extent (see e.g. @WijersGalama99 [@PanaitescuK01] and many other fits of the observations to the model). We have already learned that the “cow is not spherical", namely that the relativistic flow is collimated. New observations, like those of GRB 021004 and GRB 030329, pose at times new puzzles and suggest that the basic simple picture has to be refined. It seems, however, that the answers are within the scope of the current model, such as: refreshed shocks, patchy shells and variable external densities. All these phenomena are fairly reasonable in a realistic environment. Within the afterglow, I believe that the lines pose the greatest current puzzle, in terms of their energy requirements and other implications for the source (see @Lazzati02). Another interesting open question is what distinguishes between GHOSTs and OTGRBs - an environmental mechanism (extinction or “improper" conditions within the circum-burst matter) or an intrinsic one?
The main observational challenge concerning the afterglow is the determination of whether short GRBs have afterglows. A wealth of information on long GRBs arises from the information on hosts, environments and redshifts that is determined from the afterglow observations. All these are missing for short GRBs. If short GRBs don’t have afterglows, then an immediate theoretical question is: why? Is it possible that they are produced in a very different environment than long ones (such as outside their original galaxies), in a region with no circum-burst matter suitable for producing an afterglow? At the moment the observational situation is not clear. The coming Swift satellite may resolve this mystery.
Another important observational question involves the search for orphan afterglows (either in radio or in optical). Their detection will establish the collimated jets picture, but even firm upper limits will set independent limits on the rates of GRBs. However, as mentioned in §\[sec:orphan\], this is a very challenging observational task. It also has important implications for the nature of the jets - are GRB jets standard, with a fixed angular structure [@Lipunov_Postnov_Pro01; @Rossi02; @Zhang02]? This question is related both to the overall energetics and to the rate of GRBs.
Another interesting challenge will be the resolution of the afterglow image (see @GPS99c). This may be possible in radio for a nearby burst, and the afterglow of GRB 030329 provides an excellent candidate for that. Polarization measurements could pave the way for an understanding of the collimation geometry and for a confirmation of the synchrotron emission process.

As we move backwards in time towards the burst we encounter the very early afterglow and the optical flash that coincides with the burst itself. Here great progress was made with recent observations triggered by HETE II (e.g. the almost complete light curve of GRB 021004 [@Foxetal03]). SWIFT may contribute a lot to this issue. These observations could shed light on issues like the role of pre-acceleration and neutrons, which are as yet unclear. Here, I stress the importance of early and continuous radio observations, which could determine whether there are refreshed shocks during the early afterglow, which have a clear radio signature [@KP00a].
The understanding of the emitting regions is less clear. Here, within the internal shocks model, there is a reasonable understanding of the temporal structure (in terms of the activity of the inner engine). However, it is not clear how the observed spectrum is produced, and it seems that the simple synchrotron spectrum has to be modified (see e.g. @LloydPetrosian00 [@Medvedev00] for ideas on such modifications). Another possibly related puzzle is the origin of the narrow $E_p$ distribution (see however, e.g. @DaigneMochkovitch98 [@Guetta_Spada_Waxman01; @DaigneMochkovitch03]). Another set of open questions concerns the origin of the intrinsic correlation between luminosity (which in fact reflects the collimation angle [@Frail01; @PanaitescuK01]) and variability discovered by @Fenimore_Ramirez-Ruiz01, and of the lag-luminosity relation discovered by @Norris_lags00. Similarly, or even more, puzzling are the implied correlations between redshift and intrinsic luminosity [@Lloyd-RonningFryerRamirez-Ruiz02] and between redshift and intrinsic hardness [@Schmidt01] (note that this latter correlation is essential in view of the narrow $E_p$ distribution of GRBs). Here pairs [@Ghisellini_Celotti99] and IC can play an important role. Basic open theoretical questions that arise here (as well as in the theory of the afterglow) concern the behavior of collisionless shocks (see e.g. @Medvedev01 [@NiktoMedvedev01]), particle acceleration (see §\[sec:acc\]) and the generation of strong magnetic fields (see [@MedvedevLoeb99]). Issues like relativistic turbulence and relativistic plasma instabilities might play an important role here (see e.g. @LyutikovBlandford02).
From an observational point of view, it will be a challenge to beat the statistical power of the BATSE data in terms of the number of bursts. Somewhat surprisingly, the questions of what is the luminosity function of GRBs, what is the rate of GRBs as a function of redshift and to what extent GRBs follow the star formation rate are still open. Detectors with better spectral resolution could shed some additional light on the spectrum. Another hope for new data, or at least for upper limits, arises from observational windows in higher bands. On the low energy side it seems that there is a continuum between XRFs and GRBs [@BATSE_XRF; @BarraudEtal03]. This result still has to be fully understood in the context of the narrow $E_p$ distribution.
Looking far into the future one can hope to observe neutrinos or gravitational radiation correlated with GRBs. UHE neutrinos (fluxes of MeV neutrinos would be too weak to be detected from cosmological distances) could confirm that protons are accelerated to UHE energies within GRBs. In turn this would prove (or disprove) the possible role of GRBs as sources of UHECRs. Gravitational radiation could give a direct clue on the activity of the inner engine (see §\[sec:GRGRB\]) and identify, for example, merger events.
There is a lot of observational evidence associating long GRBs with core collapse SNe. This gives a clear clue on what is the inner engine of long GRBs. There is no direct or indirect evidence on the progenitors of short GRBs. Even with this clue the situation is far from clear when we turn to the inner engine. Here most models assume some variant of a black hole - torus system with various energy extraction mechanisms, ranging from neutrino annihilation (which is less likely) to variants on the theme of electromagnetic extraction (magnetic turbulence within the accretion disk; the Blandford-Znajek mechanism, which involves a disk - black hole - magnetic field interaction; pulsar-like activity). Here there are open questions all around: What is the content of the ultrarelativistic flow - baryonic or Poynting flux? How is the flow accelerated and collimated? What determines the variability of the flow (required for internal shocks) and the different time scales? This part of the model seems to be in rather poor shape - but this is understandable, as we don’t have any direct observations of this inner engine. One hope arises from the emerging similarity between GRBs, galactic microquasars and AGNs. All these systems accelerate collimated flows to relativistic velocities and they all seem to involve accretion onto black holes. Hopefully, this similarity could lead to a common understanding of how inner engines operate in all these systems.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank J. Granot, D. Guetta, P. Kumar, E. Nakar and R. Sari for many helpful discussions and J. Bloom, J. Hjorth, P. Mészáros, E. Pian, K. Stanek, P. Vreeswijk and an anonymous referee for remarks. This research was supported by a grant from the US-Israel Binational Science Foundation.
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , and , , in **, pp. .
, , , , , , , and , , ****, .
, and , , ****, .
, , , , , , , , and , , ****, .
, , ****, .
, and , , , .
, and , , ****, .
, and , , ****, .
, and , , .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , .
, , and , , ****, .
, , , and , , .
, , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , and , , ****, .
, , , and , , ****, .
, , , and , , , .
, , and , , ****, .
, and , , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , , .
, and , , .
, , and , , ****, .
, , , , and , , ****, .
, , and , , ****, .
, , ****, .
, , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, , [http://adsabs.harvard.edu/cgi-bin/nph-bib\_query?bibcode=1999A%
pJ...524..262M&db\_key=AST](http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1999A%
pJ...524..262M&db_key=AST).
, , and , , ****, .
, and , , ****, .
, , and , , ****, .
, , , , , , and , , ****, .
, , , , and , , in **, pp. .
, , and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , and , , ****, .
, , , , and , , ****, .
, , in **, pp. .
, , ****, .
, , in **, pp. .
, and , , ****, .
, , , , , , , and , , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , ****, .
, , and , , ****, .
, , , , , and , , ****, .
, , and , , .
, and , , .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, and , , ****, .
, , and , , ****, .
, , , , , , , and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , , , , , *et al.*, , .
, and , , ****, .
, , ****, .
, , in **, pp. .
, , in **, pp. .
, , in **, volume , pp. .
, , ****, .
, , ****, .
, , , and , , ****, .
, and , , .
, , and , , .
, and , , in **, pp. .
, and , , in **, pp. .
, , and , , ****, .
, , in **, pp. .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, and , , ****, .
, , , , , , and , , ****, .
, , , , , and , , ****, .
, , , , , and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, and , , ****, .
, , , , , and , , ****, .
, , , , , and , , ****, .
, and , , ****, .
, , and , , ****, .
, and , , ****, .
, , and , , ****, .
, , and , , , .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , , , , , and , , ****, .
, and , , , .
, , ****, .
, , ****, .
, , , , , and , , ****, .
, and , , ****, .
, , ****, .
, , ****, .
, , ****, .
, , , , , , , , , , , and , , ****, .
, , and , , ****, .
, , , and , , .
, and , , ****, .
, , , and , , ****, .
, , , , , and , , ****, .
, , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , , , and , , ****, .
, and , , .
, and , , ****, .
, and , , ** ().
, , and , , ****, .
, , ****, .
, , in **.
, , ****, .
, and , , ****, .
, , and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , and , , ****, .
, , ****, .
, , and , , ****, .
, , ****, .
, , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, , ****, .
, and , , ****, .
, , , , , and , , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , , , , , , and , , ****, .
, and , , in **, pp. .
, , , , , , , , and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , and , , ****, .
, , , , and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , , , , , , , , , , , *et al.*, , , .
, and , , ****, .
, , , and , , ****, .
, , ****, .
, , ****, .
, , ****, .
, , ****, .
, , ****, .
, , and , , ****, .
, and , , ****, .
, , ****, .
, and , , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , ****, .
, , ****, .
, and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , and , , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, , **, Ph.D. thesis, .
, , ****, .
, , ****, .
, , , .
, , and , , , .
, and , , ****, .
, and , , ****, .
, , , , , , , and , , ****, .
, , ****, .
, , ****, .
, , ****, .
, , ****, .
, and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
, , ** ().
, , , and , , ****, .
, and , , ****, .
, and , , ****, .
, , , , , , , , , , , , *et al.*, , ****, .
, and , , ****, .
, , ****, , [http://adsabs.harvard.edu/cgi-bin/nph-bib\_query?bibcode=1993A%
pJ...405..273W&db\_key=AST](http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1993A%
pJ...405..273W&db_key=AST).
, , , , , , , and , , ****, .
, and , , ****, .
, and , , ****, .
, , and , , ****, .
[^1]: BATSE is the Burst and Transient Source Experiment on the CGRO (Compton Gamma-Ray Observatory); see e.g. http://cossc.gsfc.nasa.gov/batse/. It operated for almost a decade, detecting several thousand bursts, more than any satellite before or after it. The BATSE data were published in several catalogues; see @Paciesas99 [@Paciesas00] for the most recent one.
[^2]: Low/high energy refers to the low vs. the high BATSE channels. The four BATSE channels are at 20–50 keV, 50–100 keV, 100–300 keV and $>300$ keV.
[^3]: See http://www.asdc.asi.it/bepposax/ for information on BeppoSAX and its different instruments.
[^4]: HETE II is a dedicated GRB satellite that aims at quickly locating bursts with high positional accuracy. See http://space.mit.edu/HETE/ for a description of HETE II and its instruments.
[^5]: see however @Granot03.
[^6]: The following notation appeared in the astro-ph version of [@SPN98]. Later, during the proofs, the author realized that $\alpha$ is often used in astrophysics to denote a spectral index, and in the Ap. J. version of [@SPN98] the notation was changed to $F_\nu \propto t^{-\beta} \nu^{-\alpha}$. However, in the meantime the astro-ph notation became generally accepted, and I use it here.
[^7]: See [@GPS99a] for an alternative method for integrating Eq. \[eq Fnu1\].
[^8]: The exact values of the uncertain constants $C_2$ and $C_1$ are extremely important as they determine the jet opening angle (and hence the total energy of the GRB) from the observed breaks, interpreted as $t_{\rm jet}$, in the afterglow light curves.
[^9]: Note that the exponential behavior is obtained after converting Eq. \[ad\] to a differential equation and integrating over it. Different approximations used in deriving the differential equation lead to slightly different exponential behavior, see [@P00].
[^10]: This may imply that the expected rate of orphan afterglows should be smaller than estimated assuming significant sideways expansion!
[^11]: Note that @Beloborodov02a uses the notation $\xi$ for this parameter.
[^12]: The ratio $\delta t/T \ll 1$ for short bursts as well [@NakarPiran02b].
---
abstract: 'Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (*e.g.,* LMAN) should match plasticity mechanisms in ‘student’ circuits (*e.g.,* RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.'
author:
- Tiberiu Teşileanu
- Bence Ölveczky
- Vijay Balasubramanian
bibliography:
- 'library.bib'
title: 'Rules and mechanisms for efficient two-stage learning in neural circuits'
---
Introduction
============
Two-stage learning has been described in a variety of different contexts and neural circuits. During hippocampal memory consolidation, recent memories, which initially depend on the hippocampus, are transferred to the neocortex for long-term storage [@Frankland2005]. Similarly, the rat motor cortex provides essential input to sub-cortical circuits during skill learning, but then becomes dispensable for executing certain skills [@Kawai2015]. A paradigmatic example of two-stage learning occurs in songbirds learning their courtship songs [@Andalman2009; @Turner2010; @Warren2011]. Zebra finches, commonly used in birdsong research, learn their song from their fathers as juveniles and keep the same song for life [@Immelmann1969].
The birdsong circuit has been extensively studied; see Figure \[fig:bird\_vs\_model\]A for an outline. Area HVC is a timebase circuit, with projection neurons that fire sparse spike bursts in precise synchrony with the song [@Hahnloser2002; @Lynch2016; @Picardo2016]. A population of neurons from HVC projects to the robust nucleus of the arcopallium (RA), a pre-motor area, which then projects to motor neurons controlling respiratory and syringeal muscles [@Simpson1990; @Yu1996; @Leonardo2005]. A second input to RA comes from the lateral magnocellular nucleus of the anterior nidopallium (LMAN). Unlike HVC and RA activity patterns, LMAN spiking is highly variable across different renditions of the song [@Olveczky2005; @Kao2008]. LMAN is the output of the anterior forebrain pathway, a circuit involving the song-specialized basal ganglia [@Perkel2004].
Because of the variability in its activity patterns, it was thought that LMAN’s role was simply to inject variability into the song [@Olveczky2005]. The resulting vocal experimentation would enable reinforcement-based learning. For this reason, prior models tended to treat LMAN as a pure Poisson noise generator, and assume that a reward signal is received directly in RA [@Fiete2007]. More recent evidence, however, suggests that the reward signal reaches Area X, the song-specialized basal ganglia, rather than RA [@Kubikova2010; @Hoffmann2016; @Gadagkar2016]. Taken together with the fact that LMAN firing patterns are not uniformly random, but rather contain a corrective bias guiding plasticity in RA [@Andalman2009; @Warren2011], this suggests that we should rethink our models of song acquisition.
Here we build a general model of two-stage learning where one neural circuit “tutors” another. We develop a formalism for determining how the teaching signal should be adapted to a specific plasticity rule, to best instruct a student circuit to improve its performance at each learning step. We develop analytical results in a rate-based model, and show through simulations that the general findings carry over to realistic spiking neurons. Applied to the vocal control circuit of songbirds, our model reproduces the observed changes in the spiking statistics of RA neurons as juvenile birds learn their song. Our framework also predicts how the LMAN signal should be adapted to properties of RA synapses. This prediction can be tested in future experiments.
Our approach separates the mechanistic question of *how* learning is implemented from what the resulting learning rules are. We nevertheless demonstrate that a simple reinforcement learning algorithm suffices to implement the learning rule we propose. Our framework makes general predictions for how instructive signals are matched to plasticity rules whenever information is transferred between different brain regions.
![Relation between the song system in zebra finches and our model. **A.** Diagram of the major brain regions involved in birdsong. **B.** Conceptual model inspired by the birdsong system. The line from output to tutor is dashed because the reinforcement signal can reach the tutor either directly or, as in songbirds, indirectly. **C.** Plasticity rule measured in bird RA (measurement done in slice). When an HVC burst leads an LMAN burst by about $100\,\mathrm{ms}$, the HVC–RA synapse is strengthened, while coincident firing leads to suppression. Figure adapted from [@Mehaffey2015]. **D.** Plasticity rule in our model that mimics the @Mehaffey2015 rule. \[fig:bird\_vs\_model\]](figures/bird_vs_model_colors){width="6in"}
Results
=======
Model
-----
We considered a model for information transfer that is composed of three sub-circuits: a conductor, a student, and a tutor (see Figure \[fig:bird\_vs\_model\]B). The conductor provides input to the student in the form of temporally precise patterns. The goal of learning is for the student to convert this input to a predefined output pattern. The tutor provides a signal that guides plasticity at the conductor–student synapses. For simplicity, we assumed that the conductor always presents the input patterns in the same order, and without repetitions. This allowed us to use the time $t$ to label input patterns, making it easier to analyze the on-line learning rules that we studied. This model of learning is based on the logic implemented by the vocal circuits of the songbird (Figure \[fig:bird\_vs\_model\]A). Relating this to the songbird, the conductor is HVC, the student is RA, and the tutor is LMAN. The song can be viewed as a mapping between clock-like HVC activity patterns and muscle-related RA outputs. The goal of learning is to find a mapping that reproduces the tutor song.
Birdsong provides interesting insights into the role of variability in tutor signals. If we focus solely on information transfer, the tutor output need not be variable; it can deterministically provide the best instructive signal to guide the student. This, however, would require the tutor to have a detailed model of the student. More realistically, the tutor might only have access to a scalar representation of how successful the student rendition of the desired output is, perhaps in the form of a reward signal. A tutor in this case has to solve the so-called ‘credit assignment problem’—it needs to identify which student neurons are responsible for the reward. A standard way to achieve this is to inject variability into the student output and reinforce the firing of neurons that precede reward (see for example [@Fiete2007] in the birdsong context). Thus, in our model, the tutor has a dual role of providing both an instructive signal and variability, as in birdsong.
![Schematic representation of our rate-based model. **A.** Conductor neurons fire precisely-timed bursts, similar to HVC neurons in songbirds. Conductor and tutor activities, $c(t)$ and $g(t)$, provide excitation to student neurons, which integrate these inputs and respond linearly, with activity $s(t)$. Student neurons also receive a constant inhibitory input, $x_\text{inh}$. The output neurons linearly combine the activities from groups of student neurons using weights $M_{aj}$. The linearity assumptions were made for mathematical convenience but are not essential for our qualitative results (see Appendix). **B.** The conductor–student synaptic weights $W_{ij}$ are updated based on a plasticity rule that depends on two parameters, $\alpha$ and $\beta$, and two timescales, $\tau_1$ and $\tau_2$ (see eq. and Methods). The tutor signal enters this rule as a deviation from a constant threshold $\theta$. The figure shows how synaptic weights change ($\Delta W$) for a student neuron that receives a tutor burst and a conductor burst separated by a short lag. Two different choices of plasticity parameters are illustrated in the case when the threshold $\theta = 0$. **C.** The amount of mismatch between the system’s output and the target output is quantified using a loss (error) function. The figure sketches the loss landscape obtained by varying the synaptic weights $W_{ij}$ and calculating the loss function in each case (only two of the weight axes are shown). The blue dot shows the lowest value of the loss function, corresponding to the best match between the motor output and the target, while the orange dot shows the starting point. The dashed line shows how learning would proceed in a gradient descent approach, where the weights change in the direction of steepest descent in the loss landscape. \[fig:linear\_model\]](figures/linear_model_in_figure){width="6in"}
We described the output of our model using a vector $y_a(t)$ where $a$ indexed the various output channels (Figure \[fig:linear\_model\]A). In the context of motor control $a$ might index the muscle to be controlled, or, more abstractly, different features of the motor output, such as pitch and amplitude in the case of birdsong. The output $y_a(t)$ was a function of the activity of the student neurons $s_j(t)$. The student neurons were in turn driven by the activity of the conductor neurons $c_i(t)$. The student also received tutor signals to guide plasticity; in the songbird, the guiding signals for each RA neuron come from several LMAN neurons [@Canady1988; @Herrmann1991; @Garst-Orozco]. In our model, we summarized the net input from the tutor to the $j$th student neuron as a single function $g_j(t)$.
We started with a rate-based implementation of the model (Figure \[fig:linear\_model\]A) that was analytically tractable but averaged over tutor variability. We further took the neurons to be in a linear operating regime (Figure \[fig:linear\_model\]A) away from the threshold and saturation present in real neurons. We then relaxed these conditions and tested our results in spiking networks with initial parameters selected to imitate measured firing patterns in juvenile birds prior to song learning. The student circuit in both the rate-based and spiking models included a global inhibitory signal that helped to suppress excess activity driven by ongoing conductor and tutor input. Such recurrent inhibition is present in area RA of the bird [@Spiro1999]. In the spiking model we implemented the suppression as an activity-dependent inhibition, while for the analytic calculations we used a constant negative bias for the student neurons.
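The linear rate-based model can be stated compactly in code. The sketch below is our own illustration (not the authors' simulation code); array names and sizes are invented, and the constant inhibitory bias enters as the scalar `x_inh`:

```python
import numpy as np

def student_activity(W, c, g, x_inh):
    """Linear student response: conductor drive through the weights W,
    plus tutor drive g, minus a constant inhibitory bias."""
    return W.T @ c + g - x_inh

def motor_output(M, s):
    """Output channels as fixed linear combinations of student activity."""
    return M @ s

# shapes: c is (n_conductor, T), g and s are (n_student, T), y is (n_output, T)
rng = np.random.default_rng(0)
n_cond, n_stud, n_out, T = 5, 4, 2, 100
W = rng.normal(size=(n_cond, n_stud))
M = rng.normal(size=(n_out, n_stud))
c = rng.random(size=(n_cond, T))
g = np.zeros((n_stud, T))          # tutor silent in this illustration
y = motor_output(M, student_activity(W, c, g, x_inh=0.1))
```

Here each column of `c` holds the conductor firing rates at one time bin, so the whole motor program is evaluated with two matrix products.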
Learning in a rate-based model {#sec:efficient_learning}
------------------------------
Learning in our model was enabled by plasticity at the conductor–student synapses that was modulated by signals from tutor neurons (Figure \[fig:linear\_model\]B). Many different forms of such hetero-synaptic plasticity have been observed. For example, in rate-based synaptic plasticity high tutor firing rates lead to synaptic potentiation and low tutor firing rates lead to depression [@Chistiakova2009; @Chistiakova2014]. In timing-dependent rules, such as the one recently measured by @Mehaffey2015 in slices of zebra finch RA (see Figure \[fig:bird\_vs\_model\]C), the relative arrival times of spike bursts from different input pathways set the sign of synaptic change. To model learning that lies between these rate and timing-based extremes, we introduced a class of plasticity rules governed by two parameters $\alpha$ and $\beta$ (see also Methods and Figure \[fig:linear\_model\]B): $$\label{eq:learningRuleIntro}
\begin{split}
\frac {dW_{ij}} {dt} &= \eta \tilde c_i(t) \bigl(g_j(t) - \theta\bigr) \,,\\
\tilde c_i(t) &= \int_0^t dt' c_i(t') \, \left[\frac {\alpha} {\tau_1} e^{-(t-t')/\tau_1} - \frac {\beta} {\tau_2} e^{-(t-t')/\tau_2}\right]\,,
\end{split}$$ where $W_{ij}$ is the weight of the synapse from the $i$th conductor to the $j$th student neuron, $\eta$ is a learning rate, $\theta$ is a threshold on the firing rate of tutor neurons, and $\tau_1$ and $\tau_2$ are timescales associated with the plasticity. This is similar to an STDP rule, except that the dependence on postsynaptic activity was replaced by dependence on the input from the tutor. Thus plasticity acts heterosynaptically, with activation of the tutor–student synapse controlling the change in the conductor–student synaptic weight. The timescales $\tau_1$ and $\tau_2$, as well as the coefficients $\alpha$ and $\beta$, can be thought of as effective parameters describing the plasticity observed in student neurons. As such, they do not necessarily have a simple correspondence in terms of the biochemistry of the plasticity mechanism, and the framework we describe here is not specifically tied to such an interpretation.
If we set $\alpha$ or $\beta$ to zero in our rule, eq. , the sign of the synaptic change is determined solely by the firing rate of the tutor $g_j(t)$ as compared to a threshold, reproducing the rate rules observed in experiments. When $\alpha/\beta \approx 1$, if the conductor leads the tutor, potentiation occurs, while coincident signals lead to depression (Figure \[fig:linear\_model\]B), which mimics the empirical findings from [@Mehaffey2015]. For general $\alpha$ and $\beta$, the sign of plasticity is controlled by both the firing rate of the tutor relative to the baseline, and by the relative timing of tutor and conductor. The overall scale of the parameters $\alpha$ and $\beta$ can be absorbed into the learning rate $\eta$ and so we set $\alpha - \beta = 1$ in all our simulations without loss of generality (see Methods). Note that if $\alpha$ and $\beta$ are both large, it can be that $\alpha - \beta = 1$ and $\alpha/\beta \approx 1$ also, as needed to realize the @Mehaffey2015 curve.
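A discrete-time version of this plasticity rule can be sketched as follows (our illustration, not the published code; each exponential in the kernel is realized as a leaky integrator with the corresponding timescale):

```python
import numpy as np

def filtered_trace(c, alpha, beta, tau1, tau2, dt):
    """Conductor activity convolved with the plasticity kernel
    (alpha/tau1) exp(-t/tau1) - (beta/tau2) exp(-t/tau2)."""
    e1 = np.zeros(c.shape[0])
    e2 = np.zeros(c.shape[0])
    out = np.zeros_like(c, dtype=float)
    for t in range(c.shape[1]):
        # each exponential is a leaky integrator of the conductor rates
        e1 += dt * (c[:, t] - e1) / tau1
        e2 += dt * (c[:, t] - e2) / tau2
        out[:, t] = alpha * e1 - beta * e2
    return out

def weight_change(c, g, theta, eta, dt, alpha, beta, tau1, tau2):
    """Net change of the conductor-student weights over one rendition:
    eta * sum_t c_tilde_i(t) * (g_j(t) - theta) * dt."""
    c_tilde = filtered_trace(c, alpha, beta, tau1, tau2, dt)
    return eta * dt * c_tilde @ (g - theta).T
```

With `c` of shape `(n_conductor, T)` and `g` of shape `(n_student, T)`, the returned matrix has one entry per conductor–student synapse; a tutor sitting exactly at the threshold produces no net change.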
We can ask how the conductor–student weights $W_{ij}$ (Figure \[fig:linear\_model\]A) should change in order to best improve the output $y_a(t)$. We first need a loss function $L$ that quantifies the distance between the current output $y_a(t)$ and the target $\bar y_a(t)$ (Figure \[fig:linear\_model\]C). We used a quadratic loss function, but other choices can also be incorporated into our framework (see Appendix). Learning should change the synaptic weights so that the loss function is minimized, leading to a good rendition of the targeted output. This can be achieved by changing the synaptic weights in the direction of steepest descent of the loss function (Figure \[fig:linear\_model\]C).
We used the synaptic plasticity rule from eq. to calculate the overall change of the weights, $\Delta W_{ij}$, over the course of the motor program. This is a function of the time course of the tutor signal, $g_j(t)$. Not every choice for the tutor signal leads to motor output changes that best improve the match to the target. Imposing the condition that these changes follow the gradient descent procedure described above, we derived the tutor signal that was best matched to the student plasticity rule (detailed derivation in Methods). The result is that the best tutor for driving gradient descent learning must keep track of the motor error $$\label{eq:motorError}
\epsilon_j(t) = \sum_a M_{aj} (y_a(t) - \bar y_a(t))$$ integrated over the recent past $$\label{eq:matchingTutor}
g_j(t) = \theta - \frac {\zeta} {\alpha - \beta} \frac 1 {\tau_\text{tutor}}\int_0^t \epsilon_j(t') e^{-(t-t')/\tau_\text{tutor}}\, dt'\,,$$ where $M_{aj}$ are the weights describing the linear relationship between student activities and motor outputs (Figure \[fig:linear\_model\]A) and $\zeta$ is a learning rate. Moreover, for effective learning, the timescale $\tau_\text{tutor}$ appearing in eq. , which quantifies the timescale on which error information is integrated into the tutor signal, should be related to the synaptic plasticity parameters according to $$\label{eq:tutorTimescale}
\begin{aligned}
\tau_\text{tutor} &= \tau_\text{tutor}^* \,, \quad \text{where}\\
\tau_\text{tutor}^* &\equiv \frac {\alpha \tau_1 - \beta \tau_2} {\alpha - \beta}
\end{aligned}$$ is the optimal timescale for the error integration.
In short, motor learning with a heterosynaptic plasticity rule requires convolving the motor error with a kernel whose timescale is related to the structure of the plasticity rule, but is otherwise independent of the motor program.[^1] As explained in more detail in Methods, this result is derived in an approximation that assumes that the tutor signal does not vary significantly over timescales of the order of the student timescales $\tau_1$ and $\tau_2$. Given eq. , this implies that we are assuming $\tau_\text{tutor}\gg \tau_{1,2}$. This is a reasonable approximation because variations in the tutor signal that are much faster than the student timescales $\tau_{1,2}$ have little effect on learning since the plasticity rule blurs conductor inputs over these timescales.
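A minimal implementation of this matched tutor (again our own sketch, with invented variable names) leaky-integrates the motor error $\epsilon_j(t)$ over the recent past and applies the threshold and gain from the expression for $g_j(t)$ above:

```python
import numpy as np

def motor_error(M, y, y_target):
    """Per-neuron motor error: epsilon_j(t) = sum_a M_aj (y_a - ybar_a)."""
    return M.T @ (y - y_target)

def matched_tutor(eps, theta, zeta, alpha, beta, tau_tutor, dt):
    """Tutor rate: threshold minus the motor error leaky-integrated
    over the recent past on timescale tau_tutor."""
    acc = np.zeros(eps.shape[0])
    g = np.zeros_like(eps, dtype=float)
    for t in range(eps.shape[1]):
        acc += dt * (eps[:, t] - acc) / tau_tutor
        g[:, t] = theta - zeta / (alpha - beta) * acc
    return g

def matched_timescale(alpha, beta, tau1, tau2):
    """Optimal error-integration timescale tau_tutor*."""
    return (alpha * tau1 - beta * tau2) / (alpha - beta)
```

For example, the rate-based rule ($\alpha = 0$, $\beta = -1$) gives $\tau_\text{tutor}^* = \tau_2$, while timing-based rules with $\alpha \approx \beta$ push $\tau_\text{tutor}^*$ toward long integration times.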
Matched *vs.* unmatched learning
--------------------------------
Our rate-based model predicts that when the timescale on which error information is integrated into the tutor signal ($\tau_\text{tutor}$) is matched to the student plasticity rule as described above, learning will proceed efficiently. A mismatched tutor should slow or disrupt convergence to the desired output. To test this, we numerically simulated the birdsong circuit using the linear model from Figure \[fig:linear\_model\]A with a motor output $y_a$ filtered to more realistically reflect muscle response times (see Methods). We selected plasticity rules as described in eq. and Figure \[fig:linear\_model\]B and picked a target output pattern to learn. The target was chosen to resemble recordings of air-sac pressure from singing zebra finches in terms of smoothness and characteristic timescales [@Veit2011], but was otherwise arbitrary. In our simulations, the output typically involved two different channels, each with its own target, but for brevity, in figures we typically showed the output from only one of these.
For our analytical calculations, we made a series of assumptions and approximations meant to enhance tractability, such as linearity of the model and a focus on the regime $\tau_\text{tutor} \gg \tau_{1,2}$. These constraints can be lifted in our simulations, and indeed below we test our numerical model in regimes that go beyond the approximations made in our derivation. In many cases, we found that the basic findings regarding tutor–student matching from our analytical model remain true even when some of the assumptions we used to derive it no longer hold.
We tested tutors that were matched or mismatched to the plasticity rule to see how effectively they instructed the student. Figure \[fig:ratebased\_convergence\_and\_mismatch\]A and online Video 1 show convergence with a matched tutor when the sign of plasticity is determined by the tutor’s firing rate. We see that the student output rapidly converged to the target. Figure \[fig:ratebased\_convergence\_and\_mismatch\]B and online Video 2 show convergence with a matched tutor when the sign of plasticity is largely determined by the relative timing of the tutor signal and the student output. We see again that the student converged steadily to the desired output, but at a somewhat slower rate than in Figure \[fig:ratebased\_convergence\_and\_mismatch\]A.
To test the effects of mismatch between tutor and student, we used tutors with timescales that did not match eq. . All student plasticity rules had the same effective time constants $\tau_1$ and $\tau_2$, but different parameters $\alpha$ and $\beta$ (see eq. ), subject to the constraint $\alpha - \beta=1$ described in section \[sec:efficient\_learning\]. Different tutors had different memory time scales $\tau_\text{tutor}$ (eq. ). Figures \[fig:ratebased\_convergence\_and\_mismatch\]C and \[fig:ratebased\_convergence\_and\_mismatch\]D demonstrate that learning was more rapid for well-matched tutor-student pairs (the diagonal neighborhood, where $\tau_\text{tutor} \approx \tau_\text{tutor}^*$). When the tutor error integration timescale was shorter than the matched value in eq. , $\tau_\text{tutor} < \tau_\text{tutor}^*$, learning was often completely disrupted (many pairs below the diagonal in Figures \[fig:ratebased\_convergence\_and\_mismatch\]C and \[fig:ratebased\_convergence\_and\_mismatch\]D). When the tutor error integration timescale was longer than the matched value in eq. , $\tau_\text{tutor} > \tau_\text{tutor}^*$, learning was slowed down. Figure \[fig:ratebased\_convergence\_and\_mismatch\]C also shows that a certain amount of mismatch between the tutor error integration timescale $\tau_\text{tutor}$ and the matched timescale $\tau_\text{tutor}^*$ implied by the student plasticity rule is tolerated by the system. Interestingly, the diagonal band over which learning is effective in Figure \[fig:ratebased\_convergence\_and\_mismatch\]C is roughly of constant width—note that the scale on both axes is logarithmic, which means that the tutor error integration timescale $\tau_\text{tutor}$ has to be within a constant factor of the optimal timescale $\tau_\text{tutor}^*$ for good learning. We also see that the breakdown in learning is more abrupt when $\tau_\text{tutor} < \tau_\text{tutor}^*$ than in the opposite regime.
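The matching condition can be exercised end-to-end in a toy rendition loop. The sketch below is our own construction with made-up network sizes, rates, and a made-up target, not the paper's simulation: it pairs a rate-based rule ($\alpha = 0$, $\beta = -1$, for which the plasticity kernel reduces to a single exponential of timescale $\tau_2$) with the matched tutor, $\tau_\text{tutor} = \tau_\text{tutor}^* = \tau_2$, and for simplicity omits the tutor's direct drive to the motor output:

```python
import numpy as np

def leaky(x, tau, dt):
    """Leaky integration of each row of x along the time axis."""
    acc = np.zeros(x.shape[0])
    out = np.zeros_like(x)
    for t in range(x.shape[1]):
        acc += dt * (x[:, t] - acc) / tau
        out[:, t] = acc
    return out

rng = np.random.default_rng(1)
dt, T = 1.0, 200                       # ms, time steps
n_cond, n_stud, n_out = 20, 10, 1
alpha, beta = 0.0, -1.0                # rate-based rule, alpha - beta = 1
tau1, tau2 = 40.0, 20.0
theta, eta, zeta = 0.0, 2e-4, 1.0
tau_tutor = (alpha * tau1 - beta * tau2) / (alpha - beta)   # matched: 20 ms

# conductor: precisely timed activity bumps tiling the motor program
t_ax = np.arange(T) * dt
centers = np.linspace(0.0, T * dt, n_cond)
c = np.exp(-0.5 * ((t_ax[None, :] - centers[:, None]) / 10.0) ** 2)

M = rng.normal(size=(n_out, n_stud)) / np.sqrt(n_stud)
W = np.zeros((n_cond, n_stud))
y_bar = np.sin(2 * np.pi * t_ax / (T * dt))[None, :]        # arbitrary smooth target

loss = []
for rendition in range(300):
    y = M @ (W.T @ c)                                       # motor output
    eps = M.T @ (y - y_bar)                                 # motor error
    loss.append(0.5 * np.sum((y - y_bar) ** 2))
    g = theta - zeta / (alpha - beta) * leaky(eps, tau_tutor, dt)   # matched tutor
    c_tilde = alpha * leaky(c, tau1, dt) - beta * leaky(c, tau2, dt)
    W += eta * dt * c_tilde @ (g - theta).T                 # plasticity rule
```

Because the matched tutor makes the weight update track a smoothed version of the loss gradient, the reproduction error is expected to shrink across renditions; shrinking `tau_tutor` well below the matched value should instead degrade learning, in the spirit of Figure \[fig:ratebased\_convergence\_and\_mismatch\]C.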
![\[fig:ratebased\_convergence\_and\_mismatch\]Learning with matched or mismatched tutors in rate-based simulations. **A.** Error trace showing how the average motor error evolved with the number of repetitions of the motor program for a rate-based ($\alpha = 0$) plasticity rule paired with a matching tutor. (See online Video 1.) **B.** The error trace and final motor output shown for a timing-based learning rule matched by a tutor with a long integration timescale. (See online Video 2.) In both **A** and **B** the inset shows the final motor output for one of the two output channels (thick orange line) compared to the target output for that channel (dotted black line). The output on the first rendition and at two other stages of learning indicated by orange arrows on the error trace are also shown as thin orange lines. **C.** Effects of mismatch between student and tutor on reproduction accuracy. The heatmap shows the final reproduction error of the motor output after 1000 learning cycles in a rate-based simulation where a student with parameters $\alpha$, $\beta$, $\tau_1$, and $\tau_2$ was paired with a tutor with memory timescale $\tau_\text{tutor}$. On the $y$ axis, $\tau_1$ and $\tau_2$ were kept fixed at $80\,\mathrm{ms}$ and $40 \, \mathrm{ms}$, respectively, while $\alpha$ and $\beta$ were varied (subject to the constraint $\alpha - \beta = 1$; see text). Different choices of $\alpha$ and $\beta$ lead to different optimal timescales $\tau_\text{tutor}^*$ according to eq. . The diagonal elements correspond to matched tutor and student, $\tau_\text{tutor} = \tau_\text{tutor}^*$. Note that the color scale is logarithmic. **D.** Error evolution curves as a function of the mismatch between student and tutor. Each plot shows how the error in the motor program changed during 1000 learning cycles for the same conditions as those shown in the heatmap. 
The region shaded in light pink shows simulations where the mismatch between student and tutor led to a deteriorating instead of improving performance during learning.](figures/ratebased_convergence_and_mismatch){width="5.85in"}
Video 1: Evolution of motor output during learning in a rate-based simulation using a rate-based ($\alpha = 0$) plasticity rule paired with a matching tutor. This video relates to Figure \[fig:ratebased\_convergence\_and\_mismatch\]A.
Video 2: Evolution of motor output during learning in a rate-based simulation using a timing-based ($\alpha \approx \beta$) plasticity rule paired with a matching tutor. This video relates to Figure \[fig:ratebased\_convergence\_and\_mismatch\]B.
An interesting feature of the results from Figures \[fig:ratebased\_convergence\_and\_mismatch\]C and \[fig:ratebased\_convergence\_and\_mismatch\]D is that the difference in performance between matched and mismatched pairs becomes less pronounced for timescales shorter than about $100\, \mathrm{ms}$. This is because the plasticity rule (eq. ) implicitly smooths over timescales of the order of $\tau_{1,2}$, which in our simulations were equal to $\tau_1 = 80\,\mathrm{ms}$, $\tau_2 = 40\,\mathrm{ms}$. Thus, variations of the tutor signal on shorter timescales have little effect on learning. Using different values for the effective timescales $\tau_{1,2}$ describing the plasticity rule can increase or decrease the range of parameters over which learning is robust against tutor–student mismatches (see Appendix).
Robust learning with nonlinearities
-----------------------------------
In the model above, firing rates for the tutor were allowed to grow as large as necessary to implement the most efficient learning. However, the firing rates of realistic neurons typically saturate at some fixed bound. To test the effects of this nonlinearity in the tutor, we passed the ideal tutor activity through a sigmoidal nonlinearity, $$\tilde g_j(t) = \theta - \rho \, \tanh \frac {\zeta} {\alpha - \beta} \frac 1 {\tau_\text{tutor}} \int_0^t \epsilon_j(t') e^{-(t-t')/\tau_\text{tutor}} \, dt'\,,
\label{eq:tutor_rates_constrained}$$ where $2 \rho$ is the range of firing rates. We typically chose $\theta = \rho = 80\,\mathrm{Hz}$ to constrain the rates to the range $0$–$160 \,\mathrm{Hz}$ [@Olveczky2005; @Garst-Orozco]. Learning slowed down with this change (Figure \[fig:ratebased\_firingrate\_constraint\]A and online Video 3) as a result of the tutor firing rates saturating when the mismatch between the motor output and the target output was large. However, the accuracy of the final rendition was not affected by saturation in the tutor (Figure \[fig:ratebased\_firingrate\_constraint\]A, inset). An interesting effect occurred when the firing rate constraint was imposed on a matched tutor with a long memory timescale. When this happened and the motor error was large, the tutor signal saturated and stopped growing in relation to the motor error before the end of the motor program. In the extreme case of very long integration timescales, learning became sequential: early features in the output were learned first, before later features were addressed, as in Figure \[fig:ratebased\_firingrate\_constraint\]B and online Video 4. This is reminiscent of the learning rule described in [@Memmesheimer2014].
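A discrete-time sketch of this saturating tutor signal (the forward-Euler discretization is ours, not the paper's code): the exponential integral is computed as a leaky integrator of the motor error and passed through the $\tanh$.

```python
import numpy as np

def constrained_tutor_rate(eps, dt, tau_tutor, alpha, beta,
                           theta=80.0, rho=80.0, zeta=1.0):
    """Saturating tutor firing rate, eq. (tutor_rates_constrained).

    eps: motor error trace for one tutor channel, shape (n_steps,);
    dt, tau_tutor in ms.  Rates are bounded to (theta - rho, theta + rho),
    i.e. 0-160 Hz for the default theta = rho = 80 Hz."""
    integ = 0.0
    rates = np.empty_like(eps)
    for k, e in enumerate(eps):
        # leaky integration: dI/dt = (eps - I)/tau reproduces the
        # normalized exponential integral (1/tau) * int eps * exp(...)
        integ += dt / tau_tutor * (e - integ)
        rates[k] = theta - rho * np.tanh(zeta / (alpha - beta) * integ)
    return rates
```

With zero error the rate sits at the midpoint $\theta$; a large sustained positive error drives it to the lower bound, reproducing the saturation described in the text.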
![\[fig:ratebased\_firingrate\_constraint\]Effects of adding a constraint on the tutor firing rate to the simulations. **A.** Learning was slowed down by the firing rate constraint, but the accuracy of the final rendition stayed the same (inset, shown here for one of two simulated output channels). Here $\alpha=0$, $\beta=-1$, and $\tau_\text{tutor} = \tau_\text{tutor}^* = 40\,\mathrm{ms}$. (See online Video 3.) **B.** Sequential learning occurred when the firing rate constraint was imposed on a matched tutor with a long memory scale. The plots show the evolution of the motor output for one of the two channels that were used in the simulation. Here $\alpha=24$, $\beta=23$, and $\tau_\text{tutor} = \tau_\text{tutor}^* = 1000\,\mathrm{ms}$. (See online Video 4.)](figures/ratebased_constraint_effects){width="6in"}
Video 3: Effects of adding a constraint on tutor firing rates on the evolution of motor output during learning in a rate-based simulation. The plasticity rule here was rate-based ($\alpha = 0$). This video relates to Figure \[fig:ratebased\_firingrate\_constraint\]A.
Video 4: Evolution of the motor output showing sequential learning in a rate-based simulation, which occurs when the firing rate constraint is imposed on a tutor with a long memory timescale. This video relates to Figure \[fig:ratebased\_firingrate\_constraint\]B.
Nonlinearities can similarly affect the activities of student neurons. Our model can be readily extended to describe efficient learning even in this case. The key result is that for efficient learning to occur, the synaptic plasticity rule should depend not just on the tutor and conductor, but also on the activity of the postsynaptic student neurons (details in Appendix). Such dependence on postsynaptic activity is commonly seen in experiments [@Chistiakova2009; @Chistiakova2014].
The relation between student neuron activations $s_j(t)$ and motor outputs $y_a(t)$ (Figure \[fig:linear\_model\]A) is in general also nonlinear. Compared to the linear assumption that we used, the effect of a monotonic nonlinearity, $y_a = N_a(\sum_j M_{aj} s_j)$, with $N_a$ an increasing function, is similar to modifying the loss function $L$, and does not significantly change our results (see Appendix). We also checked that imposing a rectification constraint that conductor–student weights $W_{ij}$ must be positive does not modify our results either (see Appendix). This shows that our model continues to work with biologically realistic synapses that cannot change sign from excitatory to inhibitory during learning.
Spiking neurons and birdsong
----------------------------
To apply our model to vocal learning in birds, we extended our analysis to networks of spiking neurons. Juvenile songbirds produce a “babble” that converges through learning to an adult song strongly resembling the tutor song. This is reflected in the song-aligned spiking patterns in pre-motor area RA, which become more stereotyped and cluster in shorter, better-defined bursts as the bird matures (Figure \[fig:spiking\_results\]A). We tested whether our model could reproduce key statistics of spiking in RA over the course of song learning. In this context, our theory of efficient learning, derived in a rate-based scenario, predicts a specific relation between the teaching signal embedded in LMAN firing patterns, and the plasticity rule implemented in RA. We tested whether these predictions continued to hold in the spiking context.
Following the experiments of @Hahnloser2002, we modeled each neuron in HVC (the conductor) as firing one short, precisely timed burst of 5-6 spikes at a single moment in the motor program. Thus the population of HVC neurons produced a precise timebase for the song. LMAN (tutor) neurons are known to have highly variable firing patterns that facilitate experimentation, but also contain a corrective bias [@Andalman2009]. Thus we modeled LMAN as producing inhomogeneous Poisson spike trains with a time-dependent firing rate given by eq. in our model. Although biologically there are several LMAN neurons projecting to each RA neuron, we again simplified by “summing” the LMAN inputs into a single, effective tutor neuron, similarly to the approach in [@Fiete2007]. The LMAN-RA synapses were modeled in a current-based approach as a mixture of AMPA and NMDA receptors, following the songbird data [@Stark1999; @Garst-Orozco]. The initial weights for all synapses were tuned to produce RA firing patterns resembling juvenile birds [@Olveczky2011a], subject to constraints from direct measurements in slice recordings [@Garst-Orozco] (see Methods for details, and Figure \[fig:spiking\_results\]B for a comparison between neural recordings and spiking in our model). In contrast to the constant inhibitory bias that we used in our rate-based simulations, for the spiking simulations we chose an activity-dependent global inhibition for RA neurons. We also tested that a constant bias produced similar results (see Appendix).
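The two spike-generation schemes just described can be sketched as follows (a minimal illustration; the burst parameters, such as the 2 ms intra-burst interval, are our assumptions rather than values taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def hvc_burst_times(n_neurons, T, n_spikes=6, isi=2.0):
    """Each conductor (HVC) neuron fires one short, precisely timed burst
    of n_spikes spikes; burst onsets tile the motor program of duration
    T (ms), providing a timebase for the song."""
    onsets = np.linspace(0.0, T - n_spikes * isi, n_neurons)
    return [onset + isi * np.arange(n_spikes) for onset in onsets]

def poisson_spikes(rate, dt):
    """Inhomogeneous Poisson spike train for a tutor (LMAN) neuron from a
    time-dependent rate trace (Hz), via per-bin Bernoulli draws (a valid
    approximation when rate * dt << 1; dt in ms)."""
    p = rate * dt / 1000.0
    return rng.random(len(rate)) < p
```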
![Results from simulations in spiking neural networks. **A.** Spike patterns recorded from zebra finch RA during song production, for a juvenile (top) and an adult (bottom). Each color corresponds to a single neuron, and the song-aligned spikes for six renditions of the song are shown. Adapted from [@Olveczky2011a]. **B.** Spike patterns from model student neurons in our simulations, for the untrained (top) and trained (bottom) models. The training used $\alpha=1$, $\beta=0$, and $\tau_\text{tutor} = 80\,\mathrm{ms}$, and ran for 600 iterations of the song. Each model neuron corresponds to a different output channel of the simulation. In this case, the targets for each channel were chosen to roughly approximate the time course observed in the neural recordings. **C.** Progression of reproduction error in the spiking simulation as a function of the number of repetitions for the same conditions as in panel B. The inset shows the accuracy of reproduction in the trained model for one of the output channels. (See online Video 5.) **D.** Effects of mismatch between student and tutor on reproduction accuracy in the spiking model. The heatmap shows the final reproduction error of the motor output after 1000 learning cycles in a spiking simulation where a student with parameters $\alpha$, $\beta$, $\tau_1$, and $\tau_2$ was paired with a tutor with memory timescale $\tau_\text{tutor}$. On the $y$ axis, $\tau_1$ and $\tau_2$ were kept fixed at $80\,\mathrm{ms}$ and $40 \, \mathrm{ms}$, respectively, while $\alpha$ and $\beta$ were varied (subject to the constraint $\alpha - \beta = 1$; see section \[sec:efficient\_learning\]). Different choices of $\alpha$ and $\beta$ lead to different optimal timescales $\tau_\text{tutor}^*$ according to eq. . The diagonal elements correspond to matched tutor and student, $\tau_\text{tutor} = \tau_\text{tutor}^*$. Note that the color scale is logarithmic. \[fig:spiking\_results\]](figures/spiking_results_v2){width="6in"}
Video 5: Evolution of motor output during learning in a spiking simulation. The plasticity rule parameters were $\alpha=1$, $\beta = 0$, and the tutor had a matching timescale $\tau_\text{tutor}=\tau_\text{tutor}^*=80\,\mathrm{ms}$. This video relates to Figure \[fig:spiking\_results\]C.
Synaptic strength updates followed the same two-timescale dynamics that was used in the rate-based models (Figure \[fig:linear\_model\]B). The firing rates $c_i(t)$ and $g_j(t)$ that appear in the plasticity equation were calculated in the spiking model by filtering the spike trains from conductor and tutor neurons with exponential kernels. The synaptic weights were constrained to be non-negative. (See Methods for details.)
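The conversion of spike trains into the rates $c_i(t)$ and $g_j(t)$ might look like the following (one possible discretization; the kernel is normalized to unit area so the output is in Hz):

```python
import numpy as np

def filter_spikes(spikes, dt, tau):
    """Estimate a firing rate (Hz) from a binary spike train by filtering
    with a causal exponential kernel K(t) = (1/tau) exp(-t/tau).

    spikes: 0/1 array; dt, tau in ms."""
    decay = np.exp(-dt / tau)
    rate = np.empty(len(spikes))
    r = 0.0
    for k, s in enumerate(spikes):
        # each spike contributes 1/tau per ms = 1000/tau Hz at its onset
        r = r * decay + s * 1000.0 / tau
        rate[k] = r
    return rate
```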
As long as the tutor error integration timescale was not too large, learning proceeded effectively when the tutor error integration timescale and the student plasticity rule were matched (see Figure \[fig:spiking\_results\]C and online Video 5), with mismatches slowing down or abolishing learning, just as in our rate-based study (compare Figure \[fig:spiking\_results\]D with Figure \[fig:ratebased\_convergence\_and\_mismatch\]C). The rate of learning and the accuracy of the trained state were lower in the spiking model compared to the rate-based model. The lower accuracy arises because the tutor neurons fire stochastically, unlike the deterministic neurons used in the rate-based simulations. The stochastic nature of the tutor firing also led to a decrease in learning accuracy as the tutor error integration timescale $\tau_\text{tutor}$ increased (Figure \[fig:spiking\_results\]D). This happens through two related effects: (1) the signal-to-noise ratio in the tutor guiding signal decreases as $\tau_\text{tutor}$ increases once the tutor error integration timescale is longer than the duration $T$ of the motor program (see Appendix); and (2) the fluctuations in the conductor–student weights lead to some weights getting clamped at 0 due to the positivity constraint, which leads to the motor program overshooting the target (see Appendix). The latter effect can be reduced by either allowing for negative weights, or changing the motor output to a push-pull architecture in which some student neurons enhance the output while others inhibit it. The signal-to-noise ratio effect can be attenuated by increasing the gain of the tutor signal, which inhibits early learning, but improves the quality of the guiding signal in the latter stages of the learning process. 
It is also worth emphasizing that these effects only become relevant once the tutor error integration timescale $\tau_\text{tutor}$ becomes significantly longer than the duration of the motor program, $T$, which for a birdsong motif would be around 1 second.
Spiking in our model tends to be a little more regular than that in the recordings (compare Figure \[fig:spiking\_results\]A and Figure \[fig:spiking\_results\]B). This could be due to sources of noise present in the brain that we did not model. One detail that our model does not capture is the fact that many LMAN spikes occur in bursts, while in our simulation LMAN firing is Poisson. Bursts are more likely to produce spikes in downstream RA neurons, particularly because of the NMDA dynamics, and thus a bursty LMAN will be more effective at injecting variability into RA [@Kojima2013]. Small inaccuracies in aligning the recorded spikes to the song are also likely to contribute to the apparent variability between renditions in the experiment. Indeed, some of the variability in Figure \[fig:spiking\_results\]A looks like it could be due to time warping and global time shifts that were not fully corrected.
Robust learning with credit assignment errors
---------------------------------------------
The calculation of the tutor output in our rule involved estimating the motor error $\epsilon_j$ from eq. . This required knowledge of the assignment between student activities and motor output, which in our model was represented by the matrix $M_{aj}$ (Figure \[fig:linear\_model\]A). In our simulations, we typically chose an assignment in which each student neuron contributed to a single output channel, mimicking the empirical findings for neurons in bird RA. Mathematically, this implies that each column of $M_{aj}$ contained a single non-zero element. In Figure \[fig:reinforcement\_results\]A, we show what happened in the rate-based model when the tutor incorrectly assigned a certain fraction of the neurons to the wrong output. Specifically, we considered two output channels, $y_1$ and $y_2$, with half of the student neurons contributing only to $y_1$ and the other half contributing only to $y_2$. We then scrambled a fraction $\rho$ of this assignment when calculating the motor error, so that the tutor effectively had an imperfect knowledge of the student–output relation. Figure \[fig:reinforcement\_results\]A shows that learning is robust to this kind of mis-assignment even for fairly large values of the error fraction $\rho$ up to about 40%, but quickly deteriorates as this fraction approaches 50%.
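The scrambling of the credit assignment can be sketched as follows (a toy version assuming two output channels and one non-zero entry per column of $M_{aj}$, as in the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def scramble_assignment(M, frac):
    """Return a copy of the student->output matrix M (2 channels x
    n_students) in which a fraction `frac` of the students is assigned
    to the wrong channel when the tutor estimates the motor error."""
    M_bad = M.copy()
    n_students = M.shape[1]
    flip = rng.random(n_students) < frac
    # with two channels, mis-assignment means swapping the channel rows
    M_bad[:, flip] = M_bad[::-1, flip]
    return M_bad
```

The tutor would then compute its error estimate using `scramble_assignment(M, rho)` while the true motor output still uses `M`, reproducing the imperfect knowledge studied in Figure \[fig:reinforcement\_results\]A.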
![Credit assignment and reinforcement learning. **A.** Effects of credit mis-assignment on learning in a rate-based simulation. Here, the system learned output sequences for two independent channels. The student–output weights $M_{aj}$ were chosen so that the tutor wrongly assigned a fraction of student neurons to an output channel different from the one it actually mapped to. The graph shows how the accuracy of the motor output after 1000 learning steps depended on the fraction of mis-assigned credit. **B.** Learning curve and trained motor output (inset) for one of the channels showing two-stage reinforcement-based learning for the memory-less tutor ($\tau_\text{tutor} = 0$). The accuracy of the trained model is as good as in the case where the tutor was assumed to have a perfect model of the student–output relation. However, the speed of learning is reduced. (See online Video 6.) **C.** Learning curve and trained motor output (inset) for one of the output channels showing two-stage reinforcement-based learning when the tutor circuit needs to integrate information about the motor error on a certain timescale. Again, learning was slow, but the accuracy of the trained state was unchanged. (See online Video 7.) **D.** Evolution of the average number of HVC inputs per RA neuron with learning in a reinforcement example. Synapses were considered pruned if they admitted a current smaller than 1 nA after a pre-synaptic spike in our simulations.\[fig:reinforcement\_results\]](figures/reinforcement_results){width="6in"}
Video 6: Evolution of motor output during learning in a spiking simulation with a reinforcement-based tutor. Here the tutor was memory-less ($\tau_\text{tutor} = 0$). This video relates to Figure \[fig:reinforcement\_results\]B.
Video 7: Evolution of motor output during learning in a spiking simulation with a reinforcement-based tutor. Here the tutor needed to integrate information about the motor error on a timescale $\tau_\text{tutor} = 440\,\mathrm{ms}$. This video relates to Figure \[fig:reinforcement\_results\]C.
Due to environmental factors that affect development of different individuals in different ways, it is unlikely that the student–output mapping can be innate. As such, the tutor circuit must learn the mapping. Indeed, it is known that LMAN in the bird receives an indirect evaluation signal *via* Area X, which might be used to effect this learning [@Andalman2009; @Kubikova2010; @Hoffmann2016; @Gadagkar2016]. One way in which this can be achieved is through a reinforcement paradigm. We thus considered a learning rule where the tutor circuit receives a reward signal that enables it to infer the student–output mapping. In general the output of the tutor circuit should depend on an integral of the motor error, as in eq. , to best instruct the student. For simplicity, we start with the memory-less case, $\tau_\text{tutor} = 0$, in which only the instantaneous value of the motor error is reflected in the tutor signal; we then show how to generalize this for $\tau_\text{tutor} > 0$.
As before, we took the tutor neurons to fire Poisson spikes with time-dependent rates $f_j(t)$, which were initialized arbitrarily. Because of stochastic fluctuations, the actual tutor activity on any given trial, $g_j(t)$, differs somewhat from the average, $\bar g_j(t)$. Denoting the difference by $\xi_j(t) = g_j(t) - \bar g_j(t)$, the update rule for the tutor firing rates was given by $$\label{eq:tutorUpdateInst}
\Delta f_j(t) = \eta_\text{tutor} (R(t) - \bar R) \xi_j(t) \,,$$ where $\eta_\text{tutor}$ is a learning rate, $R(t)$ is the instantaneous reward signal, and $\bar R$ is its average over recent renditions of the motor program. In our implementation, $\bar R$ is obtained by convolving $R(t)$ with an exponential kernel (timescale $= 1$ second). The reward $R(t_\text{max})$ at the end of one rendition becomes the baseline at the start of the next rendition $R(0)$. The baseline $\bar g_j(t)$ of the tutor activity is calculated by averaging over recent renditions of the song with exponentially decaying weights (one $e$-fold of decay for every 5 renditions). Further implementation details are available in our code at <https://github.com/ttesileanu/twostagelearning>.
The intuition behind this rule is that, whenever a fluctuation in the tutor activity leads to better-than-average reward ($R(t) > \bar R$), the tutor firing rate changes in the direction of the fluctuation for subsequent trials, “freezing in” the improvement. Conversely, the firing rate moves away from the directions in which fluctuations tend to reduce the reward.
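A sketch of one rendition's worth of this update (the array shapes and the treatment of $\bar R$ as a time-dependent baseline are our conventions, not the full simulation code):

```python
import numpy as np

def reinforcement_update(f, g, g_bar, R, R_bar, eta_tutor):
    """Instantaneous reinforcement rule, eq. (tutorUpdateInst):
    Delta f_j(t) = eta_tutor * (R(t) - Rbar) * xi_j(t).

    f, g, g_bar: shape (n_steps, n_tutor); R, R_bar: shape (n_steps,)."""
    xi = g - g_bar                          # fluctuation around baseline
    return f + eta_tutor * (R - R_bar)[:, None] * xi
```

When a fluctuation `xi > 0` coincides with better-than-average reward (`R > R_bar`), the rate at that time moves up, "freezing in" the improvement, as described above.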
To test our learning rule, we ran simulations using this reinforcement strategy and found that learning again converges to an accurate rendition of the target output (Figure \[fig:reinforcement\_results\]B, inset and online Video 6). The number of repetitions needed for training is greatly increased compared to the case in which the credit assignment is assumed known by the tutor circuit (compare Figure \[fig:reinforcement\_results\]B to Figure \[fig:spiking\_results\]C). This is because the tutor needs to use many training rounds for experimentation before it can guide the conductor–student plasticity. The rate of learning in our model is similar to the songbird (*i.e.,* order $10\, 000$ repetitions for learning, given that a zebra finch typically sings about 1000 repetitions of its song each day, and takes about one month to fully develop adult song).
Because of the extra training time needed for the tutor to adapt its signal, the motor output in our reward-based simulations tends to initially overshoot the target (leading to the kink in the error at around 2000 repetitions in Figure \[fig:reinforcement\_results\]B). Interestingly, the subsequent reduction in output that leads to convergence of the motor program, combined with the positivity constraint on the synaptic strengths, leads to many conductor–student connections being pruned (Figure \[fig:reinforcement\_results\]D). This mirrors experiments on songbirds, where the number of connections between HVC and RA first increases with learning and then decreases [@Garst-Orozco].
The reinforcement rule described above responds only to instantaneous values of the reward signal and tutor firing rate fluctuations. In general, effective learning requires that the tutor keep a memory trace of its activity over a timescale $\tau_\text{tutor}>0$, as in eq. . To achieve this in the reinforcement paradigm, we can use a simple generalization of eq. where the update rule is filtered over the tutor memory timescale: $$\label{eq:tutorUpdateWithMemory}
\Delta f_j(t) = \eta_\text{tutor}\, \frac 1 {\tau_\text{tutor}} \int^t dt' \, (R(t') - \bar R) \xi_j(t') e^{-(t-t')/\tau_\text{tutor}}\,.$$ We tested that this rule leads to effective learning when paired with the corresponding student, *i.e.,* one for which eq. is obeyed (Figure \[fig:reinforcement\_results\]C and online Video 7).
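The filtered rule can be sketched by leak-integrating the instantaneous eligibility term $(R - \bar R)\,\xi_j$ over the tutor memory timescale (the forward-Euler discretization is ours):

```python
import numpy as np

def reinforcement_update_memory(f, g, g_bar, R, R_bar, eta_tutor,
                                tau_tutor, dt):
    """Reinforcement rule with memory, eq. (tutorUpdateWithMemory): the
    instantaneous term (R - Rbar) * xi is filtered through a normalized
    exponential of timescale tau_tutor before updating the rates."""
    xi = g - g_bar
    drive = (R - R_bar)[:, None] * xi
    filt = np.zeros_like(drive)
    acc = np.zeros(drive.shape[1])
    for k in range(len(drive)):
        # leaky integrator equivalent to (1/tau) * int drive * exp(...)
        acc += dt / tau_tutor * (drive[k] - acc)
        filt[k] = acc
    return f + eta_tutor * filt
```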
The reinforcement rules proposed here are related to the learning rules from [@Fiete2006; @Fiete2007] and [@Farries2007]. However, those models focused on learning in a single pass, instead of the two-stage architecture that we studied. In particular, in [@Fiete2007], area LMAN was assumed to generate pure Poisson noise and reinforcement learning took place at the HVC–RA synapses. In our model, which is in better agreement with recent evidence regarding the roles of RA and LMAN in birdsong [@Andalman2009], reinforcement learning first takes place in the anterior forebrain pathway (AFP), for which LMAN is the output. A reward-independent heterosynaptic plasticity rule then solidifies the information in RA.
In our simulations, tutor neurons fire Poisson spikes with specific time-dependent rates which change during learning. The timecourse of the firing rates in each repetition must then be stored somewhere in the brain. In fact, in the songbird, there are indirect projections from HVC to LMAN, going through the basal ganglia (Area X) and the dorso-lateral division of the medial thalamus (DLM) in the anterior forebrain pathway (Figure \[fig:bird\_vs\_model\]A) [@Perkel2004]. These synapses could store the required time-dependence of the tutor firing rates. In addition, the same synapses can provide the timebase input that would ensure synchrony between LMAN firing and RA output, as necessary for learning. Our reinforcement learning rule for the tutor area, eq. , can be viewed as an effective model for plasticity in the projections between HVC, Area X, DLM, and LMAN, as in [@Fee2012]. In this picture, the indirect HVC–LMAN connections behave somewhat like the “hedonistic synapses” from [@Seung2003], though we use a simpler synaptic model here. Implementing the integral from eq. would require further recurrent circuitry in LMAN which is beyond the scope of this paper, but would be interesting to investigate in future work.
Discussion
==========
We built a two-stage model of learning in which one area (the student) learns to perform a sequence of actions under guidance from a tutor area. This architecture is inspired by the song system of zebra finches, where area LMAN provides a corrective bias to the song that is then consolidated in the HVC–RA synapses. Using an approach rooted in the efficient coding literature, we showed analytically that, in a simple model, the tutor output that is most likely to lead to effective learning by the student involves an integral over the recent magnitude of the motor error. We found that efficiency requires that the timescale for this integral should be related to the synaptic plasticity rule used by the student. Using simulations, we tested our findings in more general settings. In particular, we demonstrated that tutor-student matching is important for learning in a spiking-neuron model constructed to reproduce spiking patterns similar to those measured in zebra finches. Learning in this model changes the spiking statistics of student neurons in realistic ways, for example, by producing more bursty, stereotyped firing events as learning progresses. Finally, we showed how the tutor can build its error-correcting signal by means of reinforcement learning.
If the birdsong system supports efficient learning, our model can predict the temporal structure of the firing patterns of RA-projecting LMAN neurons, given the plasticity rule implemented at the HVC–RA synapses. These predictions can be directly tested by recordings from LMAN neurons in singing birds, assuming that a good measure of motor error is available, and that we can estimate how the neurons contribute to this error. Moreover, recordings from a tutor circuit, such as LMAN, could be combined with a measure of motor error to infer the plasticity rule in a downstream student circuit, such as RA. This could be compared with direct measurements of the plasticity rule obtained in slice. Conversely, knowledge of the student plasticity rule could be used to predict the time-dependence of tutor firing rates. According to our model, the firing rate should reflect the integral of the motor error with the timescale predicted by the model. A different approach would be to artificially tutor RA by stimulating LMAN neurons electrically or optogenetically. We would predict that if the tutor signal is delivered appropriately (*e.g.,* in conjunction with a particular syllable [@Tumer2007]), then the premotor bias produced by the stimulation should become incorporated into the motor pathway faster when the timescale of the artificial LMAN signal is properly matched to the RA synaptic plasticity rule.
Our model can be applied more generally to other systems in the brain exhibiting two-stage learning, such as motor learning in mammals. If the plasticity mechanisms in these systems are different from those in songbirds, our predictions for the structure of the guiding signal will vary correspondingly. This would allow a further test of our model of “efficient learning” in the brain. It is worth pointing out that our model was derived assuming a certain hierarchy among the timescales that model the student plasticity and the tutor signal. A mismatch between the model predictions and observations could also imply a breakdown of these approximations, rather than failure of the hypothesis that the particular system under study evolved to support efficient learning. Of course our analysis could be extended by relaxing these assumptions, for example by keeping more terms in the Taylor expansion that we used in our derivation of the matched tutor signal.
Applied to birdsong, our model is best seen as a mechanism for learning song syllables. The ordering of syllables in song motifs seems to have a second level of control within HVC and perhaps beyond [@Basista2014; @Hamaguchi2016]. Songs can also be distorted by warping their timebase through changes in HVC firing without alterations of the HVC–RA connectivity [@Ali2013]. In view of these phenomena, it would be interesting to incorporate our model into a larger hierarchical framework in which the sequencing and temporal structure of the syllables are also learned. A model of transitions between syllables can be found in [@Gazzaniga2000], where the authors use a “weight perturbation” optimization scheme in which each HVC–RA synaptic weight is perturbed individually. We did not follow this approach because there is no plausible mechanism for LMAN to provide separate guidance to each HVC–RA synapse; in particular, there are not enough LMAN neurons [@Fiete2007].
In this paper we assumed a two-stage architecture for learning, inspired by birdsong. An interesting question is whether and under what conditions such an architecture is more effective than a single-step model. Possibly, having two stages is better when a single tutor area is responsible for training several different dedicated controllers, as is likely the case in motor learning. It would then be beneficial to have an area that can learn arbitrary behaviors, perhaps at the cost of using more resources and having slower reaction times, along with the ability to transfer these behaviors into low-level circuitry that is only capable of producing stereotyped motor programs. The question then arises whether having more than two levels in this hierarchy could be useful, what the other levels might do, and whether such hierarchical learning systems are implemented in the brain.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Serena Bradde for fruitful discussions during the early stages of this work. We also thank Xuexin Wei and Christopher Glaze for useful discussions. We are grateful to Timothy Otchy for providing us with some of the data we used in this paper. During this work VB was supported by NSF grant PHY-1066293 at the Aspen Center for Physics and by NSF Physics of Living Systems grant PHY-1058202. TT was supported by the Swartz Foundation.
Methods
=======
Equations for rate-based model
------------------------------
The basic equations we used for describing our rate-based model (Figure \[fig:linear\_model\]A) are the following: $$\label{eq:muscleLinear}
\begin{split}
y_a(t) &= \sum_j M_{aj} s_j(t)\,,\\
s_j(t) &= \sum_i W_{ij} c_i(t) + w g_j(t) - x_\text{inh}\,.
\end{split}$$ In simulations, we further filtered the output using an exponential kernel, $$\label{eq:si:muscleSmooth}
\tilde y_a(t) = \sum_j M_{aj} \int_0^t s_j(t') \, e^{-(t - t')/\tau_\text{out}} \, dt'\,,$$ with a timescale $\tau_\text{out}$ that we typically set to $25\,\textrm{ms}$. The smoothing produces more realistic outputs by mimicking the relatively slow reaction time of real muscles, and stabilizes learning by filtering out high-frequency components of the motor output. The latter interfere with learning because of the delay between the effect of conductor activity on synaptic strengths *vs.* motor output. This delay is of the order $\tau_{1,2} - \tau_\text{out}$ (see the plasticity rule below).
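The two readout equations, together with the output smoothing, can be sketched as follows (a minimal implementation; the array layout is our convention):

```python
import numpy as np

def motor_output(W, c, M, g, w, x_inh, dt, tau_out=25.0):
    """Rate-based readout of eqs. (muscleLinear) and (si:muscleSmooth).

    c: conductor rates (n_steps, n_cond); g: tutor rates (n_steps, n_stud);
    W: (n_cond, n_stud); M: (n_out, n_stud).  Returns the exponentially
    smoothed outputs, shape (n_steps, n_out)."""
    s = c @ W + w * g - x_inh            # student activations s_j(t)
    y = s @ M.T                          # instantaneous linear readout
    y_smooth = np.empty_like(y)
    acc = np.zeros(y.shape[1])
    decay = np.exp(-dt / tau_out)
    for k in range(len(y)):
        # causal exponential kernel, unnormalized as in the equation
        acc = acc * decay + y[k] * dt
        y_smooth[k] = acc
    return y_smooth
```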
The conductor activity in the rate-based model is modeled after songbird HVC [@Hahnloser2002]: each neuron fires a single burst during the motor program. Each burst corresponds to a sharp increase of the firing rate $c_i(t)$ from 0 to a constant value, and then a decrease $10\,\mathrm{ms}$ later. The activities of the different neurons are spread out to tile the whole duration of the output program. Other choices for the conductor activity also work, provided no patterns are repeated (see Appendix).
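The conductor activity just described might be generated as follows (a sketch; the burst amplitude is arbitrary here):

```python
import numpy as np

def conductor_rates(n_neurons, T, dt, burst_len=10.0, rate=1.0):
    """Rate-based conductor activity modeled after HVC: each neuron's rate
    steps from 0 to a constant for burst_len ms, with burst onsets tiled
    over the whole program duration T (all times in ms)."""
    n_steps = int(T / dt)
    c = np.zeros((n_steps, n_neurons))
    onsets = np.linspace(0.0, T - burst_len, n_neurons)
    for i, t0 in enumerate(onsets):
        k0, k1 = int(t0 / dt), int((t0 + burst_len) / dt)
        c[k0:k1, i] = rate
    return c
```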
Mathematical description of plasticity rule
-------------------------------------------
In our model the rate of change of the synaptic weights obeys a rule that depends on a filtered version of the conductor signal (see Figure \[fig:linear\_model\]B). This is expressed mathematically as $$\label{eq:plasticity1}
\frac {dW_{ij}} {dt} = \eta \, \tilde c_i(t) \, (g_j(t) - \theta)\,,$$ where $\eta$ is a learning rate and $\tilde c_i = K*c_i$, with the star representing convolution and $K$ being a filtering kernel. We considered a linear combination of two exponential kernels with timescales $\tau_1$ and $\tau_2$, $$\label{eq:kernel1}
K(t) = \alpha K_1(t) - \beta K_2(t)\,,$$ with $K_i(t)$ given by $$\label{eq:expKernels1}
K_i(t) = \begin{cases}
\tau_i^{-1} e^{-t/\tau_i} & \text{for $t \ge 0$,}\smallskip\\
0 & \text{else.}
\end{cases}$$ Different choices for the kernels give similar results (see Appendix). The overall scale of $\alpha$ and $\beta$ can be absorbed into the learning rate $\eta$ in eq. . In our simulations, we fix $\alpha - \beta = 1$ and keep the learning rate constant as we change the plasticity rule (see eq. ).
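The rule and the double-exponential kernel can be combined in a short sketch (ours; the discrete convolution approximates the continuous one, and for a single unit-area conductor burst with a slowly varying tutor signal it reduces to $\eta(\alpha-\beta)(g_j-\theta)$, as derived below):

```python
import numpy as np

def double_exp_kernel(t, alpha, beta, tau1, tau2):
    """K(t) = alpha*K1(t) - beta*K2(t), with K_i(t) = exp(-t/tau_i)/tau_i
    for t >= 0 and zero otherwise."""
    tp = np.clip(t, 0.0, None)
    k1 = np.exp(-tp / tau1) / tau1
    k2 = np.exp(-tp / tau2) / tau2
    return np.where(t >= 0, alpha * k1 - beta * k2, 0.0)

def weight_change(c, g, dt, eta, theta, alpha, beta, tau1, tau2):
    """Accumulated Delta W = eta * int (K*c)_i(t) * (g_j(t) - theta) dt.
    Shapes: c is (n_steps, n_cond), g is (n_steps, n_stud)."""
    t = np.arange(c.shape[0]) * dt
    K = double_exp_kernel(t, alpha, beta, tau1, tau2)
    # causal convolution of each conductor trace with the kernel
    ctil = np.array([np.convolve(c[:, i], K)[: c.shape[0]] * dt
                     for i in range(c.shape[1])]).T
    return eta * dt * ctil.T @ (g - theta)
```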
In the spiking simulations with and without reinforcement learning in the tutor circuit, the firing rates $c_i(t)$ and $g_j(t)$ were estimated by filtering spike trains with exponential kernels whose timescales were in the range $5\,\mathrm{ms}$–$40\,\mathrm{ms}$. The reinforcement studies typically required longer timescales for stability, possibly because of delays between conductor activity and reward signals.
Derivation of the matching tutor signal {#sec:si:derivation}
---------------------------------------
To find the tutor signal that provides the most effective teaching for the student, we first calculate how much synaptic weights change according to our plasticity rule, eq. . Then we require that this change matches the gradient descent direction. We have $$\label{eq:changeAsIntegral}
\Delta W_{ij} = \int_0^T \frac {dW_{ij}} {dt}\, dt = \eta \int_0^T \tilde c_i(t) (g_j(t) - \theta)\, dt\,.$$ Because of the linearity assumptions in our model, it is sufficient to focus on a case in which each conductor neuron, $i$, fires a single short burst, at a time $t_i$. We write this as $c_i(t) = \delta(t-t_i)$, and so $$\label{eq:totalChangeGeneralRule0}
\Delta W_{ij} = \int_0^T \frac {dW_{ij}} {dt} \,dt = \eta \int_0^T K(t - t_i) (g_j(t) - \theta) \, dt\,,$$ where we used the definition of $\tilde c_i(t)$. If the time constants $\tau_1$, $\tau_2$ are short compared to the timescale on which the tutor input $g_j(t)$ varies, only the values of $g_j(t)$ around time $t_i$ will contribute to the integral. If we further assume that $T \gg t_i$, we can use a Taylor expansion of $g_j(t)$ around $t = t_i$ to perform the calculation: $$\label{eq:totalChangeGeneralRuleTaylor}
\begin{split}
\Delta W_{ij} &\approx \eta \int_{t_i}^{\infty} K(t - t_i) \bigl(g_j(t_i) - \theta + (t - t_i) g'_j(t_i)\bigr)\, dt\\
&= \eta (g_j(t_i) - \theta) \int_0^\infty K(t) \, dt + \eta g'_j(t_i) \int_0^\infty t K(t) \, dt\\
&= \eta (g_j(t_i) - \theta) \int_0^\infty \bigl(\alpha K_1(t)-\beta K_2(t)\bigr) \, dt + \eta g'_j(t_i) \int_0^\infty t\, \bigl(\alpha K_1(t)-\beta K_2(t)\bigr) \, dt\,.
\end{split}$$ Doing the integrals involving the exponential kernels $K_1$ and $K_2$, we get $$\label{eq:totalChangeGeneralRule}
\Delta W_{ij} = \eta \bigl[(\alpha - \beta)\, (g_j(t_i) -\theta) + (\alpha \tau_1 - \beta \tau_2) g'_j(t_i)\bigr]\,.$$
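As a numerical sanity check (ours, not part of the paper's code), the two kernel moments used in this step can be verified directly:

```python
import numpy as np

# Check that int K(t) dt = alpha - beta and int t K(t) dt = alpha*tau1 - beta*tau2
# for the double-exponential kernel (example parameter values).
alpha, beta, tau1, tau2 = 3.0, 2.0, 0.08, 0.04
t = np.linspace(0.0, 1.0, 200_001)   # ~12 tau1; truncation error is negligible
dt = t[1] - t[0]
K = alpha * np.exp(-t / tau1) / tau1 - beta * np.exp(-t / tau2) / tau2
m0 = np.sum(K) * dt                  # Riemann approximation of int K dt
m1 = np.sum(t * K) * dt              # Riemann approximation of int t K dt
```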
We would like this synaptic change to optimally reduce a measure of mismatch between the output and the desired target as measured by a loss function. A generic smooth loss function $L(y_a(t), {\bar y_a(t)})$ can be quadratically approximated when $y_a$ is sufficiently close to the target ${\bar y_a(t)}$. With this in mind, we consider a quadratic loss $$\label{eq:lossFunction}
L = \frac 12\sum_a \int_0^T \bigl[y_a(t) - \bar y_a(t)\bigr]^2\, dt \,.$$ The loss function would decrease monotonically during learning if synaptic weights changed in proportion to the negative gradient of $L$: $$\label{eq:imposeGD}
\Delta W_{ij} = -\gamma \frac {\partial L} {\partial W_{ij}} \,,$$ where $\gamma$ is a learning rate. This implies $$\label{eq:gradientDirection}
\Delta W_{ij} = -\gamma \sum_a \int_0^T M_{aj} \bigl[y_a(t) - \bar y_a(t)\bigr]\, c_i(t) \, dt\,.$$ Using again $c_i(t) = \delta(t- t_i)$, we obtain $$\label{eq:totalChangeRequired}
\Delta W_{ij} = -\gamma \epsilon_j(t_i)\,,$$ where we used the notation from eq. for the motor error at student neuron $j$.
We now set these two expressions for $\Delta W_{ij}$, eqs. and , equal to each other. If the conductor fires densely in time, we need the equality to hold for all times, and we thus get a differential equation for the tutor signal $g_j(t)$. This identifies the tutor signal that leads to gradient descent learning as a function of the motor error $\epsilon_j(t)$, eq. (with the notation $\zeta = \gamma / \eta$).
Spiking simulations
-------------------
We used spiking models that were based on leaky integrate-and-fire neurons with current-based dynamics for the synaptic inputs. The magnitude of synaptic potentials generated by the conductor–student synapses was independent of the membrane potential, approximating AMPA receptor dynamics, while the synaptic inputs from the tutor to the student were based on a mixture of AMPA and NMDA dynamics. Specifically, the equations describing the dynamics of the spiking model were: $$\label{eq:spikingDynamics}
\begin{split}
\tau_m \frac {dV_j} {dt} &= (V_R - V_j) + R\, \bigl(I_j^\text{AMPA} + I_j^\text{NMDA}\bigr) - V_\text{inh}\,, \qquad \text{(except during refractory period)}\\
\frac {dI_j^\text{AMPA}} {dt} &= -\frac {I_j^\text{AMPA}} {\tau_\text{AMPA}} + \sum_i W_{ij} \sum_k \delta(t - t_k^{\text{conductor \#}i}) + (1-r) w \sum_k \delta(t - t_k^\text{tutor})\,,\\
\frac {dI_j^\text{NMDA}} {dt} &= -\frac {I_j^\text{NMDA}} {\tau_\text{NMDA}} + r w G(V_j) \sum_k \delta(t - t_k^\text{tutor})\,,\\
V_\text{inh} &= \frac {g_\text{inh}} {N_\text{student}} \sum_j S_j(t)\,,\\
\frac {dS_j} {dt} &= -\frac {S_j} {\tau_\text{inh}} + \sum_k \delta(t - t_k^{\text{student}})\,,\\
G(V) &= \left[1 + \frac {\text{[Mg]}} {3.57 \, \mathrm{mM}} \exp (-V/16.13 \, \mathrm{mV})\right]^{-1} \,.
\end{split}$$ Here $V_j$ is the membrane potential of the $j^\text{th}$ student neuron and $V_R$ is the resting potential, as well as the potential to which the membrane was reset after a spike. Spikes were registered whenever the membrane potential went above a threshold $V_\text{th}$, after which a refractory period $\tau_\text{ref}$ ensued. Apart from excitatory AMPA and NMDA inputs modeled by the $I_j^\text{AMPA}$ and $I_j^\text{NMDA}$ variables in our model, we also included a global inhibitory signal $V_\text{inh}$ which is proportional to the overall activity of student neurons averaged over a timescale $\tau_\text{inh}$. The averaging is performed using the auxiliary variables $S_j$ which are convolutions of student spike trains with an exponential kernel. These can be thought of as a simple model for the activities of inhibitory interneurons in the student.
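A stripped-down sketch of the membrane equation may clarify the voltage dynamics (Euler integration with a constant external current standing in for the synaptic terms; no NMDA gating or global inhibition; parameter defaults taken from Table \[tab:spiking\_params\], in SI units):

```python
def lif_trace(I_ext, dt=1e-4, tau_m=0.0245, V_R=-0.0723, V_th=-0.0486,
              R=353e6, tau_ref=0.0011):
    """Euler integration of tau_m dV/dt = (V_R - V) + R*I, with reset to
    V_R at threshold and an absolute refractory period.  Returns the list
    of spike times (seconds)."""
    V, refr, spikes = V_R, 0.0, []
    for k, I in enumerate(I_ext):
        if refr > 0.0:
            refr -= dt          # neuron is silent during the refractory period
            continue
        V += dt / tau_m * ((V_R - V) + R * I)
        if V >= V_th:
            spikes.append(k * dt)
            V = V_R
            refr = tau_ref
    return spikes
```

With these parameters the rheobase current is $(V_\text{th} - V_R)/R \approx 67\,\mathrm{pA}$: constant inputs above that value produce regular firing, inputs below stay subthreshold.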
Table \[tab:spiking\_params\] gives the values of the parameters we used in the simulations. These values were chosen to match the firing statistics of neurons in bird RA, as described below.
Parameter Symbol Value Parameter Symbol Value
------------------------------------- -------------------- ------------------------ ------------------------------------------------- ---------------- ---------------------------
No. of conductor neurons $300$ No. of student neurons $80$
Reset potential $V_R$ $-72.3 \,\mathrm{mV}$ Input resistance $R$ $353 \, \mathrm{M\Omega}$
Threshold potential $V_\text{th}$ $-48.6\,\mathrm{mV}$ Strength of inhibition $g_\text{inh}$ $1.80 \, \mathrm{mV}$
Membrane time constant $\tau_m$ $24.5 \,\mathrm{ms}$ Fraction NMDA receptors $r$ $0.9$
Refractory period $\tau_\text{ref}$ $1.1 \, \mathrm{ms}$ Strength of synapses from tutor $w$ $100 \, \mathrm{nA}$
AMPA time constant $\tau_\text{AMPA}$ $6.3 \, \mathrm{ms}$ No. of conductor synapses per student neuron $148$
NMDA time constant $\tau_\text{NMDA}$ $81.5 \, \mathrm{ms} $ Mean strength of synapses from conductor $32.6 \, \mathrm{nA}$
Time constant for global inhibition $\tau_\text{inh}$ $20 \, \mathrm{ms}$ Standard deviation of conductor–student weights $17.4 \, \mathrm{nA}$
Conductor firing rate during bursts $632 \,\mathrm{Hz}$
: Values for parameters used in the spiking simulations.\[tab:spiking\_params\]
The voltage dynamics for conductor and tutor neurons were not simulated explicitly. Instead, each conductor neuron was assumed to fire a burst at a fixed time during the simulation. The onset of each burst had additive timing jitter of $\pm 0.3 \, \mathrm{ms}$ and each spike in the burst had a jitter of $\pm 0.2 \, \mathrm{ms}$. This modeled the uncertainty in spike times that is observed in *in vivo* recordings from songbirds [@Hahnloser2002]. Tutor neurons fired Poisson spikes with a time-dependent firing rate that was set as described in the main text.
The initial connectivity between conductor and student neurons was chosen to be sparse (see Table \[tab:spiking\_params\]). The initial distribution of synaptic weights was log-normal, matching experimentally measured values for zebra finches [@Garst-Orozco]. Since these measurements are done in the slice, the absolute number of HVC synapses per RA neuron is likely to have been underestimated. The number of conductor–student synapses we start with in our simulations is thus chosen to be higher than the value reported in that paper (see Table \[tab:spiking\_params\]), and is allowed to change during learning. We checked that the learning paradigm described here is robust to substantial changes in these parameters, but we chose values that are faithful to the birdsong experiments and thus reproduce the RA spiking statistics during song.
The synapses projecting onto each student neuron from the tutor have a weight that is fixed during our simulations reflecting the finding in [@Garst-Orozco] that the average strength of LMAN–RA synapses for zebra finches does not change with age. There is some evidence that individual LMAN–RA synapses undergo plasticity concurrently with the HVC–RA synapses [@Mehaffey2015] but we did not seek to model this effect. There are also developmental changes in the kinetics of NMDA-mediated synaptic currents in both HVC–RA and LMAN–RA synapses which we do not model [@Stark1999]. These, however, happen early in development, and thus are unlikely to have an effect on song crystallization, which is what our model focuses on. @Stark1999 also observed changes in the relative contribution of NMDA to AMPA responses in the HVC–RA synapses. We do not incorporate such effects in our model since we do not explicitly model the dynamics of HVC neurons in this paper. However, this is an interesting avenue for future work, especially since there is evidence that area HVC can also contribute to learning, in particular in relation to the temporal structure of song [@Ali2013].
Matching spiking statistics with experimental data
--------------------------------------------------
We used an optimization technique to choose parameters to maximize the similarity between the statistics of spiking in our simulations and the firing statistics observed in neural recordings from the songbird. The comparison was based on several descriptive statistics: the average firing rate; the coefficient of variation and skewness of the distribution of inter-spike intervals; the frequency and average duration of bursts; and the firing rate during bursts. For calculating these statistics, bursts were defined to start if the firing rate went above 80 Hz and last until the rate decreased below 40 Hz.
To carry out such optimizations in the stochastic context of our simulations, we used an evolutionary algorithm—the covariance matrix adaptation evolution strategy (CMA-ES) [@Hansen2006]. The objective function was based on the relative error between the simulation statistics $x_i^\text{sim}$ and the observed statistics $x_i^\text{obs}$, $$\label{eq:spiking_optim_obj_fun}
\text{error} = \left[\sum_i \left(\frac {x_i^\text{sim}} {x_i^\text{obs}} - 1\right)^2\right]^{1/2}\,.$$ Equal weight was placed on optimizing the firing statistics in the juvenile (based on a recording from a 43 dph bird) and optimizing firing in the adult (based on a recording from a 160 dph bird). In this optimization there was no learning between the juvenile and adult stages. We simply required that the number of HVC synapses per RA neuron, and the mean and standard deviation of the corresponding synaptic weights, were in the ranges seen in the juvenile and adult by @Garst-Orozco. The optimization was carried out in `Python` (`RRID:SCR_008394`), using code from <https://www.lri.fr/~hansen/cmaes_inmatlab.html>. The results fixed the parameter choices in Table \[tab:spiking\_params\], which were then used to study our learning paradigm. While these choices are important for achieving firing statistics that are similar to those seen in recordings from the bird, our learning paradigm is robust to large variations in the parameters in Table \[tab:spiking\_params\].
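The objective above amounts to only a few lines of code (a restatement, with our own function name):

```python
import numpy as np

def relative_error(sim_stats, obs_stats):
    """Root-sum-squared relative deviation of the simulated summary
    statistics from the observed ones (the CMA-ES objective above)."""
    sim = np.asarray(sim_stats, dtype=float)
    obs = np.asarray(obs_stats, dtype=float)
    return float(np.sqrt(np.sum((sim / obs - 1.0) ** 2)))
```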
Software and data
-----------------
We used custom-built `Python` (`RRID:SCR_008394`) code for simulations and data analysis. The software and data that we used can be accessed online on `GitHub` (`RRID:SCR_002630`) at <https://github.com/ttesileanu/twostagelearning>.
Appendix
========
Effect of nonlinearities
------------------------
We can generalize the model from eq. by using a nonlinear transfer function from student activities to motor output, and a nonlinear activation function for student neurons: $$\label{eq:modelNonlinear}
\begin{split}
y_a(t) &= N_a\Bigl(\sum_j M_{aj} s_j(t)\Bigr)\,,\\
s_j(t) &= F\Bigl(\sum_i W_{ij} c_i(t) + w g_j(t) - x_\text{inh}\Bigr)\,.
\end{split}$$ Suppose further that we use a general loss function, $$\label{eq:generalLoss}
L = \int_0^T \mathcal L\bigl(\{y_a(t) - \bar y_a(t)\}\bigr) \, dt\,.$$ Carrying out the same argument as that from section \[sec:si:derivation\], the gradient descent condition, eq. , implies $$\label{eq:nonlinearGDChange}
\Delta W_{ij} = -\gamma \int_0^T \sum_a M_{aj} N_a' F' c_i(t) \left.\frac {\partial \mathcal L} {\partial y_a}\right\rvert_{y_a(t) - \bar y_a(t)} \,.$$ The departure from the quadratic loss function, $\mathcal L \ne \frac 12 \sum_a (y_a(t) - \bar y_a(t))^2$, and the nonlinearities in the output, $N_a$, have the effect of redefining the motor error, $$\label{eq:nonlinearMotorError}
\epsilon_j(t) = \sum_a M_{aj} N_a' \left.\frac {\partial \mathcal L} {\partial y_a}\right\rvert_{y_a(t) - \bar y_a(t)}\,.$$ A proper loss function will be such that the derivatives $\partial \mathcal L / \partial y_a$ vanish when $y_a(t) = \bar y_a(t)$, and so the motor error $\epsilon_j$ as defined here is zero when the rendition is perfect, as expected. If we use a tutor that ignores the nonlinearities in a nonlinear system, *i.e.,* if we use eq. instead of eq. to calculate the tutor signal that is plugged into eq. , we still expect successful learning provided that $N_a' > 0$ and that $\mathcal L$ is itself an increasing function of $\lvert y_a - \bar y_a\rvert$ (see section \[sec:alt\_controllers\]). This is because replacing eq. with eq. would affect the magnitude of the motor error without significantly changing its direction. In more complicated scenarios, if the transfer function to the output is not monotonic, there is the potential that using eq. would push the system away from convergence instead of towards it. In such a case, an adaptive mechanism, such as the reinforcement rules from eqns. or can be used to adapt to the local values of the derivatives $N_a'$ and $\partial \mathcal L / \partial y_a$.
Finally, the nonlinear activation function $F$ introduces a dependence on the student output $s_j(t)$ in eq. , since $F'$ is evaluated at $F^{-1}(s_j(t))$. To obtain a good match between the student and the tutor in this context, we can modify the student plasticity rule by adding a dependence on the postsynaptic activity, $$\label{eq:plasticityNonlinear}
\frac {dW_{ij}} {dt} = \eta \, \tilde c_i(t) \, (g_j(t) - \theta) \, F'(F^{-1}(s_j(t)))\,.$$ In general, synaptic plasticity has been observed to indeed depend on postsynaptic activity [@Chistiakova2009; @Chistiakova2014]. Our derivation suggests that the effectiveness of learning could be improved by tuning this dependence of synaptic change on postsynaptic activity to the activation function of postsynaptic neurons, according to eq. . It would be interesting to check whether such tuning occurs in real neurons.
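For a logistic activation $F(x) = 1/(1+e^{-x})$, the postsynaptic factor takes a particularly simple form, since $F' = F(1-F)$ implies $F'(F^{-1}(s)) = s(1-s)$. A minimal sketch (ours, for illustration only):

```python
import numpy as np

def sigmoid(x):
    """Logistic activation F(x) = 1/(1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def plasticity_rate_nonlinear(c_tilde, g, s, eta, theta):
    """dW_ij/dt = eta * c~_i * (g_j - theta) * F'(F^{-1}(s_j)), using the
    logistic F, for which F'(F^{-1}(s)) = s * (1 - s)."""
    gain = s * (1.0 - s)
    return eta * np.outer(c_tilde, (g - theta) * gain)
```

The gain is largest at $s = 1/2$ and vanishes as the student neuron saturates, so synapses onto saturated neurons stop changing, as the derivation requires.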
Effect of different output functions {#sec:alt_controllers}
------------------------------------
In the main text, we assumed a linear mapping between student activities and motor output. Moreover, we assumed a myotopic organization, in which each student neuron projected to a single muscle, leading to a student–output assignment matrix $M_{aj}$ in which each column had a single non-zero entry. We also assumed that student neurons only contributed additively to the outputs, with no inhibitory activity. Here we show that our results hold for other choices of student–output mappings.
For example, assume a push-pull architecture, in which half of the student neurons controlling one output are excitatory and half are inhibitory. This can be used to decouple the overall firing rate in the student from the magnitude of the outputs. Learning works just as effectively as in the case of the purely additive student–output mapping when using matched tutors, Appendix Figures \[fig:si\_robustness\]A and \[fig:si\_robustness\]B. The consequences of mismatching student and tutor circuits are also not significantly changed, Appendix Figures \[fig:si\_robustness\]C and \[fig:si\_robustness\]D.
![\[fig:si\_robustness\]Robustness of learning. **A.** Error trace showing how average motor error evolves with repetitions of the motor program for rate-based plasticity paired with a matching tutor, when the student–output mapping has a push-pull architecture. The inset shows the final motor output (thick red line) compared to the target output (dotted black line). The output on the first rendition and at two other stages of learning are also shown. **B.** The error trace and final motor output shown for timing-based plasticity matched by a tutor with a long integration timescale. **C.** Effects of mismatch between student and tutor on reproduction accuracy when using a push-pull architecture for the student–output mapping. The heatmap shows the final reproduction error of the motor output after 1000 learning cycles when a student with plasticity parameters $\alpha$ and $\beta$ is paired with a tutor with memory timescale $\tau_\text{tutor}$. Here $\tau_1 = 80\,\mathrm{ms}$ and $\tau_2 = 40 \, \mathrm{ms}$. **D.** Error evolution curves as a function of the mismatch between student and tutor. Each plot shows how the error in the motor program changes during 1000 learning cycles for the same conditions as those shown in the heatmap. The region shaded in light pink shows simulations where the mismatch between student and tutor leads to a deteriorating instead of improving performance during learning. **E.** Convergence in the rate-based model with a linear-nonlinear controller that uses a sigmoidal nonlinearity. **F.** Convergence in the spiking model when inhibition is constant instead of activity-dependent ($V_\text{inh} = \text{constant}$).](figures/si_robustness){width="5.33in"}
We can also consider nonlinear mappings between the student activity and the final output. If there is a monotonic output nonlinearity, as in eq. with $N_a'>0$, the tutor signal derived for the linear case, eq. , can still achieve convergence, though at a slower rate and with a somewhat lower accuracy (see Appendix Figure \[fig:si\_robustness\]E for the case of a sigmoidal nonlinearity). For non-monotonic nonlinearities, the direction from which the optimum is approached can be crucial, as learning can get stuck in local minima of the loss function.[^2] Studying this might provide an interesting avenue to test whether learning in songbirds is based on a gradient descent-type rule or on a more sophisticated optimization technique.
Different inhibition models
---------------------------
In the spiking model, we used an activity-dependent inhibitory signal that was proportional to the average student activity. Using a constant inhibition instead, $V_\text{inh} = \text{constant}$, does not significantly change the results: see Appendix Figure \[fig:si\_robustness\]F for an example.
Effect of changing plasticity kernels
-------------------------------------
In the main text, we used exponential kernels with $\tau_1 = 80\,\mathrm{ms}$ and $\tau_2 = 40\,\mathrm{ms}$ for the smoothing of the conductor signal that enters the synaptic plasticity rule, eq. . We can generalize this in two ways: we can use different timescales $\tau_1$, $\tau_2$, or we can use a different functional form for the kernels. (Note that in the main text we showed the effects of varying the parameters $\alpha$ and $\beta$ in the plasticity rule, while the timescales $\tau_1$ and $\tau_2$ were kept fixed.)
The values for the timescales $\tau_{1,2}$ were chosen to roughly match the shape of the plasticity curve measured in slices of zebra finch RA [@Mehaffey2015] (see Figures \[fig:bird\_vs\_model\]C, \[fig:bird\_vs\_model\]D). The main predictions of our model, that learning is most effective when the tutor signal is matched to the student plasticity rule, and that large mismatches between tutor and student lead to impaired learning, hold well when the student timescales change: see Appendix Figure \[fig:si\_alternate\_kernel\]A for the case when $\tau_1 = 20\,\mathrm{ms}$ and $\tau_2 = 10\,\mathrm{ms}$. In the main text we saw that the negative effects of tutor–student mismatch diminish for timescales that are shorter than $\sim\tau_{1,2}$. In Appendix Figure \[fig:si\_alternate\_kernel\]A, the range of timescales where a precise matching is not essential becomes very small because the student timescales are short.
Another generalization of our plasticity rule can be obtained by changing the functional form of the kernels used to smooth the conductor input. As an example, suppose $K_2$ is kept exponential, while $K_1$ is replaced by $$\bar K_1(t) = \begin{cases}
\frac {1} {\bar \tau_1^2}\, t e^{-t/\bar \tau_1} & \text{for $t\ge 0$,}\\
0 & \text{else.}
\end{cases}$$ An example of learning using an STDP rule based on kernels $\bar K_1$ and $K_2$ where $\bar \tau_1 = \tau_2$ is shown in Appendix Figure \[fig:si\_alternate\_kernel\]B. The matching tutor has the same form as before, eq. with timescale $\tau_\text{tutor} = \tau_\text{tutor}^*$ given by eq. , but with $\tau_1 = 2\bar\tau_1 = 2\tau_2$. We can see that learning is as effective as in the case of purely exponential kernels.
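The factor of two in $\tau_1 = 2\bar\tau_1$ reflects the first moment of the alpha-function kernel: $\int_0^\infty \bar K_1(t)\,dt = 1$ while $\int_0^\infty t\,\bar K_1(t)\,dt = 2\bar\tau_1$, twice the first moment of an exponential with the same timescale. A quick numerical check (ours):

```python
import numpy as np

bar_tau1 = 0.04                      # example value; bar_tau1 = tau_2 above
t = np.linspace(0.0, 1.0, 200_001)   # ~25 time constants; truncation is tiny
dt = t[1] - t[0]
barK1 = t * np.exp(-t / bar_tau1) / bar_tau1**2
m0 = np.sum(barK1) * dt              # normalization, should be ~1
m1 = np.sum(t * barK1) * dt          # first moment, should be ~2 * bar_tau1
```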
![Effect of changing conductor smoothing kernels in the plasticity rule. (**A**) Matrix showing learning accuracy when using different timescales for the student plasticity rule. Each entry in the heatmap shows the average rendition error after 1000 learning steps when pairing a tutor with timescale $\tau_\text{tutor}$ with a non-matched student. Here the kernels are exponential, with timescales $\tau_1 = 20 \,\mathrm{ms}$, $\tau_2 = 10 \,\mathrm{ms}$. (**B**) Evolution of motor error with learning using kernels $\sim e^{-t/\tau}$ and $\sim t e^{-t/\tau}$, instead of the two exponentials used in the main text. The tutor signal is as before, eq. . The inset shows the final output for the trained model, for one of the two output channels. Learning is as effective and fast as before.\[fig:si\_alternate\_kernel\]](figures/ratebased_alt_cond_kernel_results){width="5.8in"}
More general conductor patterns
-------------------------------
In the main text, we have focused on a conductor whose activity matches that observed in area HVC of songbirds [@Hahnloser2002]: each neuron fires a single burst during the motor program. Our model, however, is not restricted to this case. We generated alternative conductor patterns by using arbitrarily-placed bursts of activity, as in Appendix Figure \[fig:si\_random\_conductor\]A. The model converges to a good rendition of the target program, Appendix Figure \[fig:si\_random\_conductor\]B. Learning is harder in this case because many conductor neurons can be active at the same time, and the weight updates affect not only the output of the system at the current position in the motor program, but also at all the other positions where the conductor neurons fire. This is in contrast to the HVC-like conductor, where each neuron fires at a single point in the motor program, and thus the effect of weight updates is better localized. More generally, simulations show that the sparser the conductor firing, the faster the convergence (data not shown). The accuracy of the final rendition of the motor program (Appendix Figure \[fig:si\_random\_conductor\]B, inset) is also not as good as before.
![Learning with arbitrary conductor activity. **A.** Typical activity of conductor neurons. 20 of the 100 neurons included in the simulation are shown. The activity pattern is chosen so that about 10% of the neurons are active at any given time. The pattern is chosen randomly but is fixed during learning. Each conductor burst lasts $30\,\mathrm{ms}$. **B.** Convergence curve and final rendition of the motor program (in inset). Learning included two output channels but the final output is shown for only one of them.\[fig:si\_random\_conductor\]](figures/ratebased_alt_conductor_results)
Edge effects
------------
In our derivation of the matching tutor rule, we assumed that the system has enough time to integrate all the synaptic weight changes from eq. . However, some of these changes occur tens or hundreds of milliseconds after the inputs that generated them, due to the timescales used in the plasticity kernel. Since our simulations are only run for a finite amount of time, there will in general be edge effects, where periods of the motor program towards the end of the simulations will have difficulty converging. To offset such numerical issues, we ran the simulations for a few hundred milliseconds longer than the duration of the motor program, and ignored the data from this extra period. Our simulations typically run for $600\,\mathrm{ms}$, and the time reserved for relaxation after the end of the program was set to $1200\,\mathrm{ms}$. The long relaxation time was chosen to allow for cases where the tutor was chosen to have a very long memory timescale.
Parameter optimization for reproducing juvenile and adult spiking statistics
----------------------------------------------------------------------------
We set the parameters in our simulations to reproduce spiking statistics from recordings in zebra finch RA as closely as possible. Appendix Figure \[fig:si\_spiking\_matching\_violins\] shows how the distribution of summary statistics obtained from 50 runs of the simulation compares to the distributions calculated from recordings in birds at various developmental stages. Each plot shows a standard box and whisker plot superimposed over a kernel-density estimate of the distribution of a given summary statistic, either over simulation runs or over recordings from birds at various stages of song learning. We ran two sets of simulations, one for a bird with juvenile-like connectivity between HVC and RA, and one with adult-like connectivity (see Methods). In these simulations there was no learning to match the timecourse of songs—the goal was simply to identify parameters that lead to birdsong-like firing statistics.
The qualitative match between our simulations and recordings is good, but the simulations are less variable than the measurements. This may be due to sources of variability that we have ignored—for example, all our simulated neurons had exactly the same membrane time constants, refractory periods, and threshold potentials, which is not the case for real neurons. Another reason might be that in our simulations, all the runs were performed for the same network, while the measurements are from different cells in different birds.
![Violin plots showing how the spiking statistics from our simulation compared to the statistics obtained from neural recordings. Each violin shows a kernel-density estimate of the distribution that a particular summary statistic had in either several runs of a simulation, or in several recordings from behaving birds. The circle and the box within each violin show the median and the interquartile range. \[fig:si\_spiking\_matching\_violins\]](figures/spiking_matching_violins_double_edited)
Effect of spiking stochasticity on learning
-------------------------------------------
As pointed out in the main text, learning is affected in the spiking simulations when the tutor error integration timescale $\tau_\text{tutor}$ becomes very long. More specifically, two distinct effects occur. First, the fluctuations in the motor output increase, leading to a poorer match to the shape of the target motor program. And second, the whole output gets shifted up, towards higher muscle activation values. Both of these effects can be traced back to the stochasticity of the tutor signal.
In the spiking simulations, tutor neurons are assumed to fire Poisson spikes following a time-dependent firing rate that obeys eq. . By the nature of the Poisson process, the tutor output in this case will contain fluctuations around the mean, $g(t) \sim \bar g(t) + \xi(t)$. Recall that the scale of $g(t)$ is set by the threshold $\theta$ and thus so is the scale of the variability $\xi(t)$.
As long as the tutor error integration timescale is not very long, $g(t)$ roughly corresponds to a smoothed version of the motor error $\epsilon(t)$ (*cf.* eq. ). However, as $\tau_\text{tutor}$ grows past the duration $T$ of the motor program, the exponential term in eq. becomes essentially constant, leading to a tutor signal $\bar g(t)$ whose departures from the center value $\theta$ decrease in proportion to the timescale $\tau_\text{tutor}$. As far as the student is concerned, the relevant signal is $g(t) - \theta$ (eq. ), and thus, when $\tau_\text{tutor} > T$, the signal-to-noise ratio in the tutor guiding signal starts to decrease as $1/\tau_\text{tutor}$. This ultimately leads to a very noisy rendition of the target program. One way to improve this would be to increase the gain factor $\zeta$ that controls the relation between the motor error and the tutor signal (see eq. ). This improves the ability of the system to converge onto its target in the late stages of learning. In the early stages of learning, however, this could lead to saturation problems. One way to fix this would be to use a variable gain factor $\zeta$ that ensures the whole range of tutor firing rates is used without generating too much saturation. This would be an interesting avenue for future research.
Reducing the fluctuations in the tutor signal also decreases the fluctuations in the conductor–student synaptic weights, which leads to fewer weights being clamped at 0 because of the positivity constraint. This reduces the shift between the learned motor program and the target. As mentioned in the main text, another approach to reducing or eliminating this shift is to allow for negative weights or (more realistically) to use a push-pull mechanism, in which the activity of some student neurons acts to increase muscle output, while the activity of other student neurons acts as an inhibition on muscle output.
Plasticity parameter values
===========================
In the heatmaps that appear in many of the figures in the main text and in the supplementary information, we kept the timescales $\tau_1$ and $\tau_2$ constant while varying $\alpha$ and $\beta$ to modify the student plasticity rule. Since the overall scale of $\alpha$ and $\beta$ is inconsequential as it can be absorbed into the learning rate (as explained in section \[sec:efficient\_learning\]), we imposed the further constraint $\alpha - \beta = 1$. This implies that we effectively focused on a one-parameter family of student plasticity rules, identified by the value of $\alpha$ (and the corresponding value for $\beta = \alpha - 1$). In the figures, we expressed this instead in terms of the timescale of the optimally-matching tutor, $\tau_\text{tutor}^*$, as defined in eq. .
Below we give the explicit values of $\alpha$ and $\beta$ that we used for each row in the heatmaps. These can be calculated by solving eq.  for $\alpha$, using $\beta = \alpha - 1$ and assuming that $\tau_1 = 80\,\mathrm{ms}$ and $\tau_2 = 40\,\mathrm{ms}$.
$\tau_\text{tutor}^*$ $\alpha$ $\beta$
----------------------- ---------- ---------
10 $-0.75$ $-1.75$
20 $-0.5$ $-1.5$
40 $0.0$ $-1.0$
80 $1.0$ $0.0$
160 $3.0$ $2.0$
320 $7.0$ $6.0$
640 $15.0$ $14.0$
1280 $31.0$ $30.0$
2560 $63.0$ $62.0$
5120 $127.0$ $126.0$
10240 $255.0$ $254.0$
20480 $511.0$ $510.0$
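The entries above can be generated programmatically. A minimal Python sketch, assuming the matching relation $\tau_\text{tutor}^* = (\alpha\tau_1 - \beta\tau_2)/(\alpha - \beta)$, which with $\alpha - \beta = 1$ reduces to $\tau_\text{tutor}^* = \alpha(\tau_1 - \tau_2) + \tau_2$ and reproduces the tabulated values:

```python
TAU_1, TAU_2 = 80.0, 40.0  # ms, as assumed above

def plasticity_params(tau_tutor_star):
    """Return (alpha, beta) for a matching tutor timescale (in ms)."""
    # With alpha - beta = 1, the assumed matching relation
    # tau* = (alpha*tau_1 - beta*tau_2)/(alpha - beta) reduces to
    # tau* = alpha*(tau_1 - tau_2) + tau_2, which we solve for alpha.
    alpha = (tau_tutor_star - TAU_2) / (TAU_1 - TAU_2)
    return alpha, alpha - 1.0
```

For example, `plasticity_params(160)` returns $\alpha = 3$, $\beta = 2$, matching the corresponding row of the table.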
[^1]: We thank the referees for suggesting this way of describing our results.
[^2]: We thank Josh Gold for this observation.
---
abstract: 'The U-Net was presented in 2015. With its straightforward and successful architecture it quickly evolved into a commonly used benchmark in medical image segmentation. The adaptation of the U-Net to novel problems, however, comprises several degrees of freedom regarding the exact architecture, pre-processing, training and inference. These choices are not independent of each other and substantially impact the overall performance. The present paper introduces the nnU-Net (“no-new-Net”), which refers to a robust and self-adapting framework on the basis of 2D and 3D vanilla U-Nets. We make a strong case for removing the superfluous bells and whistles of many proposed network designs and instead focus on the remaining aspects that determine the performance and generalizability of a method. We evaluate the nnU-Net in the context of the Medical Segmentation Decathlon challenge, which measures segmentation performance in ten disciplines comprising distinct entities, image modalities, image geometries and dataset sizes, with no manual adjustments between datasets allowed. At the time of manuscript submission, nnU-Net achieves the highest mean dice scores across all classes and seven phase 1 tasks (except class 1 in BrainTumour) in the online leaderboard of the challenge.'
author:
- Fabian Isensee
- Jens Petersen
- Andre Klein
- David Zimmerer
- 'Paul F. Jaeger'
- Simon Kohl
- Jakob Wasserthal
- Gregor Köhler
- Tobias Norajitra
- Sebastian Wirkert
- 'Klaus H. Maier-Hein'
bibliography:
- 'bibliography.bib'
title: |
nnU-Net: Self-adapting Framework\
for U-Net-Based Medical Image Segmentation
---
Introduction
============
Medical Image Segmentation is currently dominated by deep convolutional neural networks (CNNs). However, each segmentation benchmark seems to require specialized architectures and training scheme modifications to achieve competitive performance [@isensee2017brain; @li2017h; @roy2018concurrent; @oktay2018attention; @jegou2017one]. This results in a huge number of publications in the field which, alongside often limited validation on only a few or even just a single dataset, makes it increasingly difficult for researchers to identify methods that live up to their promised superiority beyond the limited scenarios they are demonstrated on. The Medical Segmentation Decathlon is intended to specifically address this issue: participants in this challenge are asked to create a segmentation algorithm that generalizes across 10 datasets corresponding to different entities of the human body. These algorithms may dynamically adapt to the specifics of a particular dataset, but are only allowed to do so in a fully automatic manner. The challenge is split into two successive phases: 1) a development phase in which participants are given access to 7 datasets to optimize their approach on and, using their final and thus frozen method, must submit segmentations for the corresponding 7 held-out test sets; 2) a second phase to evaluate exactly the same method on 3 previously undisclosed datasets.
We hypothesize that some of the architectural modifications presented recently are in part overfitted to specific problems or could suffer from imperfect validation that results from sub-optimal reimplementations of the state-of-the-art. Using the U-Net as a benchmark on an in-house dataset, for example, requires the adaptation of the method to the novel problem. This spans several degrees of freedom. Even though the architecture itself is quite straight-forward, and even though the method is quite commonly used as a benchmark, we believe that the remaining interdependent choices regarding the exact architecture, pre-processing, training, inference and post-processing quite often cause the U-Net to underperform when used as a benchmark. Additionally, architectural tweaks that are intended to improve the performance of a network can rather easily be demonstrated to work if the network is not yet fully optimized for the task at hand, allowing for plenty of headroom for the tweak to improve results. In our own preliminary experiments, these tweaks however were unable to improve segmentation results in fully optimized networks and thus most likely unable to advance the state of the art. This leads us to believe that the influence of non-architectural aspects in segmentation methods is much more impactful, but at the same time also severely underestimated.
In this paper, we present the nnU-Net (“no-new-Net”) framework. It rests on a set of three comparatively simple U-Net models that contain only minor modifications to the original U-Net [@ronneberger2015u]. We omit recently proposed extensions such as for example the use of residual connections [@he2016identity; @milletari2016v], dense connections [@jegou2017one] or attention mechanisms [@oktay2018attention]. The nnU-Net automatically adapts its architectures to the given image geometry. More importantly though, the nnU-Net framework thoroughly defines all the other steps around them. These are steps where much of the nets’ performance can be gained or respectively lost: preprocessing (e.g. resampling and normalization), training (e.g. loss, optimizer setting and data augmentation), inference (e.g. patch-based strategy and ensembling across test-time augmentations and models) and a potential post-processing (e.g. enforcing single connected components if applicable).
Methods
=======
Network architectures {#networkarchitecture}
---------------------
Medical images commonly encompass a third dimension, which is why we consider a pool of basic U-Net architectures consisting of a 2D U-Net, a 3D U-Net and a U-Net Cascade. While the 2D and 3D U-Nets generate segmentations at full resolution, the cascade first generates low resolution segmentations and subsequently refines them. Our architectural modifications as compared to the U-Net’s original formulation are close to negligible and instead we focus our efforts on designing an automatic training pipeline for these models.
The U-Net [@ronneberger2015u] is a successful encoder-decoder network that has received a lot of attention in recent years. Its encoder part works similarly to a traditional classification CNN in that it successively aggregates semantic information at the expense of reduced spatial information. Since in segmentation, both semantic as well as spatial information are crucial for the success of a network, the missing spatial information must somehow be recovered. The U-Net does this through the decoder, which receives semantic information from the bottom of the ’U’ and recombines it with higher resolution feature maps obtained directly from the encoder through skip connections. Unlike other segmentation networks, such as FCN [@long2015fully] and previous iterations of DeepLab [@chen2018deeplab], this allows the U-Net to segment fine structures particularly well.
Just like the original U-Net, we use two plain convolutional layers between poolings in the encoder and transposed convolution operations in the decoder. We deviate from the original architecture in that we replace ReLU activation functions with leaky ReLUs (negative slope $10^{-2}$) and use instance normalization [@ulyanov2016instance] instead of the more popular batch normalization [@ioffe2015batch].
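The two substitutions can be written down concretely. A minimal NumPy sketch (an illustration, not the authors' implementation) of a leaky ReLU with negative slope $10^{-2}$ and of instance normalization, which, unlike batch normalization, computes statistics per sample and per channel over the spatial axes only:

```python
import numpy as np

def leaky_relu(x, neg_slope=1e-2):
    # Identity for non-negative inputs, small linear slope for negative ones.
    return np.where(x >= 0, x, neg_slope * x)

def instance_norm(x, eps=1e-5):
    # x has shape (batch, channels, *spatial); each (sample, channel) slice
    # is normalized independently, so statistics do not mix across the batch.
    spatial_axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=spatial_axes, keepdims=True)
    var = x.var(axis=spatial_axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

The learnable affine parameters of instance normalization are omitted here for brevity.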
### 2D U-Net
Intuitively, using a 2D U-Net in the context of 3D medical image segmentation appears to be suboptimal because valuable information along the z-axis cannot be aggregated and taken into consideration. However, there is evidence [@isensee2017automatic] that conventional 3D segmentation methods deteriorate in performance if the dataset is anisotropic (cf. Prostate dataset of the Decathlon challenge).
### 3D U-Net
A 3D U-Net seems like the appropriate method of choice for 3D image data. In an ideal world, we would train such an architecture on the entire patient’s image. In reality however, we are limited by the amount of available GPU memory which allows us to train this architecture only on image patches. While this is not a problem for datasets comprised of smaller images (in terms of number of voxels per patient) such as the Brain Tumour, Hippocampus and Prostate datasets of this challenge, patch-based training, as dictated by datasets with large images such as Liver, may impede training. This is due to the limited field of view of the architecture which thus cannot collect sufficient contextual information to e.g. correctly distinguish parts of a liver from parts of other organs.
![U-Net Cascade (on applicable datasets only). Stage 1 (left): a 3D U-Net processes downsampled data, the resulting segmentation maps are upsampled to the original resolution. Stage 2 (right): these segmentations are concatenated as one-hot encodings to the full resolution data and refined by a second 3D U-Net.[]{data-label="fig:cascade"}](pyramid_figure_bw.pdf){width="\textwidth"}
### U-Net Cascade
To address this practical shortcoming of a 3D U-Net on datasets with large image sizes, we additionally propose a cascaded model. Therefore, a 3D U-Net is first trained on downsampled images (stage 1). The segmentation results of this U-Net are then upsampled to the original voxel spacing and passed as additional (one hot encoded) input channels to a second 3D U-Net, which is trained on patches at full resolution (stage 2). See Figure \[fig:cascade\].
### Dynamic adaptation of network topologies
Due to the large differences in image size (median shape $482 \times 512 \times 512$ for Liver vs. $36 \times 50 \times 35$ for Hippocampus) the input patch size and number of pooling operations per axis (and thus implicitly the number of convolutional layers) must be automatically adapted for each dataset to allow for adequate aggregation of spatial information. Apart from adapting to the image geometries, there are technical constraints like the available memory to account for. Our guiding principle in this respect is to dynamically trade off the batch-size versus the network capacity, presented in detail below:
We start out with network configurations that we know to be working with our hardware setup. For the 2D U-Net this configuration is an input patch size of $256 \times 256$, a batch size of 42 and 30 feature maps in the highest layers (number of feature maps doubles with each downsampling). We automatically adapt these parameters to the median plane size of each dataset (where we use the plane with the lowest in-plane spacing, corresponding to the highest resolution), so that the network effectively trains on entire slices. We configure the networks to pool along each axis until the feature map size for that axis is smaller than 8 (but not more than a maximum of 6 pooling operations). Just like the 2D U-Net, our 3D U-Net uses 30 feature maps at the highest resolution layers. Here we start with a base configuration of input patch size $128 \times 128 \times 128$, and a batch size of 2. Due to memory constraints, we do not increase the input patch volume beyond $128^3$ voxels, but instead match the aspect ratio of the input patch size to that of the median size of the dataset in voxels. If the median shape of the dataset is smaller than $128^3$ then we use the median shape as input patch size and increase the batch size (so that the total number of voxels processed is the same as with $128 \times 128 \times 128$ and a batch size of 2). Just like for the 2D U-Net we pool (for a maximum of 5 times) along each axis until the feature maps have size 8.
For any network we limit the total number of voxels processed per optimizer step (defined as the input patch volume times the batch size) to a maximum of 5% of the dataset. For cases in excess, we reduce the batch size (with a lower-bound of 2).
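The pooling rule described above can be made concrete. A short sketch (our reading of the stated heuristic, not the authors' code) that halves an axis while its feature-map size is still at least 8, up to the stated maximum (6 pooling operations for the 2D U-Net, 5 for the 3D U-Net):

```python
def pools_per_axis(patch_size, max_pools):
    # For each axis, pool (halve) while the current size is at least 8,
    # but never more than max_pools times.
    counts = []
    for size in patch_size:
        n = 0
        while size >= 8 and n < max_pools:
            size //= 2
            n += 1
        counts.append(n)
    return counts
```

This reproduces the "num pool per axis" entries of the table below, e.g. `pools_per_axis([128, 128, 128], 5)` gives `[5, 5, 5]` and `pools_per_axis([20, 192, 192], 5)` gives `[2, 5, 5]`.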
All network topologies generated for the phase 1 datasets are presented in table \[segmentation\_architectures\].
\[segmentation\_architectures\]
  Task          Parameter              2D U-Net   3D U-Net      3D U-Net lowres
  ------------- ---------------------- ---------- ------------- -----------------
  BrainTumour   median patient shape   169x138    138x169x138   -
                input patch size       192x160    128x128x128   -
                batch size             89         2             -
                num pool per axis      5, 5       5, 5, 5       -
  Heart         median patient shape   320x232    115x320x232   58x160x116
                input patch size       320x256    80x192x128    64x160x128
                batch size             33         2             2
                num pool per axis      6, 6       4, 5, 5       4, 5, 5
  Liver         median patient shape   512x512    482x512x512   121x128x128
                input patch size       512x512    128x128x128   128x128x128
                batch size             10         2             2
                num pool per axis      6, 6       5, 5, 5       5, 5, 5
  Hippocampus   median patient shape   50x35      36x50x35      -
                input patch size       56x40      40x56x40      -
                batch size             366        9             -
                num pool per axis      3, 3       3, 3, 3       -
  Prostate      median patient shape   320x319    20x320x319    -
                input patch size       320x320    20x192x192    -
                batch size             26         4             -
                num pool per axis      6, 6       2, 5, 5       -
  Lung          median patient shape   512x512    252x512x512   126x256x256
                input patch size       512x512    112x128x128   112x128x128
                batch size             10         2             2
                num pool per axis      6, 6       4, 5, 5       4, 5, 5
  Pancreas      median patient shape   512x512    96x512x512    96x256x256
                input patch size       512x512    112x128x128   112x128x128
                batch size             10         2             2
                num pool per axis      6, 6       4, 5, 5       4, 5, 5
: Network topologies as automatically generated for the seven phase 1 tasks of the Medical Segmentation Decathlon challenge. 3D U-Net lowres refers to the first stage of the U-Net Cascade. The configuration of the second stage of the U-Net Cascade is identical to the 3D U-Net.
Preprocessing
-------------
The preprocessing is part of the fully automated segmentation pipeline that our method consists of and, as such, the steps presented below are carried out without any user intervention.
### Cropping
All data is cropped to the region of nonzero values. This has no effect on most datasets such as liver CT, but will reduce the size (and therefore the computational burden) of skull stripped brain MRI.
### Resampling
CNNs do not natively understand voxel spacings. In medical images, it is common for different scanners or different acquisition protocols to result in datasets with heterogeneous voxel spacings. To enable our networks to properly learn spatial semantics, all patients are resampled to the median voxel spacing of their respective dataset, where third order spline interpolation is used for image data and nearest neighbor interpolation for the corresponding segmentation mask.
Necessity for the U-Net Cascade is determined by the following heuristics: If the median shape of the resampled data has more than 4 times the voxels that can be processed as input patch by the 3D U-Net (with a batch size of 2), it qualifies for the U-Net Cascade and this dataset is additionally resampled to a lower resolution. This is done by increasing the voxel spacing (decrease resolution) by a factor of 2 until the above mentioned criterion is met. If the dataset is anisotropic, the higher resolution axes are first downsampled until they match the low resolution axis/axes and only then all axes are downsampled simultaneously. The following datasets of phase 1 fall within the set of described heuristics and hence trigger usage of the U-Net Cascade: Heart, Liver, Lung, and Pancreas.
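The qualification heuristic can be sketched in a few lines. For simplicity this illustration halves all axes uniformly (i.e., it treats the isotropic case; per the rule above, an anisotropic dataset would first have only its high-resolution axes downsampled) and works on shapes directly, whereas the actual pipeline resamples by voxel spacing:

```python
import math

PATCH_VOXELS = 128 ** 3  # 3D U-Net input patch processed with batch size 2

def qualifies_for_cascade(median_shape):
    # Cascade is triggered when the median image holds more than 4x the
    # voxels of the 3D U-Net input patch.
    return math.prod(median_shape) > 4 * PATCH_VOXELS

def lowres_median_shape(median_shape):
    # Halve the resolution along all axes until the criterion is met.
    shape = list(median_shape)
    while math.prod(shape) > 4 * PATCH_VOXELS:
        shape = [max(s // 2, 1) for s in shape]
    return shape
```

For the Liver median shape of $482 \times 512 \times 512$ this yields roughly $120 \times 128 \times 128$; small differences from the reported low-resolution stage arise because the actual pipeline resamples by spacing rather than shape.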
### Normalization
Because the intensity scale of CT scans is absolute, all CT images are automatically normalized based on statistics of the entire respective dataset: If the modality description in a dataset’s corresponding json descriptor file indicates ‘ct’, all intensity values occurring within the segmentation masks of the training dataset are collected and the entire dataset is normalized by clipping to the \[0.5, 99.5\] percentiles of these intensity values, followed by a z-score normalization based on the mean and standard deviation of all collected intensity values. For MRI or other image modalities (i.e. if no ‘ct’ string is found in the modality), simple z-score normalization is applied to each patient individually.
If cropping reduces the average size of patients in a dataset (in voxels) by 1/4 or more the normalization is carried out only within the mask of nonzero elements and all values outside the mask are set to 0.
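A NumPy sketch of the two normalization schemes described above (an illustration, not the authors' code); here `foreground_values` stands for the intensity values collected inside the training-set segmentation masks:

```python
import numpy as np

def normalize_ct(image, foreground_values):
    # Clip to the [0.5, 99.5] percentiles of the collected foreground
    # intensities, then z-score with their mean and standard deviation.
    lo, hi = np.percentile(foreground_values, [0.5, 99.5])
    clipped = np.clip(image, lo, hi)
    return (clipped - foreground_values.mean()) / foreground_values.std()

def normalize_other(image):
    # Non-CT modalities: simple per-patient z-score normalization.
    return (image - image.mean()) / image.std()
```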
Training Procedure
------------------
All models are trained from scratch and evaluated using five-fold cross-validation on the training set. We train our networks with a combination of dice and cross-entropy loss: $$\mathcal{L}_{total} = \mathcal{L}_{dice} + \mathcal{L}_{CE}$$
For 3D U-Nets operating on nearly entire patients (first stage of the U-Net Cascade and 3D U-Net if no cascade is necessary) we compute the dice loss for each sample in the batch and average over the batch. For all other networks we interpret the samples in the batch as a pseudo-volume and compute the dice loss over all voxels in the batch.
The dice loss formulation used here is a multi-class adaptation of the variant proposed in [@drozdzal2016importance]. Based on past experience [@isensee2017automatic; @isensee2017brain] we favor this formulation over other variants [@milletari2016v; @sudre2017generalised]. The dice loss is implemented as follows: $$\mathcal{L}_\mathrm{dc} = - \frac{2}{|K|} \sum_{k\in K}\frac{\sum_{i\in I} u_i^k v_i^k}{\sum_{i\in I} u_i^k + \sum_{i\in I} v_i^k}$$
where $u$ is the softmax output of the network and $v$ is a one hot encoding of the ground truth segmentation map. Both $u$ and $v$ have shape $I \times K$ with $i \in I$ being the number of pixels in the training patch/batch and $k\in K$ being the classes.
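A direct NumPy transcription of this loss (a sketch; a small $\epsilon$ is added in the denominator to guard against empty classes, which the formula above leaves implicit):

```python
import numpy as np

def soft_dice_loss(u, v, eps=1e-8):
    # u: softmax output, v: one-hot ground truth, both of shape (I, K).
    intersect = (u * v).sum(axis=0)          # sum_i u_i^k v_i^k per class
    denom = u.sum(axis=0) + v.sum(axis=0)    # sum_i u_i^k + sum_i v_i^k
    return -(2.0 / u.shape[1]) * (intersect / (denom + eps)).sum()
```

A perfect prediction ($u = v$, every class present) gives a loss of $-1$.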
We use the Adam optimizer with an initial learning rate of $3\times10^{-4}$ for all experiments. We define an epoch as the iteration over 250 training batches. During training, we keep an exponential moving average of the validation ($l_{MA}^{v}$) and training ($l_{MA}^{t}$) losses. Whenever $l_{MA}^{t}$ did not improve by at least $5\times 10^{-3}$ within the last 30 epochs, the learning rate was reduced by factor 5. The training was terminated automatically if $l_{MA}^{v}$ did not improve by more than $5\times 10^{-3}$ within the last 60 epochs, but not before the learning rate was smaller than $10^{-6}$.
### Data Augmentation
When training large neural networks from limited training data, special care has to be taken to prevent overfitting. We address this problem by utilizing a large variety of data augmentation techniques. The following augmentation techniques were applied on the fly during training: random rotations, random scaling, random elastic deformations, gamma correction augmentation and mirroring. Data augmentation was done with our own in-house framework which is publicly available at [[](https://github.com/MIC-DKFZ/batchgenerators)]{}. We define sets of data augmentation parameters for the 2D and 3D U-Net separately. These parameters are not modified between datasets.
Applying three dimensional data augmentation may be suboptimal if the maximum edge length of the input patch size of a 3D U-Net is more than two times as large as the shortest. For datasets where this criterion applies we use our 2D data augmentation instead and apply it slice-wise for each sample.
The second stage of the U-Net Cascade receives the segmentations of the previous step as additional input channels. To prevent strong co-adaptation we apply random morphological operators (erode, dilate, open, close) and randomly remove connected components of these segmentations.
### Patch Sampling
To increase the stability of our network training we enforce that more than a third of the samples in a batch contain at least one randomly chosen foreground class.
Inference
---------
Due to the patch-based nature of our training, all inference is done patch-based as well. Since network accuracy decreases towards the border of patches, we weigh voxels close to the center higher than those close to the border, when aggregating predictions across patches. Patches are chosen to overlap by patch size / 2 and we further make use of test time data augmentation by mirroring all patches along all valid axes.
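The center-weighting can be realized with a Gaussian importance map multiplied into each patch prediction before aggregation. A sketch (the width parameter `sigma_scale` is our assumption for illustration, not a value from the paper):

```python
import numpy as np

def gaussian_importance_map(patch_shape, sigma_scale=0.125):
    # Product of per-axis Gaussians, peaked at the patch center, used to
    # down-weight border voxels when aggregating overlapping patches.
    weight = np.ones(patch_shape)
    grids = np.meshgrid(*[np.arange(s) for s in patch_shape], indexing="ij")
    for grid, size in zip(grids, patch_shape):
        center = (size - 1) / 2.0
        sigma = max(size * sigma_scale, 1e-8)
        weight *= np.exp(-((grid - center) ** 2) / (2.0 * sigma ** 2))
    return weight
```

Each patch prediction is multiplied by this map, accumulated into the output volume, and finally divided by the accumulated weights.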
Combining the tiled prediction and test time data augmentation result in segmentations where the decision for each voxel is obtained by aggregating up to 64 predictions (in the center of a patient using 3D U-Net). For the test cases we use the five networks obtained from our training set cross-validation as an ensemble to further increase the robustness of our models.
Postprocessing
--------------
A connected component analysis of all ground truth segmentation labels is performed on the training data. If a class lies within a single connected component in all cases, this behaviour is interpreted as a general property of the dataset. Hence, all but the largest connected component for this class are automatically removed on predicted images of the corresponding dataset.
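The removal step amounts to keeping only the largest connected component of a predicted class. A pure-Python 2D sketch with 4-connectivity (a real 3D volume would use an optimized labeling routine such as `scipy.ndimage.label`):

```python
from collections import deque

def keep_largest_component(mask):
    # mask: list of lists of 0/1; keep only the largest 4-connected component.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Breadth-first search collects one connected component.
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```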
Ensembling and Submission
-------------------------
To further increase the segmentation performance and robustness all possible combinations of two out of three of our models are ensembled for each dataset. For the final submission, the model (or ensemble) that achieves the highest mean foreground dice score on the training set cross-validation is automatically chosen.
Experiments and Results
=======================
We optimize our network topologies using five-fold cross-validation on the phase 1 datasets. Our phase 1 cross-validation results as well as the corresponding submitted test set results are summarized in Table \[tab:results\]. A “-” indicates that the U-Net Cascade was not applicable to a dataset (i.e., not deemed necessary according to our criteria) because the images were already fully covered by the input patch size of the 3D U-Net. The model that was used for the final submission is highlighted in bold. Although several test set submissions were allowed by the platform, we consider it bad practice to do so. Hence we submitted only once and report the results of this single submission.
As can be seen in Table \[tab:results\] our phase 1 cross-validation results are robustly recovered on the held-out test set indicating a desired absence of over-fitting. The only dataset that suffers from a dip in performance on all of its foreground classes is BrainTumour. The data of this phase 1 dataset stems from the BRATS challenge [@menze2015multimodal] for which such performance drops between validation and testing are a common sight and attributed to a large shift in the respective data and/or ground-truth distributions.
Discussion
==========
In this paper we present the nnU-Net segmentation framework for the medical domain that directly builds around the original U-Net architecture [@ronneberger2015u] and dynamically adapts itself to the specifics of any given dataset. Based on our hypothesis that non-architectural modifications can be much more powerful than some of the recently presented architectural modifications, the essence of this framework is a thorough design of adaptive preprocessing, training scheme and inference. All design choices required to adapt to a new segmentation task are done in a fully automatic manner with no manual interaction. For each task the nnU-Net automatically runs a five-fold cross-validation for three different automatically configured U-Net models and the model (or ensemble) with the highest mean foreground dice score is chosen for final submission. In the context of the Medical Segmentation Decathlon we demonstrate that the nnU-Net performs competitively on the held-out test sets of 7 highly distinct medical datasets, achieving the highest mean dice scores for all classes of all tasks (except class 1 in the BrainTumour dataset) on the online leaderboard at the time of manuscript submission. We acknowledge that training three models and picking the best one for each dataset independently is not the cleanest solution. Given a longer timescale, one could investigate proper heuristics to identify the best model for a given dataset prior to training. Our current tendency favors the U-Net Cascade (or the 3D U-Net if the cascade cannot be applied) with the sole (close) exceptions being the Prostate and Liver tasks. Additionally, the added benefit of many of our design choices, such as the use of leaky ReLUs instead of regular ReLUs, and the parameters of our data augmentation have not been properly validated in the context of this challenge. Future work will therefore focus on systematically evaluating all design choices via ablation studies.
---
abstract: 'We propose a new algorithm for the solution of the robust multiple-load topology optimization problem. The algorithm can be applied to any type of problem, e.g., truss topology, variable thickness sheet or free material optimization. We assume that the given loads are uncertain and can be subject to small random perturbations. Furthermore, we define a rigorous measure of robustness of the given design with respect to these perturbations. To implement the algorithm, the users only need software to solve their standard multiple-load problem. Additionally, they have to solve a few small-dimensional eigenvalue problems. Numerical examples demonstrate the efficiency of our approach.'
author:
- Michal Kočvara
bibliography:
- 'truss\_vib\_smo.bib'
date: 'Received: date / Revised: date'
title: 'On Robustness Criteria and Robust Topology Optimization with Uncertain Loads[^1] '
---
Introduction {#sec:1}
============
This article has been motivated by the following sentence of an engineer in an industrial company: “When we use off-the-shelf topology optimization software, we always consider not only the nominal loads but also their angular perturbations by up to 30 degrees.” The goal of this article is to automate this heuristic and to give rigorous measures of robustness of a structure with respect to these perturbations.
Robust topology optimization (in fact, any robust optimization problem) can be approached from two different angles—a stochastic one and a deterministic one. Most of the existing literature deals with the stochastic approach [e.g. @evgrafov2003stochastic; @doltsinis2004robust; @conti2009shape]. The deterministic (worst case) approach has been pioneered by Ben-Tal, Nemirovski and El Ghaoui [@ben1997robust; @ben-tal-nemirovski; @el1997robust; @ben2009robust]. In their monograph, [@ben-tal-nemirovski] defined the concept of a robust counterpart to a nominal (convex) optimization problem, where the problem data is assumed to live in an uncertainty set. [@ben-tal-nemirovski] showed that if the uncertainty set is an ellipsoid, then the robust counterpart (a semi-infinite optimization problem) can be formulated as a computationally tractable convex cone optimization problem. In the same monograph, they presented explicit formulations of robust counterparts for the truss topology and the free material optimization problems with uncertainty in the loadings. Unfortunately, these problems (typically large-scale linear semidefinite optimization problems) are just too large to be computationally tractable in practical situations. For this reason, in [-@kocvara-zowe-nemirovski] we have developed a so-called cascading technique that reduces the dimension of the robust counterpart significantly. This article makes an attempt to go one step further in bringing the solution of the robust topology optimization problem closer to use in engineering practice.
After introducing the notation and the standard multiple-load topology optimization problem in Section \[sec:2\], we describe the main idea of our approach and the corresponding algorithm in Section \[sec:3\]. Section \[sec:ex\] is devoted to numerical experiments.
In the article we use standard notation for vectors and matrices: $x_i$ is the $i$-th element of vector $x\in{\mathbb{R}}^n$ and $A_{ij}$ an $(i,j)$-th element of matrix $A\in{\mathbb{R}}^{n\times m}$. If $I\subset\{1,2,\ldots,n\}$, $J\subset\{1,2,\ldots,n\}$ are sets of indices, then $x_I$ is a subvector of $x$ with indices from $I$ and $A_{IJ}$ a submatrix of $A$ with row indices from $I$ and column indices from $J$. For $x\in{\mathbb{R}}^n$, $\|x\|$ denotes the Euclidean norm of $x$.
Problem definition {#sec:2}
==================
We consider a general mechanical structure, discrete or discretized by the finite element method. The number of members or finite elements is denoted by $m$, the total number of “free” degrees of freedom (i.e., not fixed by Dirichlet boundary conditions) by $n$.
For a given set of $L$ (independent) load vectors $$\label{eq:fneq0}
f^{(\ell)}\in{\mathbb{R}}^n,\;\;f^{(\ell)}\neq 0,\qquad \ell=1,\ldots,L,$$ the structure should satisfy linear equilibrium equations $$\label{eq:eleq0}
K(x) u^{(\ell)} = f^{(\ell)}, \qquad \ell=1,\ldots,L.$$ Here $K(x)$ is the stiffness matrix of the structure, depending on a design variable $x$.
We do not assume any particular structure of $K(x)$ or its dependence on $x$. Therefore, the problem formulations and the conclusions apply to a broad class of problems, e.g., the truss topology optimization, variable thickness sheet, SIMP and free material optimization (see, e.g., @bendsoe). All we need is software for the solution of the specific multiple-load problem. Consequently, the design variables $x\in{\mathbb{R}}^m$, $x\geq 0$, represent, for instance, the thickness, cross-sectional area or material properties of the element.
Let $$X:=\{x\in{\mathbb{R}}^m\mid \sum\limits_{i=1}^mx_i \leq v;\
\underline{x}\leq x_i \leq \overline{x},\ i=1,\ldots,m\}$$ be the set of feasible design variables with some $v,\underline{x},\overline{x}\in{\mathbb{R}}$, $v>0$ and $0\leq\underline{x}\leq
\overline{x}$ (again, the specific form of this set is not important for our purposes). The standard formulation of the worst-case multiple-load topology optimization problem reads as follows: $$\begin{aligned}
{2}\label{eq:minc}
&\min_{x\in X,u\in{\mathbb{R}}^{L\cdot n}} \max_{\ \ell=1,\ldots,L\ } (f^{(\ell)})^Tu^{(\ell)} \\
&\mbox{subject to} \notag\\
&\qquad K(x) u^{(\ell)} = f^{(\ell)},\quad\ell=1,\ldots,L\,.\notag\end{aligned}$$ To simplify our notation, we will instead consider the following “nested” formulation $$\begin{aligned}
{2}\label{eq:ml}
&\min_{x\in X} \max_{\ \ell=1,\ldots,L\ }
(f^{(\ell)})^T K(x)^{-1}f^{(\ell)}\,,\end{aligned}$$ where, in case of $K(x)$ singular, we consider the generalized Moore-Penrose inverse of the matrix. Note that, for the numerical treatment, one would use the equivalent formulation $$\begin{aligned}
{2}\label{eq:ml1}
&\min_{x\in X,\gamma\in{\mathbb{R}}} \gamma \\
&\mbox{subject to} \notag\\
&\qquad (f^{(\ell)})^T K(x)^{-1}f^{(\ell)} \leq \gamma, \quad\ell=1,\ldots,L\,.\notag\end{aligned}$$ In the following, we will use formulation (\[eq:ml\]). This is just for the sake of keeping the notation fixed. In practical implementation, the users can use any multiple-load formulation implemented in their software.
Robust topology optimization {#sec:3}
============================
General approach
----------------
In their ground-breaking theory of robust convex optimization [@ben-tal-nemirovski] define a *robust counterpart* to a nominal convex optimization problem in the worst-case sense. The solution of the robust problem should be feasible for *any* instance of the random data and the optimum is attained at the maximum of the objective function over all these instances. [@ben-tal-nemirovski] show that if the data of the problem (vectors, matrices) lie in *ellipsoidal uncertainty sets*, the robust counterpart—essentially a semi-infinite optimization problem—can be converted into a numerically tractable (solvable in polynomial time) convex optimization problem.
Specifically, if we assume uncertainty in the loads of our topology optimization problem (\[eq:ml\]), the robust counterpart is defined as $$\label{eq:robbtn}
\min_{x\in X} \max_{\ \ell=1,\ldots,L\ } \max_{f\in {\cal U}_\ell}
f^T K(x)^{-1}f\,,$$ where $$\label{eq:robbtn_u}
{\cal U}_\ell :=\left\{ f\mid \exists g\in{\mathbb{R}}^p,\
\|g\|\leq 1: f=f_0^{(\ell)} + \sum_{i=1}^p g_i f_i^{(\ell)} \right\}\,;$$ here $f_0^{(\ell)}$ are the nominal loads and $f_i^{(\ell)}\in{\mathbb{R}}^n,\
i=1,\ldots,p$, define an ellipsoid around $f_0^{(\ell)}$. [@ben-tal-nemirovski] have shown that (\[eq:robbtn\]) with the uncertainty set (\[eq:robbtn\_u\]) can be formulated as a linear semidefinite optimization problem. Unfortunately, in the context of topology optimization, the dimension of this problem may be very large: basically, it is the number of the finite element nodes times the space dimension.
To avoid the problem of the prohibitive dimension, in [@kocvara-zowe-nemirovski] we have proposed a *cascading algorithm* that leads to an approximate solution of the original robust problem. The idea is to find only the “most dangerous” incidental loads and to solve the robust problem only with these dangerous loads, ignoring the others. In this article, we took inspiration from [@kocvara-zowe-nemirovski]; however, we have substantially modified the uncertainty sets, which also leads to a modification of the algorithm. Our goal was to get closer to engineering practice and to make the approach usable for practitioners.
Uncertainty set
---------------
In @kocvara-zowe-nemirovski we have considered random perturbations of loads at any free node of the finite element mesh (or truss). This leads not only to very large dimensional robust counterparts but also to practical difficulties when a perturbation force can be applied to a node that would not normally be a part of the optimal structure.
In this article we are motivated by the engineering practice in which, instead of considering only the nominal loads, the engineers also apply these very loads in slightly perturbed directions. Our goal is to automate this heuristic and to give rigorous measures of the robustness of a particular design with respect to these perturbations.
Consider the multiple-load topology optimization problem (\[eq:ml\]) with loads $f^{(\ell)}$, $\ell=1,\ldots,L$. We assume that the loads are applied at certain nodes, either the nodes of the truss ground structure or nodes of the finite-element discretization. Each node $\nu_i$, $i=1,\ldots,N$, is associated with $d$ degrees of freedom $\nu_{i_1},\ldots,\nu_{i_d}$. Typically, $d$ is equal to the spatial dimension of the problem. As we have $n$ degrees of freedom, we assume that they can be ordered such that $$\{\nu_{1_1},\ldots,\nu_{1_d},\nu_{2_1},\ldots,\nu_{2_d},\ldots\ldots,
\nu_{N_1},\ldots,\nu_{N_d}\} =\{1,\ldots,n\}\,.$$
For each $f^{(\ell)}$ we find the set of indices of nodes with at least one non-zero component of $f_0^{(\ell)}$ $$\hat{I}_\ell:=\{i\mid (f_0^{(\ell)})_{\nu_{i_j}}\not= 0\quad\mbox{for some}\ j=1,\ldots,d\}\,,$$ the set of the corresponding degrees of freedom $$\label{eq:I}
I_\ell:=\{k\mid k=\nu_{i_j},\ i\in \hat{I}_\ell,\ j=1,\ldots,d\}$$ and its complement in $\{1,\ldots,n\}$: $$\label{eq:J}
J_\ell:=\{1,\ldots,n\}\setminus I_\ell\,.$$
Assume that instead of knowing each of the loads $f^{(\ell)}$ exactly, we only know that they lie in an ellipsoid around some *nominal loads* $f_0^{(\ell)}$, $\ell=1,\ldots,L$: $$\label{eq:unc}
f^{(\ell)} = f_0^{(\ell)}+P_\ell g,\quad \|g\|\leq 1,\quad g_i=0 \mbox{~if~}i\in J_\ell$$ where $P_\ell$ is a symmetric and positive semidefinite matrix with $(P_\ell)_{ij}=0$ if either $i\in J_\ell$ or $j\in J_\ell$. The choice of $P_\ell$ is discussed below.
#### Choice of $P_\ell$
Consider a nominal load $f_0^{(\ell)}$. Note first the second part of definition (\[eq:unc\]) concerning the zero components of the perturbation vector $g$: it means that the perturbed load $f^{(\ell)}$ is only applied at the same nodes as the nominal load $f_0^{(\ell)}$. The matrix $P_\ell$ defines the neighbourhood of $f_0^{(\ell)}$ in which we can expect the random perturbations. Denote by $\tilde{P}_\ell$ the restriction $(P_\ell)_{I_\ell I_\ell}$. The choice $$\tilde{P}_\ell = \tau I$$ defines a ball of radius $\tau$ around $f_0^{(\ell)}$, see Fig. \[fig:unc\]-left. If we want to allow a significant angular perturbation of $f_0^{(\ell)}$ but only a small perturbation of its magnitude, we would choose $P_\ell$ to define a flat ellipsoid. For instance, we may take $$\tilde{P}_\ell = \begin{bmatrix}1.0\cdot 10^{-3}&0\\0&1\end{bmatrix}\quad
\mbox{for~} f_0^{(\ell)}= (10,\ 0)^T$$ or, generally, $$\tilde{P}_\ell = T^T\begin{bmatrix}1.0\cdot 10^{-3}d&0\\0&d\end{bmatrix}T\quad
\mbox{for~} f_0^{(\ell)}= (a,\ b)^T$$ where $T$ is the rotation matrix for the angle defined by $f_0^{(\ell)}$ $$T=\begin{bmatrix}\cos\phi&\sin\phi\\-\sin\phi&\cos\phi\end{bmatrix},\quad\phi=\arctan(b/a)$$ and $d=\tau\|f_0^{(\ell)}\|$; see Fig. \[fig:unc\]-right.
(Figure \[fig:unc\]. Left: a ball of radius $\tau$ around the nominal load $f_0^{(\ell)}$, containing a perturbed load $f^{(\ell)}$. Right: a flat ellipsoid around $f_0^{(\ell)}$ allowing a large angular but only a small radial perturbation.)
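The construction of the flat ellipsoid matrix $\tilde{P}_\ell$ above is a few lines of NumPy. The sketch below uses our own illustrative function name and parameters (`tau` for the radius factor and `flatness` for the $10^{-3}$ thin-axis factor); `arctan2` replaces $\arctan(b/a)$ to handle $a=0$.

```python
import numpy as np

def flat_ellipsoid_matrix(f0, tau, flatness=1.0e-3):
    """Return P~ = T^T diag(flatness*d, d) T with d = tau*||f0||:
    a flat ellipsoid, thin along f0 and wide perpendicular to it."""
    a, b = f0
    d = tau * np.hypot(a, b)
    phi = np.arctan2(b, a)                    # robust form of arctan(b/a)
    T = np.array([[np.cos(phi),  np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])
    return T.T @ np.diag([flatness * d, d]) @ T
```

For $f_0^{(\ell)}=(10,\ 0)^T$ and $\tau=0.3$ the rotation is the identity and the function returns $\mathrm{diag}(3\cdot 10^{-3},\ 3)$.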
Robust counterpart
------------------
We are now ready to give the definition of the robust counterpart.
Consider the multiple-load topology optimization problem (\[eq:ml\]) with nominal loads $f_0^{(\ell)}$, $\ell=1,\ldots,L$. Define $${\cal G}_\ell :=\{g\in{\mathbb{R}}^n\mid \|g\|\leq 1,\ g_i=0 \mbox{~if~}i\in J_\ell\}\,.$$ The *robust counterpart* to problem (\[eq:ml\]) is defined as $$\begin{aligned}
\label{eq:rob}
\min_{x\in X} \max_{\ \ell=1,\ldots,L\ } \max_{g\in{\cal G}_\ell}
(f_0^{(\ell)}+P_\ell g)^T K(x)^{-1}(f_0^{(\ell)}+P_\ell g)\,.\end{aligned}$$
So for each load case we consider the worst-case scenario, the “most dangerous” load from the ball around $f_0^{(\ell)}$.
Notice that up to this point we have followed the general theory of [@ben-tal-nemirovski]. From now on, we will use the specific form of the uncertainty set. In the following, we will show that the innermost optimization problem in (\[eq:rob\]) can be easily solved.
Assume that $x$ and $\ell$ are given. First we find the index sets $I_\ell$ and $J_\ell$ from (\[eq:I\]), (\[eq:J\]). Now we compute the Schur complement of the inverse stiffness matrix $$\label{eq:schur}
S^{(\ell)} = K(x)^{-1}_{I_\ell I_\ell} -
K(x)^{-1}_{I_\ell J_\ell} (K(x)^{-1}_{J_\ell J_\ell})^{-1} K(x)^{-1}_{J_\ell I_\ell}\,.$$ We get the obvious statement:
\[th:l1a\] Let $x$ and $\ell$ be given and set $\tilde{f} =
(f_0^{(\ell)}){\!_{I_\ell}}\,$. Then $$\begin{aligned}
&\max_{g\in{\cal G}_\ell}
(f_0^{(\ell)}+P_\ell g)^T K(x)^{-1}(f_0^{(\ell)}+P_\ell g)\label{eq:gmax}\\ &\quad=
\max_{\tilde{g}\in{\mathbb{R}}^{|I_\ell|}:\|\tilde{g}\|\leq 1}
(\tilde{f}+\tilde{P}_\ell \tilde{g})^T S^{(\ell)}(\tilde{f}+\tilde{P}_\ell \tilde{g})\,.\nonumber\end{aligned}$$
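Assuming the inverse stiffness matrix is available as a dense array `Kinv` (in a real implementation one would factorise $K(x)$ rather than invert it), the Schur complement (\[eq:schur\]) is a short NumPy sketch; the function and index names are ours:

```python
import numpy as np

def schur_complement(Kinv, I, J):
    """S = Kinv[I,I] - Kinv[I,J] (Kinv[J,J])^{-1} Kinv[J,I].
    Kinv is symmetric, so Kinv[J,I] = Kinv[I,J]^T."""
    A = Kinv[np.ix_(I, I)]
    B = Kinv[np.ix_(I, J)]
    D = Kinv[np.ix_(J, J)]
    return A - B @ np.linalg.solve(D, B.T)
```

A convenient sanity check is the block-inverse identity: the Schur complement of $(K^{-1})_{J_\ell J_\ell}$ in $K^{-1}$ equals $(K_{I_\ell I_\ell})^{-1}$.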
\[th:l1\] Let $A$ be a symmetric positive semidefinite $n\times n$ matrix and let $\varphi\in{\mathbb{R}}^n$ be given. The optimal value of the problem $$\label{eq:l11}
\max_{\|\psi\|\leq 1} (\varphi+P\psi)^T A (\varphi+P\psi)$$ is attained at the eigenvector $\psi_{\max}$ associated with the largest eigenvalue $\lambda_{\max}$ of the inhomogeneous eigenvalue problem $$\label{eq:l12}
P^TAP\psi + P^TA\varphi = \lambda I \psi\,.$$
The Lagrangian of the constrained optimization problem (\[eq:l11\]) is given by $${\cal L}(\psi,\lambda):=(\varphi+P\psi)^T A (\varphi+P\psi) - \lambda(\sum\psi_i^2-1)$$ hence the first order optimality condition reads $$2P^TA(\varphi+P\psi) - 2\lambda I\psi = 0\,.$$ The rest follows from convexity of (\[eq:l11\]).
Therefore, by solving the eigenvalue problem $$\label{eq:ev}
P_\ell^TS^{(\ell)}\tilde{f}+P_\ell^TS^{(\ell)}P_\ell \tilde{g} = \lambda I \tilde{g}$$ (with respect to $\tilde{g}$ and $\lambda$) we find the optimal value of the innermost problem in (\[eq:rob\]) and the corresponding maximizer. Notice that this is a low-dimensional problem, as the number of non-zero components of $f_0^{(\ell)}$ is typically very small compared to the number of degrees of freedom.
Measuring robustness
--------------------
Assume that we have solved the original multiple-load problem (\[eq:ml\]) with the nominal loads $f_0^{(1)},\ldots,f_0^{(L)}$. Let us call the optimal design $x^*$. For this design and for each load case, let us solve the eigenvalue problem (\[eq:ev\]) to get eigenvectors $g^{(\ell)}_{\max}$ associated with the largest eigenvalues $\lambda_{\max}^{(\ell)}$, $\ell=1,\ldots,L$, i.e., solutions of (\[eq:gmax\]). A comparison of the optimal compliance for the nominal loads with compliances corresponding to these eigenvectors will give us a clear idea about the vulnerability and robustness of the design $x^*$.
\[def:rob\] Let $x^*$ be the solution of (\[eq:ml\]) and $$c^*:=\max\limits_{\ell=1,\ldots,L}(f_0^{(\ell)})^T
K(x^*)^{-1}f_0^{(\ell)}$$ the corresponding optimal compliance. Define $$c_{\rm rob}:= \max_{\ell=1,\ldots,L}(f_0^{(\ell)}+P_\ell
g^{(\ell)}_{\max})^T K(x^*)^{-1}(f_0^{(\ell)}+P_\ell g^{(\ell)}_{\max})\,,$$ where $g^{(\ell)}_{\max}$ is a solution of (\[eq:gmax\]) for $\ell=1,\ldots,L$. The ratio $${\cal V}(x^*):=\frac{c_{\rm rob}}{c^*}$$ is called the *vulnerability* of design $x^*$ with respect to random perturbations of the nominal loads.
\[def:rob2\] Design $x^*$ (solution of (\[eq:ml\])) is *robust* with respect to random perturbations of the nominal loads if its vulnerability is smaller than or equal to one: $${\cal V}(x^*)\leq 1 \,.$$
The design is *almost robust* if $${\cal V}(x^*)\leq 1.05 \,.$$
The constant $1.05$ gives a 5% tolerance for non-robustness. Of course, this constant is to be changed according to particular applications.
This definition is important not only for the algorithm that follows but also in its own right. It gives us a measure of the quality (robustness) of a given design, whether a result of optimization or a manual one, with respect to random perturbations of the given loads. Furthermore, it not only indicates whether the design is (almost) robust; if it is not, it tells us by how much. The maximal “perturbed compliance” shows how much the objective value can deteriorate under a “bad” random perturbation of the given loads.
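As a self-contained illustration of the vulnerability measure ${\cal V}$, the sketch below handles a single 2-dof load by brute force: the perturbed compliance is convex in $g$, so its maximum over the unit ball is attained on the boundary, which we sample densely instead of calling the eigensolver of (\[eq:ev\]). All names and inputs here are illustrative.

```python
import numpy as np

def vulnerability(Kinv, f0, P, samples=3600):
    """V = c_rob / c* for one 2-dof load: maximise the perturbed
    compliance (f0 + P g)^T Kinv (f0 + P g) over ||g|| <= 1.  The
    objective is convex in g, so the maximum lies on the unit circle,
    which we sample densely (a stand-in for the eigensolver)."""
    c_star = f0 @ Kinv @ f0
    c_rob = c_star                      # g = 0 is always feasible
    for t in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        f = f0 + P @ np.array([np.cos(t), np.sin(t)])
        c_rob = max(c_rob, f @ Kinv @ f)
    return c_rob / c_star
```

With $K^{-1}=\mathrm{diag}(1,\ 0.1)$, $f_0=(1,\ 0)^T$ and $P=\mathrm{diag}(0,\ 0.5)$, the worst perturbation is purely vertical and ${\cal V}=1.025$, so this (made-up) design would be classified as almost robust.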
Algorithm for robust design
---------------------------
The key idea of our approach to finding a robust design is that, for given $x$ and $\ell$, the perturbed load $f_0^{(\ell)}+P_{\ell}
g^{(\ell)}_{\max}$ *represents the most dangerous load for the design $x$ and the $\ell$-th load case* in the sense that, under this load, the compliance is maximized. If the compliance corresponding to this load is greater than $1.05\cdot c^*$, this load is indeed dangerous and will be added to our set of load cases; if not, the load is harmless for the existing design and can be ignored.
This leads to the following algorithm.
Finding an almost robust design.
[*Step 1.*]{}
: Set $s=0$ and ${\cal
F}=(f^{(1)},\ldots,f^{(L)})$.
[*Step 2.*]{}
: Solve the multiple-load problem (\[eq:ml\]) with the original set of loads ${\cal F}$.\
Get the optimum design $x_{(0)}$ and compute the associated stiffness matrix $K(x_{(0)})$.\
Compute the minimal load norm $\hat{f} = \min_{\ell=1,\ldots,L} \|f^{(\ell)}\|$.\
Define the uncertainty ellipsoid by setting $P_\ell$.
[*Step 3.*]{}
: Compute the compliance\
$c_s =
\max_{\ell=1,\ldots,L} (f^{(\ell)})^T K(x_{(s)})^{-1}
f^{(\ell)}$.
[*Step 4.*]{}
: For each load case:
[*Step 4.1.*]{}
: Compute the Schur complement $S^{(\ell)}$ from (\[eq:schur\]) and its inverse.
[*Step 4.2.*]{}
: Solve the inhomogeneous eigenvalue problem (\[eq:ev\]) to find the eigenvector $g^{(\ell)}_{\max}$ associated with the largest eigenvalue.
[*Step 5.*]{}
: Find the index set ${\cal R}$ of all load cases with $$1.05\cdot c_s < (f_0^{(\ell)}+P_\ell g^{(\ell)}_{\max})^T
K(x_{(s)})^{-1}(f_0^{(\ell)}+P_\ell g^{(\ell)}_{\max})\,.$$
[*Step 6.*]{}
: If ${\cal R}=\emptyset$, then the design is almost robust; *FINISH*.\
If not, add the worst-case loads with indices $\ell\in{\cal R}$ to the existing set of loads $${\cal F} \leftarrow ({\cal F};\ f_0^{(\ell)}+P_\ell g^{(\ell)}_{\max}),\ \ \ell\in{\cal R}.$$
[*Step 7.*]{}
: Set $s \leftarrow s+1$.\
Solve the problem (\[eq:ml\]) with loads ${\cal F}$.\
Get the optimum design $x_{(s)}$ and compute the associated stiffness matrix $K(x_{(s)})$.\
*Go back to Step 3*.
In our numerical experiments, we have solved the inhomogeneous eigenvalue problems (\[eq:ev\]) by the power method, as described below.
[Power method for finding the largest eigenvalue of the inhomogeneous eigenvalue problem $$\label{eq:ev1}
Ax - b = \lambda I x$$ where $A$ is a real symmetric positive semidefinite $n\times n$ matrix and $b\in{\mathbb{R}}^n$.\
For $k=1,2,\ldots$ repeat until convergence: $$\begin{aligned}
&y_{k+1} = A x_k - b\\
&\lambda_{k+1} = x_k^Ty_{k+1}\nonumber\\
&x_{k+1} = \frac{y_{k+1}}{\|y_{k+1}\|}\,.\nonumber\end{aligned}$$ ]{}
The convergence proof of the method can be found in [@mattheij-soederlind]. More precisely, the authors show that $\lambda_k$ converges to the largest eigenvalue $\lambda^*$ and $x_k$ to the associated eigenvector $x^*$, under the condition that the operator $(I-x^*x^{*T})A(I-x^*x^{*T})/\lambda^*$ is a contraction. In all our numerical experiments, the power method converged in fewer than five iterations, so we have not pursued the analysis of this operator. Furthermore, there is another simple way to compute all eigenvalues of (\[eq:ev1\]), as proposed also by [@mattheij-soederlind]. The problem can be converted into a quadratic eigenvalue problem which, in turn, can be written as the following standard (though nonsymmetric) eigenvalue problem: $$\begin{bmatrix} 0& I\\ bb^T - AA^T & 2A\end{bmatrix}
\begin{bmatrix} x\\y\end{bmatrix} = \lambda
\begin{bmatrix} x\\y\end{bmatrix}$$ that can be solved by any standard algorithm. Recall again that the dimension of these problems is typically very small. Notice that the above eigenproblem only delivers the eigenvalues of the original inhomogeneous problem (\[eq:ev1\]). The associated eigenvectors can then be computed as $x:=(A-\lambda I)^{-1}b,\ x:=x/\|x\|$.
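The linearisation can be checked numerically. In the sketch below we read the lower-left block as the outer product $bb^T$ minus $AA^T$; for symmetric $A$ this makes the quadratic problem $(\lambda I-A)^2x=bb^Tx$, whose solvability condition $\|(A-\lambda I)^{-1}b\|=1$ is exactly the normalisation constraint of (\[eq:ev1\]).

```python
import numpy as np

def inhomogeneous_eigenvalues(A, b):
    """Eigenvalues of A x - b = lam x, ||x|| = 1, via the companion
    linearisation of the quadratic problem (lam I - A)^2 x = b b^T x."""
    n = len(b)
    M = np.block([[np.zeros((n, n)), np.eye(n)],
                  [np.outer(b, b) - A @ A.T, 2.0 * A]])
    return np.linalg.eigvals(M)
```

For $A=\mathrm{diag}(4,1)$ and $b=(1,0)^T$ the secular equation gives $\lambda\in\{3,5\}$, and the companion matrix returns both (plus a spurious double root at $1$, coming from the unloaded coordinate, which does not satisfy the normalisation constraint).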
Numerical examples {#sec:ex}
==================
In this section we present numerical examples for robust truss topology optimization and for the robust variable thickness sheet problem. Purposely, all examples are simple enough for the reader to see the effect of the robust approach. In fact, for most of our examples the reader can easily guess what the critical perturbations of the nominal loads will look like. That is exactly why we have chosen these examples: to show that the results obtained by the algorithm correspond to engineering intuition. Clearly, for real-world problems, the intuition may not be that obvious.
In all examples, $P_\ell$ was chosen to define a flat uncertainty ellipsoid around the nominal load: $${P}_\ell = T^T\begin{bmatrix}1.0\cdot 10^{-3}&0\\0&3.0\end{bmatrix}T\quad
\mbox{for~} f_0^{(\ell)}= (a,\ b)^T$$ with $$T=\begin{bmatrix}\cos\phi&\sin\phi\\-\sin\phi&\cos\phi\end{bmatrix},\quad\phi=\arctan(b/a)$$ see Fig. \[fig:unc\]-right.
All optimization problems were solved by our MATLAB based software package PENLAB[^2] [@penlab].
Truss topology optimization
---------------------------
We first consider the standard ground-structure truss topology optimization problem. For a given set of potential bars (the ground structure), we want to find those that best support a given set of loads. The design variables $x_i$ represent the volumes of the bars [see e.g. @bendsoe]. In our examples, all nodes can be connected by a potential bar.
\[ex:scaling\]We start with a toy single-load truss topology example shown in Figure \[fig:1a\]-left, together with the ground structure, the boundary conditions and the nominal load. The obvious solution of the minimum compliance problem is presented in Figure \[fig:1a\]-right: a single bar in the horizontal direction, which is extremely unstable with respect to any vertical perturbation of the load; its vulnerability approaches infinity. Also in Figure \[fig:1a\]-right we can see the “most dangerous” load, as computed by our algorithm. When we add this load to the set of loads and solve the corresponding two-load problem, we obtain the optimal design shown in Figure \[fig:1b\]-left. This design is not yet robust, as its vulnerability ${\cal V}=2.25$ is still well above 1.05. Hence we add the new dangerous load, also shown in Figure \[fig:1b\]-left, to the set of loads and solve a three-load problem. The optimal design for this problem is shown in Figure \[fig:1b\]-right. This time, the design is robust. For each iteration of the algorithm, Table \[tab:1\] presents: the corresponding vulnerability ${\cal V}$; the maximal compliance for the current multiple-load problem “compl”; the compliance of the current design with respect to the nominal load “compl$_0$”; and the worst-case load for the previous design, starting with the nominal load $[10.0,\ 0.0]$.
iter ${\cal V}$ compl compl$_0$ $f_s$
------ ------------ ------- ----------- ----------------
0 Inf 1.0 1.0 \[10.0, 0.0\]
1 2.25 1.46 1.46 \[10.0, 3.0\]
2 1.00 1.90 1.38 \[10.0, -3.0\]
: Example 1: iteration count “iter”, vulnerability ${\cal V}$, maximal compliance of the current problem “compl”, compliance of the current design with respect to the nominal loads “compl$_0$”, and the worst perturbation for the previous design $f_s$.
\[tab:1\]
The computed critical perturbations may seem obvious, simply the extreme perturbations of the nominal force “up” and “down”. Again, that is why we have chosen this example, in order to show that the results obtained by the algorithm correspond to engineering intuition.
\[ex:scaling\]We now consider a higher dimensional example of a long slender truss with 55 nodes and 1485 potential bars. This is again a single-load problem with a single horizontal force applied at the middle right-hand side node. The optimal results of the nominal problem and of the robust problem are shown in Fig. \[fig:2a\] left and right, respectively.
The following Table \[tab:2\] shows that we only needed two iterations of Algorithm 1 to obtain a robust solution.
iter ${\cal V}$ compl compl$_0$ $f_s$
------ ------------ -------- ----------- ----------------
0 Inf 10.0 10.0 \[10.0, 0.0\]
1 3.86 90.17 64.68 \[10.0, 3.0\]
2 1.00 101.50 10.15 \[10.0, -3.0\]
: Example 2, same description as in Table 1
\[tab:2\]
\[ex:scaling\]Let us now solve a problem with three load cases, each of them represented by a single force, as shown in Fig. \[fig:3a\]-left. The ground structure consists of 25 nodes and 300 potential bars. Fig. \[fig:3a\]-right shows the optimal structure for the nominal loads, as well as the most dangerous perturbations of the nominal loads for this structure. Due to the “free” bar in the top part, this structure is extremely unstable with respect to perturbations and its vulnerability tends to infinity, as shown in Table \[tab:3\]. After the first iteration of Algorithm 1, we obtain the truss shown in Fig. \[fig:3b\]-left. This truss is still not robust with respect to the depicted load perturbations; its vulnerability is ${\cal V}=1.55$. Finally, after the second iteration of Algorithm 1, we obtain the optimal structure shown in Fig. \[fig:3b\]-right. This truss is robust with respect to the allowed perturbations.
iter ${\cal V}$ compl compl$_0$ $f_s$
------ ------------ ------- ----------- --------------------------------------------
0 Inf 4.82 4.82 \[10, 0\]; \[0, 10\]; \[7, -7\]
1 1.55 6.08 6.08 \[10, -2.97\]; \[2.97, 10\]; \[9.1, -4.9\]
2 1.00 6.61 6.30 N/A; \[-2.97, 10\]; \[4.9, -9.1\]
: Example 3, same description as in Table 1
\[tab:3\]
Variable thickness sheet
------------------------
In the variable thickness sheet (or free sizing) problem, we consider the plane-strain linear elasticity model discretized by the standard finite element method. The design variables $x_i$ are the thicknesses of the plate, which are assumed to be constant on each finite element; so we have as many variables as elements. Again, the model can be found, e.g., in [@bendsoe].
To make the results more transparent, we consider a material with zero Poisson ratio.
\[ex:scaling\]Consider a rectangular plate as depicted in Fig. \[fig:4a\]-left. The plate is fixed on its left-hand side (by prescribed homogeneous boundary conditions at the corresponding nodes) and subject to a horizontal load applied to a small segment in the middle of the right-hand side edge. Fig. \[fig:4a\]-right shows the optimal result of this single-load problem—a single horizontal bar (recall that this is due to the zero Poisson ratio). The first line in Table \[tab:4\] shows that this design is far from being robust; its vulnerability is almost 36. In the second row of the same table, we can see the critical perturbation of the three prescribed forces. If we add these forces as a second load case and solve the corresponding two-load problem, we obtain the optimal solution depicted in Fig. \[fig:4b\]-left. This solution is still not robust; its vulnerability is ${\cal V}=3.35$. But after another iteration of Algorithm 1, we obtain the robust design shown in Fig. \[fig:4b\]-right.
(Figure \[fig:4a\]-left: a rectangular plate fixed along its left edge, with three horizontal forces applied to a small segment in the middle of the right-hand edge.)
iter ${\cal V}$ compl compl$_0$ $f_s$
------ ------------ -------- ----------- ----------------------------------
0 35.93 48.88 48.88 \[1, 0, 2, 0, 1, 0\]
1 3.35 78.28 78.28 \[1, 0.25, 2, 0.41, 1, 0.56\]
2 1.04 111.80 56.54 \[1, -0.42, 2, -0.42, 1, -0.43\]
: Example 4, same description as in Table 1
\[tab:4\]
[^1]: This research was supported by the EU FP7 project AMAZE.
[^2]: Downloadable from http://www.nag.co.uk/projects/penlab
---
abstract: 'We show that a large class of formal groups can be realised functorially by even periodic ring spectra. The main advance is in the construction of morphisms, not of objects.'
address: |
Department of Pure Mathematics, University of Sheffield\
Sheffield S3 7RH, UK
author:
- 'N.P. Strickland'
title: Realising formal groups
---
Introduction
============
Let ${\operatorname{FG}}$ be the category of formal groups (of the sort usually considered in algebraic topology) over affine schemes. Thus, an object of ${\operatorname{FG}}$ consists of a pair $(G,S)$, where $S$ is an affine scheme, $G$ is a formal group scheme over $S$, and a coordinate $x$ can be chosen such that ${{\mathcal{O}}_G}\simeq{{\mathcal{O}}_S}{[\![x]\!]}$ as ${{\mathcal{O}}_S}$-algebras. A morphism from $(G_0,S_0)$ to $(G_1,S_1)$ is a commutative square $$\xymatrix {
G_0 \rto^{{\tilde{p}}} \dto & G_1 \dto \\
S_0 \rto_p & S_1
}$$ such that the induced map $G_0{\xrightarrow}{}p^*G_1$ is an isomorphism of formal group schemes over $S_0$.
Next, recall that an *even periodic ring spectrum* is a commutative and associative ring spectrum $E$ such that $E^1=0$ and $E^2$ contains a unit (which implies that $E\simeq{\Sigma}^2E$ as spectra). Here we are using the usual notation $E^k=E^k(\text{point})=\pi_{-k}E$. We write ${\operatorname{EPR}}$ for the category of even periodic ring spectra. (Everything here is interpreted in Boardman’s homotopy category of spectra; there are no $E_\infty$ or $A_\infty$ structures.)
Given an even periodic ring spectrum $E$, we can form the scheme $S_E:={\operatorname{spec}}(E^0)$ and the formal group scheme $G_E={\operatorname{spf}}(E^0{\mathbb{C}P^\infty})$ over $S_E$. This construction gives rise to a functor ${\Gamma}{\colon}{\operatorname{EPR}}^{{\operatorname{op}}}{\xrightarrow}{}{\operatorname{FG}}$.
It is a natural problem to try to define a realisation functor $R{\colon}{\operatorname{FG}}{\xrightarrow}{}{\operatorname{EPR}}^{{\operatorname{op}}}$ with ${\Gamma}R(G,S)\simeq(G,S)$, or at least to do this for suitable subcategories of ${\operatorname{FG}}$. For example, if we let ${\operatorname{LFG}}$ denote the category of Landweber exact formal groups, and put ${\operatorname{LEPR}}=\{E\in{\operatorname{EPR}}{\;|\;}{\Gamma}(E)\in{\operatorname{LFG}}\}$, one can show that the functor ${\Gamma}{\colon}{\operatorname{LEPR}}^{{\operatorname{op}}}{\xrightarrow}{}{\operatorname{LFG}}$ is an equivalence; this is essentially due to Landweber, but details of this formulation are given in [@st:fsfg Proposition 8.43]. Inverting this gives a realisation functor for ${\operatorname{LFG}}$, and many well-known spectra are constructed using this. In particular, this gives various different versions of elliptic cohomology, based on various universal families of elliptic curves over rings such as ${{\mathbb{Z}}}[{{\textstyle\frac{1}{6}}},c_4,c_6][\Delta^{-1}]$.
It is hard to say more than this unless we invert the prime $2$. We therefore make a blanket assumption:
From now on, all rings are assumed to be ${{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}}]$-algebras. In particular, we only consider schemes $S$ for which $2$ is invertible in ${{\mathcal{O}}_S}$. We use the symbol $MU$ for the spectrum that would normally be called $MU[{{\textstyle\frac{1}{2}}}]$.
The other main technique for constructing realisations is the modernised version of Baas-Sullivan theory [@ekmm:rma; @st:pmm]. This starts with a strictly commutative ring spectrum $R$, and an algebra $A_*$ over $\pi_*R$, and it constructs a homotopically commutative $R$-algebra spectrum $A$ with $\pi_*A=A_*$, provided that $A_*$ has good structural properties. Firstly, we assume as always that $2$ is invertible in $A_*$. Given this, the construction will work if $A_*$ is a *localised regular quotient (LRQ)* of $R_*$, in other words it has the form $A_*=(S^{-1}\pi_*R)/I$, where $S$ is a multiplicative set and $I$ is an ideal generated by a regular sequence. The construction can also be extended to cover the case where $A_*$ is a free module over an LRQ of $\pi_*R$.
We can apply this taking $R$ to be the periodic bordism spectrum $$MP = {\bigvee}_{n\in{{\mathbb{Z}}}}{\Sigma}^{2n}MU[{{\textstyle\frac{1}{2}}}]$$ (we will verify in the appendix that this can be constructed as a strictly commutative ring). Given a formal group $(G,S)$ we can choose a coordinate $x$, which gives a formal group law $F$ defined over ${{\mathcal{O}}_S}$, and thus a ring map $\pi_0MP{\xrightarrow}{}{{\mathcal{O}}_S}$, making ${{\mathcal{O}}_S}$ into a $\pi_0MP$-algebra. If this algebra has the right properties, then we can use the Baas-Sullivan approach to construct $E$ with ${\Gamma}(E)\simeq(G,S)$. It is convenient to make the following *ad hoc* definition:
A ring $R$ is *standard* if $2$ is invertible in $R$ and $R$ is either a field or a ring of the form $T^{-1}{{\mathbb{Z}}}$ (for some set $T$ of primes).
An easy argument given below shows that the above method can construct realisations for all formal groups over standard rings. Unfortunately, this construction is not obviously functorial: it depends on a choice of coordinate, and morphisms of formal groups do not generally preserve coordinates. The main result of this paper is to show that, with suitable hypotheses, we can nonetheless define a functor.
The basic point is to consider the situation where we have several different coordinates, say $x_0,\ldots,x_r$, on a fixed formal group $G$. In a well-known way, this makes ${{\mathcal{O}}_S}$ into an algebra over the ring $\pi_0(MP^{(r+1)})$, and we can ask whether this can be realised topologically by an $MP^{(r+1)}$-algebra; the question will be made more precise in Section \[sec-basic\]. We say that $G$ is *very good* if the question has an affirmative answer for all $r\geq 0$ and all $x_0,\ldots,x_r$.
All formal groups over standard rings are very good.
This will be proved as Corollary \[cor-very-good\].
For our sharpest results, we need a slightly more complicated notion. We say that a coordinate $x_0$ is *multirealisable* if for any list $x_1,\ldots,x_r$ of additional coordinates, the question mentioned above has an affirmative answer. We say that $G$ is *good* if it admits a multirealisable coordinate. Of course, $G$ is very good iff *every* coordinate is multirealisable. We write ${\operatorname{GFG}}$ for the category of good formal groups (considered as a full subcategory of ${\operatorname{FG}}$). The details are given in Definition \[defn-good\].
\[thm-LRQ-good\] Let $x$ be a coordinate on a formal group $(G,S)$, and suppose that the classifying map $\pi_0MP{\xrightarrow}{}{{\mathcal{O}}_S}$ makes ${{\mathcal{O}}_S}$ into a localised regular quotient of $\pi_0MP$. Then $x$ is multirealisable, and so $G$ is good.
This will be proved as Proposition \[prop-LRQ-multi\].
At odd primes, the formal groups associated to $2$-periodic versions of $BP$, $P(n)$, $B(n)$, $E(n)$, $K(n)$, $k(n)$ and so on are all good.
This shows that there is a considerable overlap with the Landweber exact case. However, there are many good formal groups that are not Landweber exact. Conversely, there is no reason to expect that Landweber exact formal groups will be good, although we have no counterexamples.
Our main result is as follows:
\[thm-main\] There is a realisation functor $R{\colon}{\operatorname{GFG}}{\xrightarrow}{}{\operatorname{EPR}}$, with ${\Gamma}R\simeq 1{\colon}{\operatorname{GFG}}{\xrightarrow}{}{\operatorname{GFG}}$.
Note that good formal groups are realisable by definition; the content of the theorem is that the realisation is well-defined and functorial.
We next explain the formal part of the construction; in Section \[sec-proof\] we will give additional details and prove that we have the required properties. The functor $R$ actually arises as $UV^{-1}$ for a pair of functors ${\operatorname{GFG}}{\xleftarrow}{V}{{\mathcal{E}}}{\xrightarrow}{U}{\operatorname{EPR}}$ in which $V$ is an equivalence. To explain ${{\mathcal{E}}}$, recall that we have a topological category ${\operatorname{Mod}}_0$ of $MP$-modules. We write ${\operatorname{DMod}}_0$ for the derived category, and ${\operatorname{EPA}}_0$ for the category of even periodic commutative ring objects in ${\operatorname{DMod}}_0$. The unit map $\eta{\colon}S{\xrightarrow}{}MP$ gives a functor $\eta^*{\colon}{\operatorname{EPA}}_0{\xrightarrow}{}{\operatorname{EPR}}$, and the objects of the category ${{\mathcal{E}}}$ are the objects $E\in{\operatorname{EPA}}_0$ for which the associated coordinate on ${\Gamma}(\eta^*E)$ is multirealisable. The morphism set ${{\mathcal{E}}}(E_0,E_1)$ is a subset of ${\operatorname{EPR}}(\eta^*E_0,\eta^*E_1)$, the functor $V{\colon}{{\mathcal{E}}}{\xrightarrow}{}{\operatorname{GFG}}$ is given by ${\Gamma}$, and the functor $U{\colon}{{\mathcal{E}}}{\xrightarrow}{}{\operatorname{EPR}}$ is given by $\eta^*$. We say that a map $f{\colon}\eta^*E_0{\xrightarrow}{}\eta^*E_1$ in ${\operatorname{EPR}}$ is *good* if there is a commutative ring object $A$ in the derived category of $MP{\wedge}MP$-modules together with maps $f'{\colon}E_0{\xrightarrow}{}({1{\wedge}\eta})^*A$ and $f''{\colon}({\eta{\wedge}1})^*A{\xrightarrow}{}E_1$ in ${\operatorname{EPA}}_0$ such that $f''$ is an equivalence and $f$ is equal to the composite $$\eta^*E_0 {\xrightarrow}{\eta^*f'} ({\eta{\wedge}\eta})^*A
{\xrightarrow}{\eta^*f''} \eta^*E_1.$$ The morphisms in the category ${{\mathcal{E}}}$ are just the good maps. To prove Theorem \[thm-main\], we need to show that
- The composite of two good maps is good, so ${{\mathcal{E}}}$ really is a category.
- For any map ${\Gamma}(\eta^*E_0){\xrightarrow}{}{\Gamma}(\eta^*E_1)$ of good formal groups, there is a unique good map $\eta^*E_0{\xrightarrow}{}\eta^*E_1$ inducing it, so that $V$ is full and faithful.
- For any good formal group $(G,S)$ there is an object $E\in{\operatorname{EPA}}_0$ such that ${\Gamma}(\eta^*E)\simeq(G,S)$, so $V$ is essentially surjective.
To prove statement $(k)$, we need to construct modules over the $k$-fold smash power of $MP$. It will be most efficient to do this for all $k$ simultaneously.
Preliminaries
=============
Differential forms {#subsec-forms}
------------------
Let $(G,S)$ be a formal group, and let $I\leq{{\mathcal{O}}_G}$ be the augmentation ideal. Recall that the cotangent space of $G$ at zero is the module ${\omega}_G=I/I^2$. If $x$ is a coordinate on $G$ that vanishes at zero, then we write $dx$ for the image of $x$ in $I/I^2$, and note that ${\omega}_G$ is freely generated over ${{\mathcal{O}}_S}$ by $dx$. We define a graded ring $D(G,S)^*$ by $$D(G,S)^k = \begin{cases}
0 & \text{ if $k$ is odd } \\
{\omega}_G^{{\otimes}(-k/2)} & \text{ if $k$ is even. }
\end{cases}$$ Here the tensor products are taken over ${{\mathcal{O}}_S}$, and ${\omega}_G^{{\otimes}n}$ means the dual of ${\omega}_G^{{\otimes}|n|}$ when $n<0$. Where convenient, we will convert to homological gradings by the usual rule: $D(G,S)_k=D(G,S)^{-k}$.
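To make the grading concrete: a choice of coordinate trivialises all of these modules at once. The following sketch is just a restatement of the definition ($u_x$ is our ad hoc notation for the resulting generator):

```latex
% A coordinate x on G makes dx a free generator of \omega_G over \mathcal{O}_S.
% Writing u_x for dx placed in homological degree 2, we get
D(G,S)_* \;\simeq\; \mathcal{O}_S[u_x, u_x^{-1}], \qquad |u_x| = 2,
% so in particular
D(G,S)_0 = \mathcal{O}_S, \qquad
D(G,S)_2 = \omega_G, \qquad
D(G,S)_{-2} = \omega_G^{\vee}.
```

This is the same pattern as the invertible degree-two generator $u$ of an even periodic ring spectrum.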
Now let $E$ be an even periodic ring spectrum with ${\Gamma}(E)=(G,S)$. We then have ${{\mathcal{O}}_G}=E^0{\mathbb{C}P^\infty}$ and $I={\widetilde{E}}^0{\mathbb{C}P^\infty}$ and one checks easily that the inclusion $S^2={\mathbb{C}P}^1{\xrightarrow}{}{\mathbb{C}P^\infty}$ gives an isomorphism ${\omega}_G=I/I^2={\widetilde{E}}^0S^2=E^{-2}$. Using the periodicity of $E$, we see that this extends to a canonical isomorphism $D({\Gamma}(E))^*\simeq E^*$.
It also follows from this analysis (or from more direct arguments) that a map $f{\colon}E_0{\xrightarrow}{}E_1$ in ${\operatorname{EPR}}$ is a weak equivalence if and only if $\pi_0f$ is an isomorphism.
Periodic bordism {#subsec-MP}
----------------
Consider the homology theory $MP_*(X)=MU_*(X){\otimes}{{\mathbb{Z}}}[u,u^{-1}]$, where $u$ has homological degree $2$ (and thus cohomological degree $-2$). This is represented by the spectrum $MP={\bigvee}_{n\in{{\mathbb{Z}}}}{\Sigma}^{2n}MU$, with an evident ring structure. It is well-known that $MU$ is an $E_\infty$ ring spectrum; see for example [@lemast:esh Section IX]. It is also shown there that $MU$ is an $H_\infty^2$ ring spectrum, which means (as explained in [@lemast:esh Remark VII.2.9]) that $MP$ is an $H_\infty$ ring spectrum; this is weaker than $E_\infty$ in theory, but usually equivalent in practice. As one would expect, $MP$ is actually an $E_\infty$ ring spectrum; a proof is given in the appendix. It follows from [@ekmm:rma Proposition II.4.3] that one can construct a model for $MP$ that is a strictly commutative ring spectrum (or “commutative $S$-algebra”). We may also assume that it is a cofibrant object in the category of all strictly commutative ring spectra.
For typographical convenience, we write $MP(r)$ for the $(r+1)$-fold smash power $MP{\wedge}\ldots{\wedge}MP$, which is again a strictly commutative ring. The spectra $MP(r)$ fit together into a cosimplicial object in the usual way; for example, we have three maps $${\eta{\wedge}1{\wedge}1},{1{\wedge}\eta{\wedge}1},{1{\wedge}1{\wedge}\eta}{\colon}MP(0) {\xrightarrow}{} MP(2).$$ In the category of strictly commutative ring spectra, the coproduct is the smash product. It follows formally that the smash product of cofibrant objects is cofibrant, so in particular the objects $MP(r)$ are all cofibrant.
For $r>0$, it is well-known that $\pi_*MU^{(r+1)}$ is a polynomial algebra over $\pi_*MU$ on countably many generators, and it follows that there is a noncanonical isomorphism $$\pi_0MP(r) \simeq
\pi_0MP[x_1,x_2,\ldots][x_1^{-1},\ldots,x_r^{-1}].$$
There are $r+1$ obvious inclusions $MP{\xrightarrow}{}MP(r)$. We can use these to push forward the standard generator of $MP^0{\mathbb{C}P^\infty}$, giving $r+1$ different coordinates on the formal group ${\Gamma}(MP(r))$. We denote these by ${\widetilde{x}}_0,\ldots,{\widetilde{x}}_r$.
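For $r=1$ this can be made quite explicit. By Quillen's description of $MP_0MP$ (standard material, recorded here only for orientation; the $b_i$ are the usual generators, and the direction of the expansion depends on a choice of handedness), one noncanonical isomorphism is

```latex
\pi_0 MP(1) \;\simeq\; \pi_0 MP\,[\,b_0^{\pm 1}, b_1, b_2, \ldots\,],
\qquad\text{with}\qquad
\widetilde{x}_0 \;=\; \sum_{i\geq 0} b_i\,\widetilde{x}_1^{\,i+1}.
% b_0 is invertible because \widetilde{x}_0 and \widetilde{x}_1 are both
% coordinates, so the power series relating them has invertible linear term.
```

The invertibility of $b_0$ is what produces the inverted variable in the displayed description of $\pi_0MP(r)$ above.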
Groups and laws {#subsec-laws}
---------------
We now define a category ${\operatorname{FG}}_r$ as follows. The objects are systems $$(G,S,x_0,\ldots,x_r),$$ where $(G,S)$ is a formal group and the $x_i$ are coordinates on $G$. The morphisms from $(G,S,x_0,\ldots,x_r)$ to $(H,T,y_0,\ldots,y_r)$ are the maps $({\tilde{p}},p){\colon}(G,S){\xrightarrow}{}(H,T)$ in ${\operatorname{FG}}$ for which ${\tilde{p}}^*y_i=x_i$ for all $i$. Note that given $p$, the map ${\tilde{p}}$ is determined by the fact that ${\tilde{p}}^*y_0=x_0$. Thus, the forgetful functor $(G,S,x_0,\ldots,x_r)\mapsto S$ (from ${\operatorname{FG}}_r$ to the category of affine schemes) is faithful.
We also write ${\operatorname{Alg}}_r$ for the category of commutative algebras over the ring $\pi_0MP(r)$.
\[prop-FGr\] There is an equivalence ${\operatorname{FG}}_r\simeq{\operatorname{Alg}}_r^{{{\operatorname{op}}}}$.
Recall that we have coordinates ${\widetilde{x}}_0,\ldots,{\widetilde{x}}_r$ on ${\Gamma}(MP(r))$. Given an object $A\in{\operatorname{Alg}}_r$ we have a structure map ${\operatorname{spec}}(A){\xrightarrow}{}{\operatorname{spec}}(\pi_0MP(r))$, and we can pull back ${\Gamma}(MP(r))$ to get a formal group $G_A$ over ${\operatorname{spec}}(A)$. We can also pull back the coordinates ${\widetilde{x}}_i$ to make $G_A$ an object of ${\operatorname{FG}}_r$. It is easy to see that this construction defines a functor $U{\colon}{\operatorname{Alg}}_r^{{{\operatorname{op}}}}{\xrightarrow}{}{\operatorname{FG}}_r$. By forgetting down to the category of affine schemes, we see that $U$ is faithful.
We now claim that $U$ is an equivalence. We will deduce this from a well-known result of Quillen by a sequence of translations. First, Quillen tells us that maps $\pi_*MU^{(r+1)}{\xrightarrow}{}B_*$ of graded rings biject naturally with systems $$F_0 {\xleftarrow}{f_0} F_1 {\xleftarrow}{f_1} \cdots {\xleftarrow}{f_{r-1}} F_r,$$ where each $F_i$ is a homogeneous formal group law over $B_*$ and each $f_i$ is a strict isomorphism. By a standard translation to the even periodic case, we see that maps $\pi_0MP(r){\xrightarrow}{}A$ of ungraded rings biject naturally with systems $$F_0 {\xleftarrow}{f_0} F_1 {\xleftarrow}{f_1} \cdots {\xleftarrow}{f_{r-1}} F_r,$$ where each $F_i$ is a formal group law over $A$ and each $f_i$ is a (not necessarily strict) isomorphism.
Now suppose we have an object $(G,S,x_0,\ldots,x_r)$ in ${\operatorname{FG}}_r$. For each $i$ there is a unique formal group law $F_i$ over ${{\mathcal{O}}_S}$ such that $x_i(a+b)=F_i(x_i(a),x_i(b))$ for sections $a,b$ of $G$. Moreover, as $x_{i+1}$ is another coordinate, we can write $x_i=f_i(x_{i+1})$ for a unique power series $f_i\in{{\mathcal{O}}_S}{[\![t]\!]}$. It is easy to check that $f_i$ is an isomorphism from $F_{i+1}$ to $F_i$, so Quillen’s theorem gives us a map $\pi_0MP(r){\xrightarrow}{}{{\mathcal{O}}_S}$, allowing us to regard ${{\mathcal{O}}_S}$ as an object of ${\operatorname{Alg}}_r$. It is easy to see that this construction gives a functor ${\operatorname{FG}}_r{\xrightarrow}{}{\operatorname{Alg}}_r^{{{\operatorname{op}}}}$. We leave it to the reader to check that this is inverse to $U$.
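As a toy illustration of this dictionary between coordinates and (laws, isomorphisms) — a standard example over $\mathbb{Q}$, not tied to $\pi_0MP(r)$ — take $r=1$, $S=\operatorname{spec}(\mathbb{Q})$, a coordinate $x_1$, and $x_0=f(x_1)$ with $f(t)=e^t-1$:

```latex
F_1(s,t) = s+t, \qquad F_0(s,t) = s+t+st, \qquad f(t) = e^t - 1,
% and f is an isomorphism from F_1 to F_0 because
f(F_1(s,t)) \;=\; e^{s+t}-1
\;=\; (e^s-1) + (e^t-1) + (e^s-1)(e^t-1)
\;=\; F_0(f(s), f(t)).
% The system F_0 <-- F_1 (with the isomorphism f) therefore classifies a
% ring map \pi_0 MP(1) --> \mathbb{Q}, as in Quillen's theorem.
```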
Module categories
-----------------
We write ${\operatorname{Mod}}_r$ for the category of $MP(r)$-modules (in the strict sense, not the homotopical one). Note that a map $f{\colon}A_0{\xrightarrow}{}A_1$ of strictly commutative ring spectra gives a functor $f^*{\colon}{\operatorname{Mod}}_{A_1}{\xrightarrow}{}{\operatorname{Mod}}_{A_0}$, which is just the identity on the underlying spectra (and thus preserves weak equivalences). It follows easily that for any two maps $A_0{\xrightarrow}{f}A_1{\xrightarrow}{g}A_2$, the functor $f^*g^*$ is actually equal (not just naturally isomorphic or naturally homotopy equivalent) to $(gf)^*$. Thus, the categories ${\operatorname{Mod}}_r$ fit together to give a simplicial category ${\operatorname{Mod}}_*$.
For us, a *simplicial category* means a simplicial object in the category of categories. Elsewhere in the literature, the same phrase is sometimes used to refer to categories enriched over the category of simplicial sets, which is a rather different notion.
Next, we write ${\operatorname{DMod}}_r$ for the derived category of ${\operatorname{Mod}}_r$, as in [@ekmm:rma Chapter III]. As usual, there are two different models for a category such as ${\operatorname{DMod}}_r$:
- One can take the objects to be the cofibrant objects in ${\operatorname{Mod}}_r$, and morphisms to be homotopy classes of maps; or
- One can use all objects in ${\operatorname{Mod}}_r$ and take morphisms to be equivalence classes of “formal fractions”, in which one is allowed to invert weak equivalences.
We will use model (b). This preserves the strong functoriality mentioned previously, and ensures that ${\operatorname{DMod}}_*$ is again a simplicial category.
We also write ${\operatorname{EPA}}_r$ for the category of even periodic commutative ring objects in ${\operatorname{DMod}}_r$, giving another simplicial category. (Note that periodicity is actually automatic, because $MP(r)$ is itself periodic.) Various fragments of the simplicial structure will be used in Section \[sec-proof\].
Basic realisation results {#sec-basic}
=========================
Let $R$ be a strictly commutative ring spectrum that is even and periodic, such that $R_0$ is an integral domain (and as always, $2$ is invertible). The main examples will be $R=MP(r)$ for $r\geq 0$. Let $\mathcal{D}$ be the derived category of $R$-modules, and let ${{\mathcal{R}}}$ be the category of commutative ring objects $A\in\mathcal{D}$ such that $\pi_1A=0$. Recall that if $f$ is a morphism in ${{\mathcal{R}}}$ such that $\pi_0f$ is an isomorphism, then $\pi_*f$ is also an isomorphism and so $f$ is an equivalence.
We also write ${{\mathcal{R}}}_0$ for the category of commutative algebras over $\pi_0R$. We say that an object $A\in{{\mathcal{R}}}$ is *strong* if for all $B\in{{\mathcal{R}}}$, the map $$\pi_0{\colon}{{\mathcal{R}}}(A,B) {\xrightarrow}{} {{\mathcal{R}}}_0(\pi_0A,\pi_0B)$$ is a bijection. A *realisation* of an object $A_0\in{{\mathcal{R}}}_0$ is a pair $(A,u)$, where $A\in{{\mathcal{R}}}$ and $u{\colon}\pi_0A{\xrightarrow}{}A_0$ is an isomorphism. We say that $(A,u)$ is a *strong realisation* iff the object $A$ is strong; if so, we have a natural isomorphism ${{\mathcal{R}}}(A,B)\simeq{{\mathcal{R}}}_0(A_0,\pi_0B)$. We say that $A_0$ is *strongly realisable* if it admits a strong realisation. If so, it is easy to check that all realisations are strong, and any two realisations are linked by a unique isomorphism.
The results of [@st:pmm] provide a good supply of strongly realisable algebras, except that we need a little translation between the even periodic framework and the usual graded framework. Suppose that $A_0\in{{\mathcal{R}}}_0$, and put $T={\operatorname{spec}}(A_0)$. We have a unit map $\eta{\colon}\pi_0R{\xrightarrow}{}A_0$ and thus a map ${\operatorname{spec}}(\eta){\colon}T{\xrightarrow}{}S_R$; we can pull back the formal group $G_R$ along this to get a formal group $H:={\operatorname{spec}}(\eta)^*G_R$ over $T$. From this we get a map $\eta_*{\colon}R_*=D(G_R,S_R)_*{\xrightarrow}{}D(H,T)_*$, which agrees with $\eta$ in degree zero. Indeed, if we choose a generator $u$ of $R_2$ over $R_0$, then $\eta_*$ is just the map $R_0[u,u^{-1}]{\xrightarrow}{}A_0[u,u^{-1}]$ obtained in the obvious way from $\eta$. It is easy to check that $A_0$ is strongly realisable (as defined in the previous paragraph) iff $D(H,T)_*$ is strongly realisable over $R_*$ (as defined in [@st:pmm]).
A *short ordinal* is an ordinal ${\lambda}$ of the form ${\omega}.n+m$ for some $n,m\in{{\mathbb{N}}}$. A *regular sequence* in a ring $R_0$ is a system of elements $(x_{\alpha})_{{\alpha}<{\lambda}}$ for some short ordinal ${\lambda}$ such that $x_{\alpha}$ is not a zero-divisor in the ring $R_0/(x_{\beta}{\;|\;}{\beta}<{\alpha})$. An object $A_0\in{{\mathcal{R}}}_0$ is a *localised regular quotient* (or LRQ) of $R_0$ if $A_0=(S^{-1}R_0)/I$ for some subset $S\subset R_0$ and some ideal $I\leq S^{-1}R_0$ that can be generated by a regular sequence.
We have made a small extension of the usual notion of a regular sequence, to ensure that any LRQ of an LRQ of $R_0$ is itself an LRQ of $R_0$; see Lemma \[lem-LRQ-LRQ\].
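To see why ordinals beyond ${\omega}$ are needed for this closure property, consider the schematic situation in the proof of Lemma \[lem-LRQ-LRQ\]: concatenating two ${\omega}$-indexed regular sequences gives

```latex
(x_1, x_2, x_3, \ldots,\; y_1, y_2, y_3, \ldots)
\qquad\text{of order type}\qquad
\omega + \omega,
% which is again a short ordinal, but is strictly larger than \omega,
% so it is not covered by the usual notion of a regular sequence.
```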
\[thm-LRQ\] If $A_0$ is an LRQ of $R_0$, then it is strongly realisable.
This is essentially [@st:pmm Theorem 2.6], translated into a periodic setting as explained above. Here we are using a slightly more general notion of a regular sequence, but all the arguments can be adapted in a straightforward way. The main point is that any countable limit ordinal has a cofinal sequence, so homotopy colimits can be constructed using telescopes in the usual way. Andrey Lazarev has pointed out a lacuna in [@st:pmm]: it is necessary to assume that the elements $x_{\alpha}$ are all regular in $S^{-1}R_0$ itself, which is not generally automatic. However, we are assuming that $R_0$ is an integral domain so this issue does not arise.
\[prop-tensor\] Suppose that
- $A$ and $B$ are strong realisations of $A_0$ and $B_0$; and
- The natural map $A_0{\otimes}_{R_0}B_0{\xrightarrow}{}(A{\wedge}_R B)_0$ is an isomorphism.
Then $A{\wedge}_RB$ is a strong realisation of $A_0{\otimes}_{R_0}B_0$.
This follows from [@st:pmm Corollary 4.5].
\[prop-free\] If $A_0\in{{\mathcal{R}}}_0$ is strongly realisable, and $B_0$ is an algebra over $A_0$ that is free as a module over $A_0$, then $B_0$ is also strongly realisable.
This follows from [@st:pmm Proposition 4.13].
\[prop-Z-ninv\] Suppose that $R_0$ is a polynomial ring in countably many variables over ${{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}}]$, that $A_0\in{{\mathcal{R}}}_0$, and that $A_0={{\mathbb{Z}}}[1/2n]$ as a ring (for some $n$). Then $A_0$ is an LRQ of $R_0$, and thus is strongly realisable.
Choose a system of polynomial generators $\{x_k{\;|\;}k\geq 0\}$ for $R_0$ over ${{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}}]$. Put $a_k=\eta(x_k)\in A_0={{\mathbb{Z}}}[1/2n]$ and $y_k=x_k-a_k\in R_0[1/2n]$. It is clear that $R_0[1/2n]={{\mathbb{Z}}}[1/2n][y_k{\;|\;}k\geq 0]$, that the elements $y_k$ form a regular sequence generating an ideal $I$ say, and that $A_0=R_0[1/2n]/I$.
\[prop-field\] Suppose that $R_0$ is a polynomial ring in countably many variables over ${{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}}]$, that $A_0\in{{\mathcal{R}}}_0$, and that $A_0$ is a field (necessarily of characteristic different from $2$). Then $A_0$ is a free module over an LRQ of $R_0$, and thus is strongly realisable.
For notational simplicity, we assume that $A_0$ has characteristic $p>2$; the case of characteristic $0$ is essentially the same.
Choose a set $X$ of polynomial generators for $R_0$ over ${{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}}]$. Let $K$ be the subfield of $A_0$ generated by the image of $\eta$, or equivalently by $\eta(X)$. We can choose a subset $Y{\subseteq}X$ such that $\eta(Y)$ is a transcendence basis for $K$ over ${{\mathbb{F}_p}}$. This means that the subfield $L_0$ of $K$ generated by $\eta(Y)$ is isomorphic to the rational function field ${{\mathbb{F}_p}}(Y)$, and that $K$ is algebraic over $L_0$. Put $S={{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}},Y]{\setminus}(p{{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}},Y])$, so $L_0=(S^{-1}{{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}},Y])/p$. Next, list the elements of $X{\setminus}Y$ as $\{x_1,x_2,\ldots\}$, and let $L_k$ be the subfield of $K$ generated by $L_0$ together with $\{\eta(x_i){\;|\;}i\leq k\}$. (We will assume that $X{\setminus}Y$ is infinite; if not, the notation changes slightly.) As $\eta(x_k)$ is algebraic over $L_{k-1}$, there is a monic polynomial $f_k(t)\in L_{k-1}[t]$ with $L_k=L_{k-1}[\eta(x_k)]\simeq L_{k-1}[t]/f_k(t)$. As $L_{k-1}$ is a quotient of the ring $P_{k-1}:=S^{-1}{{\mathbb{Z}}}[{{\textstyle\frac{1}{2}}},Y,x_1,\ldots,x_{k-1}]$, we can choose a monic polynomial $g_k(t)\in P_{k-1}[t]$ lifting $f_k$, and put $z_k:=g_k(x_k)\in P_k{\subseteq}S^{-1}R_0$. It is not hard to check that the sequence $(p,z_1,z_2,\ldots)$ is regular in $S^{-1}R_0$, and that $(S^{-1}R_0)/(p,z_i{\;|\;}i>0)=K$, so $K$ is an LRQ of $R_0$. It is clear that $A_0$ is free over the subfield $K$.
\[lem-LRQ-LRQ\] An LRQ of an LRQ is an LRQ.
Suppose that $B=(S^{-1}A)/(x_{\alpha}{\;|\;}{\alpha}<{\lambda})$ and $C=(T^{-1}B)/(y_{\beta}{\;|\;}{\beta}<\mu)$, where ${\lambda}$ and $\mu$ are short ordinals, and the $x$ and $y$ sequences are regular in $S^{-1}A$ and $T^{-1}B$ respectively. Let $T'$ be the set of elements of $A$ that become invertible in $T^{-1}B$; clearly $S{\subseteq}T'$ and $T^{-1}B=((T')^{-1}A)/(x_{\alpha}{\;|\;}{\alpha}<{\lambda})$. As $(T')^{-1}A$ is a localisation of $S^{-1}A$ and localisation is exact, we see that $x$ is a regular sequence in $(T')^{-1}A$ as well. After multiplying by suitable elements of $T'$ if necessary, we may assume that $y_{\beta}$ lies in the image of $A$ (this does not affect regularity, as the elements of $T'$ are invertible). We then put $z_{\alpha}=x_{\alpha}$ for ${\alpha}<{\lambda}$, and let $z_{{\lambda}+{\beta}}$ be any preimage of $y_{\beta}$ in $A$ for $0\leq{\beta}<\mu$. This gives a regular sequence in $(T')^{-1}A$ indexed by ${\lambda}+\mu$, such that $C=((T')^{-1}A)/(z_{\gamma}{\;|\;}{\gamma}<{\lambda}+\mu)$ as required.
We now specialise to the case $R=MP(r)$, so that ${{\mathcal{R}}}={\operatorname{EPA}}_r$ and ${{\mathcal{R}}}_0={\operatorname{Alg}}_r$. We write ${\Gamma}_r$ for the evident composite functor $${\operatorname{EPA}}_r^{{{\operatorname{op}}}} {\xrightarrow}{\pi_0} {\operatorname{Alg}}_r^{{{\operatorname{op}}}} \simeq {\operatorname{FG}}_r.$$ Translating our previous definitions via the equivalence ${\operatorname{Alg}}_r^{{{\operatorname{op}}}}\simeq{\operatorname{FG}}_r$, we obtain the following.
\[defn-strong-FGr\] An object $A\in{\operatorname{EPA}}_r$ is *strong* if for all $B\in{\operatorname{EPA}}_r$, the map $${\Gamma}_r {\colon}{\operatorname{EPA}}_r(A,B) {\xrightarrow}{} {\operatorname{FG}}_r({\Gamma}_r(B),{\Gamma}_r(A))$$ is a bijection.
A *realisation* of an object $(G,S,{\underline{x}})\in{\operatorname{FG}}_r$ is a triple $(A,{\tilde{p}},p)$, where $A\in{\operatorname{EPA}}_r$ and $({\tilde{p}},p){\colon}{\Gamma}_rA{\xrightarrow}{}(G,S,{\underline{x}})$ is an isomorphism. This is a *strong realisation* if the object $A$ is strong.
We now give more precise versions of the definitions in the introduction.
A formal group $(G,S)$ is *very good* if for every nonempty list ${\underline{x}}$ of coordinates (of length $r+1$, say), the object $(G,S,{\underline{x}})\in{\operatorname{FG}}_r$ is strongly realisable.
\[defn-good\] A coordinate $x_0$ on $G$ is *multirealisable* if for every list $x_1,\ldots,x_r$ of coordinates, the object $(G,S,x_0,\ldots,x_r)\in{\operatorname{FG}}_r$ is strongly realisable. A formal group $(G,S)$ is *good* if it admits a multirealisable coordinate. We write ${\operatorname{GFG}}$ for the category of good formal groups.
\[rem-perm\] Let $x_0,\ldots,x_r$ be coordinates, and suppose that $x_0$ is multirealisable. Let ${\sigma}$ be a permutation of $\{0,\ldots,r\}$. Using the evident action of permutations on $MP(r)$, we see that the object $(G,S,x_{{\sigma}(0)},\ldots,x_{{\sigma}(r)})$ is strongly realisable.
\[prop-LRQ-multi\] Suppose that $x_0$ is such that the classifying map $\pi_0MP{\xrightarrow}{}{{\mathcal{O}}_S}$ makes ${{\mathcal{O}}_S}$ an LRQ of $\pi_0MP$. Then $x_0$ is multirealisable, so $(G,S)$ is good.
The coordinate $x_0$ gives a map $f_0{\colon}\pi_0MP{\xrightarrow}{}{{\mathcal{O}}_S}$. By assumption, there is a multiplicative set $T{\subseteq}\pi_0MP$ and a regular ideal $I$ such that $f_0$ induces an isomorphism $(T^{-1}\pi_0MP)/I{\xrightarrow}{}{{\mathcal{O}}_S}$.
Now consider a list of additional coordinates $x_1,\ldots,x_r$ say. These give a map $f{\colon}\pi_0MP(r){\xrightarrow}{}{{\mathcal{O}}_S}$ extending $f_0$. We know from Section \[subsec-MP\] that $\pi_0MP(r)$ is a polynomial ring in countably many variables over $\pi_0MP$, in which $r$ of the variables have been inverted, so we can write $$\pi_0MP(r) = \pi_0MP[u_1,u_2,\ldots][u_1^{-1},\ldots,u_r^{-1}].$$ Put $$A_0 = {{\mathcal{O}}_S}[u_1,u_2,\ldots][u_1^{-1},\ldots,u_r^{-1}],$$ which is evidently an LRQ of $\pi_0MP(r)$. It is easy to see that $f$ induces a map $f'{\colon}A_0{\xrightarrow}{}{{\mathcal{O}}_S}$ of ${{\mathcal{O}}_S}$-algebras. Put $a_k=f'(u_k)\in{{\mathcal{O}}_S}$, and $v_k=u_k-a_k\in A_0$. Clearly $A_0$ is a localisation of ${{\mathcal{O}}_S}[v_k{\;|\;}k>0]$, the sequence of $v$’s is regular in $A_0$, and $A_0/(v_k{\;|\;}k>0)={{\mathcal{O}}_S}$ as $\pi_0MP(r)$-algebras. It follows that ${{\mathcal{O}}_S}$ is an LRQ of an LRQ, and thus an LRQ, over $\pi_0MP(r)$. It is thus strongly realisable as required.
\[cor-very-good\] If ${{\mathcal{O}}_S}$ is a standard ring, then every coordinate is multirealisable, and so $(G,S)$ is very good.
This now follows from Propositions \[prop-Z-ninv\] and \[prop-field\].
Proof of the main theorem {#sec-proof}
=========================
Let ${{\mathcal{E}}}$ denote the class of objects $E\in{\operatorname{EPA}}_0$ for which the resulting coordinate on ${\Gamma}(\eta^*E)$ is multirealisable. Note that this means that ${\Gamma}_1E$ is strongly realisable, so every realisation is strong, so in particular $E$ is a strong object.
\[prop-surj\] For any good formal group $(G,S)$, there exists $E\in{{\mathcal{E}}}$ with ${\Gamma}(\eta^*E)\simeq(G,S)$.
By the definition of goodness we can choose a multirealisable coordinate $x_0$ on $G$. This means in particular that the object $(G,S,x_0)\in{\operatorname{FG}}_0$ is isomorphic to ${\Gamma}_0(E)$ for some $E\in{\operatorname{EPA}}_0$. It follows that $(G,S)\simeq{\Gamma}(\eta^*E)$, as required.
\[prop-maps\] Suppose we have objects $E_0,E_1\in{{\mathcal{E}}}$, together with a map $$({\tilde{p}},p){\colon}{\Gamma}(\eta^*E_1){\xrightarrow}{}{\Gamma}(\eta^*E_0)$$ in ${\operatorname{GFG}}$. Then there is a unique good map $f{\colon}\eta^*E_0{\xrightarrow}{}\eta^*E_1$ such that ${\Gamma}(f)=({\tilde{p}},p)$.
We first put $(G_i,S_i,x_i)={\Gamma}_0E_i$ for $i=0,1$.
We introduce a category ${{\mathcal{B}}}={{\mathcal{B}}}(E_0,E_1,{\tilde{p}},p)$ as follows. The objects are triples $(A,f',f'')$ where
- $A$ is an object of ${\operatorname{EPA}}_1$.
- $f'$ is a morphism $E_0{\xrightarrow}{}({1{\wedge}\eta})^*A$ in ${\operatorname{EPA}}_0$.
- $f''$ is an isomorphism $({\eta{\wedge}1})^*A{\xrightarrow}{}E_1$ in ${\operatorname{EPA}}_0$.
- The composite $$f = {\theta}(A,f',f'') :=
(\eta^*E_0{\xrightarrow}{\eta^*f'}({\eta{\wedge}\eta})^*A{\xrightarrow}{\eta^*f''}\eta^*E_1)$$ satisfies ${\Gamma}(f)=({\tilde{p}},p)$.
The morphisms from $(A,f',f'')$ to $(B,g',g'')$ in ${{\mathcal{B}}}$ are the isomorphisms $u{\colon}A{\xrightarrow}{}B$ in ${\operatorname{EPA}}_1$ for which $(({1{\wedge}\eta})^*u)f'=g'$ and $g''(({\eta{\wedge}1})^*u)=f''$.
The maps of the form ${\theta}(A,f',f'')$ are precisely the good maps that induce $({\tilde{p}},p)$, and isomorphic objects of ${{\mathcal{B}}}$ have the same image under ${\theta}$. It will thus suffice to show that ${{\mathcal{B}}}\neq\emptyset$ and all objects of ${{\mathcal{B}}}$ are isomorphic.
First, as $x_1$ is multirealisable, we can find an object $A\in{\operatorname{EPA}}_1$ and an isomorphism $({\tilde{q}},q){\colon}{\Gamma}_1A{\xrightarrow}{}(G_1,S_1,{\tilde{p}}^*x_0,x_1)$ displaying $A$ as a strong realisation of $(G_1,S_1,{\tilde{p}}^*x_0,x_1)$. We write $(H,T,y_0,y_1)={\Gamma}_1A$, so $({\tilde{q}},q){\colon}(H,T){\xrightarrow}{\simeq}(G_1,S_1)$ and $({\tilde{p}}{\tilde{q}})^*x_0=y_0$ and ${\tilde{q}}^*x_1=y_1$. We can thus regard $({\tilde{p}}{\tilde{q}},pq)$ as a morphism $${\Gamma}_0(({1{\wedge}\eta})^*A) = (H,T,y_0) {\xrightarrow}{} (G_0,S_0,x_0) = {\Gamma}_0E_0,$$ and $E_0$ is a strong realisation of $(G_0,S_0,x_0)$, so this must come from a map $f'{\colon}E_0{\xrightarrow}{}({1{\wedge}\eta})^*A$ in ${\operatorname{EPA}}_0$. Similarly, we can regard $({\tilde{q}},q)$ as an isomorphism $${\Gamma}_0(({\eta{\wedge}1})^*A) = (H,T,y_1) {\xrightarrow}{\simeq}
(G_1,S_1,x_1) = {\Gamma}_0E_1.$$ As $E_1$ is a strong realisation of $(G_1,S_1,x_1)$, this comes from a map $E_1{\xrightarrow}{}({\eta{\wedge}1})^*A$; this is easily seen to be an isomorphism, and we let $f''{\colon}({\eta{\wedge}1})^*A{\xrightarrow}{}E_1$ be the inverse map. It is then clear that the map $$f=(\eta^*f'')\circ(\eta^*f'){\colon}\eta^*E_0{\xrightarrow}{}\eta^*E_1$$ is good and satisfies ${\Gamma}(f)=({\tilde{p}},p)$, so $(A,f',f'')\in{{\mathcal{B}}}$. Thus ${{\mathcal{B}}}\neq\emptyset$.
Now suppose we have another object $(B,g',g'')\in{{\mathcal{B}}}$, with ${\Gamma}_1B=(K,U,z_0,z_1)$ say. We put $$\begin{aligned}
({\tilde{r}},r) = {\Gamma}_1g'' &{\colon}(G_1,S_1,x_1) {\xrightarrow}{\simeq}
{\Gamma}_1(({\eta{\wedge}1})^*B) = (K,U,z_1) \\
({\tilde{s}},s) = {\Gamma}_1g' &{\colon}{\Gamma}_1(({1{\wedge}\eta})^*B) = (K,U,z_0)
{\xrightarrow}{} (G_0,S_0,x_0).
\end{aligned}$$ By hypothesis we have $({\tilde{s}}{\tilde{r}},sr)=({\tilde{p}},p){\colon}(G_1,S_1){\xrightarrow}{}(G_0,S_0)$. We display all these maps in the following commutative diagram: $$\xymatrix @=3pc {
(H,T) \rto^{({\tilde{q}},q)}_{\simeq} \dto_{({\tilde{p}}{\tilde{q}},pq)} &
(G_1,S_1) \dlto_{({\tilde{p}},p)} \dto_{\simeq}^{({\tilde{r}},r)} \\
(G_0,S_0) &
(K,U) \lto^{({\tilde{s}},s)}.
}$$ We now claim that $({\tilde{r}}{\tilde{q}},rq)$ can be regarded as a map $$(H,T,y_0,y_1){\xrightarrow}{}(K,U,z_0,z_1).$$ Indeed, it is clear from the data recorded above that it is a map $(H,T,y_1){\xrightarrow}{}(K,U,z_1)$, so it will suffice to check that $({\tilde{r}}{\tilde{q}})^*z_0=y_0$. We are given that $z_0={\tilde{s}}^*x_0$ and ${\tilde{s}}{\tilde{r}}={\tilde{p}}$ and $({\tilde{p}}{\tilde{q}})^*x_0=y_0$; the claim follows. As $r$ and $q$ are isomorphisms, we have an isomorphism $$({\tilde{r}}{\tilde{q}},rq)^{-1}{\colon}{\Gamma}_1B = (K,U,z_0,z_1) {\xrightarrow}{}
(H,T,y_0,y_1) = {\Gamma}_1A$$ in ${\operatorname{FG}}_1$. As $A$ is a strong realisation, this comes from a unique map $u{\colon}A{\xrightarrow}{}B$ in ${\operatorname{EPA}}_1$, which is easily seen to be an isomorphism.
We must show that $u$ is a morphism in our category ${{\mathcal{B}}}$, or equivalently that in ${\operatorname{EPA}}_0$ we have $$\begin{aligned}
(({1{\wedge}\eta})^*u)f'=g' & {\colon}E_0{\xrightarrow}{}({1{\wedge}\eta})^*B \\
g''(({\eta{\wedge}1})^*u)=f'' & {\colon}({\eta{\wedge}1})^*B{\xrightarrow}{} E_1.
\end{aligned}$$ Note that $E_0$ and $E_1$ are strong, and $f''$ is an isomorphism, so $({\eta{\wedge}1})^*B$ is strong. It is thus enough to check our two equations after applying $\pi_0$ (here we have used the original definition rather than the equivalent one in Definition \[defn-strong-FGr\]). By definition or construction, we have $$\begin{aligned}
{\operatorname{spec}}(\pi_0f') &= pq \\
{\operatorname{spec}}(\pi_0f'') &= q^{-1} \\
{\operatorname{spec}}(\pi_0g') &= s \\
{\operatorname{spec}}(\pi_0g'') &= r \\
{\operatorname{spec}}(\pi_0u) &= (rq)^{-1} \\
sr &= p.
\end{aligned}$$ It follows easily that $(\pi_0u)(\pi_0f')=\pi_0g'$ and $(\pi_0g'')(\pi_0u)=\pi_0f''$, as required. This shows that $u$ is an isomorphism in ${{\mathcal{B}}}$, and thus that $f$ is the unique good map inducing the map $({\tilde{p}},p)$.
\[lem-id\] For any $E\in{{\mathcal{E}}}$, the identity map $1{\colon}\eta^*E{\xrightarrow}{}\eta^*E$ is good.
Note that the multiplication map $MP(1)=MP{\wedge}MP{\xrightarrow}{}MP$ is a map of ring spectra (in the strict sense) and so induces a functor $\mu^*{\colon}{\operatorname{EPA}}_0{\xrightarrow}{}{\operatorname{EPA}}_1$ with $({1{\wedge}\eta})^*\mu^*E=({\eta{\wedge}1})^*\mu^*E=E$ on the nose. We can thus take $A=\mu^*E$ and $f'=f''=1_E$ to show that $1_E$ is good.
\[prop-comp\] Suppose we have objects $E_0,E_1,E_2\in{{\mathcal{E}}}$ and good morphisms $\eta^*E_0{\xrightarrow}{f}\eta^*E_1{\xrightarrow}{g}\eta^*E_2$. Then the composite $gf$ is also good.
Write $(G_i,S_i,x_i)={\Gamma}_0E_i$ and $({\tilde{p}},p)={\Gamma}(f){\colon}(G_1,S_1){\xrightarrow}{}(G_0,S_0)$ and $({\tilde{q}},q)={\Gamma}(g){\colon}(G_2,S_2){\xrightarrow}{}(G_1,S_1)$.
Choose objects $A,B\in{\operatorname{EPA}}_1$ and maps $$\begin{aligned}
f' &{\colon}E_0 {\xrightarrow}{} ({1{\wedge}\eta})^*A \\
f'' &{\colon}({\eta{\wedge}1})^*A {\xrightarrow}{\simeq} E_1 \\
g' &{\colon}E_1 {\xrightarrow}{} ({1{\wedge}\eta})^*B \\
g'' &{\colon}({\eta{\wedge}1})^*B {\xrightarrow}{\simeq} E_2
\end{aligned}$$ exhibiting the goodness of $f$ and $g$. This gives rise to isomorphisms $$\begin{aligned}
{\Gamma}_1A &= (G_1,S_1,{\tilde{p}}^*x_0,x_1) \\
{\Gamma}_1B &= (G_2,S_2,{\tilde{q}}^*x_1,x_2).
\end{aligned}$$
Next, observe that we have an object $(G_2,S_2,({\tilde{p}}{\tilde{q}})^*x_0,{\tilde{q}}^*x_1,x_2)\in{\operatorname{FG}}_2$, which is strongly realisable because $x_2$ is a multirealisable coordinate (here we use Remark \[rem-perm\] to move $x_2$ to the initial position of the list). We can thus choose an object $P\in{\operatorname{EPA}}_2$ and an isomorphism $$({\tilde{r}},r){\colon}{\Gamma}_2P{\xrightarrow}{}(G_2,S_2,({\tilde{p}}{\tilde{q}})^*x_0,{\tilde{q}}^*x_1,x_2)$$ making $P$ a strong realisation. We can also regard $({\tilde{r}},r)$ as an isomorphism $${\Gamma}_1(({\eta{\wedge}1{\wedge}1})^*P) {\xrightarrow}{} (G_2,S_2,{\tilde{q}}^*x_1,x_2) = {\Gamma}_1B.$$ As $B$ is strong, this comes from a unique isomorphism $v{\colon}({\eta{\wedge}1{\wedge}1})^*P{\xrightarrow}{}B$ in ${\operatorname{EPA}}_1$.
Similarly, we can regard $({\tilde{r}},r)$ as an isomorphism $${\Gamma}_1(({1{\wedge}1{\wedge}\eta})^*P){\xrightarrow}{}(G_2,S_2,{\tilde{q}}^*{\tilde{p}}^*x_0,{\tilde{q}}^*x_1),$$ and we can regard $({\tilde{q}},q)$ as a morphism $$(G_2,S_2,{\tilde{q}}^*{\tilde{p}}^*x_0,{\tilde{q}}^*x_1) {\xrightarrow}{}
(G_1,S_1,{\tilde{p}}^*x_0,x_1) \simeq {\Gamma}_1A.$$ As $A$ is strong, the composite $({\tilde{q}}{\tilde{r}},qr)$ must come from a unique map $u{\colon}A{\xrightarrow}{}({1{\wedge}1{\wedge}\eta})^*P$ in ${\operatorname{EPA}}_1$.
We now put $$\begin{aligned}
C &= ({1{\wedge}\eta{\wedge}1})^*P\in{\operatorname{EPA}}_1 \\
h' &= (E_0 {\xrightarrow}{f'} ({1{\wedge}\eta})^*A{\xrightarrow}{({1{\wedge}\eta})^*u} ({1{\wedge}\eta{\wedge}\eta})^*P= ({1{\wedge}\eta})^*C) \\
h''&= (({\eta{\wedge}1})^*C = ({\eta{\wedge}\eta{\wedge}1})^*P{\xrightarrow}{({\eta{\wedge}1})^*v}({\eta{\wedge}1})^*B {\xrightarrow}{g''} E_2).
\end{aligned}$$ As $v$ and $g''$ are isomorphisms, the same is true of $h''$. We claim that after forgetting down to ${\operatorname{EPR}}$, we have $h''h'=gf$; this will prove that $gf$ is good as claimed. We certainly have $h''h'=g''vuf'$ and $gf=g''g'f''f'$ so it will suffice to show that $vu=g'f''{\colon}A{\xrightarrow}{}B$ in ${\operatorname{EPR}}$. For this, it will be enough to prove that the following diagram in ${\operatorname{EPA}}_0$ commutes. $$\xymatrix {
({\eta{\wedge}1})^*A \rto^{({\eta{\wedge}1})^*u} \dto_{f''}^{\simeq} & ({\eta{\wedge}1{\wedge}\eta})^*P
\dto_{\simeq}^{({1{\wedge}\eta})^*v} \\ E_1 \rto_{g'} & ({1{\wedge}\eta})^*B.
}$$ As this is a diagram in ${\operatorname{EPA}}_0$ and $({\eta{\wedge}1})^*A\simeq E_1$ is strong, it will be enough to check that the diagram commutes after applying $\pi_0$. By construction we have $\pi_0(u)=w^{-1}\circ\psi\circ\pi_0(f'')$ and $\psi=\pi_0(g)=\pi_0(g'')\circ\pi_0(g')$ and $\pi_0(v)=\pi_0(g'')^{-1}\circ w$. It follows directly that the above diagram commutes on homotopy groups, so it commutes in ${\operatorname{EPA}}_0$, so it commutes in ${\operatorname{EPR}}$, so $gf=h''h'$ in ${\operatorname{EPR}}$ as explained previously. Thus, the map $gf$ is good, as claimed.
We merely need to collect results together and explain the argument in the introduction in more detail. Lemma \[lem-id\] and Proposition \[prop-comp\] show that we can make ${{\mathcal{E}}}$ into a category by taking the good maps from $\eta^*E_0$ to $\eta^*E_1$ as the morphisms from $E_0$ to $E_1$. Tautologically, we can define a faithful functor $U{\colon}{{\mathcal{E}}}{\xrightarrow}{}{\operatorname{EPR}}$ by $U(E)=\eta^*E$ and $U(f)=f$. We then define $V={\Gamma}U{\colon}{{\mathcal{E}}}{\xrightarrow}{}{\operatorname{FG}}$; by the definition of ${{\mathcal{E}}}$, this actually lands in ${\operatorname{GFG}}$. Proposition \[prop-surj\] says that $V$ is essentially surjective, and Proposition \[prop-maps\] says that $V$ is full and faithful. This means that $V$ is an equivalence, so we can invert it and define $R=UV^{-1}{\colon}{\operatorname{GFG}}{\xrightarrow}{}{\operatorname{EPR}}$. As $V={\Gamma}U$ we have ${\Gamma}R=1$, so $R$ is the required realisation functor.
Appendix: The product on $MP$
==============================
In this appendix we verify that $MP$ can be constructed as an $E_\infty$ ring spectrum.
Let ${{\mathcal{U}}}$ be a complex universe. For any finite-dimensional subspace $U$ of ${{\mathcal{U}}}$, we write $U_L=U\oplus 0<{{\mathcal{U}}}\oplus{{\mathcal{U}}}$ and $U_R=0\oplus U<{{\mathcal{U}}}\oplus{{\mathcal{U}}}$. We let ${\operatorname{Grass}}(U\oplus U)$ denote the Grassmannian of all subspaces of $U\oplus U$ (of all possible dimensions), and we write ${\gamma}_U$ for the tautological bundle over this space, and ${\operatorname{Thom}}(U\oplus U)$ for the associated Thom space. If $U\leq U'<{{\mathcal{U}}}$ then we define $i{\colon}{\operatorname{Grass}}(U^2){\xrightarrow}{}{\operatorname{Grass}}((U')^2)$ by $i(A)=A\oplus(U'\ominus U)_R$. On passing to Thom spaces we get a map ${\sigma}{\colon}{\Sigma}^{U'\ominus U}{\operatorname{Thom}}(U^2){\xrightarrow}{}{\operatorname{Thom}}((U')^2)$. These maps can be used to assemble the spaces ${\operatorname{Thom}}(U^2)$ into a ${\Sigma}$-inclusion prespectrum indexed by the complex subspaces of ${{\mathcal{U}}}$. We write $T_{{\mathcal{U}}}$ for this prespectrum, and $MP_{{\mathcal{U}}}$ for its spectrification.
Now let ${{\mathcal{V}}}$ be another complex universe, so we have a prespectrum $T_{{\mathcal{V}}}$ over ${{\mathcal{V}}}$, and thus an external smash product $T_{{\mathcal{U}}}{\wedge}_{{\operatorname{ext}}}T_{{\mathcal{V}}}$ indexed on the complex subspaces of ${{\mathcal{U}}}\oplus{{\mathcal{V}}}$ of the form $U\oplus V$. The direct sum gives a map ${\operatorname{Grass}}(U^2){\times}{\operatorname{Grass}}(V^2){\xrightarrow}{}{\operatorname{Grass}}((U\oplus V)^2)$ which induces a map ${\operatorname{Thom}}(U^2){\wedge}{\operatorname{Thom}}(V^2){\xrightarrow}{}{\operatorname{Thom}}((U\oplus V)^2)$. These maps fit together to give a map $T_{{\mathcal{U}}}{\wedge}_{{\operatorname{ext}}}T_{{\mathcal{V}}}{\xrightarrow}{}T_{{{\mathcal{U}}}\oplus{{\mathcal{V}}}}$, and thus a map $MP_{{\mathcal{U}}}{\wedge}_{{\operatorname{ext}}}MP_{{\mathcal{V}}}{\xrightarrow}{}MP_{{{\mathcal{U}}}\oplus{{\mathcal{V}}}}$ of spectra over ${{\mathcal{U}}}\oplus{{\mathcal{V}}}$. Essentially the same construction gives maps $$MP_{{{\mathcal{U}}}_1}{\wedge}_{{\operatorname{ext}}}\ldots{\wedge}_{{\operatorname{ext}}}MP_{{{\mathcal{U}}}_r}
{\xrightarrow}{} MP_{{{\mathcal{U}}}_1\oplus\ldots\oplus{{\mathcal{U}}}_r}.$$ If ${{\mathcal{U}}}_1=\ldots={{\mathcal{U}}}_r={{\mathcal{U}}}$, then this map is ${\Sigma}_r$-equivariant.
Now suppose instead that we have a complex linear isometry $f{\colon}{{\mathcal{U}}}{\xrightarrow}{}{{\mathcal{V}}}$. This gives evident homeomorphisms ${\operatorname{Thom}}(U^2){\xrightarrow}{}{\operatorname{Thom}}((fU)^2)$, which fit together to induce a map $MP_{{\mathcal{U}}}{\xrightarrow}{}f^*MP_{{\mathcal{V}}}$, which is adjoint to a map $f_*MP_{{\mathcal{U}}}{\xrightarrow}{}MP_{{\mathcal{V}}}$. We next observe that this construction is continuous in all possible variables, including $f$. (This statement requires some interpretation, but there are no new issues beyond those that are well-understood for $MU$; the cleanest technical framework is provided by [@el:ggs].) It follows that they fit together to give a map ${{\mathcal{L}_\mathbb{C}}}({{\mathcal{U}}},{{\mathcal{V}}}){\ltimes}MP_{{\mathcal{U}}}{\xrightarrow}{}MP_{{\mathcal{V}}}$ of spectra over ${{\mathcal{V}}}$.
We now combine this with the product structure mentioned earlier to get a map $${{\mathcal{L}_\mathbb{C}}}({{\mathcal{U}}}^r,{{\mathcal{U}}}){\ltimes}_{{\Sigma}_r}
(MP_{{\mathcal{U}}}{\wedge}_{{\operatorname{ext}}}\ldots{\wedge}_{{\operatorname{ext}}}MP_{{\mathcal{U}}}) {\xrightarrow}{}
MP_{{\mathcal{U}}}.$$ This means that $MP_{{\mathcal{U}}}$ has an action of the $E_\infty$ operad of complex linear isometries, as required.
All that is left is to check that the spectrum $MP=MP_{{{\mathbb{C}}}^\infty}$ constructed above has the right homotopy type. As $T$ is a ${\Sigma}$-inclusion prespectrum, we know that spectrification works in the simplest possible way and that $MP$ is the homotopy colimit of the spectra $${\Sigma}^{-2n}{\operatorname{Thom}}({{\mathbb{C}}}^n\oplus{{\mathbb{C}}}^n) =
{\bigvee}_{k=-n}^n {\Sigma}^{-2n}{\operatorname{Grass}}_{k+n}({{\mathbb{C}}}^n\oplus{{\mathbb{C}}}^n)^{\gamma},$$ where ${\operatorname{Grass}}_d(V)$ is the space of $d$-dimensional subspaces of $V$. It is not hard to see that the maps of the colimit system preserve this splitting, so that $MP$ is the wedge over all $k\in{{\mathbb{Z}}}$ of the spectra $$X_k := {\operatornamewithlimits{\underset{\longrightarrow}{holim}}}_n{\Sigma}^{-2n}{\operatorname{Grass}}_{k+n}({{\mathbb{C}}}^n\oplus{{\mathbb{C}}}^n)^{\gamma}.$$ This can be rewritten as $$X_k =
{\Sigma}^{2k} {\operatornamewithlimits{\underset{\longrightarrow}{holim}}}_{n,m}{\Sigma}^{-2(k+n)}{\operatorname{Grass}}_{k+n}({{\mathbb{C}}}^m\oplus{{\mathbb{C}}}^n)^{\gamma}.$$ We can reindex by putting $n=i-k$ and $m=j+k$, and then pass to the limit in $j$. We find that $$X_k = {\Sigma}^{2k}{\operatornamewithlimits{\underset{\longrightarrow}{holim}}}_i{\Sigma}^{-2i}{\operatorname{Grass}}_i({{\mathbb{C}}}^\infty\oplus{{\mathbb{C}}}^i)^{\gamma}.$$ It is well-known that ${\operatorname{Grass}}_i({{\mathbb{C}}}^\infty\oplus{{\mathbb{C}}}^i)$ is a model for $BU(i)$, and it follows that $X_k={\Sigma}^{2k}MU$, so $MP={\bigvee}_k{\Sigma}^{2k}MU$ as claimed. We leave it to the reader to check that the product structure is the obvious one.
All the above was done without inverting $2$. Inverting $2$ is an example of Bousfield localisation, and this can always be performed in the category of strictly commutative ring spectra.
**AD Elmendorf**, *The [G]{}rassmannian Geometry of Spectra*, Journal of Pure and Applied Algebra 54 (1988) 37–94
**AD Elmendorf**, **I Kriz**, **MA Mandell**, **JP May**, *Rings, Modules and Algebras in Stable Homotopy Theory*, volume 47 of *Amer. Math. Soc. Surveys and Monographs*, American Mathematical Society (1996)
**L Gaunce Lewis**, **J Peter May**, **M Steinberger** (with contributions by **Jim E McClure**), *Equivariant Stable Homotopy Theory*, volume 1213 of *Lecture Notes in Mathematics*, Springer–Verlag, New York (1986)
**NP Strickland**, *Products on ${\rm {M}{U}}$-modules*, Trans. Amer. Math. Soc. 351 (1999) 2569–2606
**Neil P Strickland**, *Formal schemes and formal groups*, from: “Homotopy invariant algebraic structures (Baltimore, MD, 1998)”, Amer. Math. Soc., Providence, RI (1999) 263–352
---
abstract: 'This paper provides a sample of a LaTeX document which conforms, somewhat loosely, to the formatting guidelines for ACM SIG Proceedings.'
author:
- Ben Trovato
- 'G.K.M. Tobin'
- 'Lars Th[ø]{}rv[ä]{}ld'
- Valerie Béranger
- Aparna Patel
- Huifen Chan
- Charles Palmer
- John Smith
- 'Julius P. Kumquat'
bibliography:
- 'sample-bibliography.bib'
subtitle: Extended Abstract
title: SIG Proceedings Paper in LaTeX Format
---
<ccs2012> <concept> <concept\_id>10010520.10010553.10010562</concept\_id> <concept\_desc>Computer systems organization Embedded systems</concept\_desc> <concept\_significance>500</concept\_significance> </concept> <concept> <concept\_id>10010520.10010575.10010755</concept\_id> <concept\_desc>Computer systems organization Redundancy</concept\_desc> <concept\_significance>300</concept\_significance> </concept> <concept> <concept\_id>10010520.10010553.10010554</concept\_id> <concept\_desc>Computer systems organization Robotics</concept\_desc> <concept\_significance>100</concept\_significance> </concept> <concept> <concept\_id>10003033.10003083.10003095</concept\_id> <concept\_desc>Networks Network reliability</concept\_desc> <concept\_significance>100</concept\_significance> </concept> </ccs2012>
---
abstract: 'In this letter we synthesize numerically the Lü attractor, starting from the generalized Lorenz and Chen systems, by switching the control parameter inside a chosen finite set of values over successive adjacent finite time intervals. A numerical method with fixed step size for ODEs is used to integrate the underlying initial value problem. As proved numerically and computationally in this work, the attractors synthesis algorithm introduced earlier by the present author allows one to synthesize the Lü attractor starting from any finite set of parameter values.'
author:
- |
Marius-F. Danca\
$^{a}$Department of Mathematics and Computer Science,\
Avram Iancu University,\
400380 Cluj-Napoca, Romania;\
$^{b}$Institute of Science and Technology,\
400487 Cluj-Napoca, Romania
title: 'Synthesizing the Lü attractor by parameter-switching'
---
Keywords: Lü system, global attractor, chaotic attractor, parameter-switching
Introduction
============
Consider the following unified chaotic system (bridge between the Lorenz and Chen systems) [@Lu]: $$\begin{array}
[c]{l}
\overset{.}{x}_{1}=(25p+10)(x_{2}-x_{1}),\\
\overset{.}{x}_{2}=(28-35p)x_{1}+(29p-1)x_{2}-x_{1}x_{3},\\
\overset{.}{x}_{3}=x_{1}x_{2}-\left( 8+p\right) /3x_{3},
\end{array}
\label{unified}$$
where the parameter $p\in\left[ 0,1\right] .$ As is known, for $p\in\lbrack0,0.8)$ system (\[unified\]) models the canonical Lorenz system [@Celikovski; @and; @Chen], for $p=0.8$ it becomes the Lü system [@Lu; @and; @Chen], while for $p\in(0.8,1]$ it becomes the Chen system [@Chen; @and; @Ueta]. Therefore, this system is likely to be the simplest chaotic system bridging the gap between the Lorenz and the Chen systems.
The above three systems share some common properties such as: they all have the same symmetry, dissipativity, stability of equilibria, similar bifurcations and topological structures and belong to the generalized Lorenz canonical family [@Celikovski; @and; @Chen].
In the mentioned references, a positive answer is given to the question of whether it is possible to realize a continuous transition from one system to the other.
In this letter, we present a discontinuous transition algorithm between the Lorenz and the Chen systems with which the Lü attractor can be synthesized. For this purpose, the parameter-switching method introduced in [@Danca; @et; @al; @1] is utilized.
The present work is organized as follows: Section 2 presents the synthesis algorithm, while in Section 3 the Lü attractor is synthesized in both deterministic and random ways via the mentioned synthesis algorithm.
Attractors synthesis algorithm
==============================
Consider a class of dissipative autonomous dynamical systems modeled by the following initial value problem: $$S:~\dot{x}=f_{p}(x),\quad x(0)=x_{0}, \label{ivp general}$$ where $p\in\mathbb{R}$ and $f_{p}:\mathbb{R}^{n}\longrightarrow \mathbb{R}^{n}\,$ has the expression $$f_{p}(x\mathbf{)=}g(x\mathbf{)}+pMx, \label{2}$$
with $g:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}$ a continuous nonlinear vector function, $M$ a real constant $n\times n$ matrix, $x_{0}\in\mathbb{R}^{n}$, and the maximal existence interval $I=[0,\infty).$
For the unified system (\[unified\]), one has: $$M=\left(
\begin{array}
[c]{ccc}-25 & 25 & 0\\
-35 & 29 & 0\\
0 & 0 & -1/3
\end{array}
\right) ,~g(x)=\left( 10\left( x_{2}-x_{1}\right) ,~28x_{1}-x_{2}-x_{1}
x_{3},~x_{1}x_{2}-8/3\,x_{3}\right) ^{T}$$
with divergence $\operatorname{div} f_{p}(x)<0$ for $p\in\lbrack0,1]$, so the system is dissipative.
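As a sanity check, the decomposition (\[2\]) can be verified numerically. The following Python sketch (ours, purely illustrative) reads $M$ and $g$ off system (\[unified\]) by collecting the terms linear in $p$, and compares $g(x)+pMx$ with the right-hand side of (\[unified\]) at random points:

```python
import numpy as np

def unified_rhs(x, p):
    """Right-hand side of the unified system (eq. (unified))."""
    x1, x2, x3 = x
    return np.array([(25 * p + 10) * (x2 - x1),
                     (28 - 35 * p) * x1 + (29 * p - 1) * x2 - x1 * x3,
                     x1 * x2 - (8 + p) / 3 * x3])

# M and g are obtained by collecting the terms linear in p in (unified).
M = np.array([[-25.0, 25.0, 0.0],
              [-35.0, 29.0, 0.0],
              [0.0, 0.0, -1.0 / 3.0]])

def g(x):
    """p-independent part of the vector field: f_p(x) = g(x) + p*M*x."""
    x1, x2, x3 = x
    return np.array([10.0 * (x2 - x1),
                     28.0 * x1 - x2 - x1 * x3,
                     x1 * x2 - 8.0 / 3.0 * x3])

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=3)
    p = rng.uniform(0.0, 1.0)
    assert np.allclose(unified_rhs(x, p), g(x) + p * M @ x)
```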
The existence and uniqueness of solutions on the maximal existence interval $I$ are assumed. Also, without any restriction, it is supposed that different values of $p$ give rise to different global attractors. Because of the numerical character of the attractors synthesis (AS) algorithm, and for the sake of simplicity, by a *(global) attractor* in this letter we understand, without significant loss of generality, the numerical approximation of the $\omega$-*limit set*, as in [@Foias], plotted after neglecting a sufficiently long transient (for background on attractors see [@milnor]).
Let $\mathcal{P\subset}\mathbb{R}$ be the set of all admissible values for $p$ and $\mathcal{A}$ the set of all corresponding global attractors, which includes attractive stable fixed points, limit cycles and chaotic attractors. Also, denote by $\mathcal{P}_{N}$ a finite subset of $\mathcal{P}$ for some positive integer $N>1$ and the corresponding subset of attractors $\mathcal{A}_{N}\subset\mathcal{A}.$
Because of the assumed dissipativity, $\mathcal{A}$ is a non-empty set. Therefore, following the above assumptions, a bijection between $\mathcal{P}$ and $\mathcal{A},$ $F:$ $\mathcal{P\rightarrow A},$ can be considered. Thus, to each $p\in\mathcal{P}$ corresponds a unique global attractor $A_{p}\in\mathcal{A}$ and conversely for each global attractor there exists a unique parameter value $p\in\mathcal{P}$.
In [@Danca; @et; @al; @1], it is proved numerically that by switching the parameter $p$ inside $\mathcal{P}_{N}$, indefinitely and in some periodic way, over finite time subintervals, while (\[ivp general\]) is integrated with some fixed-step-size numerical method for ODEs, any attractor of $\mathcal{A}_{N}$ can be synthesized. For a chosen $N$, consider $I$ partitioned into consecutive groups of $N$ finite adjacent time subintervals $I_{i}$: $I=(I_{1}\cup I_{2}\cup\ldots\cup I_{N})\cup(I_{1}\cup
I_{2}\cup\ldots\cup I_{N})\cup\ldots$ of lengths $\Delta
t_{i},~i=1,2,\ldots,N$. If, in each subinterval $I_{i}$, while the numerical method with fixed step size $h$ integrates (\[ivp general\]), $p$ is switched as $p=p_{i}$ for $t\in I_{i}$, then a *synthesized attractor*, denoted by $A^{\ast}$, is generated. The simplest way to implement the AS algorithm numerically is to choose $\Delta t_{i}$ as a multiple of $h$. Thus, for a fixed step size $h$, the AS algorithm can be written symbolically as follows: $$\lbrack m_{1}p_{1},~m_{2}p_{2},\ldots,m_{N}p_{N}], \label{SA}$$
where $m_{k}$ are some positive integers (weights) and by $m_{k}p_{k}$ one understands that in the $k$-th time subinterval $I_{k},~$of length $m_{k}h,$ $p~$receives the value $p_{k}.$
In [@Danca; @et; @al; @1], it is proved numerically that $A^{\ast}$ is *identical*[^1] to $A_{p^{\ast}}$ for
$$p^{\ast}=\frac{\sum\limits_{k=1}^{N}m_{k}p_{k}}{\sum\limits_{k=1}^{N}m_{k}
}\text{.} \label{p formula}$$
For example, the sequence $\left[ 1p_{1},2p_{2}\right]$ means that $m_{1}=1$, $m_{2}=2$, and the attractor $A^{\ast}$ is synthesized as follows: in the first time interval $I_{1}$, of length $\Delta t_{1}=h$, the numerical method solves (\[ivp general\]) with $p=p_{1}$; next, in the second time interval $I_{2}$, of length $\Delta t_{2}=2h$, with $p=p_{2}$; and the scheme repeats. If we apply this scheme to (\[unified\]) for $p_{1}=0.8$ (chaotic Lü attractor) and $p_{2}=0.959$ (chaotic Chen attractor), one obtains a synthesized regular Chen attractor $A^{\ast}$ which is identical to $A_{p^{\ast}}$ with, by (\[p formula\]), $p^{\ast}=\left( p_{1}+2p_{2}\right) /3=0.906$, corresponding to a stable periodic limit cycle. In Fig. 1, to underline the identity, superimposed phase plots, time series, histograms and Poincaré sections are utilized, besides the Hausdorff distance between the two attractors, which is of order $10^{-2}\div10^{-3}$, conferring a good accuracy to the AS algorithm.
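The value of $p^{\ast}$ in (\[p formula\]) is elementary to check. A short Python snippet (ours, for illustration) with exact rational arithmetic:

```python
from fractions import Fraction

def p_star(ms, ps):
    """Weighted average of eq. (p formula): sum(m_k p_k) / sum(m_k)."""
    return sum(m * p for m, p in zip(ms, ps)) / sum(ms)

# Scheme [1p1, 2p2] with p1 = 0.8 (Lu) and p2 = 0.959 (Chen):
p1, p2 = Fraction(8, 10), Fraction(959, 1000)
assert p_star([1, 2], [p1, p2]) == Fraction(906, 1000)  # p* = 0.906

# The convex weights alpha_k = m_k / sum(m_j) add up to one:
assert Fraction(1, 3) + Fraction(2, 3) == 1
```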
It is noted that the AS algorithm can be applied even in some random way: because $p^{\ast}$ in (\[p formula\]) is defined in a convex manner (denoting $\alpha_{k}=m_{k}/\sum\limits_{k=1}^{N}m_{k}<1$, one has $p^{\ast}=\sum\limits_{k=1}^{N}\alpha_{k}p_{k}$ with $\sum\limits_{k=1}^{N}\alpha_{k}=1$) and based on the bijective function $F$, any synthesized attractor $A^{\ast}$ is located inside the set $\mathcal{A}_{N}$ (all elements, i.e. attractors, being ordered with the order induced by $F$), and whatever (random) scheme (\[SA\]) is used, the result is the same [@Danca1].
The random AS can be implemented e.g. by generating a sequence (\[SA\]) with a random uniform distribution of $p$ [@Danca1], which is supposed to generate all the integers $1,\ldots,N$ (Fig. 2).
Now, $p^{\ast}$ is given by the following formula: $$p^{\ast}=\frac{\sum\limits_{i=1}^{N}m_{i}^{^{\prime}}p_{i}}{\sum
\limits_{i=1}^{N}m_{i}^{^{\prime}}} \label{p random}$$
where $m_{i}^{\prime}$ counts the number of occurrences of $p_{i}$. Obviously, $I$ now has to be chosen large enough that (\[p random\]) converges to $p^{\ast}$ (the precise value of $p^{\ast}$ could be obtained only for $I=[0,\infty)$).
i\) The AS algorithm is useful in applications where some values of $p$ are not directly accessible. ii) The AS algorithm can be viewed as an explanation for the way regular or chaotic behaviors may appear in natural systems. iii) Being a numerical algorithm, AS has limitations. For example, for relatively large switches of $p$ or $m$, or for a too-large number $N$, $A^{\ast}$ could present some “corners”. Also, obviously, the step size $h$ may influence the performance of the AS algorithm (ideally, $h$ should decrease to zero). Some details and other related aspects about the errors can be found in [@Danca; @et; @al; @1] and [@Danca1]. iv) In the general case of a dynamical system modeled by (\[ivp general\]), the only restriction for synthesizing a chaotic attractor starting from regular attractors is that the set $\mathcal{A}_{N}$ contains chaotic attractors (and vice versa for regular synthesized attractors). v) The AS algorithm can be used as a kind of control-like method [@Danca; @2] or anticontrol [@Danca; @et; @al; @1]. vi) Besides several continuous dynamical systems (such as the Chen system, Rössler system, Rabinovich-Fabrikant system [@xiaodong; @et; @al], minimal networks, Lotka-Volterra system, Lü system, Rikitake system), the AS algorithm was also applied successfully to systems of fractional order [@Danca; @Kai].
Lü attractor synthesis
======================
The numerical results in this section are obtained using the standard Runge-Kutta algorithm with fixed integration time step $h=0.001$.
To visualize how the AS algorithm works, the bifurcation diagram is plotted in Fig. 3. Next, we synthesize the Lü attractor starting from different values of $p$ and using deterministic or random schemes (\[p formula\]). In these simulations, once $N$ is fixed, all we need is to choose $m$ and $\mathcal{P}_{N}$ so that equation (\[p formula\]), with $p^{\ast}=0.8$ corresponding to the Lü attractor, is verified. Besides Poincaré sections and histograms, the Hausdorff distance between $A^{\ast}$ and $A_{p^{\ast}}$ ([@Falconer] p. 114) was computed; it is of order $10^{-2}\div10^{-3}$, which indicates a good approximation.
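The Hausdorff distance between two finite trajectory samples can be computed by brute force. The sketch below is a minimal illustration (the point sets are artificial, not the actual attractor data) following the definition in [@Falconer]:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between finite point sets A, B
    (arrays of shape (n, d) and (m, d))."""
    # D[i, j] = |A_i - B_j|; directed distance = max_i min_j D[i, j].
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two samples of the same circle, one radially perturbed by 0.01.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
A = np.stack([np.cos(t), np.sin(t)], axis=1)
B = 1.01 * A
assert abs(hausdorff(A, B) - 0.01) < 1e-9
```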
First we applied the deterministic scheme (\[SA\]) for $N=2$, $p_{1}=0.2$ (corresponding to the generalized Lorenz system, Fig. 4 a) and $p_{2}=1$ (corresponding to the Chen system, Fig. 4 b) with the scheme $[1p_{1},3p_{2}]$. In this case, the synthesized attractor $A^{\ast}$ is identical to $A_{p^{\ast}}$ with $p^{\ast}=0.8=\left( 1p_{1}+3p_{2}\right) /4$ (Fig. 4 c). In Fig. 4 d and e, the histograms and Poincaré sections of both attractors, $A^{\ast}$ and $A_{p^{\ast}}$, are plotted superimposed to underline the identity.
Because the solution of (\[p formula\]) for given $N,~\mathcal{P}_{N}$ is not unique, the Lü attractor can be obtained in, theoretically, infinitely many ways. Thus, we chose $N=5$, $p_{1}=0.47,~p_{2}=0.585,~p_{3}=0.678$ (corresponding to the Lorenz system), $p_{4}=0.905$, $p_{5}=0.9405$ (corresponding to the Chen system) and $m_{1}=m_{2}=m_{3}=m_{4}=1$, $m_{5}=4$, so that again $p^{\ast
}=0.8=(p_{1}+p_{2}+p_{3}+p_{4}+4p_{5})/8$. The attractors $A_{p_{1,\ldots,5}}$ and $A^{\ast}$, $A_{p^{\ast}}$ are presented in Fig. 5 with Poincaré sections and histograms.
Using the random way presented in Fig. 2, the Lü attractor can be synthesized with, for example, $p_{1}=0.6$ and $p_{2}=1$ (Fig. 6).
Conclusion
==========
The AS algorithm has been utilized to generate numerically the Lü attractor starting from its “neighbors”, the Lorenz and Chen attractors, not by continuous transformations as before, but by discontinuous parameter switching inside a chosen parameter set.
[9]{}
Lü, J., Chen, G., Cheng, D. and Celikovsky, S. \[2002\] “Bridge the gap between the Lorenz system and the Chen system,” *Int. J. Bifurcation and Chaos* **12**, 2917-2926.
Celikovsky, S. and Chen, G. \[2002\] “On a generalized Lorenz canonical form of chaotic systems,” *Int. J. Bifurcation and Chaos* 12, 1789-1812.
Lü, J. and Chen, G. \[2002\] “A new chaotic attractor coined,” *Int. J. Bifurcation and chaos* **12**, 659-661.
Danca, M.-F., Tang, W. K. S. and Chen, G. \[2008\] “A switching scheme for synthesizing attractors of dissipative chaotic systems,” *Applied Mathematics and Computation* **201,** 650-667.
Chen, G. and Ueta, T. \[1999\] “Yet another chaotic attractor,” *Int. J. Bifurcation and Chaos* 9, 1465-1466.
Foias, C. and Jolly, M. S. \[1995\] “On the numerical algebraic approximation of global attractors,” *Nonlinearity.* **8,** 295–319.
Milnor, J. \[1985\] “On the concept of attractor,” *Communications in Mathematical Physics* **99**, 177–195.
Danca, M.-F. \[2008\] “Random parameter-switching synthesis of a class of hyperbolic attractors,” *Chaos.* **18,** 033111.
Danca, M-F. \[2009\] “Finding stable attractors of a class of dissipative dynamical systems by numerical parameter switching,” *Dynamical Systems*, DOI 10.1080/14689360903401278.
Luo, X., Small, M., Danca, M.-F. and Chen, G. \[2007\] “On a dynamical system with multiple chaotic attractors,” *Int. J. Bifurcation and Chaos* **17**, 3235-3251.
Danca, M.-F. and Diethelm, K. \[2010\] “Fractional-order attractors synthesis via parameter switching,” *Commun Nonlinear Sci Numer Simulat*, doi:10.1016/j.cnsns.2010.01.011.
Falconer, K. \[1990\] *Fractal Geometry, Mathematical Foundations and Applications* (John Wiley & Sons, Chichester).
![Synthesis of a stable limit cycle for the Chen attractor, obtained using the scheme $[1p_{1},2p_{2}]$ with $p_{1}=0.8$ and $p_{2}=0.959$ and $p^{\ast}=0.906$; a) Lü attractor; b) Chen attractor; c) $A^{\ast}$ and $A_{p^{\ast}}$ plotted superimposed; d) Histograms of $A^{\ast }$ and $A_{p^{\ast}}$ superimposed; e) Superimposed Poincaré sections with the plane $x_{3}=28$ of $A^{\ast}$ and $A_{p^{\ast}}$; f) Time series with transients of component $x_{1}$ of $A^{\ast}$ and $A_{p^{\ast}}$ superimposed.](figure1.png){width="100.00000%"}
$$\begin{array}
[c]{l}
repeat\\
~~~~label=\operatorname{rand}(N)\\
~~~~if~label=1~then\\
~~\ ~~~~~~~integrate~(\ref{ivp general})~with~p=p_{1}\\
~~~~~~~~~~inc(m_{1}^{^{\prime}})\\
~~~~if~label=2~then\\
~~~~~~~~~~integrate~(\ref{ivp general})~with~p=p_{2}\\
~~~~\ ~~~~~inc(m_{2}^{^{\prime}})\\
~~~~~\ldots\\
~~~~if~label=N~then\\
~~~~~~~~~~integrate~(\ref{ivp general})~with~p=p_{N}\\
~\ ~~\ ~~~~~inc(m_{N}^{^{\prime}})\\
~~~~~t=t+h\\
until~t\geq T_{\max}
\end{array}
\label{random}$$
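The scheme (\[random\]) may be sketched in Python as follows (a minimal illustration of ours, with a classical RK4 step; the step size, number of steps and parameter values are our choices):

```python
import numpy as np

def unified_rhs(x, p):
    """Right-hand side of the unified system (eq. (unified))."""
    x1, x2, x3 = x
    return np.array([(25 * p + 10) * (x2 - x1),
                     (28 - 35 * p) * x1 + (29 * p - 1) * x2 - x1 * x3,
                     x1 * x2 - (8 + p) / 3 * x3])

def rk4_step(f, x, p, h):
    """One classical Runge-Kutta step of size h at fixed parameter p."""
    k1 = f(x, p)
    k2 = f(x + h / 2 * k1, p)
    k3 = f(x + h / 2 * k2, p)
    k4 = f(x + h * k3, p)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def random_as(ps, x0, h=0.001, steps=20000, seed=1):
    """Random AS loop: at every step draw a label uniformly, integrate
    one RK4 step with p_label, and count the occurrences m'_i that
    enter eq. (p random)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(ps), dtype=int)
    x = np.asarray(x0, dtype=float)
    traj = []
    for _ in range(steps):
        i = rng.integers(len(ps))
        counts[i] += 1
        x = rk4_step(unified_rhs, x, ps[i], h)
        traj.append(x)
    p_est = np.dot(counts, ps) / counts.sum()
    return np.array(traj), p_est

traj, p_est = random_as([0.6, 1.0], x0=[1.0, 1.0, 1.0])
assert abs(p_est - 0.8) < 0.01    # uniform labels give p* near (p1+p2)/2
assert np.all(np.isfinite(traj))  # the switched trajectory stays bounded
```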
![The synthesized Lü attractor obtained with scheme $[1p_{1},3p_{2}]$ for $p_{1}=0.2,$ $p_{2}=1$; a) $A_{p_{1}}$; b) $A_{p_{2}};~$c) $A^{\ast}$ and $A_{p^{\ast}}$ plotted superimposed; d) Superimposed histograms; e) Superimposed Poincaré sections.](figure4.png){width="100.00000%"}
![The synthesized Lü attractor obtained with scheme $[1p_{1},$ $1p_{2},1p_{3},1p_{4},4p_{5}]$ for $p_{1}=0.47,~p_{2}=0.585,~p_{3}=0.678,$ $p_{4}=0.905$ and $p_{5}=0.9405.$ a-e) Attractors $A_{p_{i}},$ $i=1,\ldots,5;$ f) $A^{\ast}$ and $A_{p^{\ast}}$ plotted superimposed; g) Superimposed histograms; h) Superimposed Poincaré sections. ](figure5.png){width="95.00000%"}
[^1]: Identity is understood here in a geometrical sense: two attractors are considered to be (almost) identical if their trajectories in phase space coincide. The word *almost* corresponds to the case of chaotic attractors, where identity may appear only after infinite time. In addition, Poincaré sections and the Hausdorff distance between trajectories are utilized to underline this identity.
---
abstract: 'The Jaynes-Cummings model is solved with raising and lowering (shift) operators by using the matrix-diagonalizing technique. Bell nonlocality is also found to be ubiquitously present in the excitation states of the model.'
author:
- Jie Zhou
- 'Hong-Yi Su'
- 'Fu-Lin Zhang'
- 'Hong-Biao Zhang'
- 'Jing-Ling Chen'
title: 'Solving the Jaynes-Cummings Model with Shift Operators Constructed by Means of the Matrix-Diagonalizing Technique'
---
Introduction
============
Many quantum mechanical models with various interactions and potentials have conventionally been addressed by solving their wave equations in a certain position or momentum coordinate, with account of variable separation, boundary conditions, single-valuedness, *etc*. Such a way of solving, while highly worthwhile in obtaining explicit energy spectra and wavefunctions, can sometimes obscure the underlying symmetries of the quantum system under consideration. In comparison, operator methods [@shift] — particularly ones involving Lie algebras [@lie] — not only simplify the problem solving in practice, but also provide more insight into the solutions of other related models that share similar symmetries, whether stationary or dynamical, in most cases.
Among the many algebraic methods, the one proposed in [@ge2000] stands out by dealing with a nonlinear deformation algebra [@dq93], which is generated from shift operators obtained by solving $$[H,\mathcal{X}]=\mathcal{X}\mathcal{G},\label{method}$$ where $H$ denotes the Hamiltonian, $\mathcal{X}$ is a *closed operator set*, and $\mathcal{G}$ is a matrix to be diagonalized (cf. Eq. (\[matrix-tech\]) below, which is equivalent to (\[method\]) after subtracting a term $H$ from $\mathcal{G}$). The linear $su(2)$ or $su(1,1)$ Lie algebra can then be written out with these shift operators, revealing a dynamical symmetry of the system [@tolinear].
Using this method in the present paper, we will solve the Jaynes-Cummings model (JCM) proposed by Jaynes and Cummings in 1963 [@JC]. The model describes the system of a two-level atom interacting with a quantized mode of an optical cavity, with or without the presence of light. Its applications range from atomic physics, quantum optics [@V.Vedral], and solid-state quantum information circuits [@Irish], both experimentally and theoretically. The Hamiltonian reads $$\begin{aligned}
\label{1}
H=\omega( a^\dag a+\frac{1}{2})+g(\sigma^+ a+\sigma^- a^\dag)+\Delta \sigma_z,
\end{aligned}$$ where $\sigma_{x,y,z}$ are Pauli matrices, $2\Delta$ is the level splitting of the two-level system, $a$ ($a^\dagger$) is the destruction (creation) operator of a single bosonic mode with frequency $\omega$, $g$ is the coupling coefficient, and $\sigma^\pm=\frac{1}{2}(\sigma_x \pm i\sigma_y)$. Here, the conservation of the quantity $C=a^\dagger a+\frac{1}{2}\sigma_z$, which commutes with $H$, means that the state space can be broken down into infinitely many two-dimensional subspaces, each labeled by $C=0,1,2,\cdots$. The two eigenstates in each two-dimensional subspace can be labeled by $+$ and $-$ [@Braak].
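The conservation of $C$ is easy to confirm numerically. The following Python sketch (ours; the truncation level and parameter values are illustrative choices) builds $H$ of Eq. (\[1\]) on a truncated Fock space and checks $[H,C]=0$:

```python
import numpy as np

def jcm(omega=1.0, g=0.2, Delta=0.4, nmax=12):
    """JCM Hamiltonian (1) on a Fock space truncated at nmax photons,
    tensored with the two-level atom (field factor first in kron)."""
    d = nmax + 1
    a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
    ad = a.T                                     # creation operator
    If, I2 = np.eye(d), np.eye(2)
    sz = np.diag([1.0, -1.0])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma^+ = (sx + i sy)/2
    sm = sp.T                                    # sigma^-
    H = (omega * np.kron(ad @ a + If / 2, I2)
         + g * (np.kron(a, sp) + np.kron(ad, sm))
         + Delta * np.kron(If, sz))
    C = np.kron(ad @ a, I2) + 0.5 * np.kron(If, sz)
    return H, C

H, C = jcm()
assert np.allclose(H @ C, C @ H)   # C = a†a + σz/2 commutes with H
```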
The paper is organized as follows. In Sec. II, we construct the raising and lowering operators for the Hamiltonian (\[1\]). In Sec. III, we compute the energy spectrum and the wave functions of the physical system. In Sec. IV, we construct the algebraic structure based on the raising and lowering operators. In Sec. V, we propose a test of Bell’s inequality with the excitation states of the JCM. Discussion is made in the last section.
Raising and lowering operators of the Hamiltonian
=================================================
For any Hamiltonian operator $H$, if there are operators ${\hat {\cal
L}}^{\pm}$ satisfying the following commutation relation $$\label{sh} [H,{\hat {\cal L}}^{\pm}]={\hat {\cal
L}}^{\pm}f^{\pm}(H),$$ then ${\hat {\cal L}}^+$ and ${\hat {\cal L}}^-$ are called the raising and lowering operators of operator $H$, respectively [@HuChen; @Chen; @Chen1; @WL; @wl] . For example, let ${\hat {\cal L}}^-=a$, ${\hat {\cal L}}^+=a^\dag$, and $f^{\pm}(H)=\pm \hbar \omega$, Eq. (\[sh\]) reduces to the usual case of the quantum linear harmonic oscillator, for which $[H, a]=-a
\hbar\omega$, $[H, a^\dag]=a^\dag \hbar\omega$. We refer the readers who are interested in the general definition of raising and lowering operators to Refs. [@shift] and [@Chen]. Note that the explicit form of the raising and lowering operators ${\hat {\cal L}}^{\pm}$ for a specific Hamiltonian system need not be mutually adjoint [@shift].
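For the harmonic-oscillator example, the relation (\[sh\]) can be checked directly in a truncated Fock basis (a minimal numerical sketch of ours, with $\hbar=1$ and an arbitrary truncation):

```python
import numpy as np

# Check (sh) for the oscillator: with L^- = a, L^+ = a^dag and
# f^{±}(H) = ±ω (ħ = 1), one has [H, a] = -ω a and [H, a†] = +ω a†.
d, omega = 15, 1.0
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
ad = a.T                                     # creation operator
H = omega * (ad @ a + np.eye(d) / 2)         # H = ω(n + 1/2)

assert np.allclose(H @ a - a @ H, -omega * a)    # [H, a]  = -ω a
assert np.allclose(H @ ad - ad @ H, omega * ad)  # [H, a†] = +ω a†
```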
With the canonical commutation relations $$\label{3}
\begin{split}
&\left[a,a^\dagger\right]=1,\\
&\left[\sigma^+,\sigma^-\right]=\sigma_z,~~\left[\sigma^-,\sigma^+\right]=-\sigma_z,\\
&\left[\sigma^+,\sigma_z\right]=-2\sigma^+,~~\left[\sigma^-,\sigma_z\right]=2\sigma^-,
\end{split}$$ we have $$\begin{aligned}
&[H,\sigma_z]=-2g\sigma^+ a+2g\sigma^- a^\dagger,\label{4}\\
&[H,a^\dagger a]=g\sigma^+ a-g\sigma^- a^\dagger,\label{5}\\
&[H,\sigma^+ a] = -\frac{g}{\omega}\sigma_z H+(\frac{g^2}{\omega}-\delta)\sigma^+ a-\frac{g^2}{\omega}\sigma^- a^\dagger-\frac{g\delta}{2\omega},\label{6}\\
&[H,\sigma^- a^\dagger] = \frac{g}{\omega}\sigma_z H+(\delta+\frac{g^2}{\omega})\sigma^- a^\dagger-\frac{g^2}{\omega}\sigma^+a+\frac{g\delta}{2\omega},\label{7}
\end{aligned}$$ with $\delta=\omega-2\Delta$. From Eqs. (\[4\]) and (\[5\]) we see that $$\begin{aligned}
\left[ {H,{a^\dag }a + \frac{1}{2}{\sigma _z}} \right] = 0,\end{aligned}$$ so the operator $$C = {a^\dag }a + \frac{1}{2}{\sigma _z}$$ is a conserved quantity.
Eqs. (\[4\]), (\[6\]) and (\[7\]) can then be rewritten in the following form $$\begin{aligned}
& HX=XG,\label{matrix-tech}\\
&X=(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)\nonumber,\end{aligned}$$ where $X$ is the closed operator set [@ge2000], and $G$ is a $4\times4$ matrix $$\begin{aligned}
G=\left(\begin{array}{cccc}H-\delta+\frac{g^2}{\omega}&-\frac{g^2}{\omega}&-2g&0\\-\frac{g^2}{\omega}&H+\delta+\frac{g^2}{\omega}&2g&0\\-\frac{g}{\omega}H&\frac{g}{\omega}H&H&0\\-\frac{g\delta}{2\omega}&\frac{g\delta}{2\omega}&0&H
\end{array}\right).\end{aligned}$$ Solving the equation $$\begin{aligned}
\label{12}
\det(G-\lambda I)=0,
\end{aligned}$$ we obtain four eigenvalues of matrix $G$: $$\begin{aligned}
\label{13}
\lambda_1&=\lambda_2=H,\nonumber\\
\lambda_3&=\frac{g^2+H\omega-T(H)}{\omega},\nonumber\\
\lambda_4&=\frac{g^2+H\omega+T(H)}{\omega}.
\end{aligned}$$ where $T(H)=\sqrt{g^4+4g^2 H\omega+\omega^2 \delta^2}$.
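The eigenvalues (\[13\]) can be verified numerically by substituting a scalar $E$ for $H$ in $G$ (the parameter values below are arbitrary illustrative choices of ours):

```python
import numpy as np

# Substitute a scalar E for H in G and compare its spectrum with (13).
omega, g, Delta, E = 1.3, 0.7, 0.4, 2.1
delta = omega - 2 * Delta
G = np.array([
    [E - delta + g**2 / omega, -g**2 / omega,            -2 * g, 0.0],
    [-g**2 / omega,            E + delta + g**2 / omega,  2 * g, 0.0],
    [-g * E / omega,           g * E / omega,             E,     0.0],
    [-g * delta / (2 * omega), g * delta / (2 * omega),   0.0,   E],
])
T = np.sqrt(g**4 + 4 * g**2 * E * omega + omega**2 * delta**2)
expected = sorted([E, E, (g**2 + E * omega - T) / omega,
                   (g**2 + E * omega + T) / omega])
assert np.allclose(np.sort(np.linalg.eigvals(G).real), expected, atol=1e-6)
```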
Now we can write $G$ as $$\begin{aligned}
\label{14}
G=R\Lambda R^{-1},
\end{aligned}$$ and $$\begin{aligned}
\Lambda&=\left(\begin{array}{cccc}\lambda_1&0&0&0\\0&\lambda_2&0&0\\0&0&\lambda_3&0\\0&0&0&\lambda_4
\end{array}\right),\label{15}\\
R&=\left(\begin{array}{cccc}0&2g&g^2-\omega \delta-T(H)&g^2-\omega \delta+T(H)\\0&2g&-g^2-\omega \delta+T(H)&-g^2-\omega \delta-T(H)\\0&-\delta&-2gH&-2gH\\1&0&-g\delta&-g\delta
\end{array}\right).
\end{aligned}$$
We multiply $R$ by a diagonal matrix $M$ to construct a more general diagonalizing matrix: $$\begin{aligned}
\label{17}
S=RM,~~
M=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \beta & 0 \\
0 & 0 & 0 & \alpha (H) \\
\end{array}
\right),
\end{aligned}$$ where $\beta$ is a constant and $\alpha(H)$ is a function of $H$ and $\beta$, to be determined later. Then we have $$\begin{aligned}
\label{19}
S=\left(
\begin{array}{cccc}
0 & 2 g & \beta \xi(H) &\eta(H)\alpha (H) \\
0 & 2 g & \beta \tau(H) & \kappa(H) \alpha (H) \\
0 & -\delta & \beta \gamma (H) & \gamma (H)\alpha (H) \\
1 & 0 & -g \delta\beta & -g \delta \alpha (H) \\
\end{array}
\right)\end{aligned}$$ where $\gamma (H)=-2 g H, \xi(H)=g^2-\omega \delta -T(H), \tau(H)=-g^2-\omega \delta +T(H), \kappa(H)=-g^2-\omega \delta -T(H), \eta(H)=g^2-\omega \delta +T(H)$. We can find that $$\begin{aligned}
\label{21}
H(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)S=(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)S\Lambda.
\end{aligned}$$ Then, from the technique in [@ge2000] we obtain $$\begin{aligned}
\label{22}
(1,D,b,b^+)=(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)S,\end{aligned}$$ such that $$\begin{aligned}
\label{23}
b^+&=\sigma^+ a\left[g^2+T(H)-\omega \delta \right]\alpha (H)+\sigma^- a^\dagger\nonumber\\
&~~~~~~~~~~\times\left[-g^2-T(H)-\omega \delta \right]\alpha (H)\nonumber\\
&~~~~~~~~~~+\sigma_z\gamma (H)\alpha (H) -g\delta \alpha(H),\\
b&=\sigma^+ a\left[g^2-T(H)-\omega \delta \right]\beta+\sigma^- a^\dagger\nonumber \\
&~~~~~~~~~~\times\left[-g^2+T(H)-\omega \delta \right]\beta\nonumber\\
&~~~~~~~~~~+\sigma_z \beta \gamma (H)-g\delta \beta.
\end{aligned}$$ Here $D$ is equivalent to $C$. From Eqs. (\[21\]) and (\[22\]), we have $H(1,D,b,b^+)=(1,D,b,b^+)\Lambda$, and $$\label{MorseHb}
[H,b]=b(\lambda_3-H), \;\; [H,b^+]=b^+ (\lambda_4-H),$$ which recover the definitions of raising and lowering operators. So the operators $b^+$ and $b$ are the raising and lowering operators of the Hamiltonian operator $H$, respectively.
In what follows we determine $\alpha(H)$ so that the raising and lowering operators are mutually adjoint, $$\label{mj}
(b^+)^\dag=b.$$ Supposing that $F(H)$ is a real function of $H$, we have from Eq. (\[matrix-tech\]) the following operator equation $$\begin{aligned}
\label{27}
F(H)(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)=(\sigma^+ a,\sigma^- a^\dagger,\sigma_z,1)R F(\Lambda) R^{-1},\end{aligned}$$ and, from Eq. (\[mj\]) and Eq. (\[27\]), $$\begin{aligned}
\label{28}
\alpha(H)=\beta\left(1+\frac{2 g^2}{T(H)}\right).\end{aligned}$$ This is the relation between $\alpha(H)$ and $\beta$.
Determination of energy spectrum and wave functions
===================================================
We now use the raising and lowering operators to determine the energy spectrum and wave functions of the JCM.
1. Ground state: The ground state $|\psi_0\rangle$ and the zero-point energy $E_0$ must satisfy $$\begin{aligned}
\label{29}
b|\psi_0\rangle=0,~~~
H|\psi_0\rangle=E_0|\psi_0\rangle.
\end{aligned}$$ Suppose that, in the computational basis, $$\begin{aligned}
\label{31}
|\psi_0\rangle= \left(
\begin{array}{c}
\sum\limits_{n=0}^\infty e_n|n\rangle\\
\sum\limits_{n=0}^\infty f_n|n\rangle \\
\end{array}
\right).\end{aligned}$$ From Eqs. (\[29\]) and (\[31\]), we introduce for convenience the notations $$\begin{aligned}
\label{32}
\alpha_1(E_0)&=\gamma (E_0)-g\delta,\nonumber\\
\alpha_2(E_0)&=g^2-\omega \delta -T(E_0),\nonumber\\
\beta_1(E_0)&=-g^2-\omega \delta +T(E_0),\nonumber\\
\beta_2(E_0)&=-\gamma (E_0)-g\delta,\end{aligned}$$ and so $$\begin{aligned}
\alpha_1\sum\limits_{n=0}^\infty e_n|n\rangle+\beta_1\sum\limits_{n=0}^\infty \sqrt{n+1}f_{n+1}|n\rangle=0,\label{33}\\
\alpha_2\sum\limits_{n=0}^\infty \sqrt{n+1}e_n|n+1\rangle+\beta_2\sum\limits_{n=0}^\infty f_n|n\rangle=0.\label{34}\end{aligned}$$ Analyzing Eqs. (\[33\]) and (\[34\]), we find that they are satisfied if $e_n=0$ for all $n$, $\beta_2=0$, $\beta_1=2g^2$, and $f_n=\left\{\begin{array}{l}
1,\quad n=0, \\
0,\quad n\neq0.
\end{array}\right.$ We obtain the ground state $$\label{44}
\left\{\begin{array}{l}
|\psi_0\rangle= \left(
\begin{array}{c}
0\\
|0\rangle \\
\end{array}
\right), \\
E_0=\frac{\delta}{2}+\omega.
\end{array}\right.$$
2. Excitation states: We have from Eqs. (\[33\]) and (\[34\]) that $$\begin{aligned}
\label{46}
\begin{array}{llll}
b|\psi_n^-\rangle=0,~~~
H|\psi_n^-\rangle=E_n^-|\psi_n^-\rangle,
\end{array}\end{aligned}$$ where $n = 1,2,3,4 \cdots$, and so $$\begin{aligned}
\label{47}
\left\{\begin{array}{l}
|\psi_n^-\rangle=\sin\theta_n |g,n+1\rangle-\cos\theta_n|e,n\rangle\\\\
E_n^-=\omega(n+1)-\sqrt{g^2(n+1)+\frac{\delta^2}{4}}
\end{array}\right.\end{aligned}$$ where $\tan\theta_n=\frac{-\delta+\sqrt{\delta^2+4g^2(n+1)}}{2g\sqrt{n+1}}$, $n = 1,2,3,\cdots$.
Similarly, from $$\begin{aligned}
\label{48}
\begin{array}{lllll}
b^+|\psi_n^-\rangle=\chi|\psi _n^ +\rangle,~~~
H|\psi _n^ +\rangle=E_n^+|\psi _n^ +\rangle,
\end{array}\end{aligned}$$ where $\chi$ is a normalization constant, we have $$\begin{aligned}
\label{49}
%\left\{
|\psi _n^ +\rangle=\cos\theta_n |g,n+1\rangle+\sin\theta_n|e,n\rangle,\nonumber\\
E _n^ +=\omega(n+1)+\sqrt{g^2(n+1)+\frac{\delta^2}{4}}.
%\right.\end{aligned}$$
To summarize, the energy spectra and the wavefunctions are $$\begin{aligned}
\label{50}
\begin{array}{l}
|\psi_0\rangle= \left(
\begin{array}{c}
0\\
|0\rangle \\
\end{array}
\right),~~~
E_0=\frac{\delta}{2}+\omega,
\end{array}\end{aligned}$$ for the ground state, and $$\begin{aligned}
|\psi_n^+\rangle=\cos\theta_n |g,n+1\rangle+\sin\theta_n|e,n\rangle,\nonumber\\
|\psi_n^-\rangle=\sin\theta_n |g,n+1\rangle-\cos\theta_n|e,n\rangle,\nonumber\\
E_n^\pm=\omega(n+1)\pm\sqrt{g^2(n+1)+\frac{\delta^2}{4}},\end{aligned}$$ for the excitation states.
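As a numerical cross-check (not part of the derivation above), the spectrum and mixing angles can be verified by diagonalizing the $2\times 2$ invariant block spanned by $|g,n+1\rangle$ and $|e,n\rangle$. The entries of this block below are reconstructed from the stated eigenvalues and eigenstates, so they are an assumption of the sketch rather than a quotation of the paper's Hamiltonian:

```python
import numpy as np

# 2x2 invariant block in the basis (|g,n+1>, |e,n>); entries are an
# assumption, reconstructed to be consistent with E_n^± and θ_n.
omega, g, delta = 1.0, 0.3, 0.2   # hypothetical parameter values

def block(n):
    s = np.sqrt(n + 1)
    return np.array([[omega*(n + 1) + delta/2, g*s],
                     [g*s, omega*(n + 1) - delta/2]])

for n in range(1, 6):
    E_minus, E_plus = np.linalg.eigvalsh(block(n))        # ascending order
    gap = np.sqrt(g**2*(n + 1) + delta**2/4)
    assert np.isclose(E_minus, omega*(n + 1) - gap)       # E_n^-
    assert np.isclose(E_plus, omega*(n + 1) + gap)        # E_n^+
    # the E_n^- eigenvector should be (sin θ_n, -cos θ_n)
    theta = np.arctan((-delta + np.sqrt(delta**2 + 4*g**2*(n + 1)))
                      / (2*g*np.sqrt(n + 1)))
    v = np.linalg.eigh(block(n))[1][:, 0]
    v *= np.sign(v[0])                                    # fix overall sign
    assert np.allclose(v, [np.sin(theta), -np.cos(theta)])
```

The assertions pass for any positive choice of $\omega$, $g$, $\delta$, confirming the internal consistency of the dressed-state formulas.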
Algebraic structure of the JCM
==============================
We now define three generators $J_0$ and $J_\pm$ in terms of $H$, $b^+$ and $b$. First, note that Eq. (\[MorseHb\]) can be rewritten as the commutation relations $$\begin{aligned}
\label{commut}
[H,b]=-b f(H), \ \ \ [H,b^+]=f(H)b^+,
\end{aligned}$$ where $f(H)=-\frac{g^2-T(H)}{\omega}$. From Eq. (\[commut\]) we also have $$\begin{aligned}
\begin{array}{lllll}
Hb=b(H-f(H))=b\lambda_3=b\frac{g^2+H\omega-T(H)}{\omega},
\end{array}
\end{aligned}$$ or more generally, $$\begin{split}
F(H)b&=bF(\lambda_3)=bF(H-f(H))\\
&=bF(\frac{g^2+H\omega-T(H)}{\omega}).
\end{split}$$ Thus, $$\begin{aligned}
[\frac{1}{2g^2}T(H),b]=-b,~~~
[\frac{1}{2g^2}T(H),b^+]=b^+.\end{aligned}$$
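The two shift relations above rest on the operator identity $T(\lambda_3)=T(H)-2g^2$ (for $T(H)\geq 2g^2$): since $Hb=b\lambda_3$, one has $T(H)b=b\,T(\lambda_3)=b\,(T(H)-2g^2)$. A short symbolic sketch of this identity, treating $H$ as a scalar eigenvalue $h$:

```python
import sympy as sp

# Check T(λ3)^2 = (T(h) - 2g^2)^2, the identity behind [T(H)/(2g^2), b] = -b.
g, w, d, h = sp.symbols('g omega delta h', positive=True)
T = sp.sqrt(g**4 + 4*g**2*h*w + w**2*d**2)
lam3 = (g**2 + h*w - T)/w                      # eigenvalue branch λ3
T_of_lam3_sq = g**4 + 4*g**2*lam3*w + w**2*d**2   # = T(λ3)^2
assert sp.expand(T_of_lam3_sq - (T - 2*g**2)**2) == 0
```

Taking the positive square root on the branch $T(H)\geq 2g^2$ then gives $[T(H)/(2g^2),b]=-b$, and the adjoint relation for $b^+$.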
Then we have $$\begin{aligned}
\label{54}
\left[ J_0 , b \right]=-b,~~
\left[ J_0, b^{\dag} \right] = b^{\dag},~~J_0=\frac{1}{2g^2}T(H)+\nu,\end{aligned}$$ with $\nu$ an arbitrary real constant. Furthermore,
$$\begin{aligned}
\label{55}
J_{-}&=b\xi(J_0), ~~~ J_{+}=\xi(J_0)b^{\dag},~~~
\left[ J_{+} , J_{-} \right] = b^+ b \;
\xi^2(J_0)-bb^+\xi^2(J_0+1),\end{aligned}$$
and $$\begin{aligned}
b^\dagger b&=\phi(H)\nonumber\\
&=\left( {H - \frac{\omega }{2} - \omega C} \right)\frac{{2T(H)\left( {{g^2} + 2H\omega - T(H)} \right)}}{\omega } + \frac{1}{\omega }\left( {2{g^4}H + {\delta ^2}\omega \left( {2H\omega - T(H)} \right) - 2{g^2}H\left( { - 4H\omega + T(H)} \right)} \right)\nonumber\\
&=\phi(J_0)\nonumber\\
&=\frac{{\left( {{J_0} - \nu } \right)\left( {{g^4}{{\left( {1 - 2{J_0} + 2\nu } \right)}^2} - {\delta ^2}{\omega ^2}} \right)\left( {{g^4}{{\left( {1 + 2{J_0} - 2\nu } \right)}^2} - \left( {\left( {2 + 4C} \right){g^2} + {\delta ^2}} \right){\omega ^2}} \right)}}{{2{g^2}{\omega ^2}}},\\
bb^\dagger&=\psi(H)\nonumber\\
&=- \left( {H - \frac{\omega }{2} - \omega C} \right)\frac{{2{{\left( {2{g^2} + T(H)} \right)}^2}\left( {{g^2} + 2H\omega + T(H)} \right)}}{{\omega T(H)}}\nonumber\\
&~~~~~~+ \frac{{\left( {2{g^2} + T(H)} \right)\left[ {6{g^4}H + {\delta ^2}\omega \left( {2H\omega + T(H)} \right) + 2{g^2}\left( {4{H^2}\omega + {\delta ^2}\omega + 3HT(H)} \right)} \right]}}{{\omega T(H)}}\nonumber\\
&=\psi( J_0 )\nonumber\\
&= - \frac{{{{\left( {1 + {J_0} - \nu } \right)}^2}\left( {{g^4}{{\left( {1 + 2{J_0} - 2\nu } \right)}^2} - {\delta ^2}{\omega ^2}} \right)\left( {{g^4}{{\left( { - 1 + 2{J_0} - 2\nu } \right)}^2} - \left( {\left( {2 + 4C} \right){g^2} + {\delta ^2}} \right){\omega ^2}} \right)}}{{2{g^2}\left( {{J_0} - \nu } \right){\omega ^2}}},\\
{\xi ^2}\left( {{J_0}} \right) &= \frac{{2{g^2}{\omega ^2}}}{{\left( {{J_0} - \nu } \right)\left( {{g^4}{{\left( { - 1 + 2{J_0} - 2\nu } \right)}^2} - {\delta ^2}{\omega ^2}} \right)}}\nonumber.
\end{aligned}$$
So $$\begin{aligned}
\label{Jcomm}
\left[ {{J_ + },{J_ - }} \right] = {g^4}{\left( {1 + 2{J_0} - 2\nu } \right)^2} - \left( {\left( {2 + 4C} \right){g^2} + {\delta ^2}} \right){\omega ^2} + \frac{{\left( {1 + {J_0} - \nu } \right)\left( {{g^4}{{\left( {1 - 2{J_0} + 2\nu } \right)}^2} - \left( {\left( {2 + 4C} \right){g^2} + {\delta ^2}} \right){\omega ^2}} \right)}}{{\left( {{J_0} - \nu } \right)}}.
\end{aligned}$$
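The closure relation above can be checked symbolically from the closed forms of $b^\dagger b$, $bb^\dagger$ and $\xi^2(J_0)$ given earlier, treating all quantities as functions of a scalar eigenvalue of $J_0$ (a consistency check of the printed formulas, not a new derivation):

```python
import sympy as sp

# Verify that b†b·ξ²(J0) - bb†·ξ²(J0+1) reproduces the stated [J+, J-].
J, nu, g, w, d, C = sp.symbols('J_0 nu g omega delta C')
X = (2 + 4*C)*g**2 + d**2                       # shorthand for (2+4C)g² + δ²
bdb = (J - nu)*(g**4*(1 - 2*J + 2*nu)**2 - d**2*w**2) \
      * (g**4*(1 + 2*J - 2*nu)**2 - X*w**2)/(2*g**2*w**2)
bbd = -(1 + J - nu)**2*(g**4*(1 + 2*J - 2*nu)**2 - d**2*w**2) \
      * (g**4*(-1 + 2*J - 2*nu)**2 - X*w**2)/(2*g**2*(J - nu)*w**2)
xi2 = lambda x: 2*g**2*w**2/((x - nu)*(g**4*(-1 + 2*x - 2*nu)**2 - d**2*w**2))
lhs = sp.simplify(bdb*xi2(J) - bbd*xi2(J + 1))
rhs = g**4*(1 + 2*J - 2*nu)**2 - X*w**2 \
      + (1 + J - nu)*(g**4*(1 - 2*J + 2*nu)**2 - X*w**2)/(J - nu)
assert sp.simplify(lhs - rhs) == 0
```

The factor $g^4(1-2J_0+2\nu)^2-\delta^2\omega^2$ cancels against the denominator of $\xi^2(J_0)$, which is why the commutator closes in this compact form.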
Bell nonlocality
================
We introduce “pseudo-spin” operators [@ChenZB2002] $$\begin{aligned}
\left\{ \begin{array}{l}
{{\tilde \sigma }_x} = \sum\limits_{n = 0}^\infty {\left( {\left| n \right\rangle \left\langle {n + 1} \right| + \left| {n + 1} \right\rangle \left\langle n \right|} \right)}, \\
{{\tilde \sigma }_y} = i\sum\limits_{n = 0}^\infty {\left( {\left| {n + 1} \right\rangle \left\langle n \right| - \left| n \right\rangle \left\langle {n + 1} \right|} \right)}, \\
{{\tilde \sigma }_z} = \sum\limits_{n = 0}^\infty {\left( {\left| n \right\rangle \left\langle n \right| - \left| {n + 1} \right\rangle \left\langle {n + 1} \right|} \right)},
\end{array} \right.\end{aligned}$$ and we define the Bell operator [@CHSH69] as $$\begin{aligned}
\begin{array}{lllll}
{B_{CHSH}}&=& \left( {\hat {\vec \sigma} \cdot \vec a} \right) \otimes \left( {\tilde {\vec \sigma} \cdot \vec b} \right) + \left( {\hat {\vec \sigma} \cdot \vec a} \right) \otimes \left( {\tilde {\vec \sigma} \cdot \vec b'} \right) \\
&+& \left( {\hat {\vec \sigma} \cdot \vec a'} \right) \otimes \left( {\tilde {\vec \sigma} \cdot \vec b} \right) - \left( {\hat{ \vec \sigma} \cdot \vec a'} \right) \otimes \left( {\tilde {\vec \sigma} \cdot \vec b'} \right),
\end{array}\end{aligned}$$ where $\vec a,\vec a',\vec b,\vec b'$ are four unit vectors, and $\hat {\vec \sigma}=(\hat\sigma_x,\hat\sigma_y,\hat\sigma_z)$ denotes the usual Pauli matrices acting on the two-level atom (formally, the restriction of the pseudo-spin operators to $n=0$).
We can construct a test of Bell’s inequality with the excitation states, for example, the $| {\psi _n^ - }\rangle$ for a certain $n$: $$\begin{aligned}
\left| {\psi _n^ - } \right\rangle & =& \sin {\theta _n}\left| {g,n + 1} \right\rangle - \cos {\theta _n}\left| {e,n} \right\rangle\nonumber \\
&\equiv& \sin {\theta _n}\left| 1 \right\rangle \left| {n + 1} \right\rangle - \cos {\theta _n}\left| 0 \right\rangle \left| n \right\rangle\end{aligned}$$ with $|0\rangle=(1,0)^{\rm T},~|1\rangle=(0,1)^{\rm T}$. By taking $$\begin{split}
& \vec a=(0,0,-1),~~~ \vec a'=(1,0,0),\\
& \vec b=(-\sin\theta^*,0,\cos\theta^*),\\
& \vec b'=(\sin\theta^*,0,\cos\theta^*),\\
& \theta^*=\frac{\pi}{2}+\arctan\frac{1}{\sin2\theta_n},
\end{split}$$ it turns out that $\langle B_{CHSH}\rangle=2\sqrt{1+\sin^2(2\theta_n)}$, which violates the local-hidden-variable bound of 2 except for $\theta_n=0,\pi/2$. This shows the ubiquitous presence of Bell nonlocality in the JCM.
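This violation can be reproduced numerically. The sketch below restricts the pseudo-spin operators to the two-dimensional photon subspace spanned by $|n\rangle$ and $|n+1\rangle$, where they act as ordinary Pauli matrices (an assumption adequate for the state $|\psi_n^-\rangle$); the value of $\theta_n$ is a hypothetical choice:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot(v):  # v · (σx, σy, σz)
    return v[0]*sx + v[1]*sy + v[2]*sz

theta = 0.7                                   # any θ_n in (0, π/2)
# |ψ_n^-> = sinθ |1>|n+1> - cosθ |0>|n>, with |0>=(1,0)^T, |1>=(0,1)^T
# and the photon subspace encoded as a qubit: |n> → (1,0), |n+1> → (0,1).
psi = (np.sin(theta)*np.kron([0, 1], [0, 1])
       - np.cos(theta)*np.kron([1, 0], [1, 0]))

ts = np.pi/2 + np.arctan(1/np.sin(2*theta))   # the angle θ* of the text
a, ap = (0, 0, -1), (1, 0, 0)
b  = (-np.sin(ts), 0, np.cos(ts))
bp = ( np.sin(ts), 0, np.cos(ts))

B = (np.kron(dot(a), dot(b)) + np.kron(dot(a), dot(bp))
     + np.kron(dot(ap), dot(b)) - np.kron(dot(ap), dot(bp)))
val = np.real(psi.conj() @ B @ psi)
assert np.isclose(val, 2*np.sqrt(1 + np.sin(2*theta)**2))   # > 2
```

Any $\theta_n\in(0,\pi/2)$ gives a value strictly above the classical bound 2.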
Discussion
==========
In this paper, we have obtained the raising and lowering operators for the JCM by means of the matrix-diagonalization technique, and then worked out the energy spectra and wave functions in the computational basis. We have then revealed the dynamical symmetry of the model by expressing these shift operators in terms of generators of a Lie algebra. Finally, we have shown that Bell nonlocality exists in the excitation states of the JCM, further justifying the merits of solving the JCM with the method used in the present paper.
Acknowledgments
===============
J.L.C. is supported by the National Natural Science Foundation of China (Grant No. 11475089). F.L.Z. is supported by the National Natural Science Foundation of China (Grants No. 11675119 and No. 11575125).
[99]{} O. L. De Lange and R. E. Raab, *Operator Methods in Quantum Mechanics* (Clarendon Press, Oxford, 1991).
L. Infeld and T. E. Hull, Rev. Mod. Phys. **23**, 21 (1951); B. Mielnik, J. Math. Phys. **25**, 3387 (1984); A. Stahlhofen and K. Bleuler, Nuovo Cimento B **104**, 447 (1989); J. Hoppe, *Lectures on Integrable Systems* (Springer-Verlag, Berlin, 1992); J. I. Díaz, J. Negro, L. M. Nieto, and O. Rosas-Ortiz, J. Phys. A **32**, 8447 (1999).
M.L. Ge, L. C. Kwek, Y. Liu, C. H. Oh, and X. B. Wang, Phys. Rev. A **62**, 052110 (2000).
C. Delbecq and C. Quesne, J. Phys. A **26**, L127 (1993).
J. L. Chen, Y. Liu, and M. L. Ge, J. Phys. A **31**, 6473 (1998); C. Quesne, J. Phys. A **32**, 6705 (1999).
E. T. Jaynes and F. W. Cummings, Proc. IEEE **51**, 89 (1963). V. Vedral, Modern Foundations of Quantum Optics (Imperial College Press, London, 2006). E. K. Irish, Phys. Rev. Lett. **99**, 173601 (2007). D. Braak, Phys. Rev. Lett. **107**, 100401 (2011).
M. G. Hu and J. L. Chen, Int. J. Theor. Phys. **46**, 2119-2137 (2007). J. L. Chen, H. B. Zhang, X. H. Wang, H. Jing and X. G. Zhao, Int. J. Theor. Phys. **39**, 2043 (2000).
J. L. Chen, Y. Liu and M. L. Ge, J. Phys. A **31**, 6473 (1998).
X. H. Wang and Y. B. Liu, Int. J. Theor. Phys. **48**: 2748-2756 (2009). X. H. Wang and Y. B. Liu, Chin. Phys. Lett. **27**, No.2 020301 (2010).
Z. B. Chen, J. W. Pan, G. Hou, and Y. D. Zhang, Phys. Rev. Lett. **88**, 040406 (2002).
J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. **23**, 880 (1969).
---
abstract: 'In this paper, we provide upper and lower estimates for the minimal number of functions needed to represent a bounded variation function with an accuracy of $\varepsilon$ with respect to the ${\bf L}^1$-distance.'
author:
- |
Prerona Dutta and Khai T. Nguyen\
\
[North Carolina State University.]{}\
\
[Emails: [email protected], [email protected]]{}
title: '**Covering numbers for bounded variation functions**'
---
Introduction {#sec:1}
============
The ${\ve}$-entropy has been studied extensively in a variety of literature and disciplines. It plays a central role in various areas of information theory and statistics, including nonparametric function estimation, density estimation, empirical processes and machine learning (see e.g. [@LB; @DH; @DP]). This concept was first introduced by Kolmogorov and Tikhomirov in [@KT]:
\[def1\] Let $(X,d)$ be a metric space and $E$ a precompact subset of $X$. For $\varepsilon >0$, let $\mathcal{N}_{\varepsilon}(E|X)$ be the minimal number of sets in an $\varepsilon$-covering of $E$, i.e., a covering of $E$ by subsets of $X$ with diameter no greater than $2\varepsilon$. Then the $\varepsilon$-entropy of $E$ is defined as $$\mathcal{H}_{\varepsilon}(E~|~X)=\log_2\mathcal{N}_{\varepsilon}(E~|~X).$$
In other words, it is the minimum number of bits needed to represent a point in a given set $E$ in the space $X$ with an accuracy of $\varepsilon$ with respect to the metric $d$.
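As a toy illustration of the definition (our example, not taken from [@KT]): let $E=[0,1]$ inside $X=\R$ with the usual distance. Subsets of diameter $2\varepsilon$ are intervals of length $2\varepsilon$, so the minimal covering uses $\lceil 1/(2\varepsilon)\rceil$ of them:

```python
import math

# ε-covering number and ε-entropy of E = [0, length] inside X = R:
# cover by intervals of length 2ε (diameter 2ε each).
def covering_number(length, eps):
    return math.ceil(length / (2 * eps))

def entropy(length, eps):
    return math.log2(covering_number(length, eps))

assert covering_number(1.0, 0.01) == 50          # 50 intervals of length 0.02
assert abs(entropy(1.0, 0.01) - math.log2(50)) < 1e-12
```

Halving $\varepsilon$ roughly doubles the covering number, so the entropy of an interval grows like $\log_2(1/\varepsilon)$; the function classes studied below behave very differently, with entropy of order $1/\ve$ or $1/\ve^n$.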
A classical topic in the field of probability is to investigate the metric covering numbers for general classes of real-valued functions $\mathcal{F}$ defined on $X$ under the ${\bf L}^1(dP)$-distance, where $P$ is a probability distribution on $X$. Upper bounds in terms of the Vapnik-Chervonenkis dimension and the pseudo-dimension of the function class were established in [@RMD], and then improved in [@DP; @DH; @DH1]. Several lower bounds were also obtained in [@KMJ]. Later on, upper and lower estimates of the ${\ve}$-entropy of $\mathcal{F}$ in ${\bf L}^1(dP)$ in terms of a scale-sensitive dimension of the function class were provided in [@LBW; @KMJ], and applied to machine learning.\
\
Thanks to Helly's theorem, a set of uniformly bounded variation functions is compact in the ${\bf L}^1$-space. A natural question is to quantify the compactness of such sets by using the $\ve$-entropy. In [@KMJ], the authors considered this problem in the scalar case and proved that the $\ve$-entropy of a class of real-valued functions of bounded variation in ${\bf L}^1$ is of the order of $\ds {1\over \ve}$. Some related work has been done in the context of density estimation, where attention has been given to the problem of finding covering numbers for the classes of densities that are unimodal or nondecreasing in [@LB; @PG]. In the multi-dimensional case, the covering numbers of convex and uniformly bounded functions were studied in [@GS]. It was shown that the $\ve$-entropy of a class of convex functions with uniform bound in ${\bf L}^1$ is of the order of $\ds{1\over \ve^{n\over 2}}$, where $n$ is the dimension of the state variable. The result was previously studied for scalar state variables in [@DD], and for convex functions that are uniformly bounded and uniformly Lipschitz with a known Lipschitz constant in [@EB]. These results have direct implications in the study of rates of convergence of empirical minimization procedures (see e.g. [@LB1; @SVG]), as well as optimal convergence rates in the numerous convexity-constrained function estimation problems (see e.g. [@LB0; @LLC; @YB]).
Recently, the ${\ve}$-entropy has been used to measure the set of solutions of certain nonlinear partial differential equations. In this setting, it could provide a measure of the order of “resolution” and of the “complexity” of a numerical scheme, as suggested in [@Lax78; @Lax02]. Roughly speaking, the order of magnitude of the $\varepsilon$-entropy should indicate the minimum number of operations that one should perform in order to obtain an approximate solution with a precision of order $\varepsilon$ with respect to the considered topology. A starting point of this research topic is a result which was obtained in [@DLG] for a scalar conservation law in one-dimensional space $$\label{CL}
u_t(t,x)+f(u(t,x))_x ~=~ 0,$$ with uniformly convex flux $f$. They showed that the minimum number of functions needed to represent an entropy solution $u$ of (\[CL\]) at any time $t>0$ with accuracy $\varepsilon$ with respect to the $\bf{L}^1$-distance is at most of the order of $\ds {1\over\ve }$. In [@AON1] a lower bound on such an $\varepsilon$-entropy was established, which is of the same order as the upper bound in [@DLG]. The same authors also obtained analogous estimates for systems of hyperbolic conservation laws in [@AON2; @AON3]. In the scalar case, it is well known that the integral form of an entropy solution of (\[CL\]) is a viscosity solution of the related Hamilton-Jacobi equation. Therefore, it is natural to study the ${\ve}$-entropy for the set of viscosity solutions to the Hamilton-Jacobi equation $$\label{HJ}
u_t(t,x)+H(\nabla_x u(t,x)) ~=~ 0,$$ with respect to the $\bf{W}^{1,1}$-distance in multi-dimensional cases. Most recently, it has been proved in [@ACN] that the minimal number of functions needed to represent a viscosity solution of (\[HJ\]) with accuracy $\varepsilon$ with respect to the $\bf{W}^{1,1}$-distance is of the order of $\ds {1\over\varepsilon^n}$, provided that $H$ is uniformly convex. Here, $n$ is the dimension of the state variable.
The same result, for a Hamiltonian depending also on the state variable $x$, has been obtained by the same authors in [@ACN1].\
\
Interestingly, the authors in [@ACN] also established an upper bound on the ${\ve}$-entropy for the class of monotone functions in the $\bf{L}^{1}$-space. As a consequence of Poincaré-type inequalities, they could obtain the ${\ve}$-entropy for a class of semi-convex/concave functions in the Sobolev space $\bf{W}^{1,1}$. This result extends the ones in [@GS; @DD; @EB] to a stronger norm, the $\bf{W}^{1,1}$-norm instead of the ${\bf L}^1$-norm. Motivated by the results in [@KMJ; @GS; @DD; @EB; @ACN] and a possible application to Hamilton-Jacobi equations with non-strictly convex Hamiltonians, we will provide in the present paper upper and lower estimates of the ${\ve}$-entropy for a class of uniformly bounded total variation functions in the $\bf{L}^{1}$-space in multi-dimensional cases. In particular, our result shows that the minimal number of functions needed to represent a function with bounded variation with an error $\ve$ with respect to the ${\bf L}^1$-distance is of the order of ${1\over \ve^n}$. The precise statement is given in Theorem \[main\] in Section 3.
Notations and preliminaries
===========================
Let $n\geqslant 1$ be an integer and $D$ be a measurable subset of $\R^n$. Throughout the paper we shall denote by:
- $|\cdot|$ the Euclidean norm in $\R^n$;
- $\langle\cdot,\cdot\rangle$ the Euclidean inner product in $\R^n$;
- $\mathrm{int}(D)$ the interior of $D$;
- $\partial D$ the boundary of $D$;
- $\mathrm{Vol}(D)$ the Lebesgue measure of a measurable set $D\subset \R^n$;
- $\mathbf{L}^{1}(D,\R)$ the Lebesgue space of all (equivalence classes of) summable real functions on $D$, equipped with the usual norm $\|\cdot\|_{\mathbf{L}^{1}(D)}$;
- $\mathbf{L}^{\infty}(D,\R)$ the space of all essentially bounded real functions on $D$, and by $\|u\|_{\mathbf{L}^{\infty}(D)}$ the essential supremum of a function $u\in \mathbf{L}^{\infty}(D,\R)$;
- $\mathcal{C}^1_{c}(\Omega,\R^n)$, with $\Omega\subset\R^n$ an open set, the set of all continuous differentiable functions from $\Omega$ to $\R^n$ with a compact support in $\Omega$;
- $\chi_{D}(x)=\left\{\bega{rl}
&1 \qquad~~\mathrm{if}\qquad x\in D\,,
\\[4mm]
&0\quad~\mathrm{if}\qquad x\in\R^n\backslash D\,
\enda\right.$ the characteristic function of a subset $D$ of $\R^n$.
- $\mathrm{Card}(S)$ the number of elements of any finite set $S$;
- $\lfloor x\rfloor\doteq\max\{z\in\mathbb{Z}~|~z\leq x\}$ the integer part of $x$.
We now introduce the concept of functions of bounded variation.
The function $u\in {\bf L}^1(\Omega,\R)$ is [*a function of bounded variation on $\Omega$ (denoted by $BV(\Omega,\R)$)*]{} if the distributional derivative of $u$ is representable by a finite Radon measure in $\Omega$, i.e., if $$\int_{\Omega}~u\cdot {\partial \varphi\over\partial x_i}~dx~=~-\int_{\Omega}\varphi~ dD_iu\qquad\qquad\forall \varphi\in\mathcal{C}_c^1(\Omega,\R),~ i\in\{1,2,...,n\}\,,$$ for some vector-valued Radon measure $Du=(D_1u,D_2u,...,D_nu)$. We denote by $|Du|$ the total variation of the vector measure $Du$, i.e., $$|Du|(\Omega)~=~\sup\left\{\int_{\Omega}u(x)\,\mathrm{div}(\phi)(x)~dx~\Big|~\phi\in\mathcal{C}_c^1(\Omega,\R^n),~ \|\phi\|_{{\bf L}^{\infty}(\Omega)}\leq 1\right\}\,.$$
Let us recall a Poincaré-type inequality for bounded total variation functions on convex domains, which will be used in the paper. This result is based on [@AD Theorem 3.2] and on [@ANP Proposition 3.2.1, Theorem 3.44].
(Poincaré inequality) Let $\Omega\subset \R^n$ be an open, bounded, convex set with Lipschitz boundary. For any $u\in BV(\Omega,\R)$, it holds $$\int_{\Omega} \big|u(x)-u_{\Omega}\big|~dx~\leq~{\mathrm{diam}(\Omega)\over 2}\cdot |Du|(\Omega)$$ where $$u_{\Omega}~=~{1\over \mathrm{Vol}(\Omega)}\cdot \int_{\Omega}u(x)~dx$$ is the mean value of $u$ over $\Omega$.
To complete this section, we will state a result on the $\ve$-entropy for a class of bounded total variation functions in the scalar case, using a method similar to the one provided in [@BKP]. Given $L,V, M>0$, denote by $$\mathcal{B}_{[L,M,V]}~=~\left\{f\in {\bf L}^1([0,L],[0,M])~\Big|~ |Df|((0,L))~\leq~ V\right\}\,.$$
\[1-D-BV\] For all $0<\ve<{L(M+V)\over 6}$, it holds $$\label{BV-Es1}
\mathcal{H}_{\ve}\left(\mathcal{B}_{[L,M,V]}~\Big|~{\bf L}^1([0,L])\right)~\leq~8\cdot \left\lfloor{L(M+V)\over \ve}\right\rfloor\,.$$
[**Proof.**]{} For any $f\in \mathcal{B}_{[L,M,V]}$, let $V_f(x)$ be the total variation of $f$ over $[0,x]$. We decompose $$f(x)~=~f^+(x)-f^-(x)\qquad\forall x\in [0,L]\,,$$ where $f^{-}={V_f-f\over 2} +{M\over 2}$ is a nondecreasing function from $[0,L]$ to $\left[0,{V+M\over 2}\right]$ and $f^{+}={V_f+f\over 2} +{M\over 2}$ is a nondecreasing function from $[0,L]$ to $\left[{M\over 2},{V+2M\over 2}\right]$. Introducing the set $$\mathcal{I}~:=~\left\{g: [0,L]\to \left[0,{V+M\over 2}\right]~\Big|~g~\mathrm{is~nondecreasing}\right\}\,,$$ we then have $$\label{inc1}
\mathcal{B}_{[L,M,V]}~\subseteq~\left(\mathcal{I}+{M\over 2}\right)-\mathcal{I}~=~\left\{g-h~\Big|~g\in \mathcal{I}+{M\over 2},~h\in \mathcal{I}\right\}\,.$$ For any $\ve>0$, it holds $$\mathcal{N}_{\ve}\left(\mathcal{B}_{[L,M,V]}~|~{\bf L}^1([0,L])\right)~\leq~\left[\mathcal{N}_{{\ve\over 2}}(\mathcal{I}~|~{\bf L}^1([0,L]))\right]^2\,.$$ Indeed, from Definition \[def1\], there exists a set $\mathcal{G}_{{\ve\over 2}}$ of $\mathcal{N}_{{\ve\over 2}}(\mathcal{I}~|~{\bf L}^1([0,L]))$ subsets of ${\bf L}^{1}([0,L])$ such that $$\ds\mathcal{I}~\subseteq~\bigcup_{\mathcal{E}\in\mathcal{G}_{{\ve\over 2}}}~\mathcal{E}\quad\mathrm{and}\quad\mathrm{diam}(\mathcal{E})~=~\sup_{h_1,h_2\in\mathcal{E}}\|h_1-h_2\|_{{\bf L}^1([0,L])}~\leq~\ve\,.$$ Thus, (\[inc1\]) implies $$\mathcal{B}_{[L,M,V]}~\subseteq~\bigcup_{(\mathcal{E}_1,\mathcal{E}_2)\in\mathcal{G}_{{\ve\over 2}}\times \mathcal{G}_{{\ve\over 2}}} \left[\left(\mathcal{E}_1+{M\over 2}\right)-\mathcal{E}_2\right]\,.$$ For any two functions $$f_i~=~g_i-h_i~\in \left(\mathcal{E}_1+{M\over 2}\right)-\mathcal{E}_2\qquad\mathrm{for}~i=1,2\,,$$ we have $$\bega{rl}
\|f_1-f_2\|_{{\bf L}^1([0,L])}&\leq~\|g_1-g_2\|_{{\bf L}^1([0,L])}+\|h_1-h_2\|_{{\bf L}^1([0,L])}\\[3mm]
&\leq~\ds\mathrm{diam}\left(\mathcal{E}_1+{M\over 2}\right)+\mathrm{diam}(\mathcal{E}_2)~\leq~\ve+\ve~=~2\ve
\enda$$ and this implies that $$\ds \mathrm{diam}\left[\left(\mathcal{E}_1+{M\over 2}\right)-\mathcal{E}_2\right]~\leq~2\ve\,.$$ By Definition \[def1\], we have $$\mathcal{N}_{\ve}\left(\mathcal{B}_{[L,M,V]}~\Big|~{\bf L}^1([0,L])\right)~\leq~\mathcal{N}^2_{{\ve\over 2}}(\mathcal{I}~|~{\bf L}^1([0,L]))\,,$$ and thus $$\label{ess1}
\mathcal{H}_{\ve}\left(\mathcal{B}_{[L,M,V]}~\Big|~{\bf L}^1([0,L])\right)~\leq~2\,\mathcal{H}_{{\ve\over 2}}\left(\mathcal{I}~\big|~{\bf L}^1([0,L])\right)\,.$$
Finally, applying [@DLG Lemma 3.1] to $\mathcal{I}$, we obtain that for $0<\ve<{L(M+V)\over 6}$, it holds $$\ds\mathcal{H}_{{\ve\over 2}}\left(\mathcal{I}~\big|~{\bf L}^1([0,L])\right)~\leq~4\cdot \left\lfloor{L(M+V)\over \ve}\right\rfloor\,,$$ and (\[ess1\]) yields (\[BV-Es1\]).
Estimates of the $\ve$-entropy for a class of BV functions
==========================================================
In this section, we establish upper and lower estimates of the ${\ve}$-entropy for a class of uniformly bounded total variation functions, $$\mathcal{F}_{[L,M,V]}~=~\left\{u\in {\bf L}^1([0,L]^n,\R)~\Big|~\|u\|_{{\bf L}^{\infty}([0,L]^{n})}\leq M,~ |Du|((0,L)^n)\leq V \right\}\,,$$ in the ${\bf L}^{1}([0,L]^n,\R)$-space. In particular, it is shown that the minimal number of functions needed to represent a function in $\mathcal{F}_{[L,M,V]}$ with an error $\ve$ with respect to the ${\bf L}^1$-distance is of the order of ${1\over \ve^n}$. More precisely, our main result is stated as follows.
\[main\] Given $L,M,V>0$, for every $0<\ve< {ML^n\over 8}$, it holds $$\ve^n\cdot\mathcal{H}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)~\leq~\Gamma_{[n,L,M,V]}\,,$$ where the constant $\Gamma_{[n,L,M,V]}$ is computed as $$\Gamma_{[n,L,M,V]}~=~{8\over \sqrt{n}}\left(4\sqrt{n}LV\right)^n+\left({2^{n+7}V\over M}+8\right)\cdot \left({ML^n\over 8}\right)^n\,.$$
[**Proof.**]{} [*(Upper estimate)*]{} Let us first prove the upper estimate of $\mathcal{H}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)$. The proof is divided into several steps:
[**1.**]{} For any $N\in\mathbb{N}$, we divide the cube $[0,L]^n$ into $N^n$ small cubes $\square_{\iota}$ for $\iota=(\iota_1,\iota_2,...,\iota_n)\in \{0,1,...,N-1\}^n$ such that $$\square_{\iota}~=~{\iota L\over N}+\Bigg(\left[0, {L\over N}\right]\times \left[0, {L\over N}\right]\times...\times \left[0, {L\over N}\right]\Bigg)\qquad\mathrm{and}\qquad \bigcup_{\iota\in \{0,1,2,...,N-1\}^n}~\square_{\iota}~=~[0,L]^n\,.$$ For any $u\in\mathcal{F}_{[L,M,V]}$, denote by $$-M~\leq~u_{\iota}~=~{1\over\mathrm{Vol}(\square_{\iota})}~\int_{\square_{\iota}}u(x)~dx~\leq~M$$ the average value of $u$ in $\square_{\iota}$ for every $\iota\in \{0,1,2,...,N-1\}^n$. Let $\tilde{u}$ be a piecewise constant function on $[0,L]^n$ such that $$\tilde{u}(x)~=~\left\{\bega{rl}
&\ds u_{\iota}~\qquad\qquad\forall x\in \mathrm{int}\big(\square_{\iota}\big)\,,
\\[4mm]
&\ds 0\qquad~~\qquad\forall x\in \bigcup_{\iota\in\{1,2,\dots,N-1\}^n}\partial\square_{\iota}\,.
\enda\right.$$ Thanks to the Poincaré inequality, we have $$\int_{\square_{\iota}}|u(x)-u_{\iota}|~dx~\leq~{\mathrm{diam}(\square_{\iota})\over 2}\cdot |Du|(\mathrm{int}(\square_{\iota}))$$ for all $\iota\in \{0,1,2,...,N-1\}^n$. Hence, the ${\bf L}^1$-distance between $u$ and $\tilde{u}$ can be estimated as follows $$\begin{gathered}
\label{L-est1}
\|u - \tilde{u}\|_{{\bf L}^1([0,L]^n)}~=~\int_{[0,L]^n}|u(x)-\tilde{u}(x)|~dx~=~\sum_{\iota\in \{0,1,2,...,N-1\}^n} \int_{\square_{\iota}} |u(x) - u_{\iota}|~dx\\[2mm]
~\leq~\sum_{\iota\in \{0,1,2,...,N-1\}^n}~\Bigg( {\mathrm{diam}(\mathrm{int}(\square_{\iota}))\over 2}\cdot |Du|(\mathrm{int}(\square_{\iota})) \Bigg)~\leq~ \frac{L\sqrt{n}}{N}~\sum_{\iota\in \{0,1,2,...,N-1\}^n} |Du|(\square_{\iota})\\[2mm]
~=~{L\sqrt{n}\over N}~|Du|((0,L)^n)~\leq~{L\sqrt{n}\over N}\cdot V\,.\end{gathered}$$
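Step 1 can be illustrated numerically for $n=2$ with a smooth test function, approximating $|Du|$ by a Riemann sum of $|\nabla u|$ on a fine grid (the test function and grid sizes are our choices, not the paper's):

```python
import numpy as np

L, N, fine = 1.0, 8, 512                  # N^2 cells, fine x fine quadrature
h = L / fine
x = (np.arange(fine) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2*np.pi*X) * np.cos(2*np.pi*Y)

# total variation |Du|((0,L)^2) ≈ ∫ |∇u| dx for this smooth u
gx, gy = np.gradient(u, h)
V = np.sum(np.sqrt(gx**2 + gy**2)) * h**2

# piecewise-constant approximation ũ by cell averages on the N x N grid
m = fine // N
u_tilde = u.reshape(N, m, N, m).mean(axis=(1, 3))
err = np.sum(np.abs(u - np.repeat(np.repeat(u_tilde, m, 0), m, 1))) * h**2

# the bound ||u - ũ||_{L¹} ≤ (L√n / N) · V from step 1, with n = 2
assert err <= (L * np.sqrt(2) / N) * V
```

In practice the actual error sits well below the bound, since the estimate discards cancellations inside each cell.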
[**2.**]{} Let $e_1,e_2,...,e_n$ be the standard basis of $\R^n$ where $e_i$ denotes the vector with a $1$ in the $i$-th coordinate and $0$’s elsewhere. For any $\iota\in\{0,1,2,...,N-1\}^n$ and $j\in \{1,2,...,n\}$, we estimate $\left|u_{\iota+e_j}-u_{\iota}\right|\,$ in the following way:
$$\begin{gathered}
\label{qs-mono}
|u_{\iota+e_j}-u_{\iota}|~=~ \left| {1\over \mathrm{Vol}\left(\square_{\iota+e_j}\right)}~\int_{\square_{\iota+e_j}}u(x)~dx - {1\over \mathrm{Vol}\left(\square_{\iota}\right)}~\int_{\square_{\iota}}u(x)~dx
\right|\\
~=~{1\over \mathrm{Vol}\left(\square_{\iota}\right)}\cdot \left| \int_{\square_{\iota}}~u\Big(x+{L\over N}\cdot e_j\Big)-u(x)~dx\right|~=~{1\over \mathrm{Vol}\left(\square_{\iota}\right)}\cdot \left| \int_{\square_{\iota}}\int_{0}^{{L\over N}}~Du(x+se_j)(e_j)~dsdx\right|\\
~\leq~{1\over \mathrm{Vol}\left(\square_{\iota}\right)}\cdot\int_{0}^{{L\over N}} \left|\int_{\square_{\iota}}~Du(x+se_j)(e_j)~dx\right|ds~~\leq~\left({N\over L}\right)^{n-1}\cdot |Du|(\mathrm{int}(\square_{\iota}\cup \square_{\iota+e_j} ))\,.\end{gathered}$$
Let us rearrange the index set $$\{0,1,2,\dots,N-1\}^{n}~=~\left\{\kappa^1,\kappa^2,\dots,\kappa^{N^n}\right\}$$ in such a way that for all $j\in\{1,...,N^n-1\}$, it holds $$\label{F}
\kappa^{j+1}~=~\kappa^{j}+e_k\qquad\mathrm{for\ some}~k\in\{1,2,...,n\}\,.$$ From (\[qs-mono\]) and (\[F\]), we have $$\begin{gathered}
\label{TV1}
\sum_{j=1}^{N^n-1}\left|u_{\kappa^{j+1}}-u_{\kappa^j}\right|~\leq~\left({N\over L}\right)^{n-1}\cdot \sum_{j=1}^{N^n-1}|Du|(\mathrm{int}(\square_{ \kappa^{j}}\cup \square_{ \kappa^{j+1}}))
\cr
~\leq~2\left({N\over L}\right)^{n-1}\cdot |Du|((0,L)^n)~\leq~2V\left({N\over L}\right)^{n-1}\,.\end{gathered}$$ To conclude this step, we define the function $f_{u,N}: [0,LN^{n-1}]\to [-M,M]$ associated with $u$ such that $$f_{u,N}(x)~=~u_{\kappa^{i}}\qquad\forall x\in \left[{i\cdot L\over N},{(i+1)\cdot L\over N}\right), ~i\in\left\{1,2,...,N^{n}\right\}\,.$$ Recalling (\[TV1\]), we have $$\label{TV2}
|Df_{u,N}|\big((0,LN^{n-1})\big)~\leq~2V\left({N\over L}\right)^{n-1}\,.$$
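The rearrangement $\kappa^1,\dots,\kappa^{N^n}$ can be realized by a boustrophedon ("snake") ordering of the grid, in which consecutive multi-indices differ by exactly one standard basis vector; a sketch for $n=2$:

```python
# Snake ordering of {0,...,N-1}^2: traverse even rows left-to-right and
# odd rows right-to-left, so successive indices differ by one basis vector.
def snake_order(N):
    order = []
    for i in range(N):
        cols = range(N) if i % 2 == 0 else reversed(range(N))
        order += [(i, j) for j in cols]
    return order

kappa = snake_order(4)
assert len(kappa) == 4**2 and len(set(kappa)) == 4**2   # a true enumeration
for p, q in zip(kappa, kappa[1:]):
    # exactly one coordinate changes, and it changes by exactly 1
    assert sorted(abs(a - b) for a, b in zip(p, q)) == [0, 1]
```

The same zig-zag construction, applied recursively coordinate by coordinate, produces such an ordering for any $n$.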
[**3.**]{} Let's define $$\label{LbN}
L_N~:=~LN^{n-1},\qquad \beta_N~:=~2V\left({N\over L}\right)^{n-1}\,.$$ We introduce the set $$\begin{gathered}
\tilde{\mathcal{F}}_N~=~\Big\{f:\left[0, L_N\right]\to [-M,M]~\big|~|Df|((0,L_N))~\leq~\beta_N~\mathrm{and}
\\
f(x)=f\left({i\cdot L\over N}\right)\quad\forall x\in \left[{i\cdot L\over N}, {(i+1)\cdot L\over N}\right) \Big\}\,.
\end{gathered}$$ From (\[TV2\]), one has $$f_{u,N}~\in~\tilde{\mathcal{F}}_N\qquad\forall u\in \mathcal{F}_{[L,M,V]}\,.$$ On the other hand, recalling that $$\mathcal{B}_{[L_N,2M,\beta_N]}~=~\left\{f\in {\bf L}^1([0,L_N],[0,2M])~\Big|~ |Df|((0,L_N))\leq \beta_N\right\}\,,$$ we have $$\tilde{\mathcal{F}}_N~\subset~\mathcal{B}_{[L_N,2M,\beta_N]}-M\,.$$ From Lemma \[1-D-BV\], for every $0<\ve'<{L_N\cdot(\beta_N+2M)\over 6}$, it holds $$\mathcal{H}_{\ve'}\left(\mathcal{B}_{[L_N,2M,\beta_N]}~\Big|~{\bf L}^1([0,L_N])\right)~\leq~8\cdot \left\lfloor{L_N(\beta_N+2M)\over \ve'}\right\rfloor\,,$$ and it yields $$\mathcal{H}_{\ve'}\left(\tilde{\mathcal{F}}_{N}~\Big|~{\bf L}^1([0,L_N])\right)~\leq~8\cdot \left\lfloor{L_N(\beta_N+2M)\over \ve'}\right\rfloor\,.$$ By Definition \[def1\], there exists a set of $\Gamma_{N,\ve'}=\ds 2^{8\cdot \left\lfloor{L_N(\beta_N+2M)\over \ve'}\right\rfloor}$ functions in $\tilde{\mathcal{F}}_N$, $$\mathcal{G}_{N,\ve'}~=~\left\{g_{1},g_2,\dots, g_{\Gamma_{N,\ve'}}\right\}~\subset~\tilde{\mathcal{F}}_N\,,$$ such that $$\tilde{\mathcal{F}}_{N}~\subset~\ds\bigcup_{i=1}^{\Gamma_{N,\ve'}}~B(g_i,2\ve')\,.$$ So, for every $u \in \mathcal{F}_{[L,M,V]}$ and its corresponding $f_{u,N}$, there exists $g_{i_u} \in \mathcal{G}_{N,\ve'}$ such that $$\|f_{u,N} - g_{i_u}\|_{{\bf L}^1([0,L_N])}~\leq~2\ve'\,.$$ Let $\mathcal{U}_{N,\ve'}$ be a set of $\Gamma_{N,\ve'}$ functions $u_j^{\dagger}:[0,L]^n\to[-M,M]$ defined as follows $$\begin{aligned}
u^{\dagger}_{j}~=~\left\{\bega{rl}
&\ds 0\qquad\qquad\qquad\qquad~~\mathrm{if}\qquad x\in \bigcup_{\iota\in \{1,2,...,N\}^n}\partial\square_{\iota}\,,
\\[3mm]
&\ds g_{j}\left({{i \cdot L}\over{N}}\right)\quad\qquad\quad~\mathrm{if}\qquad x\in\mathrm{int}\left(\square_{\kappa^i}\right), i\in \{1,2,\dots, N^n\}\,.
\enda\right. \end{aligned}$$ Then, corresponding to every $u \in \mathcal{F}_{[L,M,V]}$, there exists $u^{\dagger}_{i_u}\in \mathcal{U}_{N,\ve'}$ for some $i_u\in\{1,2,\dots, \Gamma_{N,\ve'}\}$ such that $$\begin{gathered}
\big\|\tilde{u}-u^{\dagger}_{i_u}\big\|_{{\bf L}^1([0,L]^n)}~=~\sum_{i=1}^{N^n}~\left|u_{\kappa^i}- g_{i_u}\left({{i \cdot L}\over{N}}\right)\right|\cdot\mathrm{Vol}\left(\square_{\kappa^i}\right)\\
\qquad~=~\sum_{i=1}^{N^n}~\left| f_{u,N}\left({{i \cdot L}\over{N}}\right)- g_{i_u}\left({{i \cdot L}\over{N}}\right)\right|\cdot {L\over N}\cdot {L^{n-1}\over N^{n-1}}\\
~=~{L^{n-1}\over N^{n-1}}\cdot \|f_{u,N} - g_{i_u}\|_{{\bf L}^1([0,L_N])}~\leq~2\ve'\cdot {L^{n-1}\over N^{n-1}}\,.\end{gathered}$$ Combining with (\[L-est1\]), we obtain $$\big\|u-u^{\dagger}_{i_u}\big\|_{{\bf L}^1([0,L]^n)}~\leq~\big\|u-\tilde{u}\big\|_{{\bf L}^1([0,L]^n)}+\big\|\tilde{u}-u^{\dagger}_{i_u}\big\|_{{\bf L}^1([0,L]^n)}~\leq~2\ve'\cdot{L^{n-1}\over N^{n-1}}+{L\sqrt{n}\over N}\cdot V\,.$$
[**4.**]{} For any $\ve>0$, we choose $$\label{cho1}
N~=~\left\lfloor{2\sqrt{n}LV\over \ve}\right\rfloor+1\qquad\mathrm{and}\qquad \ve'~=~{\ve N^{n-1}\over 4L^{n-1}}$$ such that $$\big\|u-u^{\dagger}\big\|_{{\bf L}^1([0,L]^n)}~\leq~2\ve'\cdot {L^{n-1}\over N^{n-1}}+{L\sqrt{n}\over N}\cdot V~\leq~{\ve\over 2}+{\ve\over 2}~=~\ve$$ for all $u\in \mathcal{F}_{[L,M,V]}$ and for some $u^{\dagger}\in \mathcal{U}_{N,\ve'}$. From the previous step, it holds $$\mathcal{F}_{[L,M,V]}~\subseteq~\bigcup_{u^{\dagger}\in \mathcal{U}_{N,\ve'}}~\overline{B}(u^{\dagger},\ve)$$ provided that $$\ve'~=~{\ve N^{n-1}\over 4L^{n-1}}~\leq~{L_N\cdot(\beta_N+2M)\over 6}~=~{N^{n-1}\left(VN^{n-1}+ML^{n-1}\right)\over 3L^{n-2}}\,.$$ This condition is equivalent to $$\label{cond1}
\ve~\leq~{4\over 3}\cdot \left(LVN^{n-1}+ML^{n}\right)\,.$$ From (\[cho1\]), one has that the condition (\[cond1\]) holds if $$\label{cd2}
\ve~\leq~{4\over 3}\cdot\left({2^{n-1}n^{{n-1\over 2}}L^{n}V^n\over \ve^{n-1}}+ML^n\right)\,.$$ Assume that $0<\ve<{2ML^n\over 3}+ n^{{n-1\over 2n}} LV$; we claim that (\[cond1\]) holds. Indeed, if ${2ML^n\over 3}> n^{{n-1\over 2n}} LV$ then $$\ve~<~{2ML^n\over 3}+n^{{n-1\over 2n}} LV~\leq~\ds{4ML^n\over 3}$$ and it yields (\[cd2\]). Otherwise, we have that $\ve<{2ML^n\over 3}+n^{{n-1\over 2n}} LV\leq 2n^{{n-1\over 2n}} LV$. Thus $$\begin{aligned}
{4\over 3}\cdot \left({2^{n-1}n^{{n-1\over 2}}L^{n}V^n\over \ve^{n-1}}+ML^n\right)&\geq&{4\over 3}\cdot {2^{n-1}n^{{n-1\over 2}}L^{n}V^n\over 2^{n-1}n^{{(n-1)^2\over 2n}}L^{n-1}V^{n-1}}+{4\over 3}ML^n\\[4mm]
&=&{4\over 3}\cdot n^{{n-1}\over 2n}LV+{4\over 3}ML^n\,,\end{aligned}$$ which is larger than ${2ML^n\over 3}+n^{{n-1\over 2n}}LV>\ve$, and this implies (\[cd2\]).
To complete the proof, recalling (\[LbN\]) and (\[cho1\]), we estimate $$\begin{aligned}
\mathrm{card}(\mathcal{U}_{N,\ve'})&=&\Gamma_{N,\ve'}~=~2^{8\cdot \left\lfloor{L_N(\beta_N+2M)\over \ve'}\right\rfloor}~=~\ds 2^{8\cdot \left\lfloor{8\over \ve}\cdot \left(LVN^{n-1}+ML^n\right)\right\rfloor}\\[4mm]
&\leq&\ds 2^{{64\over \ve}\cdot\left(LV \left(\left\lfloor{2\sqrt{n}LV\over \ve}\right\rfloor+1\right)^{n-1} +ML^n\right)}\,.\end{aligned}$$ Therefore, $$\begin{aligned}
\mathcal{H}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)&\leq&{64\over \ve}\cdot\left(LV \left(\left\lfloor{2\sqrt{n}LV\over \ve}\right\rfloor+1\right)^{n-1} +ML^n\right)\\[4mm]
&\leq& \ds {64\over\ve}\cdot \left(LV\left({2^{2n-3}n^{{n-1\over 2}}L^{n-1}V^{n-1}\over \ve^{n-1}}+2^{n-2}\right)+ML^n\right)\\[4mm]
&=&\ds{2^{2n+3}n^{n-1\over 2}L^nV^n\over \ve^n}+{2^{n+4}LV+ML^n\over \ve}\,.\end{aligned}$$ In particular, if $0<\ve<{ML^n\over 8}$ then $$\mathcal{H}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)~\leq~\ds\left[2^{2n+3}n^{n-1\over 2}L^nV^n+\left(2^{n+4}LV+ML^n\right)\cdot\left({ML^n\over 8}\right)^{n-1} \right]\cdot {1\over \ve^n}$$
and it yields the right hand side of (\[m-est\]).\
\
[*(Lower estimate)*]{} We are now going to prove the lower estimate of $\mathcal{H}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)$.
[**1.**]{} Again given any $N\in\mathbb{N}$, we divide the square $[0,L]^n$ into $N^n$ small squares $\square_{\iota}$ for $\iota=(\iota_1,\iota_2,...,\iota_n)\in \{0,1,...,N-1\}^n$ such that $$\square_{\iota}~=~{\iota L\over N}+\Bigg(\left[0, {L\over N}\right]\times \left[0, {L\over N}\right]\times...\times \left[0, {L\over N}\right]\Bigg)\qquad\mathrm{and}\qquad \bigcup_{\iota\in \{0,1,2,...,N-1\}^n}~\square_{\iota}~=~[0,L]^n\,.$$ Consider the set of $N^{n}$-tuples $$\Delta_{N}~=~\left\{\ds\delta=(\delta_{\iota})_{\iota\in\{0,1,\dots,N-1\}^n}~\Big|~\delta_{\iota}\in \{0,1\}\right\}\,.$$ Given any $h>0$, for any $\delta\in \Delta_N$, define the function $u_{\delta}:[0,L]^n\to \{0,h\}$ such that $$u_{\delta}(x)~=~\sum_{\iota\in \{0,1,\dots,N-1\}^n}h\delta_{\iota}\cdot \chi_{\mathrm{int}(\square_{\iota})}(x)\qquad\forall x\in [0,L]^n\,.$$ One has $u_{\delta}\in BV((0,L)^n)$ and $$\left|Du_{\delta}\right|((0,L)^n)~\leq~\sum_{\iota\in\{0,1,\dots,N-1\}^n}|Du_{\delta}|(\square_{\iota})~\leq~2^{n-1}\left({L\over N}\right)^{n-1}N^nh~=~(2L)^{n-1}Nh\,.$$ Assuming that $$\label{condh}
0~<~h~\leq~\min\left\{M~,~{V\over 2^{n-1}L^{n-1} N}\right\}\,,$$ we have $$\left|Du_{\delta}\right|((0,L)^n)~\leq~(2L)^{n-1}N\cdot {V\over 2^{n-1}L^{n-1} N}~=~V\qquad\forall \delta\in \Delta_N\,,$$ and this implies $$\mathcal{G}_{h,N}~:=~\left\{u_{\delta}~|~\delta\in\Delta_N\right\}~\subset \mathcal{F}_{[L,M,V]}\qquad\forall N\in\mathbb{N}\,.$$ Hence, $$\label{comp}
\mathcal{N}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)~\geq~\mathcal{N}_{\ve}\left(\mathcal{G}_{h,N}~\Big|~{\bf L}^1([0,L]^n)\right)~>~0\,.$$ Towards an estimate of the covering number $\mathcal{N}_{\ve}\left(\mathcal{G}_{h,N}~\Big|~{\bf L}^1([0,L]^n)\right)$, for a fixed $\tilde{\delta}\in\Delta_N$, we can define $$\label{II}
\mathcal{I}_{\tilde{\delta},N}(2\ve)~=~\left\{\delta\in\Delta_N~\Big|~\big\|u_{\delta}-u_{\tilde{\delta}}\big\|_{{\bf L}^1([0,L]^n)}~\leq~2\ve\right\}\qquad\mathrm{and}\qquad C_{N}(2\ve)~=~\mathrm{Card}\left(\mathcal{I}_{\tilde{\delta},N}(2\ve)\right)\,,$$ since the cardinality of the set $\mathcal{I}_{\tilde{\delta},N}(2\ve)$ is independent of the choice of $\tilde{\delta}\in\Delta_N$. Observe that any ball of radius $\ve$ in ${\bf L}^1$ contains at most $C_N(2\ve)$ elements of $\mathcal{G}_{h,N}$, since any two of its elements are at distance at most $2\ve$ from each other.
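The construction of the family $\mathcal{G}_{h,N}$ can be illustrated numerically. The following sketch (an illustration of the construction for $n=2$, not part of the proof) builds two piecewise-constant functions $u_{\delta}$, $u_{\tilde\delta}$ on the $N^n$ grid of cells and checks that their ${\bf L}^1$ distance equals the Hamming distance $d(\delta,\tilde\delta)$ of the sign patterns times $h(L/N)^n$, the identity used in the covering argument below.

```python
import itertools
import random

# Small numerical illustration of the lower-bound construction (n = 2):
# u_delta is h times the indicator of the cells flagged by delta, and the
# L1 distance between two such functions is d(delta,tilde_delta)*h*(L/N)^n.
L, n, N, h = 1.0, 2, 4, 0.25
random.seed(0)
delta       = [random.randint(0, 1) for _ in range(N ** n)]
tilde_delta = [random.randint(0, 1) for _ in range(N ** n)]

def u(delta_flat, x):
    """Evaluate u_delta at a point x lying in the interior of a cell."""
    idx = [min(int(xi * N / L), N - 1) for xi in x]
    flat = sum(i * N ** k for k, i in enumerate(idx))
    return h * delta_flat[flat]

# Integrate |u_delta - u_tilde| with one sample per fine sub-cell; this is
# exact here because both functions are constant on each sub-cell.
R = 3                      # fine sub-divisions per cell and per axis
step = L / (N * R)
grid = [step * (i + 0.5) for i in range(N * R)]
l1 = sum(abs(u(delta, x) - u(tilde_delta, x))
         for x in itertools.product(grid, repeat=n)) * step ** n

hamming = sum(a != b for a, b in zip(delta, tilde_delta))
assert abs(l1 - hamming * h * (L / N) ** n) < 1e-12
```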
Since $\mathrm{Card}(\mathcal{G}_{h,N})=\mathrm{Card}(\Delta_N)=2^{N^n}$, it holds $$\label{lbb}
\mathcal{N}_{\ve}\left(\mathcal{G}_{h,N}~\Big|~{\bf L}^1([0,L]^n)\right)~\geq~{2^{N^n}\over C_N(2\ve)}\,.$$ We now provide an upper bound on $C_N(2\ve)$. For any given pair $\delta,\tilde{\delta}\in\Delta_N$, one has $$\|u_{\delta}-u_{\tilde{\delta}}\|_{{\bf L}^1([0,L]^n)}~=~\sum_{\iota\in\{0,1,\dots,N-1\}^n}\|u_{\delta}-u_{\tilde{\delta}}\|_{{\bf L}^1(\square_{\iota})}~=~ d(\delta,\tilde{\delta})\cdot {hL^n\over N^n}\,,$$ where $$d(\delta,\tilde{\delta})~:=~\mathrm{Card}\left(\{\iota\in\{0,1,\dots,N-1\}^n~|~\delta_{\iota}\neq \tilde{\delta}_{\iota}\}\right)\,.$$ From (\[II\]), we obtain $$\mathcal{I}_{\tilde{\delta},N}(2\ve)~=~\left\{\delta\in\Delta_N~\Big|~d(\delta,\tilde{\delta})~\leq~{2\ve N^n\over hL^n}\right\}\,,$$ and it yields $$C_N(2\ve)~=~\mathrm{Card}\left(\mathcal{I}_{\tilde{\delta},N}(2\ve)\right)~\leq~\sum\limits_{r = 0}^{\left\lfloor{2\ve N^n\over hL^n}\right\rfloor} {N^n \choose r}\,.$$ To estimate the last term in the above inequality, consider $N^n$ independent Bernoulli random variables $X_1,X_2,\dots, X_{N^n}$ with $$\mathbb{P}(X_i=1)~=~\mathbb{P}(X_i=0)~=~{1\over 2}\qquad\forall i\in \{1,2,\dots, N^n\}\,.$$ Set $S_{N^n}:= X_1+X_2+\dots+X_{N^n}$. Observe that for any $k\leq N^n$, we have $$\sum_{r=0}^{k}~ {N^n \choose r}~=~2^{N^n}\cdot \mathbb{P}\left(S_{N^n}\leq k\right)\,.$$ Thanks to Hoeffding’s inequality [@Hoeffding], for all $\mu\leq {N^n\over 2}$, one has $$\mathbb{P}\left(S_{N^n}\leq \mathbb{E}[S_{N^n}]-\mu\right)~=~\mathbb{P}\left(S_{N^n}\leq {N^n\over 2}-\mu\right)~\leq~\exp\left(-{2\mu^2\over N^n}\right)$$ where $ \mathbb{E}[S_{N^n}]$ is the expectation of $S_{N^n}$. Hence, for every $0<\ve\leq{hL^n\over 8}$, so that ${2\ve N^n\over hL^n}\leq {N^n\over 2}$ and ${4\ve\over hL^n}\leq {1\over 2}$, it holds $$\begin{aligned}
C_N(2\ve)&\leq&\sum\limits_{r = 0}^{\left\lfloor{2\ve N^n\over hL^n}\right\rfloor} {N^n \choose r}~=~ 2^{N^n}\cdot\mathbb{P}\left(S_{N^n}\leq \left\lfloor{2\ve N^n\over hL^n}\right\rfloor\right)
\\[4mm]
&\leq&2^{N^n}\cdot \exp\left(-{2\left({N^n\over 2}-\left\lfloor{2\ve N^n\over hL^n}\right\rfloor\right)^2\over N^n }\right)~\leq~2^{N^n}\cdot \exp\left(-{\left(N^n-{4\ve N^n\over hL^n}\right)^2\over 2N^n }\right)\\[4mm]
&=&2^{N^n}\cdot \exp\left(-N^n\cdot {\left(1-{4\ve\over hL^n}\right)^2\over 2}\right)~\leq~2^{N^n}\cdot e^{-N^n/8}\,.\end{aligned}$$ From (\[lbb\]) and (\[condh\]), the following holds $$\begin{aligned}
\mathcal{N}_{\ve}\left(\mathcal{G}_{h,N}~\Big|~{\bf L}^1([0,L]^n)\right)&\geq&{2^{N^n}\over C_N(2\ve)}~\geq~e^{{N^n\over 8}}\end{aligned}$$ provided that $$\label{condl}
0~<~h~\leq~\min\left\{M~,~{V\over 2^{n-1}L^{n-1} N}\right\}\qquad\mathrm{and}\qquad 0~<~\ve~\leq~{hL^n\over 8}\,.$$ Therefore, for every $0<\ve<{ML^n\over 8}$, by choosing $$h~=~\min\left\{M~,~{V\over 2^{n-1}L^{n-1} N}\right\}\qquad\mathrm{and}\qquad N~\doteq~\left\lfloor{VL\over 2^{n+2}\ve}\right\rfloor$$ such that (\[condl\]) holds, we obtain that $$\mathcal{N}_{\ve}\left(\mathcal{G}_{h,N}~\Big|~{\bf L}^1([0,L]^n)\right)~\geq~\exp\left({1\over 8}\cdot \left\lfloor{VL\over 2^{n+2}\ve}\right\rfloor^n\right)\,.$$ Recalling (\[comp\]), we have $$\mathcal{N}_{\ve}\left(\mathcal{F}_{[L,M,V]}~\Big|~{\bf L}^1([0,L]^n)\right)~\geq~\exp\left({1\over 8}\cdot \left\lfloor{VL\over 2^{n+2}\ve}\right\rfloor^n\right)$$ and this implies the first inequality in (\[m-est\]).
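The binomial tail estimate at the heart of the lower bound can be spot-checked numerically. The sketch below (illustrative only) verifies that $\sum_{r=0}^{k}\binom{m}{r}\leq 2^m\exp\left(-2(m/2-k)^2/m\right)$ for all $k\leq m/2$, which is the form of Hoeffding's inequality used above with $m=N^n$.

```python
from math import comb, exp

# Spot-check of the Hoeffding bound used to control C_N(2*eps):
# sum_{r=0}^{k} C(m, r) <= 2^m * exp(-2 * (m/2 - k)^2 / m) for k <= m/2.
def binomial_tail(m, k):
    return sum(comb(m, r) for r in range(k + 1))

def hoeffding_bound(m, k):
    return 2 ** m * exp(-2 * (m / 2 - k) ** 2 / m)

for m in (20, 50, 100):
    for k in range(m // 2 + 1):
        assert binomial_tail(m, k) <= hoeffding_bound(m, k)
```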
**Acknowledgments.** K.T. Nguyen is partially supported by a grant from the Simons Foundation/SFARI (521811, NTK).
[99]{}
G. Acosta and R. G. Durán, An optimal Poincaré inequality in ${\bf L}^1$ for convex domains, [*Proc. Amer. Math. Soc.*]{} [**132**]{} (2003), no. 1, 195–202.
L. Ambrosio, N. Fusco and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Science Publications, Clarendon Press, Oxford, UK, (2000).
F. Ancona, P. Cannarsa and K. T. Nguyen, Quantitative compactness estimates for Hamilton-Jacobi equations, [*Arch. Rat. Mech. Anal.*]{} [**219**]{} (2016), no. 2, 793–828.

F. Ancona, P. Cannarsa and K. T. Nguyen, The compactness estimates for Hamilton-Jacobi equations depending on space, [*Bulletin of the Institute of Mathematics, Academia Sinica*]{} [**11**]{} (2016), no. 1, 63–113.

F. Ancona, O. Glass and K. T. Nguyen, Lower compactness estimates for scalar balance laws, [*Comm. Pure Appl. Math.*]{} [**65**]{} (2012), no. 9, 1303–1329.
F. Ancona, O. Glass and K. T. Nguyen, On lower compactness estimates for general nonlinear hyperbolic systems, [*Ann. Inst. H. Poincaré Anal. Non Linéaire*]{} [**32**]{} (2015), no. 6, 1229–1257.

F. Ancona, O. Glass and K. T. Nguyen, On quantitative compactness estimates for hyperbolic conservation laws, to appear in: Hyperbolic problems: theory, numerics and applications. Proceedings of the 14th International Conference on Hyperbolic Problems (HYP2012), AIMS, Springfield, MO, 2014.
P. L. Bartlett, S. R. Kulkarni and S. E. Posner, Covering numbers for real-valued function classes, [*IEEE Trans. Inform. Theory*]{} [**43**]{} (1997), no. 5, 1721–1724.

Y. Yang and A. Barron, Information-theoretic determination of minimax rates of convergence, [*Ann. Statist.*]{} [**27**]{} (1999), 1564–1599.

L. Birgé, Approximation dans les espaces métriques et théorie de l'estimation, [*Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete*]{} [**65**]{} (1983), 181–237.

L. Birgé, Estimating a density under order restrictions: nonasymptotic minimax risk, [*Ann. Statist.*]{} [**15**]{} (1987), 995–1012.

L. Birgé and P. Massart, Rates of convergence for minimum contrast estimators, [*Probab. Theory Related Fields*]{} [**97**]{} (1993), 113–150.

E. M. Bronshtein, $\ve$-entropy of convex sets and functions, [*Siberian Math. J.*]{} [**17**]{} (1976), 393–398.
L. Le Cam, Convergence of estimates under dimensionality restrictions, [*Ann. Statist.*]{} [**1**]{} (1973), 38–53.
C. De Lellis and F. Golse, A quantitative compactness estimate for scalar conservation laws, [*Comm. Pure Appl. Math.*]{} [**58**]{} (2005), no. 7, 989–998.

R. M. Dudley, Central limit theorems for empirical measures, [*Ann. Probability*]{} [**6**]{} (1978), 899–929.
D. Dryanov, Kolmogorov entropy for classes of convex functions, [*Constructive Approx*]{} [**30**]{} (2009), 137–153.
D. Haussler, Decision theoretic generalizations of the PAC model for neural net and other learning applications, [*Inform. Comput.*]{} [**100**]{} (1992), 78–150.

D. Haussler, Sphere packing numbers for subsets of the Boolean $n$-cube with bounded Vapnik-Chervonenkis dimension, [*Journal of Combinatorial Theory, Series A*]{} [**69**]{} (1995), 217–232.
S. van de Geer, Applications of Empirical Process Theory, Cambridge Univ. Press, Cambridge, U.K., 2000.
W. Hoeffding, Probability inequalities for sums of bounded random variables, [*J. Amer. Statist. Assoc.*]{} [**58**]{} (1963), 13–30.

P. Groeneboom, Some current developments in density estimation, [*CWI Monographs*]{}, North Holland, 1986.
A. Guntuboyina and B. Sen, Covering Numbers for Convex Functions, [*IEEE Transactions On Information Theory*]{} [**59**]{} (2013), no. 4, 1957–1965.
S.R. Kulkarni, S.K. Mitter, and J.N. Tsitsiklis, Active learning using arbitrary binary-valued queries, [*Machine Learning*]{} [**11**]{} (1993), 23–35.
A. N. Kolmogorov and V. M. Tikhomirov, $\varepsilon$-entropy and $\varepsilon$-capacity of sets in functional spaces, [*Uspekhi Mat. Nauk*]{} [**14**]{} (1959), 3–86.
P. D. Lax, Accuracy and resolution in the computation of solutions of linear and nonlinear equations, in: Recent advances in numerical analysis (Proc. Sympos., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1978), Publ. Math. Res. Center Univ. Wisconsin, Academic Press, New York, 1978, 107–117.

P. D. Lax, Course on hyperbolic systems of conservation laws, XXVII Scuola Estiva di Fis. Mat., Ravello, 2002.

D. Pollard, Convergence of Stochastic Processes, Springer, New York, 1984.

, On efficient learning of linear combinations of basic function, [*Proceedings of the Eight Annual Conference on Computational learning theory*]{} (1995), ACM Press, 369–376.
---
abstract: 'We report the CO($J=1-0$) observations of the Whirlpool Galaxy M51 using both the Combined Array for Research in Millimeter Astronomy (CARMA) and the Nobeyama 45m telescope (NRO45). We describe a procedure for the combination of interferometer and single-dish data. In particular, we discuss (1) the joint imaging and deconvolution of heterogeneous data, (2) the weighting scheme based on the root-mean-square (RMS) noise in the maps, (3) the sensitivity and uv-coverage requirements, and (4) the flux recovery of a combined map. We generate visibilities from the single-dish map and calculate the noise of each visibility based on the RMS noise. Our weighting scheme, though it is applied to discrete visibilities in this paper, should be applicable to grids in $uv$-space, and this scheme may be incorporated into future software development. For a realistic amount of observing time, the sensitivities of the NRO45 and CARMA visibility data sets are best matched by using the single dish baselines only up to 4-6 $k\lambda$ (about 1/4-1/3 of the dish diameter). The synthesized beam size is determined so that the flux is conserved between the synthesized beam and the convolution beam. The superior $uv$-coverage provided by the combination of CARMA long baseline data with 15 antennas and NRO45 short spacing data results in high image fidelity, which is evidenced by the excellent overlap between even the faint CO emission and dust lanes in an optical [*Hubble Space Telescope*]{} image and PAH emission in a [*Spitzer*]{} $8\mu m$ image. The total molecular gas masses of NGC 5194 and 5195 ($d=8.2 \rm \, Mpc$) are $4.9\times 10^9 {{\rm M_{\odot}}}$ and $7.8 \times 10^7 {{\rm M_{\odot}}}$, respectively, assuming the CO-to-H$_2$ conversion factor of $X_{\rm CO}= 1.8\times 10^{20} \rm \, cm^{-2} [K \cdot km/s]^{-1}$.
The images presented here are an indication of the millimeter-wave images that will become standard in the next decade with CARMA and NRO45, and the Atacama Large Millimeter/Submillimeter Array (ALMA).'
author:
- 'Jin Koda, Tsuyoshi Sawada, Melvyn C. H. Wright, Peter Teuben, Stuartt A. Corder, Jenny Patience, Nick Scoville, Jennifer Donovan Meyer, Fumi Egusa'
title: 'CO($J=1-0$) Imaging of M51 with CARMA and Nobeyama 45m Telescope'
---
Introduction
============
Interferometers have an intrinsic limitation, namely, the problem of missing information. An interferometer records the target Fourier components of the spatial emission distribution, but an interferometer with a small number of antennas ($N$) can collect only a limited number, $N(N-1)/2$, of the Fourier components instantaneously. In addition, the finite diameter of each antenna limits the minimum separation between antennas, which, in turn, imposes a maximum size on an object that the interferometer can detect. The zero-spacing data (i.e. zero antenna separation data) carry the important information of the total flux, and this information is always missing. The incomplete Fourier coverage ($uv$-coverage) also degrades the image quality. Deconvolution schemes have been developed to extrapolate the observed $uv$ data to estimate the missing information; however, the performance is poor for objects with high contrast, such as the spiral arms and interarm regions of galaxies.
The small-$N$ problem is particularly severe in millimeter astronomy, though it is greatly reduced with the 15-element Combined Array for Research in Millimeter Astronomy (CARMA). CARMA combines the previously independent Owens Valley Millimeter Observatory (OVRO) array ($N=6$) and Berkeley-Illinois-Maryland Association (BIMA) array ($N=10$ – reduced to 9 for CARMA). The number of antenna pairs, or [*baselines*]{}, is increased to 105 from the previous values of 15 (OVRO) and 45 (BIMA), providing a substantial improvement in $uv$ coverage. In most observatories, a few array configurations are used to increase the number of baselines. The $uv$ coverage from one CARMA configuration is equivalent to that from seven configurations with a 6-element array. CARMA thus provides unprecedented $uv$ coverage compared to previous mm-wave arrays.
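The baseline counts quoted above follow directly from $N(N-1)/2$; a trivial check:

```python
# Instantaneous baseline count N*(N-1)/2 for the arrays discussed above.
def baselines(n_antennas):
    return n_antennas * (n_antennas - 1) // 2

assert baselines(6) == 15        # OVRO
assert baselines(10) == 45       # BIMA
assert baselines(15) == 105      # CARMA
```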
Single-dish telescopes complement the central $uv$ coverage and provide short baselines, including the zero-spacing baseline. The combination of interferometer and single-dish data is not trivial, though several methods have been suggested. Existing methods can be categorized into three types. The first method produces visibilities from a single-dish map [@vog84; @tak03] and adds single-dish and interferometer data in the $uv$ domain. @pet10 discussed a mathematical formalism. One issue with this method, whose difficulties are discussed in [@hel03], has been the relative weighting of the two data sets in the combination. @rod08 and @kur09 manually set the single-dish weight relative to the weight of the interferometer to improve the shape of the synthesized beam. In this paper, we suggest a new weighting scheme based solely on the quality of the single-dish data. In our method, the single-dish weight is independent of the interferometer data and is intrinsic to the single-dish observations. It naturally down-weights (up-weights) the single-dish data when its quality is poor (high). In the appendix, we discuss the sensitivity matching which makes the combination most effective.
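As a sketch of the weighting scheme proposed above (the function name and interface here are our own, for illustration; the actual implementation lives in MIRIAD tasks), each visibility generated from the single-dish map carries a weight inversely proportional to its noise variance, so poor-quality single-dish data are automatically down-weighted:

```python
import numpy as np

# Hypothetical sketch of the noise-based weighting: each visibility
# derived from the single-dish map gets weight w = 1 / sigma^2, where
# sigma is the RMS noise appropriate to that visibility. Data with
# large sigma are naturally down-weighted relative to the
# interferometer data, and vice versa.
def visibility_weights(rms_noise):
    """rms_noise: array of per-visibility RMS noise values."""
    rms_noise = np.asarray(rms_noise, dtype=float)
    return 1.0 / rms_noise ** 2

# Example: doubling the noise lowers the weight by a factor of four.
w = visibility_weights([0.1, 0.2])
assert np.isclose(w[0] / w[1], 4.0)
```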
The second type of combination method co-adds two sets of data in the image domain [@sta99]. This approach produces a joint dirty image and synthesized beam [^1] by adding the single-dish map and the interferometer dirty image, and single-dish beam and interferometer synthesized beam, respectively. The joint dirty image is then deconvolved with the joint dirty beam. This technique was adopted for the BIMA-SONG survey [@hel03]. @cor88, and recently @sta02, also discussed a non-linear combination technique through joint deconvolution with the maximum entropy method (MEM).
The third method was introduced by @wei01 and operates in the Fourier plane. The deconvolved interferometer map and single-dish map are Fourier transformed and then the central $uv$-space from interferometer data is replaced with single-dish data.
This paper describes the observations, data reduction, and combination of CARMA and NRO45 data of M51. Our procedure unifies the imaging techniques for interferometer mosaic data, heterogeneous array data, and the combined data of single-dish and interferometer. Earlier data reduction and results have been published [@kod09]. The method and results are the same, but we have re-calibrated and reduced the entire data set using higher accuracy calibration data. In §\[sec:carma\] and §\[sec:nro45\], we describe the CARMA and NRO45 observations and calibration. The deconvolution (such as CLEAN) is detailed in §\[sec:imaging\] for three cases: (a) homogeneous array, single-pointing observations, (b) heterogeneous array, single-pointing observations, and (c) heterogeneous array, mosaic observations. The weighting scheme in co-adding the images from a heterogeneous array (with multiple primary beams) is discussed in §\[sec:weighting\]. The result from this subsection is also essential for the combination of interferometer and single-dish data. The conversion of a single-dish map to visibilities is explored in §\[sec:comb\], and §\[sec:fidelity\] discusses the resultant map and image fidelity. A summary of the requirements of single-dish observations for the combination are explained in §\[sec:req\]. Comments on other combination methods are given in §\[sec:othermethods\]. The summary is in §\[sec:summary\], and sensitivity matching between single-dish and interferometer observations is discussed in Appendix \[sec:senmatch\].
CARMA {#sec:carma}
=====
Observations {#sec:obscarma}
------------
High resolution observations of the Whirlpool galaxy M51 in the CO($J=1-0$) line were performed with the Combined Array for Research in Millimeter Astronomy (CARMA) during the commissioning and early science phases of CARMA construction (2006-2007). CARMA is a recently-developed interferometer, combining the six 10-meter antennas of the Owens Valley Radio Observatory (OVRO) millimeter interferometer and the nine 6-meter antennas of the Berkeley-Illinois-Maryland Association (BIMA) interferometer. The increase to 105 baselines provides superior $uv$-coverage and produces high image fidelity. The C and D array configurations were used, with baseline lengths spanning 30-350 m (C array) and 11-150 m (D array).
The observations started with the heterodyne SIS receivers from OVRO and BIMA. The typical system temperature of these original receivers was $\sim 200$ K in double-side band. The receivers of the 15 antennas were being upgraded one antenna at a time during the period of observations, but the process was not completed before these observations finished. The system temperature of the new replacement receivers is typically $\sim 100$ K at 115 GHz.
The first-generation CARMA digital correlators were used as a spectrometer. They had three dual bands (i.e. lower and upper side bands) for all 105 baselines. Each band had five configurations of bandwidth – 500, 62, 31, 8, and 2 MHz – which have 15, 63, 63, 63, and 63 channels, respectively. We switched the configuration of band 1, 2, 3 between (band 1, 2, 3) = (500, 500, 500) for gain calibration quasar observations and (band 1, 2, 3) = (62, 62, 62) for target integrations. This “hybrid” configuration ensures both a sufficient detection of the gain calibrator 1153+495 with the total 3 GHz bandwidth (i.e. 3 bands $\times$ 2 side bands $\times$ 500MHz bandwidth) and a sufficiently wide velocity coverage for the main galaxy NGC 5194. The total bandwidth is 149.41 MHz after dropping the 6 edge channels at each side, which could be noisier than the central channels. The companion galaxy NGC 5195 was not included in the velocity coverage, although it was detected in the NRO45 map (§\[sec:obsnro45\]).
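The quoted total bandwidth can be reproduced arithmetically if one assumes a native channel width of $62.5/64\simeq 0.977$ MHz for the 62 MHz mode (an assumption on our part, consistent with the Hanning-smoothed resolution of 1.954 MHz quoted in §\[sec:redcarma\]):

```python
# Consistency check of the quoted total bandwidth. Assumption: the
# "62 MHz" mode has a native channel width of 62.5/64 MHz ~ 0.977 MHz.
channel_width = 62.5 / 64          # MHz, assumed
channels_kept = 63 - 2 * 6         # 63 channels minus 6 dropped per edge
n_bands = 3
total_bw = n_bands * channels_kept * channel_width
assert round(total_bw, 2) == 149.41   # MHz, as quoted
```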
The hybrid mode observations require a special calibration for amplitude and phase offsets between bands and between configurations. We observed a bright quasar by changing the correlator configurations in time sequence: 1. (band 1, 2, 3) = (500, 500, 500), 2. (62, 62, 62), 3. (500, 62, 62), 4. (62, 500, 62), and 5. (62, 62, 500). Each configuration is integrated for 5 min, and the whole sequence takes 25 min in total. We used the bright quasars 3C273, 3C279, or 3C345, depending on availability during the observations. For any pair of band and bandwidth, this sequence has simultaneous integrations which can be used to calibrate the phase offset and amplitude scale between bands. The calibration observations typically took 45 min including the radio pointing and antenna slew. These integrations were used for passband calibration as well.
An individual observation consisted of a 4-10 h track. The total observing time (after flagging tracks under bad weather) is about 230 h ($\sim 30$ tracks). A typical track starts with radio pointing observations of a bright quasar available at the time, then observes a flux calibrator (e.g. a planet), and repeats the 25 min observing cycle of gain calibrator ($\sim$ 5 min) and target (20 min including antenna slew for mosaic). The passband/hybrid observations were performed in the middle of a track when M51 was at a high elevation ($\sim 80$ deg). At such a high elevation, each antenna slew between M51 and the calibrator takes a considerable amount of time. Observing a passband calibrator at a lower elevation avoids this loss. The system temperature (Tsys) was measured every gain calibrator cycle, and the atmospheric gain variation was corrected in real time using Tsys. We observed 1153+495 as a gain calibrator.
The telescope pointings were corrected every 4 h during the night and every 2 h during daytime. The last $\sim 10$ tracks of the 30 total tracks also included an additional optical pointing procedure developed by @cor10. The optical procedure can operate during daytime, as well as at night, and a pointing correction was made every gain calibration cycle. This method measures the offset between radio and optical pointing vectors at the beginning of a track (which is stable over periods much longer than the typical observation). During the observing cycle of gain calibrator and target, the pointing drift, typically several arcsec per hour, is adjusted with an optical camera using a bright star close to the gain calibrator. The overhead of the pointing adjustment is less than 1 min.
We mosaiced the entire $6 \arcmin .0 \times 8\arcmin .4$ disk of M51, with the disk defined by optical images and shown in Figure \[fig:pointing\], in 151 pointings with Nyquist sampling of the 10m antenna beam (FWHM of 1 arcmin for the 115GHz CO J=1-0 line). Ideally, every pointing position would be observed every M51 observing cycle ($\sim$ 20 min duration) to maintain uniform data quality and $uv$ coverage across the mosaiced area. However, the overhead for slewing is significant for the large mosaic. It is as long as 6 sec per slew, and about 15 min total for 151 pointings. We therefore observed every third pointing (total $\sim50$ pointings) in each observation cycle to reduce the overhead. Three consecutive cycles cover all 151 pointings. Each track started from a pointing randomly chosen from the table of the 151 pointings, which helps maintain uniform data quality among the pointings. The resultant CARMA $uv$ coverage is very similar at all pointings, and an example of the $uv$ coverage at the central pointing is shown in Figure \[fig:uvcov\].
The primary flux calibrators, Uranus, Neptune, and MWC349, were observed in most tracks. We monitored the flux of gain calibrator 1153+495 every month over the course of the observations. The flux of 1153+495 varied slowly between 0.7 and 1.3 Jy. The CARMA observatory is separately monitoring the flux variations of common passband calibrators, and our flux measurements are consistent with the observatory values.
Calibration {#sec:redcarma}
-----------
The data were reduced and calibrated using the Multichannel Image Reconstruction, Image Analysis, and Display (MIRIAD) software package [@sau95]. We developed additional commands/tasks to investigate and to reduce the large amount of data effectively, and to combine interferometer and single-dish data.
The initial set of calibrations comprises the routines required for most CARMA data reduction. First, we flag the data with problems such as antenna shadowing and bad Tsys measurements. Second, we apply the correction for variation of optical fiber cable length, namely line length correction. CARMA is a heterogeneous array of two types of antennas (i.e., 6m and 10m), and the optical fiber cables that connect the antennas to the control building are mounted differently for the 10m and 6m dishes. The time variations of the cable lengths due to thermal expansion are therefore different, which results in phase wraps in the baselines between 6m and 10m antennas. The changes of the cable lengths were monitored to an accuracy of 0.1 picosecond by sending signals from the control building and measuring their round-trip travel time. The changes are stored in MIRIAD data and are used for the line-length correction. Third, we smooth the spectra with the Hanning window function to reduce the high side-lobes in raw spectra from the digital correlators. The spectral resolution is lowered by a factor of 2 and becomes 1.954 MHz (5.08 km/s at the CO($J=1-0$) frequency).
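Hanning smoothing amounts to convolving each spectrum with the three-point kernel $[1/4, 1/2, 1/4]$, which suppresses the side-lobes of the digital correlator at the cost of doubling the effective channel width. A minimal sketch (our own illustration, not the MIRIAD implementation):

```python
import numpy as np

# Hanning smoothing as described above: convolve each spectrum with the
# three-point kernel [0.25, 0.5, 0.25]. This suppresses the side-lobes
# of the raw correlator spectra while doubling the effective channel
# width (0.977 MHz -> 1.954 MHz in these data).
def hanning_smooth(spectrum):
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(spectrum, kernel, mode="same")

# A flat spectrum is unchanged away from the edges.
flat = np.ones(16)
smoothed = hanning_smooth(flat)
assert np.allclose(smoothed[1:-1], 1.0)
```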
Calibrations for passband and hybrid correlator configuration were made using the sequence of hybrid configuration observations described in §\[sec:obscarma\]. We first separate 500 MHz and 62 MHz integrations from the sequence and make two MIRIAD data sets containing only 500MHz data or 62 MHz data. These data sets are used to derive and apply passbands. The passband calibration removes the phase and amplitude offsets among Band 1, 2, and 3 in the 500 and 62 MHz modes. An offset/passband calibrator is significantly detected even in 10 sec integration both in the 62 MHz and in the 500 MHz mode. We derive the phase offset and amplitude scale between the 500 MHz and 62 MHz modes by comparing the visibilities from the two modes on the 10 sec integration basis, and averaging them over time to derive single values for the phase offset and amplitude scale. We applied these calibrations to the entire track, which removes the phase and amplitude offsets between gain calibrator and target integrations. Errors of the hybrid calibration are small compared to the other errors and are only a few percent in amplitude and a few degrees in phase.
The last set of calibrations includes the standard phase calibrations to compensate for atmospheric and instrumental phase drifts. We did not use the gain calibrator integrations with large phase scatters (due to bad weather) and flagged the target integrations in the cycles immediately before and after the bad gain data. The absolute fluxes of the gain calibrator were measured monthly against a planet (§\[sec:obscarma\]) and were applied to target data.
The resulting $1\sigma$ noise level of the CARMA data is 27 mJy/beam in each $10{\, {\rm km \, s^{-1}}}$ channel.
Nobeyama Radio Observatory 45m Telescope {#sec:nro45}
========================================
Observations {#sec:obsnro45}
------------
We obtained total power and short spacing data with the 5x5-Beam Array Receiver System [BEARS; @sun00] on the Nobeyama Radio Observatory 45m telescope (NRO45). The FWHM of the NRO45 beam is $15\arcsec$ at 115 GHz. We configured the digital spectrometer [@sor00] to 512 MHz bandwidth at 500 kHz channel resolution. This is wide enough to cover the entire M51 system (both NGC 5194 and 5195). Hanning smoothing was applied to reduce the spectral side-lobes, and therefore the resolution of the raw data is 1 MHz.
We scanned M51 in the RA and DEC directions using the On-The-Fly (OTF) mapping technique [@man07; @saw08]. We integrated OFF positions around the galaxy before and after each $\sim 1$ min OTF scan. A scan starts from an emission-free position at one side of the galaxy and ends on another emission-free position at the other side. Spectra are read-out every 0.1 second interval during the scan. The receiver array was rotated by $7 \deg$ with respect to the scan directions, so that the 25 beams trace regular stripes with a $5\arcsec$ separation. In combining the RA and DEC scans, the raw data form a lattice with $5\arcsec$ spacing. This fine sampling, with respect to the beam size of $15\arcsec$, is necessary to reproduce the $uv$ data up to the 45m baseline (i.e. the diameter of NRO45), since we need the Nyquist sampling ($5.96\arcsec$) of $\lambda_{\rm CO}/D=11.92\arcsec$, where $\lambda_{\rm CO}$ is the wavelength ($=2.6\,\rm mm$) and $D$ is the antenna diameter [@man07]. If the sampling is coarser than $5.96\arcsec$, aliasing in Fourier space significantly contaminates even shorter-baseline data. For example, if the sampling spacing is only $10.3\arcsec$ [i.e., typical sampling in past NRO45 observations; @kun07], the $uv$ data down to the $\sim 7\,\rm m$ baseline is contaminated (Figure \[fig:uvnro45\]), and cannot be combined with the interferometer data.
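The quoted sampling requirement follows directly from $\lambda_{\rm CO}/D$; a quick arithmetic check:

```python
import math

# Nyquist sampling requirement for recovering baselines out to the
# 45 m dish diameter: sample spacing <= (lambda/D)/2.
wavelength = 2.6e-3                      # m, CO(J=1-0)
dish = 45.0                              # m, NRO45 diameter
rad_to_arcsec = 180.0 / math.pi * 3600.0
lam_over_d = wavelength / dish * rad_to_arcsec
assert round(lam_over_d, 2) == 11.92     # arcsec
assert round(lam_over_d / 2, 2) == 5.96  # arcsec, Nyquist spacing
```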
The typical system temperature in double side band was $\sim320$ K. The pointing of the telescope was checked every $\sim 45$ min and was accurate to within $\sim 2$-$3\arcsec$. BEARS is an array of double-side band (DSB) receivers and provides the antenna temperature $T_a^*(\rm DSB)$ in DSB. The upper/lower side band ratio, namely the scaling factor, was measured by observing Orion IRC2 using both BEARS and the single-side band (SSB) receiver S100 and taking the ratios of the two measurements. The error in the measurements is a few percent. The total observing time under good weather conditions is about 50 hours.
Calibration {#sec:nro45red}
-----------
The ON/OFF calibration to account for the sky background level was applied after the observations. We interpolated between two OFF-sky integrations before and after each OTF scan ($\sim1$ minute long), which reduced non-linear swells in the spectral baselines significantly. We used the [*NOSTAR*]{} data reduction package developed at the Nobeyama Radio Observatory [@saw08], converted the flux scale from $T_a^*(\rm DSB)$ of BEARS to $T_a^*(\rm SSB)$ of S100, subtracted linear spectral baselines, and flagged bad integrations.
The 5" lattice of data from the observations was re-gridded with a spheroidal smoothing function, resulting in a final resolution of $19.7\arcsec$. We used the grid size of $5.96\arcsec$, which is the Nyquist sampling of the 45 m spacing in Fourier space; this pixel scale is necessary to prevent artifacts from the aliasing effect (§\[sec:obsnro45\]).
We made maps of the RA and DEC scans separately. The two maps were co-added after subtracting spatial baselines in each scan direction to reduce systematic errors in the scan direction. Note that for OTF mapping, the sharing of an OFF among many ON scans may introduce noise correlations, primarily at small spatial frequencies in the Fourier space. @eme88 reduced such correlated noise using the basket-weave method, which down-weights the data at small spatial frequencies in the scan directions when the RA and DEC maps are added. We compared the spatial-baseline subtraction and basket-weave methods, and found that both diminish the large-scale noise well. The difference was subtle, but the former gave a slightly smaller RMS noise, and thus, we decided to use the spatial-baseline method.
The antenna temperature $T_a^*(\rm SSB)$ was converted to the main beam temperature $T_{\rm mb}$, using the main beam efficiency of $\eta_{\rm mb} = 0.4$ and $T_{\rm mb} = T_a^*(\rm SSB)/\eta_{\rm mb}$.
The flux of the final NRO45 map is consistent with most previous measurements within a typical error of millimeter-wave measurements (10-20%). It is compared with four other results: an image from the National Radio Astronomy Observatory 12 m telescope [NRAO12; @hel03], two previous measurements at NRO45 [@nak94; @mat99], and our new CARMA data (§\[sec:carma\]). The fluxes from @hel03, @mat99, and the new CARMA observations are 94%, 95%, and 93% of that of the new NRO45 map, respectively. For the comparisons, we re-sampled the new map to match the area coverage of the other maps. For the comparison with CARMA, the CARMA [*uv*]{}-distribution is generated from the new NRO45 map (as discussed in §\[sec:nrouv\], but for Hatcreek, OVRO, and CARMA primary beams), and the positive fluxes (above about $4\sigma$) in the dirty maps are compared to measure the flux ratio. We used a Gaussian taper (FWHM=$20\arcsec$) to make the dirty maps, which roughly reproduces the weight distribution of the NRO45 data.
Only the map of @nak94 [distributed through @kun07] shows a significant discrepancy: a factor of 1.82 higher total flux than the new NRO45 map. We attribute this discrepancy to an error in the old map, since all the other measurements are consistent. Among these measurements, we decided to rely on the CARMA flux, because we had the best understanding of its flux calibration process and because it is based on multiple flux calibrations over the duration of the observations. We scaled the flux of the NRO45 map to match the CARMA flux (i.e., multiplied it by 0.93).
The $1\sigma$ noise level of the NRO45 data is 14.7 mK in $T_a^*(\rm SSB)$, 36.7 mK in $T_{\rm mb}$, and 155 $\rm mJy/beam$ in a $10{\, {\rm km \, s^{-1}}}$ channel.
Imaging Heterogeneous-Array Mosaic Data {#sec:imaging}
=======================================
We use MIRIAD for the joint deconvolution of the multi-pointing CARMA and NRO45 data. The method and algorithm for mosaic data with a homogeneous array are described in @sau96. Beyond mosaicing, our imaging involves two additional complications: a heterogeneous array, and the combination with single-dish data. We describe the essence of joint deconvolution with MIRIAD, with an emphasis on the case of CARMA and NRO45.
Two points are of particular importance: the treatment of the different primary beam patterns, and the weights of the data from the different primary beam patterns and from the single dish. Here, we illustrate these two points and define our notation.
Correction for primary beam attenuation is simple for a homogeneous array. All antennas have the same primary beam pattern $P(l,m)$, and the primary beam correction is $$I(l,m) = \frac{\bar{I}(l,m)}{P(l,m)},
\label{eq:pb1}$$ where the primary-beam corrected image is denoted $I$ and the uncorrected image is denoted $\bar{I}$. The sky coordinates are $(l,m)$. The uncorrected image $\bar{I}$ has two advantages: the synthesized beam $\bar{B}$ (i.e., point spread function, PSF) and noise level are position-invariant, which simplifies the process of deconvolution (§\[sec:homsin\]).
For a heterogeneous array, the differences between primary beam patterns have to be taken into account. For example, CARMA has three baseline types (i.e., antenna pairs), which result in three primary beam patterns – called “H” for Hatcreek (6m-6m dish pair), “O” for OVRO (10m-10m), and “C” for CARMA baseline types (6m-10m). Using appropriate weights $W_{\rm H}$, $W_{\rm O}$, and $W_{\rm C}$ (§\[sec:weighting\]), the images from “O”, “H”, and “C” baselines can be added as $$\begin{aligned}
I(l,m) &=&
W_{\rm H} \frac{\bar{I}_{H}}{P_{\rm H}} +
W_{\rm O} \frac{\bar{I}_{O}}{P_{\rm O}} +
W_{\rm C} \frac{\bar{I}_{C}}{P_{\rm C}} \nonumber \\
&=&
W_{\rm H} I_{H} +
W_{\rm O} I_{O} +
W_{\rm C} I_{C}.
\label{eq:pb3}\end{aligned}$$ The weight $W$ is a function of position $(l,m)$. The co-added image has been corrected for primary beam attenuation. In the co-added plane, the synthesized beam pattern $B$ and noise level are position-variant, which complicates the deconvolution.
Homogeneous Array, Single-Pointing Data {#sec:homsin}
---------------------------------------
Traditionally, the imaging of interferometer data has been performed as follows. A set of [*one*]{} dirty map $\bar{I}^{\rm dm}(l,m)$ and [*one*]{} synthesized beam pattern $\bar{B}(l,m)$ is made from the visibilities. The dirty map $\bar{I}^{\rm dm}$ is deconvolved with $\bar{B}$. For example, the deconvolution scheme CLEAN replaces the pattern $\bar{B}(l-l_0,m-m_0)$, centered at an emission peak at ($l_0$, $m_0$), with an elliptical Gaussian to reduce the sidelobes of $\bar{B}$. CLEAN usually runs in the $\bar{I}^{\rm dm}$ domain; the synthesized beam $\bar{B}$ and noise level $\sigma$ are position-invariant, so their treatment is simple. The CLEANed image $\bar{I}^{\rm mp}$ is corrected for primary beam attenuation (eq. (\[eq:pb1\])), providing the final map $I^{\rm mp}$. We note again that primary beam uncorrected and corrected images (of any kind) are differentiated with a “bar” (e.g., $\bar{I}^{\rm dm}$ vs. $I^{\rm dm}$ and $\bar{I}^{\rm mp}$ vs. $I^{\rm mp}$).
The deconvolution is also possible in the $I^{\rm dm}$ domain. The synthesized beam pattern $B$ and noise level are [*not*]{} position-invariant. Thus, we define a position-variant synthesized beam pattern, $$B(l,m; l_0,m_0) = \frac{\bar{B}(l-l_0,m-m_0)}{P(l,m)},
\label{eq:synbm}$$ centered at $(l_0,m_0)$, and a position-variant noise level $\sigma/P(l,m)$. Emission peaks are searched for on the basis of signal-to-noise ratio. In MIRIAD, a set of the primary-beam corrected $I$ and the uncorrected $\bar{B}$ is calculated from the visibilities, and the command “mossdi” (i.e., CLEAN) calculates $B$ with eq. (\[eq:synbm\]) at each peak position $(l_0,m_0)$.
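As a toy illustration of eq. (\[eq:synbm\]), the position-variant beam can be built as a closure over the (position-invariant) dirty beam and the primary beam. The analytic Gaussian forms below are illustrative assumptions, not the MIRIAD internals:

```python
import math

def position_variant_beam(B_bar, P):
    """Return B(l, m; l0, m0) = B_bar(l - l0, m - m0) / P(l, m)."""
    def B(l, m, l0, m0):
        return B_bar(l - l0, m - m0) / P(l, m)
    return B

# Toy example: Gaussian dirty beam, wider Gaussian primary beam.
B_bar = lambda dl, dm: math.exp(-(dl**2 + dm**2) / 2.0)
P = lambda l, m: math.exp(-(l**2 + m**2) / 8.0)
B = position_variant_beam(B_bar, P)
```

At the phase center the corrected beam peaks at unity, while a peak away from the center is boosted by $1/P$, mirroring the elevated noise level $\sigma/P(l,m)$ there.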
Heterogeneous Array, Single Pointing Data {#sec:hetero1}
-----------------------------------------
The deconvolution in the image domain is applicable to heterogeneous array data. The joint dirty image $I^{\rm dm}$ is defined as a linear summation of three dirty maps $I^{\rm dm}_{\rm H}$, $I^{\rm dm}_{\rm O}$, and $I^{\rm dm}_{\rm C}$ (eq. (\[eq:pb3\])). The corresponding synthesized beam $B$ is also a linear summation with the same weights, $$\begin{aligned}
B(l,m; l_0,m_0) &=& W_{\rm H}(l,m) \frac{\bar{B}_{H}(l-l_0, m-m_0)}{P_{H}(l,m)} \nonumber \\
&+& W_{\rm O}(l,m) \frac{\bar{B}_{O}(l-l_0, m-m_0)}{P_{O}(l,m)} \nonumber \\
&+& W_{\rm C}(l,m) \frac{\bar{B}_{C}(l-l_0, m-m_0)}{P_{C}(l,m)}.
\label{eq:bm3}\end{aligned}$$
The MIRIAD command “invert” with the mosaic option outputs a set of [*one*]{} joint dirty map $I^{\rm dm}$ (primary beam corrected) and [*three*]{} synthesized beams $\bar{B}_{\rm H}$, $\bar{B}_{\rm O}$, and $\bar{B}_{\rm C}$ (uncorrected). The command “mossdi” finds a peak emission in $I^{\rm dm}$, and calculates $B$ at its position with eq. (\[eq:bm3\]).
In the case of a heterogeneous array, such as CARMA, the primary beam correction always needs to be applied to the dirty map. Thus, even for single-pointing observations, we always use “options=mosaic” for “invert”.
Heterogeneous Array, Mosaic Data {#sec:hetmos}
--------------------------------
The deconvolution of mosaic data with a heterogeneous array is a further extension of the same procedure. Eq. (\[eq:pb3\]) is extended as $$I (l,m) = \sum_{b,p} W_{b,p} \frac{\bar{I}_{b,p}}{P_{b,p}} = \sum_{b,p} W_{b,p} I_{b,p},
\label{eq:pbm}$$ where the summation is taken over all baseline types $b$ and pointings $p$, and $W_{b,p}$ is the weight for baseline type $b$ and pointing $p$. In practice, $P$ is truncated at some radius, and only a subset of pointings contributes to a given position.
The joint synthesized beam is defined as in eq. (\[eq:bm3\]), but includes all pointings. In the case of the CARMA M51 observations, the command “invert” with “options=mosaic” outputs [*one*]{} joint dirty map and 453 synthesized beams (= 3 baseline types $\times$ 151 pointings). A joint synthesized beam $B$ is calculated from the 453 synthesized beams for every emission peak in $I^{\rm dm}$.
The spatial resolution is calculated by taking a weighted average of all 453 synthesized beams using $W_b$ ($b=$ H, O, C) and by fitting a Gaussian. In theory, the sizes of the synthesized beams differ among the pointings, since the $uv$ coverage is not exactly the same for all of the pointings. In practice, we designed the observations to provide uniform $uv$ coverage for all pointings (§\[sec:obscarma\]). We therefore adopt a single beam size over the whole mosaic.
Weighting {#sec:weighting}
---------
The noise level is position-dependent, $\sigma /P(l,m)$, in the image domain. Therefore, the weights $W$ are defined as $$W_b(l,m) \propto \left( \frac{P_b(l,m)}{\sigma}\right)^2
\label{eq:w}$$ for $b=$ H, O, and C, and are normalized as $W_{\rm H} + W_{\rm O} + W_{\rm C} = 1$ at each position $(l,m)$. The theoretical noise $\sigma$ depends on baseline type $b$, and is the same as the imaging sensitivity $\Delta S^{\rm i}$ discussed below.
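The weighting of eq. (\[eq:w\]) together with its per-position normalization can be sketched as follows (a hypothetical NumPy helper, assuming one scalar noise $\sigma_b$ per baseline type):

```python
import numpy as np

def mosaic_weights(pbeams, sigmas):
    """W_b(l, m) proportional to (P_b(l, m) / sigma_b)^2,
    normalized so that the weights sum to 1 at each pixel."""
    raw = [(P / s) ** 2 for P, s in zip(pbeams, sigmas)]
    total = sum(raw)
    return [r / total for r in raw]
```

A baseline type with half the noise receives four times the weight, and the weight of each type falls off with the square of its primary beam response away from the pointing center.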
### Thermal Noise and Its Coefficient {#sec:noise}
Two sensitivities, the fringe sensitivity $\Delta S^{\rm f}$ and the imaging sensitivity $\Delta S^{\rm i}$ \[$\equiv \sigma$\], are important [@tay99 see their section 9]. The fringe sensitivity is the sensitivity per visibility. The theoretical fringe sensitivity $\Delta S^{\rm f}$ for each visibility is calculated from the system temperature $T_{\rm sys}$, bandwidth $B$, and integration time of the visibility $t_{\rm vis}$ as $$\Delta S^{\rm f} = C_{ij} \sqrt{\frac{ T_{{\rm sys}, i} T_{{\rm sys}, j}}{B \cdot t_{\rm vis}}},
\label{eq:dsk}$$ where $$C_{ij} = \frac{2 k_{\rm B}}{ \sqrt{(\eta_{a, i} A_i)(\eta_{a, j} A_j)} } \frac{1}{\sqrt{2} \eta_q}.
\label{eq:cij}$$ The aperture efficiency $\eta_a$ and collecting area $A$ of antennas $i$ and $j$ are related to the beam solid angle $\Omega_{\rm A}$ by $1/(\eta_a A)= \Omega_{\rm A} / \lambda^2$. Here $k_{\rm B}$ is the Boltzmann constant, and the last factor $1/\sqrt{2}\eta_q$ is due to the backend (i.e., digitizer and correlator), where $\eta_q$ is the quantum efficiency [@roh00]. $C_{ij}$ is approximated as a constant for a homogeneous array, since the parameters are very similar for all antennas. In the case of a heterogeneous array, $C_{ij}$ depends on the baseline type. Parameters are listed in Table \[tab:ant\].
The imaging sensitivity is the root-mean-square (RMS) noise in a final image, and depends on control parameters (see Appendix \[sec:app\]; e.g., natural and uniform weighting). If natural weighting is employed, the imaging sensitivity is simply a statistical summation of the fringe sensitivities, $1/(\Delta S^{\rm i})^2 = \sum_k 1/(\Delta S^{\rm f}_k)^2$. For a homogeneous array ($C_{ij}$ is constant), it is $$\Delta S^{\rm i} [\equiv \sigma] = C_{ij} \sqrt{\frac{ T_{{\rm sys}, i} T_{{\rm sys}, j}}{B \cdot t_{\rm tot} }}, \label{eq:senim}$$ assuming that $T_{\rm sys}$ is constant during the observations. The total integration time is $t_{\rm tot} = N_{\rm vis} t_{\rm vis}$, where $N_{\rm vis}$ is the number of visibilities.
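The natural-weighting summation $1/(\Delta S^{\rm i})^2 = \sum_k 1/(\Delta S^{\rm f}_k)^2$ is simple enough to sketch directly:

```python
def imaging_sensitivity(fringe_sens):
    """RMS noise of a naturally weighted image from per-visibility
    fringe sensitivities: 1/dS_i^2 = sum_k 1/dS_f,k^2."""
    return sum(1.0 / s ** 2 for s in fringe_sens) ** -0.5
```

For $N_{\rm vis}$ visibilities of equal sensitivity $\Delta S^{\rm f}$ this reduces to $\Delta S^{\rm f}/\sqrt{N_{\rm vis}}$, consistent with eq. (\[eq:senim\]) under $t_{\rm tot} = N_{\rm vis} t_{\rm vis}$.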
The Combination of NRO45 with CARMA {#sec:comb}
===================================
The NRO45 image is converted to visibilities and combined with the CARMA data in $uv$ space. Here we discuss the four steps of the combination: (1) generating visibilities from the single-dish image, (2) calculating the weights of the single-dish visibilities in the same form as for interferometer visibilities, (3) determining the synthesized beam size, and (4) the imaging/deconvolution scheme. A flow chart of the procedure is shown in Figure \[fig:combflow\].
Converting the NRO45 Map to Visibilities {#sec:nrouv}
----------------------------------------
To produce NRO45 visibilities, we first deconvolve the NRO45 map with the NRO45 point spread function (PSF), apply a dummy primary beam, generate a Gaussian visibility distribution in $uv$ space, and calculate the amplitude and phase of the visibilities from the deconvolved, primary-beam applied NRO45 map (Figure \[fig:combflow\]). The following sections describe these steps.
One limitation arises from the current software, though it should be easy to remove in future software development: NRO45 visibilities must have the same form as those of interferometers, and therefore a dummy primary beam needs to be applied to the NRO45 map.
### Deconvolution with the NRO45 Beam {#sec:deconvnro45}
A NRO45 map is a convolution of the true emission distribution with a point spread function (PSF). In the case of OTF mapping (§\[sec:obsnro45\]), the PSF is not literally the NRO45 beam, but the convolution of the NRO45 beam with the spheroidal function used to re-grid the observed data onto the map grid (§\[sec:nro45red\]). The intrinsic Gaussian FWHM of the NRO45 beam is $15\arcsec$, which is degraded to $19.7\arcsec$ after the re-gridding. The NRO45 map needs to be deconvolved with this PSF.
Figure \[fig:sen\_data\] shows the sensitivity (noise) as a function of $uv$-distance (baseline length). It depends on the Fourier-transformed PSF, ${\rm FT\{PSF\}}$, as $\propto 1/\sqrt{\rm FT\{PSF\}}$ (see Appendix \[sec:senmatch\]). The standard deviation of ${\rm FT\{PSF\}}$ is $\sigma_{\rm F}=3.9 \,\rm k\lambda$ for a Gaussian PSF with a FWHM of $19.7\arcsec$. Thus, the noise increases significantly beyond 4-6 k$\lambda$ (i.e. $\sqrt{2} \sigma_{\rm F}$). Figure \[fig:sen\_data\] shows that the NRO45 sensitivity is comparable to that of CARMA up to 4-6 k$\lambda$ and deviates beyond that. Given the resultant sensitivities, we decided to flag the data at $>4 \,\rm k\lambda$. The long baselines have negligible effects if we use only the weight based on sensitivity [i.e. [*robust=+2*]{}, @bri95], but could introduce elevated errors when [*robust*]{}$\,< +2$.
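The quoted $\sigma_{\rm F}$ follows from the Gaussian PSF alone: a Gaussian of standard deviation $\sigma_x$ in the image domain transforms to a Gaussian of standard deviation $1/(2\pi\sigma_x)$ in the $uv$ domain. A minimal sketch, assuming a purely Gaussian PSF:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # arcsec -> radians

fwhm = 19.7 * ARCSEC                              # effective PSF FWHM [rad]
sigma_x = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
sigma_F = 1.0 / (2.0 * math.pi * sigma_x)         # uv-domain std [wavelengths]

def relative_noise(uvdist):
    """Single-dish noise vs uv distance, proportional to 1/sqrt(FT{PSF})."""
    ft_psf = math.exp(-uvdist ** 2 / (2.0 * sigma_F ** 2))
    return 1.0 / math.sqrt(ft_psf)
```

This gives $\sigma_{\rm F} \approx 3.9\,\rm k\lambda$; the relative noise has risen by roughly 30% at 4 k$\lambda$ and about doubles by $\sim 6.5$ k$\lambda$, which motivates flagging the data beyond 4 k$\lambda$.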
CARMA and NRO45 are complementary in terms of $uv$-coverage and sensitivity (Figures \[fig:uvcov\] and \[fig:sen\_data\]). @kur09 suggested that the single-dish diameter should be 1.7 times as large as the minimum baseline of the interferometer data, which is $\sim 18$ meters in our case. In practice, however, a 45 m class telescope appears necessary to satisfy the sensitivity requirement within a realistic observing time. The sensitivity matching between the NRO45 and CARMA data is discussed in Appendix \[sec:senmatch\].
### Applying a Dummy Primary Beam
The imaging tasks in MIRIAD assume that all visibilities are from interferometric observations, and apply a primary beam correction in the process of imaging. Consequently, the NRO45 visibilities need to be attenuated by a pseudo primary beam pattern $P_{\rm N}$. The choice of $P_{\rm N}$ is arbitrary, and we employ a Gaussian primary beam with a FWHM of 2 arcmin. $P_{\rm N}$ is applied to the deconvolved NRO45 map at each of the 151 CARMA pointings separately. Since the map will be divided by $P_{\rm N}$ during the deconvolution, the choice of $P_{\rm N}$ does not affect the result. However, it is safer to make $P_{\rm N}$ at least twice as large as the separation of the pointings, so that the entire field is covered at (or above) the Nyquist sampling rate.
We note that this multiplication by a primary beam in the image domain is equivalent to a convolution in the Fourier domain. It smoothes the sensitivity distribution in $uv$-space, and therefore the weights discussed in §\[sec:nrowei\]. However, the size of the primary beam in Fourier space is only 1/6 of that of the NRO45 beam, so this effect should be negligible.
### Generating a Gaussian Visibility Distribution
The distribution of visibilities in $uv$ space should reproduce the NRO45 beam (more precisely, the PSF in §\[sec:nrouv\]) as a synthesized beam in image space. The Fourier transformation of a Gaussian PSF is a Gaussian. Therefore, visibilities are distributed to produce a Gaussian density profile in $uv$ space. The size of the Gaussian distribution is set to reproduce the beam size of $19.7\arcsec$. We manually add a visibility at ($u$, $v$) = (0,0), so that the zero-spacing is always included. The number of visibilities $N_{\rm vis}$ and integration time per visibility $t_{\rm int}$ are control parameters, and are discussed in §\[sec:nrowei\].
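The Gaussian $uv$ distribution with the added zero-spacing point can be generated as below. This is a hypothetical NumPy sketch, assuming equal per-visibility weights (so that the Fourier transform of the sampling density is the Gaussian synthesized beam); the MIRIAD-side bookkeeping (integration times, headers) is omitted:

```python
import numpy as np

def gaussian_uv_points(n_vis, sigma_uv, seed=0):
    """Draw (u, v) samples with a Gaussian density of standard
    deviation sigma_uv [wavelengths], prepending one zero-spacing
    visibility so the total-flux constraint is always present."""
    rng = np.random.default_rng(seed)
    uv = rng.normal(scale=sigma_uv, size=(n_vis - 1, 2))
    return np.vstack([[0.0, 0.0], uv])
```

With $\sigma_{uv} \approx 3.9\,\rm k\lambda$ this sampling density reproduces the $19.7\arcsec$ Gaussian beam discussed in §\[sec:deconvnro45\].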
### Resampling
From the Gaussian visibility distribution and the primary beam attenuated maps, the visibility amplitudes and phases are derived, which gives the NRO45 visibilities.
Theoretical Noise and Other Parameters {#sec:nrowei}
--------------------------------------
The relative weights of the CARMA and NRO45 visibilities are important for a proper combination. MIRIAD requires a weight (sensitivity) per individual visibility for imaging, and we calculate the weight based on the RMS noise of the NRO45 map. For the interferometer data (§\[sec:noise\]), we start from the fringe sensitivity $\Delta S^{\rm f}$ and calculate the imaging sensitivity $\Delta S^{\rm i}$ by summing up the $\Delta S^{\rm f}$s of all visibilities. Here, we start from the RMS noise of the map (i.e., $\Delta S^{\rm i}$) and determine $\Delta S^{\rm f}$ and its coefficient.
The theoretical noise of a single-dish map, in main beam temperature $T_{\rm mb}$, is $$\Delta T_{\rm mb} = \frac{T_{\rm sys}}{\eta_q \eta_{\rm mb} \sqrt{B \cdot t_{\rm tot}}}, \label{eq:deltatmb}$$ where $\eta_q$ and $\eta_{\rm mb}$ are the quantum efficiency of the spectrometer and the main beam efficiency of the antenna, respectively, and $B$ and $t_{\rm tot}$ are the bandwidth and total integration time, respectively. (Note that the contribution to the noise from the OFF position integrations should be negligible in OTF mapping [@saw08].) The total integration time (per point) of the NRO45 map is derived from the RMS noise in the map using this equation.
The imaging sensitivity, corresponding to eq. (\[eq:senim\]), is calculated by converting the unit of eq. (\[eq:deltatmb\]) from Kelvin to Jy, $$\Delta S^{\rm i} = \frac{2 k_{\rm B}}{\eta_a A} \frac{T_{\rm sys}}{\eta_q \eta_{\rm mb} \sqrt{B \cdot t_{\rm tot}}}.\label{eq:nro45isen}$$ Comparing with eq. (\[eq:senim\]), we obtain $$C_{ij} = \frac{2 k_{\rm B}}{\eta_{\rm mb} \eta_a A } \frac{1}{\eta_q}.
\label{eq:nrocij}$$
The fringe sensitivity per visibility should be $$\Delta S^{\rm f} = C_{ij} \frac{T_{\rm sys}}{\sqrt{B\cdot t_{\rm vis}}}, \label{eq:nro45fsen}$$ where the integration time per visibility is $t_{\rm vis} = t_{\rm tot} / N_{\rm vis}$ and $N_{\rm vis}$ is the number of visibilities. The value of $t_{\rm vis}$ can be set arbitrarily, but should be small, so that $N_{\rm vis}$ becomes large enough to fill the $uv$ space. We set $t_{\rm vis} = 0.01 \rm \, sec$ and $N_{\rm vis} = 42075$.
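The split of the map sensitivity into per-visibility fringe sensitivities (eqs. \[eq:nro45isen\] and \[eq:nro45fsen\]) can be sketched as follows. The efficiencies passed in the test are placeholders for illustration, not the Table \[tab:ant\] values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def sd_sensitivities(T_sys, bw, t_tot, n_vis, eta_mb, eta_a, A, eta_q):
    """Imaging sensitivity dS_i of the single-dish map and the fringe
    sensitivity dS_f per visibility [Jy], with t_vis = t_tot / n_vis."""
    C = 2.0 * K_B / (eta_mb * eta_a * A * eta_q) * 1.0e26  # [Jy/K]
    dS_i = C * T_sys / math.sqrt(bw * t_tot)               # whole map
    dS_f = C * T_sys / math.sqrt(bw * (t_tot / n_vis))     # per visibility
    return dS_i, dS_f
```

By construction $\Delta S^{\rm f} = \sqrt{N_{\rm vis}}\,\Delta S^{\rm i}$, so the natural-weighting combination of all $N_{\rm vis}$ visibilities recovers the map sensitivity exactly, whatever $t_{\rm vis}$ is chosen.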
Conceptually, we can understand the meaning of the NRO45 visibilities by comparing the definitions of the fringe sensitivities (eqs. \[eq:dsk\] and \[eq:nro45fsen\]): they are the visibilities that would be observed virtually with two identical NRO45 antennas. The two antennas can physically overlap (in our virtual observations), so they can provide $uv$ coverage down to zero spacing. The beam shape of the NRO45 dish plays the role of the synthesized beam, not the primary beam. The primary beam shape is arbitrarily defined by $P_{\rm N}$; if we seek a physical meaning, it corresponds to the beam shape of small patches within the NRO45 dishes.
There is one caveat when this weighting method is applied with the current version of MIRIAD. MIRIAD is designed for an array with the same backend for all visibilities, and therefore, it neglects the $1/\sqrt{2}\eta_q$ term from $C_{ij}$ (eq. \[eq:cij\]). It defines an alternative parameter, $${\rm JYPERK} = \frac{2 k_{\rm B}}{ \sqrt{(\eta_{a, i} A_i)(\eta_{a, j} A_j)} },
\label{eq:jyperk}$$ which is stored in the data header. The weights ($\Delta S^{\rm f}$) are calculated with JYPERK instead of $C_{ij}$, and thus do not take the backend into account. In combining CARMA data with single-dish data, we can either overwrite JYPERK in the CARMA data with $C_{ij}$, or define JYPERK for the single-dish (NRO45) data as $${\rm JYPERK} = \frac{2 \sqrt{2} k_{\rm B}}{\eta_{\rm mb} \eta_a A } \left( \frac{\eta_{q, \rm CARMA}}{\eta_{q, \rm NRO45}} \right).
\label{eq:nrojyperk}$$ Parameters are listed in Table \[tab:ant\].
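The header value of eq. (\[eq:nrojyperk\]) can be computed as below; this is a sketch, and the efficiencies used in the test are placeholders rather than the Table \[tab:ant\] values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def jyperk_nro45(eta_mb, eta_a, A, eta_q_carma, eta_q_nro45):
    """Pseudo JYPERK [Jy/K] for the NRO45 visibilities, folding the
    backend ratio into the header value since MIRIAD drops the
    1/(sqrt(2) eta_q) factor from C_ij."""
    return (2.0 * math.sqrt(2.0) * K_B / (eta_mb * eta_a * A)
            * (eta_q_carma / eta_q_nro45) * 1.0e26)
```

The ratio $\eta_{q,\rm CARMA}/\eta_{q,\rm NRO45}$ scales the NRO45 weights so that the backend term, dropped uniformly by MIRIAD, cancels consistently between the two datasets.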
Synthesized Beam Size {#sec:synbm}
---------------------
The deconvolution process (e.g., CLEAN) replaces a synthesized beam with a convolution beam (typically a Gaussian). We determine the convolution beam size so that its beam solid angle matches that of the synthesized beam. Theoretically, the beam solid angle is the integral of the beam response function over $4\pi$ steradians. In principle, we could calculate it by integrating a synthesized beam image or by taking the weight of the zero-spacing data (Appendix \[sec:solidangle\]). These methods worked reasonably well, but showed some error, perhaps introduced by the limited size of the beam image (i.e., not covering $4\pi$ steradians). In practice, we found that the following method provides better flux conservation: we calculate the total flux of the galaxy with the single-dish map and with the dirty image (with the unknown beam area as a free parameter), and find the beam area that equalizes these total fluxes. The position angle and axis ratio of the beam are derived by a Gaussian fit to the synthesized beam, and the Gaussian is linearly scaled to reproduce the beam area from the flux comparison.
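The flux-matching step amounts to solving for the single unknown beam solid angle. A minimal sketch with a hypothetical helper, where `dirty` is in Jy/(synthesized beam) and `pix_area` is the pixel solid angle in the same angular units as the returned beam area:

```python
import numpy as np

def beam_area_from_flux(total_flux_sd, dirty, pix_area):
    """Beam solid angle Omega that equalizes the single-dish total
    flux and the dirty-image flux: sum(dirty) * pix_area / Omega = F_sd."""
    return float(np.sum(dirty)) * pix_area / total_flux_sd
```

Because the dirty-image sum scales linearly with $1/\Omega$ when converted to total flux, the matching condition has a unique solution as long as the single-dish flux is nonzero.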
If the solid angles do not match, the total flux is not conserved in the final deconvolved map (e.g., the CLEANed map). The CLEANed map has two emission components, deconvolved emission and noise/residual emission, which have different units of flux, [*Jy/(convolution beam)*]{} and [*Jy/(synthesized beam)*]{}, respectively. Therefore, a convolution beam smaller than the synthesized beam elevates the flux of the residual emission, while a larger beam reduces it. The error becomes particularly problematic for objects with extended, low-flux emission (such as galaxies), which is inherently missed in the deconvolution process but remains in the CLEANed map. The mixture of two units in the final map does not degrade the image quality much in the case of the combined CARMA and NRO45 image, as long as the two beam areas are the same, because the synthesized beam is already similar to the Gaussian beam that we adopt as the convolution beam.
We note that if the deconvolution procedure (such as CLEAN) can “dig” all positive components down to the zero flux level, the convolution beam could have any shape. We also note that in the case of pure interferometer observations the beam solid angle is zero (Appendix \[sec:solidangle\]), and thus this method cannot be applied.
Joint Imaging and Deconvolution
-------------------------------
The procedure for imaging and deconvolution is the same as the one in §\[sec:hetmos\], but we add the term $W_{\rm N} I_{\rm N}$ in eq. (\[eq:pbm\]), $$I(l,m) =
W_{\rm H} I_{\rm H} +
W_{\rm O} I_{\rm O} +
W_{\rm C} I_{\rm C} +
W_{\rm N} I_{\rm N},$$ where $I_{\rm N}$ and $W_{\rm N}$ are the image and weight from the NRO45 visibilities, respectively. $W_{\rm N}$ is calculated with the pseudo primary beam $P_{\rm N}$ (§\[sec:nrouv\]) and the theoretical noise $\sigma_{\rm N}$ ($=\Delta S^{\rm i}$) derived with eq. (\[eq:nro45isen\]).
The number of NRO45 visibilities is a control parameter in this method (§\[sec:nrowei\]), and thus should not enter the weighting. Instead, the weight should be calculated solely from the sensitivity. We use the theoretical noise for weighting. If all visibilities have exactly the same fringe sensitivity, our weighting reduces to the conventional “natural” weighting. The sensitivity-based weighting for interferometer data was discussed in @bri95, and our method extends it to the combination with single-dish data.
The robust weighting scheme suppresses pixels with high natural weights in $uv$ space [@bri95]: if the natural weight of a pixel is lower than a threshold, the weight is unchanged, but if it is higher, the weight is set to the threshold. Our weighting scheme reproduces natural weighting and works with the robust weighting scheme. We made two data cubes with [*robust*]{}$\,= -2$ and $+2$. The resolution of the final combined data cube with [*robust*]{}$\,=-2$ is $3.7\arcsec \times 2.9\arcsec$ (PA=$79\arcdeg$) and $5.08 {\, {\rm km \, s^{-1}}}$. The RMS noise is 35 mJy/beam (i.e., 300 mK) in a $10{\, {\rm km \, s^{-1}}}$ channel. [*robust*]{}$\,=+2$ gives a resolution of $8.5\arcsec \times 7.3\arcsec$ (PA=$76\arcdeg$) and $5.08 {\, {\rm km \, s^{-1}}}$, and an RMS noise of 52 mJy/beam (77 mK) in a $10{\, {\rm km \, s^{-1}}}$ channel. Figure \[fig:synbeam\] shows the synthesized beam pattern for [*robust*]{}$\,=+2$. Both cubes have the same total luminosity when the synthesized beam sizes are determined as in §\[sec:synbm\].
Integrated Intensity Map and Image Fidelity {#sec:fidelity}
===========================================
Figures \[fig:combmap\] and \[fig:combmapna\] show the CO($J=1-0$) integrated intensity maps of M51 from the combination of the CARMA and NRO45 data, with [*robust*]{} = $-2$ and $+2$, respectively. These maps are made with the “masked moment method” of @adl92. We also dropped the low-sensitivity (outer) region of the CARMA mosaic (see Figure \[fig:pointing\]). The data with [*robust*]{} = $-2$ are used in the following discussion, since they show finer structures at a higher resolution.
The combination of CARMA (15 antennas) and NRO45 enables a full census of the population of giant molecular clouds (GMCs) over the entire galactic disk. Molecular gas emission in the two spiral arms and the interarm regions is prominent in this map. @kod09 showed the distribution of GMCs both in spiral arms and interarm regions, and the high molecular gas fraction in both regions. These two results suggest that stellar feedback is inefficient at destroying GMCs and molecules, which is supported by a recent analysis by @sch10. Molecular structures in the interarm regions were often a subject of debate in previous observations due to poor image fidelity [@ran90; @aal99; @hel03]. Figure \[fig:hstspitzerco\] compares the CO distribution with a $B$-band image from the [*Hubble Space Telescope*]{} (HST) and an $8\mu m$ image from the [*Spitzer Space Telescope*]{}. Dust lanes in the $B$-band image trace the distribution of the dense interstellar medium (ISM), and the $8\mu m$ image shows the distribution of PAHs (large molecules) illuminated by UV photons from surrounding young stars. The CO emission coincides very well with the dust lanes and $8\mu m$ emission in both the spiral arms and the interarm regions, which demonstrates the high image fidelity over a wide range of flux.
Figure \[fig:nrorec\] shows the NRO45 map (left) and the ratio of the combined map (smoothed to $\sim 20\arcsec$ resolution) to the NRO45 map, i.e. the recovered flux map (right). The recovered flux map shows an almost constant ratio of $\sim 1$ over the entire map, and no correlation with galactic structures (i.e., no size dependence, in contrast to the dependence expected in pure-interferometer maps). Some extended CO emission is not significantly detected at the high resolution of the combined image, but becomes apparent when the image is smoothed. Note that the companion galaxy NGC 5195 is not included in the CARMA velocity coverage, nor in the combined map, though it is in the NRO45 map.
Both the main galaxy NGC 5194 and the companion galaxy NGC 5195 are observed with NRO45. The total flux of NGC 5194 is $(1.022\pm 0.002 ) \times10^4 \,\rm Jy \cdot km/s$ in the NRO45 map, which is consistent with the measurement of @hel03. With the Galactic CO-to-H$_2$ conversion factor $X_{\rm CO}= 1.8\times 10^{20} \rm \, cm^{-2} [K \cdot km/s]^{-1}$ and a distance of 8.2 Mpc, the total molecular gas mass in NGC 5194 is $4.9\times 10^9 {{\rm M_{\odot}}}$. An $X_{\rm CO}$ similar to the Galactic value was recently found in M51 [@sch10]. The total flux of the combined cube is also $1.0\times 10^4 \,\rm Jy \cdot km/s$, consistent with the NRO45-only measurement. The total flux and mass of NGC 5195 are $162 \pm 4 \,\rm Jy \cdot km/s$ and $7.8\times 10^7 {{\rm M_{\odot}}}$, respectively. The errors are based on the RMS from the map, and do not include the systematic error due to the flux calibration of the CARMA observations ($\sim 15$%).
Requirements {#sec:req}
============
There are requirements for sampling, field of view, $uv$-coverage, and sensitivity for single-dish data to be combined with interferometer data in an optimal manner.
First, fine spatial sampling is necessary [@vog84]. Half-beam sampling, a typical practice in most single-dish mapping observations, is not sufficient, since the aliasing effect corrupts visibilities [*both*]{} at long and very short baselines. Figure \[fig:uvnro45\] illustrates the effect schematically: if the spatial sampling is $10.3\arcsec$ [$=\lambda_{\rm CO} / 52\,\rm m$, a typical sampling in NRO45 observations; e.g. @kun07], the tail of the $uv$ distribution leaks into baselines as short as $\sim 7$ m. Hence, Nyquist sampling of the $\lambda_{\rm CO}/45\,{\rm m} = 11.9\arcsec$ scale, i.e. $5.96\arcsec$, is necessary to properly reproduce visibilities up to the 45 m baseline. The observing grid and pixel size of the NRO45 map must therefore be at most $5.96\arcsec$ (Figure \[fig:uvnro45\]).
The single-dish map should cover an area larger than the area of the joint map, since the deconvolution with the single-dish beam (§\[sec:deconvnro45\]) causes artifacts at the edges of the images. It is ideal to have extra margins with a width of a few single-dish beam sizes at each image edge.
The sensitivity match between single-dish and interferometer data should also be considered in matching their $uv$ coverages; the maximum effective NRO45 baseline is limited by the matched sensitivity in our observations. Only baselines of about 1/4-1/3 of the 45 m diameter contribute effectively to the combination. It is often argued that a single-dish telescope needs to be about twice as large as the shortest baseline used in interferometer observations, due to uncertainty in the single-dish beam shape and errors in pointing [see @kur09; @cor10b and references therein]. In our case, the maximum effective baseline is shorter than this length. In practice, interferometer data rarely cover the theoretical minimum baseline (i.e. the dish diameter), and the long baselines of single-dish data do not have a sensitivity comparable to the interferometer's (§\[sec:nrouv\]). To avoid a gap in $uv$ coverage without sensitivity loss, the diameter of the single dish needs to be 3-4 times larger than the shortest interferometer baseline, unless the receiver of the single-dish telescope has a significantly higher sensitivity. The sensitivity match is discussed in detail in Appendix \[sec:senmatch\].
Comparisons with Other Methods {#sec:othermethods}
==============================
Several methods for the combination of single-dish and interferometer data have been applied at millimeter wavelengths. None of the previous studies, however, had sufficient overlap between single-dish and interferometer $uv$ coverages (in the sense discussed in §\[sec:deconvnro45\]), and their weighting schemes are artificial rather than based on sensitivity (i.e., data quality). Nevertheless, these methods have some advantages in simplicity, as well as disadvantages in detail.
@sta99 introduce a combination method in the image domain. This method is adopted by the BIMA Survey Of Nearby Galaxies (BIMA-SONG) to combine BIMA interferometer with NRAO12 single-dish data [@hel03]. They set the weights to be inversely proportional to the beam area (i.e., one term in eqs. \[eq:senim\] and \[eq:nro45isen\]) and add the dirty maps and beams of BIMA and NRAO12 linearly (eq. \[eq:pb3\]) to produce a joint dirty map and beam. The relative weights are changed manually and continuously with $uv$ distance. The joint dirty map is then CLEANed with the joint synthesized beam. This method starts the combination process from images, rather than visibilities, and is simple. It could adopt a more natural weighting scheme (e.g., the sensitivity $uv$-distribution based on the beam shape and eq. \[eq:nro45fsen\]; see also Appendix \[sec:senmatch\]) if appropriate software were developed.
@wei01 also combine a single-dish map and CLEANed interferometer map. They deconvolve the single-dish map with its beam pattern and convolve the result with an interferometer convolution beam, so that the beam attenuation becomes the same for both single-dish and interferometer images. Then, they Fourier-transform both images and replace the interferometer data with the single-dish data at the central $uv$-spacing. CLEAN is performed separately for interferometer data alone, which does not take advantage of the high image fidelity of the combined map. Having only one control parameter – the choice of $uv$ range to be replaced – can be advantageous.
Visibilities are generated from a single-dish map by several authors [@vog84; @tak03; @rod08; @kur09]. Our method belongs to this branch. @hel03 summarize the difficulties of setting the weights in this combination scheme, and conclude that it is too sensitive to the choice of parameters. @rod08 and @kur09 suggest setting the relative weight to obtain a cleaner synthesized beam shape, which is advantageous in deconvolution (e.g., CLEAN, MEM). More specifically, @rod08 set the single-dish weight density in $uv$ space equal to that of the interferometer visibilities that surround the single-dish $uv$ coverage. @kur09 adjusted the relative weight to zero out the total amplitude of the sidelobes of a synthesized beam. Our weighting scheme is more intrinsic to each set of data and is based solely on their qualities; the single-dish weight is independent of the interferometer data and is set from the RMS noise of the single-dish map alone. The weight is not a free parameter.
In pure interferometer imaging, the synthesized beam shape is historically controlled by changing the weight density in $uv$ space. The robust parameter [@bri95] is a famous example that converts the weight smoothly from the [*natural*]{} to [*uniform*]{} weightings. Once the weight is set in our method, the robust weighting works even for the combined data, exactly as designed for pure interferometer data.
Summary {#sec:summary}
=======
We describe the CARMA observations during the early phase of its operation, and the OTF observations with the multi-beam receiver BEARS at NRO45. The standard reduction of CARMA and NRO45 data is also discussed and extended to the case of a combined data set.
We explain the basics of the imaging technique for heterogeneous array data, and show that the combination of interferometer and single-dish data is an extension of the imaging of heterogeneous array data. We introduce a method of combination of interferometer and single-dish data in $uv$-space. The single-dish map is converted to visibilities in $uv$-space. The weights of the single-dish visibilities are determined from the RMS noise of the map, which is more natural than the artificial weighting schemes used previously. The synthesized beam size is determined so as to conserve the flux between the dirty beam and the convolution beam. Comparisons with other methods are discussed, and the advantages and disadvantages of those methods are summarized. In the appendices, we discuss the matching of single-dish and interferometer sensitivities for the combination of the data.
The resultant map shows the high image fidelity and reveals, for the first time, small structures, such as giant molecular clouds, both in bright spiral arms and in faint inter-arm regions [@kod09]. From the new map, we calculate that the total masses of NGC 5194 and 5195 are $4.9\times 10^9 {{\rm M_{\odot}}}$ and $7.8\times 10^7 {{\rm M_{\odot}}}$, respectively, assuming $X_{\rm CO}= 1.8\times 10^{20} \rm \, cm^{-2} [K \cdot km/s]^{-1}$.
The combination method is implemented on a platform of existing software (i.e., MIRIAD) and generates a finite number of discrete visibilities from a single-dish map. Future software should enable data manipulation directly on maps (grids) both in real and Fourier spaces, instead of on visibilities (Appendix \[sec:gridbase\]). The weights can then be determined on a grid basis, rather than on a visibility basis. Even in such cases, the weights should be determined from the RMS noise of the map, which is directly related to the quality of the data.
We thank Yasutaka Kurono for insightful comments on an early draft and the anonymous referee for useful comments. We also thank all the CARMA integration team members and the support staff at the Nobeyama Radio Observatory. The Nobeyama 45-m telescope is operated by the Nobeyama Radio Observatory, a branch of the National Astronomical Observatory of Japan. Support for CARMA construction was derived from the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L. Norris Foundation, the James S. McDonnell Foundation, the Associates of the California Institute of Technology, the University of Chicago, the states of California, Illinois, and Maryland, and the National Science Foundation. Ongoing CARMA development and operations are supported by the National Science Foundation under a cooperative agreement, and by the CARMA partner universities. This research was partially supported by HST-AR-11261.01.
[*Facilities:*]{} ,
Aalto, S., Huttemeister, S., Scoville, N. Z. & Thaddeus, P. 1999, , 522, 165
Adler, D. S., Lo, K. Y., Wright, M. C. H., Rydbeck, G., Plante, R. L. & Allen, R. J. 1992, , 392, 497
Briggs, D. 1995 PhD thesis
Corder, S. A., Wright, M. C. H. & Carpenter, J. M. 2010, SPIE Conference Series, 7733, 115
Corder, S. A., Wright, M. C. H. & Koda, J. 2010, SPIE Conference Series, 7737, 36
Cornwell, T. J. 1988, , 202, 316
Emerson, D. T. & Gräve, R. 1988, , 190, 353
Helfer, T. T., Thornley, M. D., Regan, M. W., Wong, T., Sheth, K., Vogel, S. N., Blitz, L. & Bock, D. C. -J. 2003, , 145, 259
Koda, J. et al. 2009, , 700, 32
Kuno, N., Nakai, N., Handa, T. & Sofue, Y. 1995, , 47, 745
Kuno, N., Sato, N., Nakanishi, H., Hirota, A., Tosaki, T., Shioya, Y., Sorai, K., Nakai, N., Nishiyama, K. & Vila-Vilaro, B. 2007, , 59, 117
Kurono, Y., Morita, K. -I. & Kamazaki, T. 2009, , 61, 873
Matsushita, S., Kohno, K., Vila-Vilaro, B., Tosaki, T. & Kawabe, R. 1999, “Advances in Space Research”, Vol. 23, p. 1015.
Mangum, J. G., Emerson, D. T. & Greisen, E. W. 2007, , 474, 679
Nakai, N., Kuno, N., Handa, T. & Sofue, Y. 1994, , 46, 527
Pety, J. & Rodríguez-Fernández, N. 2010, , 517, A12
Rand, R. J. & Kulkarni, S. R. 1990, , 349, 43
Rodríguez-Fernández, N. J., Pety, J., Gueth, F. 2008, IRAM memo 2008-2
Rohlfs, K. & Wilson, T. L. 2000, “Tools of Radio Astronomy”, Astronomy and Astrophysics Library.
Sault, R. J., Teuben, P. J. & Wright, M. C. H. 1995, ASP Conference series, 77, 433
Sault, R. J., Staveley-Smith, L. & Brouw, W. N. 1996, , 120, 375
Sawada, T., Ikeda, N., Sunada, K., Kuno, N. Kamazaki, T., Morita, K. -I., Kurono, Y., Koura, N., Abe, K., Kawase, S., Maekawa, J., Horigome, O. & Yagagisawa, K. 2008, , 60, 445
Schinnerer, E., Wei$\ss$, A., Aalto, S. & Scoville, N. Z. 2010, arXiv:1007.0692
Sorai, K., Sunada, K., Okumura, S. K., Iwasa, T., Tanaka, A., Natori, K. & Onuki, H. 2000, Proc. SPIE, vol 4015, p 86
Sunada, K., Yamaguchi, C., Nakai, N., Sorai, K., Okumura, S. K. & Ukita, N. 2000, Proc. SPIE, vol 4015, p 237
Stanimirovic, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J. & Snowden, S. L. 1999, , 302, 417
Stanimirovic, S., Altschuler, D., Goldsmith, P. & Salter, C. 2002, “Short-Spacing Correction from the Single-Dish Perspective”, ASP Conference Series, 278, 375.
Takakuwa, S. 2003, NRO Technical Report, No. 65
Taylor, G. B., Carilli, C. L. & Perley, R. A. 1999, “Synthesis Imaging in Radio Astronomy II”, ASP Conference Series, Vol 180.
Vogel, S. N., Wright, M. C. H., Plambeck, R. L. & Welch, W. J. 1984, , 283, 655
Wei$\ss$, A., Neininger, N., Hüttemeister, S. & Klein, U. 2001, , 365, 587
[lcccccccc]{} Main Beam Size & FWHM & $\rm arcsec$ & 100 & 60 & 77.5$^a$ & 19.7$^b$\
Beam Solid Angle & $\Omega_{\rm b}$ & $\rm arcsec^2$ & $1.51\times 10^4$ & $6.77\times 10^3$ & $1.01\times 10^4$$^a$ & $1.10\times 10^3$$^b$\
Quantum Efficiency & $\eta_{\rm q}$ & & 0.87 & 0.87 & 0.87 & 0.87\
Main Beam Efficiency & $\eta_{\rm mb}$ & & 0.41 & 0.61 & 0.50$^a$ & 0.40\
Noise Coef. (general) & $C_{\rm ij}$ & Jy/K & 116.8 & 52.2 & 78.1$^a$ & 12.0\
Noise Coef. (MIRIAD) & $\rm JYPERK$ & Jy/K & 145.3 & 65.0 & 97.2$^a$ & 14.9\
Weight Functions {#sec:app}
================
The dirty image $\bar{I}^{\rm dm}$ and synthesized beam $\bar{B}$ are defined with a set of visibilities $V(u,v)$ as
$$\bar{I}^{\rm dm}(l,m) = \int \int V(u,v) W(u,v) e^{2 \pi i (ul+vm)} dudv
\label{eq:dmap}$$
and $$\bar{B}(l,m) = \int \int W(u,v) e^{2 \pi i (ul+vm)} dudv,
\label{eq:bmap}$$ where ($u$, $v$) are the coordinates in the $uv$ space.
The sampling and weighting function of visibilities $W(u,v)$ can be written more explicitly as $$W(u,v) = \sum^{M}_{k=1} R_k T_k D_k \delta(u-u_k, v-v_k),$$ where $T_k$ is the tapering function and $D_k$ is the density weighting function [see @tay99]. $M$ is the number of visibilities obtained in observations. $T_k$ and $D_k$ are arbitrary functions, and are often used to control the synthesized beam shape and noise level. For example, the Gaussian taper is $T_k=\exp[-(u_k^2+v_k^2)/2a^2]$ with the half-power beam width $\theta_{\rm HPBW} = \sqrt{2 \ln 2}/\pi a=0.37/a$ \[radian\]. The natural and uniform weightings are $D_k=1$ and $D_k=1/N_k$, respectively, where $N_k$ is the number of visibilities within a pixel in $uv$ space.
$R_k$ is a weight based on noise: $R_k=1/\Delta S_k^2$, where $\Delta S_k$ ($=\Delta S^f$ in §\[sec:noise\]) is the noise of the $k$-th visibility. The theoretical noise of an image $\sigma$, normalized by the naturally weighted noise $1/\sqrt{\sum_k R_k}$, can be calculated as $$\sigma = \sqrt{\left( \sum^M_{k=1} T_k^2 D_k^2 R_k\right) \left( \sum^M_{j=1} R_j\right)} / \sum^M_{i=1} T_i D_i R_i.$$
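The weighting scheme can be sketched numerically. In the snippet below the $uv$ points, taper scale, and noise level are arbitrary illustration values; `penalty` evaluates the dimensionless $\sigma$ expression above (which equals unity for natural weighting with no taper), while `sigma` is the absolute map noise from standard inverse-variance propagation:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1000
u = rng.uniform(-50e3, 50e3, M)          # [wavelengths]
v = rng.uniform(-50e3, 50e3, M)
dS = np.full(M, 0.1)                     # per-visibility fringe noise dS_k [Jy]

R = 1.0 / dS**2                          # noise-based weight R_k = 1/dS_k^2
a = 20e3
T = np.exp(-(u**2 + v**2) / (2 * a**2))  # Gaussian taper T_k
D = np.ones(M)                           # natural weighting: D_k = 1

w = T * D * R
sigma = np.sqrt(np.sum(w**2 * dS**2)) / np.sum(w)  # absolute image noise [Jy]
sigma_nat = 1.0 / np.sqrt(np.sum(R))               # natural, untapered value
penalty = np.sqrt(np.sum(T**2 * D**2 * R) * np.sum(R)) / np.sum(T * D * R)
```

By the Cauchy-Schwarz inequality `penalty >= 1`: tapering never improves the point-source noise.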
Beam Solid Angle {#sec:solidangle}
================
The beam solid angle $\Omega_{\rm A}$ of a synthesized beam (dirty beam) is defined as $$\begin{aligned}
\Omega_{\rm A} &= & \int \int \bar{B}(l,m) dldm \label{eq:inte1} \\
&=& \int \int W(u,v) \left[ \int \int e^{2 \pi i (ul+vm)} dldm \right] dudv \label{eq:inte2}\\
&=& W(0,0).\end{aligned}$$ Eq. (\[eq:bmap\]) is used between eq. (\[eq:inte1\]) and (\[eq:inte2\]). The bracket in eq. (\[eq:inte2\]) is a $\delta$-function. We assumed that the maximum of $\bar{B}(l,m)$ is normalized to 1.
Pure interferometer observations do not have zero-spacing data, and therefore, $\Omega_{\rm A}=0$. A Gaussian beam $ \bar{B}(l,m) = \exp [-(l^2+m^2)/2 \sigma^2]$ has $W(u,v)=2 \pi \sigma^2 \exp [-(u^2+v^2)/ 2 \sigma_{\rm F}^2]$, where $\sigma_{\rm F} = 1/ 2 \pi \sigma$. Therefore, $\Omega_{\rm A} = 2 \pi \sigma^2$.
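The Gaussian-beam result $\Omega_{\rm A} = 2\pi\sigma^2$ is easy to verify by direct grid integration of $\bar{B}(l,m)$:

```python
import numpy as np

sigma = 1.0
l = np.linspace(-10, 10, 2001)           # +-10 sigma is effectively infinite
Lg, Mg = np.meshgrid(l, l)
B = np.exp(-(Lg**2 + Mg**2) / (2 * sigma**2))   # Gaussian beam, peak = 1
dA = (l[1] - l[0])**2
omega_A = B.sum() * dA                   # -> 2 pi sigma^2 ~ 6.2832
```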
Sensitivity Matching Between CARMA and NRO45 {#sec:senmatch}
============================================
Matching the sensitivities of CARMA and NRO45 is crucial for the combination. The sensitivity requirements of the interferometer and single-dish maps drive the observing plan, and a simple way to estimate matched sensitivities is therefore useful.
One approach is to match the sensitivities in $uv$-space around the $uv$ range (baseline range) where the two data sets overlap. In other words, we want to match the pixel sensitivities $\Delta S^{\rm p}$ of CARMA and NRO45, i.e., their sensitivities at pixel ($u$, $v$). Among the various definitions of sensitivity (e.g., §\[sec:noise\]), the imaging sensitivity $\Delta S^{\rm i}$, i.e., the noise fluctuation in the map (eq. \[eq:senim\]), is the one most commonly used to characterize the quality of a map and to estimate the feasibility of observations. Therefore, we first derive the relation between the imaging and pixel sensitivities in $uv$ space.
For simplicity, we assume that all CARMA antennas are identical to each other, having exactly the same $T_{\rm sys}$ and $C_{\rm ij}$. Then, a sensitivity is $$\Delta S = C_{\rm ij} \frac{T_{\rm sys}}{\sqrt{B \cdot t}},$$ where $B$ is the channel width and $t$ is the integration time. $\Delta S$ applies to both CARMA and NRO45, and can mean one of the following three sensitivities: fringe sensitivity $\Delta S^{\rm f}$ when $t$ is the integration time of a visibility $t_{\rm vis}$ (eq. \[eq:dsk\]); imaging sensitivity $\Delta S^{\rm i}$ when $t$ is the total integration time, i.e., $t_{\rm vis} N_{\rm vis}$, where $N_{\rm vis}$ is the total number of visibilities; and pixel sensitivity $\Delta S^{\rm p}$ when $t$ is the total integration time of the pixel at ($u$,$v$), i.e., $t_{\rm vis} n(u,v)$, where $n(u,v)$ is the number of visibilities in the pixel. Therefore, the imaging and pixel sensitivities are related as $$\Delta S^{\rm p}(u,v) = \Delta S^{\rm i} \sqrt{\frac{N_{\rm vis}}{n(u,v)}}. \label{eq:senrel}$$ Below, we derive the relation between $N_{\rm vis}$ and $n(u,v)$.
The $n(u,v)$ for the interferometer (e.g., CARMA) was discussed by @kur09. For a target at a reasonably high declination, synthesis interferometric observations provide a visibility distribution of $n(b) \propto 1/b$, where $b=\sqrt{u^2 + v^2}$ is the $uv$ distance. The visibilities (a total of $N_{\rm vis}$) are distributed between the minimum and maximum baseline lengths, $b_{\rm min}$ and $b_{\rm max}$ respectively. From $N_{\rm vis} = \int^{b_{\rm max}}_{b_{\rm min}} n(b) 2\pi b db$, we derive $$n(b) = \frac{N_{\rm vis}}{2 \pi (b_{\rm max} - b_{\rm min})} \frac{1}{b}. \label{eq:intnvis}$$
The $n(u,v)$ for a single-dish telescope (e.g., NRO45) is determined by the beam shape of the telescope. We assume a Gaussian beam shape, $\propto \exp[-(l^2+m^2)/2\sigma^2]$, in sky coordinates ($l$,$m$). The full width at half maximum of the beam is ${\rm FWHM} = 2\sqrt{2\ln 2} \sigma$. The $n(u,v)$ is proportional to the Fourier transform of the beam shape, $\propto \exp[-(2 \pi \sigma)^2 b^2/2]$. The $N_{\rm vis}$ visibilities lie within the $uv$ range from zero to the antenna diameter $d$. Thus, $$n(b) = \frac{N_{\rm vis} \cdot 2 \pi \sigma^2}{1-\exp[-(2\pi \sigma)^2 d^2/2]} e^{-\frac{(2\pi \sigma)^2 b^2}{2}}.\label{eq:sinnvis}$$
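Both density models are normalized so that $\int n(b)\,2\pi b\,db = N_{\rm vis}$; a quick numerical sanity check (the beam width and observing wavelength are assumed illustration values):

```python
import numpy as np

N_vis = 1.0
b_min, b_max = 4.0e3, 115.0e3   # interferometer uv range [wavelengths]
sigma = 4.0e-5                  # single-dish beam sigma [rad] (assumed)
d = 45.0 / 2.6e-3               # 45 m dish diameter in wavelengths (lambda ~ 2.6 mm)

def ring_integral(b, n_of_b):
    """Trapezoidal integral of n(b) * 2 pi b db."""
    y = n_of_b * 2 * np.pi * b
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(b)))

b1 = np.linspace(b_min, b_max, 100001)
n_int = N_vis / (2 * np.pi * (b_max - b_min) * b1)       # eq. (intnvis)
total_int = ring_integral(b1, n_int)                     # -> N_vis

b2 = np.linspace(0.0, d, 100001)
alpha = (2 * np.pi * sigma)**2 / 2
n_sd = (N_vis * 2 * np.pi * sigma**2
        / (1 - np.exp(-alpha * d**2)) * np.exp(-alpha * b2**2))  # eq. (sinnvis)
total_sd = ring_integral(b2, n_sd)                       # -> N_vis
```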
Eqs. (\[eq:senrel\]), (\[eq:intnvis\]), and (\[eq:sinnvis\]) give the pixel sensitivities for the interferometer, $\Delta S^p_{\rm int}(b)$, and the single dish, $\Delta S^p_{\rm sd}(b)$. Equating the two, $\Delta S^p_{\rm int}(b)=\Delta S^p_{\rm sd}(b)$, at a baseline $b=b_{\rm overlap}$ where the two $uv$ coverages overlap leads to a relation between the image sensitivities (i.e., RMS map noise) of the interferometer and single dish. This relation is a rough measure of the matched sensitivities for the combination and is useful in planning observations. The sensitivity matching can be calculated more accurately with eq. (\[eq:senrel\]), as performed in §\[sec:deconvnro45\], if an accurate $uv$ coverage $n(u,v)$ of the interferometer observations is known.
Figure \[fig:sens\] plots the pixel sensitivities $\Delta S^{\rm p}$ of CARMA and NRO45 as a function of baseline length $b$ for fixed image sensitivities $\Delta S^{\rm i}$. We set $b_{\rm min}$ and $b_{\rm max}$ to 10 and 300 m ($\sim 4$ and $115 \rm k\lambda$), respectively, for the CARMA C and D configurations. The CARMA and NRO45 $uv$ coverages overlap significantly between 4 and 10 $\rm k\lambda$. The NRO45 noise (sensitivity) increases rapidly beyond a baseline length of about half the diameter ($\sim 8 \rm k\lambda$), and CARMA can complement the $uv$ range beyond that. The imaging sensitivities of our CARMA and NRO45 observations are 27 and 155 mJy in the velocity width of $10{\, {\rm km \, s^{-1}}}$, respectively. Their sensitivities match around $b\sim 4$-$6\rm k\lambda$, within the range where the $uv$ coverages overlap.
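The rapid growth of the single-dish pixel noise with baseline follows from eqs. (\[eq:senrel\]) and (\[eq:sinnvis\]); a sketch with an assumed $19.7\arcsec$ beam FWHM (an illustration value, not the exact NRO45 beam):

```python
import numpy as np

fwhm = 19.7 / 206265.0                     # assumed beam FWHM [rad]
sig = fwhm / (2 * np.sqrt(2 * np.log(2)))  # Gaussian sigma of the beam

def rel_pixel_noise(b):
    """dS^p(b)/dS^i up to a b-independent factor: sqrt(N_vis / n(b))."""
    n_over_N = np.exp(-(2 * np.pi * sig)**2 * b**2 / 2)  # shape of n(b)/N_vis
    return 1.0 / np.sqrt(n_over_N)

# noise at 10 klambda vs 5 klambda (b in wavelengths)
ratio = rel_pixel_noise(10.0e3) / rel_pixel_noise(5.0e3)
```

With these numbers the pixel noise at 10 $\rm k\lambda$ is already a few times that at 5 $\rm k\lambda$, illustrating why the interferometer must take over beyond roughly half the dish diameter.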
Application To Grid-Based Combination Scheme {#sec:gridbase}
============================================
The new combination technique discussed in this paper converts a single-dish map to a finite number of visibilities (discrete data points in $uv$-space). The weight of each single-dish visibility is determined based on the RMS noise of the map (i.e., the quality of the data) using the fringe sensitivity $\Delta S^{\rm f}$ (eq. \[eq:nro45fsen\]). In the Fourier transformation, the visibilities are mapped to a grid in $uv$-space, and the pixel sensitivity $\Delta S^{\rm p}$ is calculated for each pixel of the grid by summing up the $\Delta S^{\rm f}$ of all the visibilities in the pixel. The $\Delta S^{\rm p}$ for single-dish data can be calculated directly without going through visibilities once proper software is developed.
The pixel sensitivity $\Delta S^{\rm p}$ for the single-dish data can be defined with eqs. (\[eq:senrel\]) and (\[eq:sinnvis\]). The RMS noise of the single-dish map $\Delta S^{\rm i}$ gives the normalization of the equations. Eq. (\[eq:sinnvis\]) is for a Gaussian beam, and could be replaced with other shapes, such as the Fourier transform of a measured single-dish beam or PSF, if better knowledge of them is available. The $\Delta S^{\rm p}$ for the interferometer data should be calculated from the fringe sensitivities of the visibilities $\Delta S^{\rm f}$ using $C_{ij}$ (eqs. \[eq:dsk\], \[eq:cij\]) – mapping the visibilities onto a grid in $uv$-space and summing up the fringe sensitivities in each pixel. These $\Delta S^{\rm p}$ naturally set the relative weights of the single-dish and interferometer data.
[^1]: The synthesized beam is the instrumental point spread function of an aperture synthesis array; also known as the “dirty beam”.
Introduction
============
It is well-known [@Gazeau-Klauder; @Gazeau] that the standard coherent states of the harmonic oscillator (HO) enjoy many attractive properties, and that not all of them can be maintained when we consider other quantum systems. One must therefore decide which properties are to be taken as essential and which are peripheral when generalizing these states to other quantum systems.
The standard coherent states of the HO are constructed as eigenstates of the annihilation operator. For the one-dimensional infinite square well, such eigenstates have also been built, and a good review of their properties can be found in the literature [@Dong]. We will call them “generalized coherent states” (GeCS) in the following.
Another family of states has been constructed to achieve good localization in the phase space of the quantum system under consideration. These are the “Gaussian Klauder coherent states” (GCS) [@Fox-Choi2000]. For the HO, they are a good approximation to the standard coherent states [@Fox-Choi2000]. For the infinite well, they have been analyzed most notably in [@Fox-Choi2000].
In this work, we want to exhibit the relation between the above two constructions in the case of the infinite square well. We also emphasize some analytical results on the behaviour of the wave function and the corresponding quantum-classical correspondence.
In Section 2, we define the two sets of coherent states we will be dealing with, and recall some of their well-known properties. A new result is a relation between the parameters of these states which leads to an equivalence between the two sets. In Section 3, we examine some properties of the GCS specifically. Whereas numerical results have already exposed the main features \[4\], our approach provides an elegant explanation for them. We approximate the probability density and the GCS by closed-form expressions, from which we deduce the behaviour of the main observables (the position and the momentum). We also examine the minimization of the Heisenberg uncertainty relation to establish the quantum-classical correspondence for these states. Section 4 is devoted to some conclusions and future work.
Two sets of coherent states of the 1D infinite well
===================================================
Let us first set our notational convention concerning the infinite square well [@Fox-Choi2000] to be used throughout this work. A particle of mass $M$ moves in a potential taken to be $$V(x)=\begin{cases}0,&0<x<L\\\infty,&\text{otherwise}.\end{cases}\nonumber$$ The stationary eigenstates and the discrete energies of this system are $$\psi_n(x)\equiv\sqrt{\frac{2}{L}}\sin{\frac{(n+1)\pi x}{L}},\quad\quad E(n)\equiv\frac{(n+1)^2\pi^2\hbar^2}{2ML^2}=\hbar\omega(n+1)^2,
\label{eigenISW}$$ where $n=0,1,2,...$ and $\omega\equiv\frac{\pi^2\hbar}{2ML^2}$.
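For readers who want to experiment, the eigenbasis (\[eigenISW\]) is straightforward to set up numerically; a sketch in units with $\hbar=M=L=1$, including a quick orthonormality check by grid integration:

```python
import numpy as np

hbar = Mass = L = 1.0
omega = np.pi**2 * hbar / (2 * Mass * L**2)

x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def psi(n):
    """Stationary state psi_n of the infinite well (n = 0, 1, 2, ...)."""
    return np.sqrt(2.0 / L) * np.sin((n + 1) * np.pi * x / L)

def E(n):
    """Energy E(n) = hbar * omega * (n+1)^2."""
    return hbar * omega * (n + 1)**2

# endpoints vanish, so plain Riemann sums are effectively trapezoidal
norm = np.sum(psi(0)**2) * dx        # -> 1
overlap = np.sum(psi(0) * psi(1)) * dx   # -> 0
```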
Generalized coherent states
---------------------------
The GeCS are usually defined as eigenstates of the annihilation operator of the quantum system under consideration. They can be used as long as the Hamiltonian $H$ of the system has a nondegenerate spectrum and admits a lowest energy equal to zero. For the infinite well, we thus work with the shifted Hamiltonian $\hbar\omega\mathcal{H}\equiv H-E_0 {\mathbb I}$ instead of $H$. It has the same eigenstates as in (\[eigenISW\]) but the eigenvalues are now $$E(n)-E(0)=\hbar \omega n(n+2)\equiv \hbar \omega \mathcal{E}(n).\nonumber$$
Ladder operators are chosen such that their action on the energy eigenstates is $$a \psi_n(x)=\sqrt{\mathcal{E}(n)}\psi_{n-1}(x),\quad a^\dagger \psi_n(x)=\sqrt{\mathcal{E}(n+1)}\psi_{n+1}(x).\nonumber$$
Note that other types of GeCS have been constructed by generalizing the preceding action of the ladder operators (see [@Dong] for a review). They are usually called Perelomov, Barut-Girardello, Gazeau-Klauder and deformed coherent states. Indeed, we can take $$A \psi_n(x)=\sqrt{n} f(n)\psi_{n-1}(x),\quad A^\dagger \psi_n(x)=\sqrt{n+1}f(n+1)\psi_{n+1}(x), \nonumber$$ for a positive real function $f(n)$ of the quantum number $n$. Here we limit ourselves to $f(n)=\sqrt{n+2}$ since we have essentially a factorization of the Hamiltonian of the system as for the HO: $$a^\dagger a \psi_n(x)=\mathcal{E}(n) \psi_n(x)=\mathcal{H} \psi_n(x).\nonumber$$
The main difference with respect to the HO case is that the set $\{a, a^\dagger, N\}$, where $N$ is the usual number operator ($N\psi_n(x)\equiv n\psi_n(x)$), satisfies a $su(1,1)$ algebra: $$[a,N]=a,\quad [a^\dagger,N]=-a^\dagger,\quad [a,a^\dagger]=2\left(N+\frac{3}{2}\right).\nonumber$$
A realization of the ladder operators [@Dong] in terms of the position $x$, momentum $p=-i \hbar \frac{d}{dx}$ and number operators is given by ($\alpha\equiv{\frac{\pi}{L}}$): $$a=\left[\cos(\alpha x)-\frac{i \sin(\alpha x)}{\hbar \alpha} p \frac{1}{N+1}\right] \sqrt{\mathcal{E}(N)},\nonumber$$ $$a^\dagger=\left[\cos(\alpha x)+\frac{i \sin(\alpha x)}{\hbar \alpha} p \frac{1}{N+1}\right] \sqrt{\mathcal{E}(N+1)}.\nonumber$$
The GeCS can be written as a function of the real position $x$, time $t$ and a continuous complex parameter $z$ which is the eigenvalue of the annihilation operator $a$: $$\Psi_{\text{Ge}}(z;x,t)\equiv\frac{1}{\sqrt{N_\text{Ge}(z)}}\sum_{n=0}^\infty\frac{z^n}{\sqrt{\rho(n)}}e^{-i\omega \mathcal{E}(n)t}\psi_n(x),
\quad\quad \rho(n)=\begin{cases}1,&n=0,\\\prod_{i=1}^n \mathcal{E}(i),&n>0.\end{cases}\label{GeGeneral}$$ The normalization factor is $$N_\text{Ge}(z)\equiv\sum_{n=0}^\infty \frac{|z|^{2n}}{\rho(n)}.\nonumber$$
These states are widely used because they are a direct generalization of the HO coherent states, $\rho(n)$ being essentially the product of the shifted energies. The properties of those states are well-known [@Gazeau-Klauder; @Gazeau; @Dong]. In particular, the resolution of the identity is satisfied as well as time stability and continuity in $z$ and $t$.
We can write these states more succinctly as [@Gazeau]: $$\Psi_{\text{Ge}}(z;x, t)=\sum_{n=0}^\infty C_n^{\text{Ge}}(z_0,\phi_0)e^{-i\omega n(n+2)t}\psi_n(x),\quad C_n^{\text{Ge}}(z_0,\phi_0)\equiv\frac{z_0^{n+1} e^{-in\phi_0}}{\sqrt{I_2(2 z_0)n!(n+2)!}},\label{Ge}$$ using $z=z_0\sqrt{\hbar\omega}\, e^{-i\phi_0}$ (with $z_0,\phi_0\in\mathbb{R}$) and denoting by $I_2$ the second-order modified Bessel function of the first kind.
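The normalization of the coefficients in (\[Ge\]) can be checked numerically: since $\rho(n)=\prod_{i=1}^n i(i+2)=n!(n+2)!/2$, summing $|C_n^{\text{Ge}}|^2$ telescopes into $I_2(2z_0)/I_2(2z_0)=1$ by the series definition of $I_2$. A sketch with scipy:

```python
import numpy as np
from scipy.special import iv, gammaln

z0 = 8.0
n = np.arange(0, 200)   # the tail beyond n ~ 50 is negligible for this z0

# |C_n^Ge|^2 = z0^(2n+2) / [I_2(2 z0) n! (n+2)!], accumulated in log space
log_c2 = (2 * n + 2) * np.log(z0) - gammaln(n + 1) - gammaln(n + 3)
total = np.sum(np.exp(log_c2)) / iv(2, 2 * z0)   # -> 1
```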
Gaussian Klauder coherent states
--------------------------------
Even though they are less frequently used than the GeCS, the GCS can be built for many different systems as a special superposition of energy eigenstates designed to give a reasonably well localized probability density for a short period of time [@Fox-Choi2000]. For real parameters $\phi_0, \ n_0\geq0$ and $\sigma_0>0$, they are defined as $$\Psi_{\text{G}}(n_0,\sigma_0,\phi_0; x,t)=\sum_{n=0}^\infty C_n^{\text{G}}(n_0,\sigma_0,\phi_0)e^{-i\omega\mathcal{E}(n)t}\psi_n(x),\quad C_n^{\text{G}}(n_0,\sigma_0,\phi_0)=\frac{e^{-\frac{(n-n_0)^2}{4\sigma_0^2}-in\phi_0}}{\sqrt{N_\text{G}(n_0,\sigma_0)}},\label{G}$$ where the normalization factor is $$N_\text{G}(n_0, \sigma_0)=\sum_{n=0}^\infty e^{-\frac{(n-n_0)^2}{2\sigma_0^2}}.\nonumber$$
The resolution of the identity is satisfied as well as time stability and continuity in $n_0$ and $\sigma_0$ [@Fox-Choi2000].
Let us stress the introduction of the same factor $e^{-in\phi_0}$ in (\[Ge\]) and (\[G\]). It was not included in the original construction [@Fox-Choi2000], but it will help make the connection between the two sets of coherent states. It also has a very simple physical interpretation, as we shall show in Section 3.2.
Equivalence between the two sets of coherent states
---------------------------------------------------
In order to compare our coherent states, we consider $|C_n^{\text{Ge}}(z_0,\phi_0)|^2$ as given in (\[Ge\]) and assume $z_0\gg1$. First we have: $$|C_n^{\text{Ge}}(z_0,\phi_0)|^2=\frac{z_0^{2n+2}}{I_2(2z_0)n!(n+2)!}=\frac{e^{2z_0}}{I_2(2z_0)}\left[\frac{e^{-z_0}z_0^{n+1}}{(n+1)!}\right]^2\frac{n+1}{n+2}\label{Poisson}.$$
The expression inside the bracket on the right-hand side of (\[Poisson\]) is a Poisson distribution in $(n+1)$, which can be approximated by a Gaussian distribution of mean $z_0$ and standard deviation $\sqrt{z_0}$. Looking back at (\[G\]), we get $$|C_n^{\text{Ge}}(z_0,\phi_0)|^2\simeq\frac{e^{2z_0}N_\text{G}(z_0-1,\sqrt{z_0/2})}{2\pi z_0I_2(2z_0)}|C_n^{\text{G}}(z_0-1,\sqrt{z_0/2},\phi_0)|^2\frac{n+1}{n+2}.\label{GeApproxG}$$
The leading behaviour of the normalization factor is straightforwardly obtained from a standard Euler-Maclaurin asymptotic expansion: $$\begin{aligned}
\sum_{n=0}^\infty e^{-\frac{(n+1-z_0)^2}{z_0}}&\sim\int_0^\infty e^{-\frac{(n+1-z_0)^2}{z_0}}dn+\frac{1}{2}e^{-\frac{(1-z_0)^2}{z_0}}-\sum_{k=1}^\infty\frac{B_{2k}}{(2k)!}\left.\frac{d^{2k-1}}{dn^{2k-1}}\right|_{n=0} e^{-\frac{(n+1-z_0)^2}{z_0}}\nonumber \\
&=\frac{\sqrt{\pi z_0}}{2}\left[\text{erf}\left(\frac{1-z_0}{\sqrt{z_0}}\right)+1\right]+e^{-\frac{(1-z_0)^2}{z_0}}\left[\frac{1}{2}-\sum_{k=1}^\infty\frac{B_{2k}}{(2k)!}2^{2k-1}+O\left(z_0^{-1}\right)\right],\nonumber\end{aligned}$$ where $B_{2k}$ are Bernoulli numbers. Using the identity $$\frac{1}{2}\coth{\left(\frac{x}{2}\right)}-\frac{1}{x}=\sum_{k=1}^\infty\frac{B_{2k}}{(2k)!}x^{2k-1},\nonumber$$ valid for $0<|x|<2\pi$, and the asymptotic expansion $$\text{erf}(x)\sim 1+e^{-x^2}\left[-\frac{1}{\sqrt{\pi}x}+O\left(x^{-3}\right)\right] \quad (x\rightarrow\infty),\nonumber$$ we find $$N_\text{G}(z_0-1,\sqrt{z_0/2})\sim\sqrt{\pi z_0}+e^{-\frac{(1-z_0)^2}{z_0}}\left[\frac{1}{1-e^2}+O\left(z_0^{-1}\right)\right].\label{Nbehaviour}$$
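The leading term in (\[Nbehaviour\]) is easily confirmed numerically; already for moderate $z_0$ the exponentially small corrections are invisible at double precision:

```python
import numpy as np

z0 = 100.0
n = np.arange(0, 2000)
# N_G(z0 - 1, sqrt(z0/2)) = sum over n of exp(-(n + 1 - z0)^2 / z0)
N_G = np.sum(np.exp(-(n + 1 - z0)**2 / z0))
rel_err = abs(N_G - np.sqrt(np.pi * z0)) / np.sqrt(np.pi * z0)  # tiny
```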
Approximating sums by integrals in this way will be a recurring theme in this paper. Our analysis is first order, and we will thus typically keep only the dominant behaviour without mentioning the corrections.
Along with the $z_0\rightarrow\infty$ expansion $I_2(2z_0)\sim e^{2z_0}/\sqrt{4\pi z_0}[1+O(z_0^{-1})]$ [@Arfken], eq. (\[GeApproxG\]) turns into $$|C_n^{\text{Ge}}(z_0,\phi_0)|^2\simeq[1+O(z_0^{-1})]|C_n^{\text{G}}(z_0-1,\sqrt{z_0/2},\phi_0)|^2\frac{n+1}{n+2}.\nonumber$$
Taking finally into account that only terms with $n$ close to $z_0-1$ contribute significantly (i.e. terms within a few standard deviations from the Gaussian mean), we can approximate by one the $n$-dependent ratio. Matching the phases $\phi_0$ properly, we conclude that the two sets of states are equivalent in the limit $z_0\gg1$ if the parameters are related as $n_0=z_0-1$ and $\sigma_0^2=z_0/2$. We see that there is more freedom in the GCS, where $\sigma_0$ and $n_0$ are a priori independent, than in the GeCS.
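The parameter matching $n_0=z_0-1$, $\sigma_0^2=z_0/2$ can be verified directly by comparing the two coefficient distributions at large $z_0$:

```python
import numpy as np
from scipy.special import gammaln, ive

z0 = 200.0
n = np.arange(0, 400)

# |C_n^Ge|^2 in log space; log I_2(2 z0) via the exponentially scaled ive
log_I2 = np.log(ive(2, 2 * z0)) + 2 * z0
c_ge2 = np.exp((2 * n + 2) * np.log(z0) - gammaln(n + 1) - gammaln(n + 3) - log_I2)

n0, s0 = z0 - 1.0, np.sqrt(z0 / 2.0)     # the matching derived above
g = np.exp(-(n - n0)**2 / (2 * s0**2))
c_g2 = g / g.sum()                       # |C_n^G|^2

max_diff = float(np.abs(c_ge2 - c_g2).max())   # small next to the peak ~0.04
peak = float(c_g2.max())
```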
Quantum-classical correspondence for the Gaussian Klauder coherent states
=========================================================================
As clear from any introductory Quantum Mechanics textbook, the quantum-classical correspondence of the standard HO coherent states $\Psi_{\text{HO}}(z;x,t)$ relies on two important properties. First, the main observables of these states have classical sinusoidal time dependence. Second, the Heisenberg product saturates the uncertainty relation at any time. This is as close as a quantum state can get to being classical.
These remarkable features all trace back to the fact that the probability density can be written exactly as $$|\Psi_{\text{HO}}(z;x,t)|^2=\sqrt{\frac{m\omega}{\pi\hbar}}e^{-\frac{(x-\left\langle x \right\rangle)^2}{2\Delta_x^2}},\quad \Delta_x^2=\left\langle x^2 \right\rangle-\left\langle x \right\rangle^2,\label{HOprobdensity}$$ for any time $t$. In this section, we examine how much of these properties survives in the case of the infinite square well GCS. The correspondence between these states and the GeCS given in Section 2.3 makes our discussion applicable to both sets of states. We choose to focus on the GCS since their parameters translate more naturally into the quantum observables.
Computed behaviour of the main observables
------------------------------------------
Some characteristics of the GCS have been explored by Fox and Choi in [@Fox-Choi2000]. They highlighted the fact that the main observables behave quasi-classically for a short period of time before the wave packet decays. In particular, $\langle \Psi_{\text{G}} | x |\Psi_{\text{G}} \rangle\equiv\left\langle x \right\rangle$ as a function of time is approximately a triangular wave, in good agreement with the classical back-and-forth motion resulting from bounces off the walls. This behaviour is shown in Figure 1a (obtained by summing numerically a finite number of terms of (\[G\])). Moreover, Figure 1b shows that the average momentum $\left\langle p \right\rangle$ is constant except at regularly spaced bounces, as expected for a classical particle [@Fox-Choi2000].

**Figure 1** - (a) Expectation value of the position $\left\langle x \right\rangle$ and (b) momentum $\left\langle p \right\rangle$ as a function of time for $n_0=500, \sigma_0=5, \phi_0=\pi/2, L=\pi, \hbar=1$ and $M=1$.
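The quasi-classical drift seen in Figure 1 can be reproduced directly from the definition (\[G\]). The sketch below uses the same parameters as the figure ($L=\pi$, $\hbar=M=1$, so $\omega=1/2$), builds $\Psi_{\text{G}}$ on a grid for times well before the first bounce, and checks that $\left\langle x\right\rangle$ starts at $\phi_0L/\pi=L/2$ and drifts at the classical speed $P/M=(n_0+1)\pi\hbar/ML$:

```python
import numpy as np

hbar = Mass = 1.0
L = np.pi
omega = np.pi**2 * hbar / (2 * Mass * L**2)      # = 1/2 for these values
n0, s0, phi0 = 500, 5.0, np.pi / 2

n = np.arange(n0 - 60, n0 + 61)                  # keep +-12 sigma_0 terms
c = np.exp(-(n - n0)**2 / (4 * s0**2) - 1j * n * phi0)
c = c / np.sqrt(np.sum(np.abs(c)**2))            # normalized C_n^G

x = np.linspace(0.0, L, 8001)
modes = np.sqrt(2.0 / L) * np.sin(np.outer(n + 1, np.pi * x / L))

def mean_x(t):
    """<x>(t) by grid integration of x |Psi_G(x, t)|^2."""
    psi = (c * np.exp(-1j * omega * n * (n + 2) * t)) @ modes
    dens = np.abs(psi)**2
    return float(np.sum(x * dens) / np.sum(dens))

v_classical = (n0 + 1) * np.pi * hbar / (Mass * L)   # P/M = 501 here
t1 = 4.0e-4                  # << tau = 1/(4 omega sigma_0^2) = 0.02
x0, x1 = mean_x(0.0), mean_x(t1)
v_num = (x1 - x0) / t1       # close to v_classical while away from the walls
```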
We propose here another way of obtaining these results, with a conceptual improvement. Instead of studying numerical features of $|\Psi_{\text{G}}(n_0,\sigma_0,\phi_0;x,t)|^2$, namely $\left\langle x \right\rangle$ and $\left\langle p \right\rangle$, we treat the probability density as a whole analytically. The behaviour of the main observables then follows as a corollary.
Approximate formula for the probability density and properties
--------------------------------------------------------------
The HO coherent state probability density is a Gaussian wave packet centred on the position expectation value (see ). The next proposition shows that a similar formula holds for the infinite square well coherent states, as far as the limited domain $x\in[0,L]$ permits. To our knowledge, this is the first time that this similarity with the HO is pointed out.
[The probability density of the infinite square well GCS is such that $$|\Psi_{\text{G}}(n_0,\sigma_0,\phi_0;x,t)|^2\simeq\frac{1}{\sqrt{2\pi}s}e^{-\frac{(x-X)^2}{2s^2}}\label{probDensity}$$ for $x\in[0,L]$, $t>0$ with $$\begin{aligned}
&X\equiv\frac{\phi_0L}{\pi}+\frac{Pt}{M}, \quad P\equiv\frac{(n_0+1)\pi\hbar}{L},\\
&s\equiv\frac{L}{2\pi\sigma}, \quad \sigma\equiv\sqrt{\frac{\tau}{4\omega(\tau^2+t^2)}}, \quad \tau\equiv(4\omega\sigma_0^2)^{-1}\end{aligned}$$ under the conditions $n_0\gg\sigma_0\gg1$, $X\gg s$, $L-X\gg s$ and $t\ll\tau$.]{}
See appendix 1.
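As an independent sanity check of proposition 1 (our illustration, not part of the original argument), one can compare the density summed numerically from the eigenfunction expansion with the closed Gaussian formula; with the parameters of figure 1 the two agree to within roughly a percent of the peak height:

```python
import numpy as np

# Figure 1 parameters and the derived quantities of proposition 1.
hbar, M, L = 1.0, 1.0, np.pi
n0, sigma0, phi0 = 500, 5.0, np.pi / 2
omega = np.pi**2 * hbar / (2 * M * L**2)
tau = 1.0 / (4 * omega * sigma0**2)

# Exact density from the truncated eigenfunction expansion.
n = np.arange(int(n0 - 8 * sigma0), int(n0 + 8 * sigma0) + 1)
c = np.exp(-(n - n0)**2 / (4 * sigma0**2) - 1j * n * phi0)
c /= np.linalg.norm(c)
x = np.linspace(0.0, L, 8001)
modes = np.sqrt(2 / L) * np.sin(np.outer(n + 1, x) * np.pi / L)

t = tau / 100                                   # well inside the regime t << tau
exact = np.abs((c * np.exp(-1j * omega * n * (n + 2) * t)) @ modes)**2

# Closed Gaussian formula of proposition 1.
P = (n0 + 1) * np.pi * hbar / L
X = phi0 * L / np.pi + P * t / M
sigma = np.sqrt(tau / (4 * omega * (tau**2 + t**2)))
s = L / (2 * np.pi * sigma)
gauss = np.exp(-(x - X)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

print(np.max(np.abs(exact - gauss)))            # small compared with the peak ~ 4
```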
As promised, the results of [@Fox-Choi2000] can be extracted from this proposition. For example, $X$, which we readily identify with $\left\langle x\right\rangle$, depends linearly on time. In other words, the particle moves with constant momentum $\left\langle p\right\rangle=M\frac{dX}{dt}=P=\frac{(n_0+1)\pi\hbar}{L}$. The period of motion inferred from this momentum and the length of the well, $\frac{2ML^2}{(n_0+1)\pi\hbar}$, also agrees with the one found from Taylor expansions of the quantized energies (see [@Gazeau; @Aronstein-Strout]).
Because of its linear time dependence, $X$ however leaves the interval $[0,L]$ after a short time, and the approximate formula of proposition 1 then breaks down. This is a minor complication since numerical evidence suggests that, provided we account manually for the discrete bounces, our formula still approximates the wave packet correctly. We strongly believe that it would be easy to generalize our proposition by letting $X$ be a triangular wave instead of a linear function. However, the regularly spaced moments when the packet is close to the boundaries of the well would still be badly described by an analogue of our closed expression.
Unlike the time of the bounces on the walls, $\tau$ has a physical significance worth mentioning. It is an intrinsic property of the GCS that the initially highly localized wave packet decays as time evolves [@Fox-Choi2000]. This is reflected in the Lorentzian time evolution of $\sigma$, and hence of the width $s$ of the Gaussian packet as defined in the proposition. Here $\tau$ serves as a typical order of magnitude for the decay process. The condition $t\ll\tau$ simply restricts our attention to early instants free of this complication.
We finally see from the expression of $X$ that the parameter $\phi_0$ serves as the initial position of the wave packet. This simple interpretation justifies our introduction of that parameter in the definition of . Another interesting observation is that the maximum of $|\Psi_{\text{G}}(n_0,\sigma_0,\phi_0;x,t)|^2$ can be found from . It is close to $\frac{\sqrt{2\pi}\sigma}{L}$ at the middle of the well.
Minimization of the uncertainty product
---------------------------------------
One of the most important properties of the HO coherent states is the minimization of the Heisenberg uncertainty relation $\Delta\equiv\Delta x\Delta p$ at any time. Since we also noted striking quantum-classical similarities for $t\ll\tau$ in the case of the infinite square well GCS , we now want to see whether $\Delta$ stays around $\hbar/2$ under some conditions.

**Figure 2** - Uncertainty product $\Delta\equiv\Delta x\Delta p$ for $n_0=50, \sigma_0=5, \phi_0=\pi/2, L=\pi, \hbar=1$ and $M=1$.
Figure 2 shows the numerically calculated uncertainty product for the GCS. It is easy to show from proposition 1 that the peaks of high $\Delta$ coincide with the bounces of the particle on the walls. These moments set apart, $\Delta$ stays close to one half, which is the minimal possible value. This can be understood analytically as we will now see. The most general wave function that minimizes $\Delta$ being [@Cohen-Tannoudji] $$\Psi(x,t)=A(t)e^{-\frac{(x-\left\langle x \right\rangle)^2}{4s^2}+\frac{i\left\langle p \right\rangle x}{\hbar}}\label{minimalWaveFunction},$$ the corresponding probability density $|\Psi(x,t)|^2$ should certainly show a Gaussian dependence on $x$. Even though we have proven that the GCS have this Gaussian probability density, we know nothing yet about the phase of the wave function. We thus need the following result.
The wave function of the infinite square well GCS satisfies (up to a $x$-independent phase factor) $$\Psi_{\text{G}}(n_0,\sigma_0,\phi_0;x,t)\simeq\frac{1}{(\sqrt{2\pi}s)^{1/2}}e^{-\frac{(x-X)^2}{4s^2}+\frac{iPx}{\hbar}}\label{waveFunction}$$ with the same parameters and under the same conditions as in proposition 1.
See appendix 2.
The GCS wave function thus has approximately the specific Gaussian form of , which explains why the Heisenberg relation reaches its minimum for $t\ll\tau$. The identification of $P$ with $\left\langle p \right\rangle$ made here is moreover consistent with the speed of the wave packet found in section 3.2.
Proposition 2 is not really surprising given proposition 1; the two are however complementary. As the proofs show, proposition 1 gives a precise understanding of the decay of the wave packet; in particular, it gives the expression for $\sigma$. Proposition 2, on the other hand, supplies the extra dependence on $P$ which helps explain the minimization of the Heisenberg product. Both consistently exhibit the quasi-classical behaviour of the GCS.
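The near-minimization can be checked directly. The sketch below (ours) computes $\Delta x$ on a grid and $\Delta p$ spectrally, using the fact that $p^2$ is diagonal in the well eigenbasis ($p^2\psi_k=(\hbar\pi k/L)^2\psi_k$ inside the well) together with the textbook off-diagonal matrix elements $\langle k|p|k'\rangle=-4i\hbar kk'/\big(L(k^2-k'^2)\big)$ for $k+k'$ odd:

```python
import numpy as np

hbar, M, L = 1.0, 1.0, np.pi
n0, sigma0, phi0 = 500, 5.0, np.pi / 2

n = np.arange(int(n0 - 8 * sigma0), int(n0 + 8 * sigma0) + 1)
k = n + 1                                   # well quantum numbers
c = np.exp(-(n - n0)**2 / (4 * sigma0**2) - 1j * n * phi0)
c /= np.linalg.norm(c)                      # GCS amplitudes at t = 0

# Position moments by grid integration of the wave packet.
x = np.linspace(0.0, L, 8001)
dx_grid = x[1] - x[0]
psi = (np.sqrt(2 / L) * c) @ np.sin(np.outer(k, x) * np.pi / L)
rho = np.abs(psi)**2
x_mean = np.sum(x * rho) * dx_grid
Dx = np.sqrt(np.sum(x**2 * rho) * dx_grid - x_mean**2)

# Momentum moments in the energy eigenbasis: p^2 is diagonal there,
# while <p> needs the off-diagonal matrix elements of p.
p2_mean = np.sum(np.abs(c)**2 * (hbar * np.pi * k / L)**2)
K1, K2 = np.meshgrid(k, k, indexing="ij")
off = K1 != K2
Pmat = np.zeros(K1.shape, dtype=complex)
Pmat[off] = -2j * hbar * K1[off] * K2[off] * (1 - (-1.0)**(K1[off] + K2[off])) \
            / (L * (K1[off]**2 - K2[off]**2))
p_mean = np.real(np.conj(c) @ Pmat @ c)
Dp = np.sqrt(p2_mean - p_mean**2)

print(Dx * Dp)   # close to hbar/2 = 0.5 away from the bounces
```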
Conclusion
==========
In this work, we have provided a short review of two well-known coherent states built for the one-dimensional infinite square well: the generalized and Gaussian-Klauder coherent states. We gave a proof that the two sets of states were equivalent for $z_0\gg1$ if the parameters are related as $n_0=z_0-1$ and $\sigma_0^2=z_0/2$.
We then turned to the analysis of the quantum-classical correspondence properties of those states. Using an approximate closed expression for the probability density , we readily obtained the behaviour of the observables of interest. This approach also clearly exhibited the spreading of the wave packet as a function of time.
Another closed expression for the wave function explained the observed minimization of the Heisenberg uncertainty product. Both results and were consistent with a Gaussian wave function, just as in the case of the harmonic oscillator coherent states.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Appendix 1 {#appendix-1 .unnumbered}
==========
From and the explicit form of $\psi_{n}(x)$ given in , the probability density takes the exact form $$|\Psi_{\text{G}}(x,t)|^2=\frac{1}{N_\text{G}(n_0,\sigma_0)L}\sum_{n'=0}^\infty \sum_{n=0}^\infty e^{-\Phi(n,n';n_0,\sigma_0,\phi_0,t)}\left[\cos{\frac{(n'-n)\pi x}{L}}-\cos{\frac{(n'+n+2)\pi x}{L}}\right].\nonumber$$ with $$\Phi(n,n';n_0,\sigma_0,\phi_0,t)={\frac{(n-n_0)^2+(n'-n_0)^2}{4\sigma_0^2}+i\phi_0(n-n')+i \omega t(n(n+2)-n'(n'+2))}.\nonumber$$
The probability density can be separated into two parts: $|\Psi_{\text{G}}(x,t)|^2=P_0 (n_0,\sigma_0,\phi_0;x,t)+P_l (n_0,\sigma_0,\phi_0;x,t)$ where $P_0$ contains the sum with $\cos{\frac{(n'-n)\pi x}{L}}$ and $P_l$ the other sum. Let us start by working out $P_0$ in detail. Introducing the new summation index $j=n'-n$ and $u(j)$, equal to $0$ for $j\geq0$ and to $-j$ for $j<0$, we get $$\begin{aligned}
P_0(n_0,\sigma_0,\phi_0;x,t)&=\frac{1}{N_\text{G}(n_0,\sigma_0)L}\sum_{j=-\infty}^\infty \sum_{n=u(j)}^\infty e^{-\Phi(n,n+j;n_0,\sigma_0,\phi_0,t)}\cos{\frac{j\pi x}{L}}\nonumber\\
&=\frac{1}{N_\text{G}(n_0,\sigma_0)L}\sum_{j=-\infty}^\infty e^{-\frac{j^2}{4\sigma_0^2}+ij^2\omega t+ij\phi_0}\cos{\frac{j\pi x}{L}}\sum_{n=u(j)}^\infty e^{-\frac{(n-n_0)^2+j(n-n_0)}{2\sigma_0^2}+2i\omega tj(n+1)}.\nonumber\end{aligned}$$
If $n_0\gg1$, it makes no difference to use minus infinity in place of $u(j)$ since $e^{-\Phi(n,n+j;n_0,\sigma_0,\phi_0,t)}$ only selects terms near $j=0$ and $n=n_0$ which is far from $u(j)$. The approximation of the second sum by an integral yields the dominant behaviour $$\sum_{n=-\infty}^\infty e^{-\frac{(n-n_0)^2+j(n-n_0)}{2\sigma_0^2}+2i\omega tj(n+1)}\simeq\sqrt{2\pi}\sigma_0 \ e^{\frac{j^2}{8\sigma_0^2}-2j^2\omega^2t^2\sigma_0^2-ij^2\omega t+2ij\omega t(n_0+1)}.\nonumber$$
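The quality of this sum-to-integral step is easy to quantify numerically (our check, with the parameters of figure 1, where $\omega=\pi^2\hbar/(2ML^2)=1/2$):

```python
import numpy as np

n0, sigma0, omega = 500, 5.0, 0.5   # omega = pi^2*hbar/(2*M*L^2) for L = pi, hbar = M = 1
jj, t = 3, 0.005                    # a fixed Fourier index j and an early time

n = np.arange(0, 1001)              # the Gaussian weight makes the truncation harmless
lhs = np.sum(np.exp(-((n - n0)**2 + jj * (n - n0)) / (2 * sigma0**2)
                    + 2j * omega * t * jj * (n + 1)))
rhs = (np.sqrt(2 * np.pi) * sigma0
       * np.exp(jj**2 / (8 * sigma0**2) - 2 * jj**2 * omega**2 * t**2 * sigma0**2
                - 1j * jj**2 * omega * t + 2j * jj * omega * t * (n0 + 1)))
print(abs(lhs - rhs) / abs(rhs))    # essentially machine precision
```

The relative error is of order $10^{-15}$: for $\sigma_0\gg1$ the Poisson-summation corrections to the integral approximation are of order $e^{-2\pi^2\sigma_0^2}$.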
Hence, recycling $N_\text{G}(n_0,\sigma_0)\simeq\sqrt{2\pi}\sigma_0$ from , we get $$\begin{aligned}
P_0(n_0,\sigma_0,\phi_0;x,t)&\simeq\frac{1}{L}\sum_{j=-\infty}^\infty \cos{\frac{j\pi x}{L}}e^{-\frac{j^2}{8\sigma_0^2}-2j^2\omega^2\sigma_0^2t^2+2i\omega tj(n_0+1)+ij\phi_0} \nonumber\\
&=\frac{1}{L}+\frac{2}{L}\sum_{j=1}^\infty e^{-\frac{j^2}{8\sigma^2}}\cos{\frac{j\pi x}{L}}\cos{[j(\phi_0+2\omega t(n_0+1))]}.\label{P_0}\end{aligned}$$
Note the convenient introduction of $\sigma$ as defined in the proposition. A very similar derivation for $P_l(n_0,\sigma,\phi_0;x,t)$ gives $$P_l(n_0,\sigma_0,\phi_0;x,t)\simeq-\frac{e^{-2\sigma^2(\phi_0+2\omega t(n_0+1))^2}}{L}\sum_{j=0}^\infty e^{-\frac{(j-2n_0)^2}{8\sigma^2}}\cos{\frac{j\pi x}{L}}.\label{P_lSeries}$$
Now the question is how to interpret the Fourier series and . To this end, let us expand in a cosine series the even $2L$-periodic extension of the Gaussian function $$\Pi(X,s,\gamma;x)\equiv\frac{1}{\sqrt{2\pi}s}e^{-\frac{(x-X)^2}{2s^2}}\cos{\gamma x}=\frac{a_0}{2}+\sum_{j=1}^\infty a_j\cos{\frac{j\pi x}{L}}\quad\quad x,X\in[0,L],\label{Pi}$$ with $\gamma=0$. The $a_j$ are given by $$a_j=\frac{1}{\sqrt{2\pi}sL}\int_0^L f\left(X,s,2,\frac{j\pi}{L};x\right)
+f\left(X,s,2,-\frac{j\pi}{L};x\right)dx,\label{a_j}$$ where $$f\left(X,s,\alpha,\beta;x\right)\equiv e^{-\frac{(x-X)^2}{\alpha s^2}+i\beta x}.\label{f}$$
The integration of can be carried out explicitly, but the result is simpler if $$s^2|\beta|\ll\frac{2}{|\alpha|}X\quad\text{and}\quad s^2|\beta|\ll\frac{2}{|\alpha|}(L-X),\label{approxSimplificationOnF}$$ in which case $$\int_0^L f\left(X,s,\alpha,\beta;x\right)dx\simeq\frac{\sqrt{\pi\alpha}s}{2}e^{i\beta X-\frac{\alpha\beta^2s^2}{4}}\left[\text{erf}\left(\frac{L-X}{\sqrt{\alpha}s}\right)+\text{erf}\left(\frac{X}{\sqrt{\alpha}s}\right)\right].\label{integralF}$$
and then yield $$a_j\simeq\frac{2}{L}e^{-\frac{j^2}{8\sigma^2}}\cos{\frac{j\pi X}{L}}\quad\text{if }X\gg s\text{ and }L-X\gg s.\nonumber$$
Comparing with , this means $P_0(n_0,\sigma_0,\phi_0;x,t)\simeq \Pi(X,s,0;x)$ provided holds. A quick check confirms that this is the case under the conditions given in the proposition. A similar development gives $$P_l(n_0,\sigma_0,\phi_0;x,t)\simeq-e^{-\frac{X^2}{2s^2}}\Pi(X,s,\frac{2\pi n_0}{L};x).\nonumber$$ but we realize that $P_l$ is actually negligible compared with $P_0$. We simply drop it to complete the proof.
Some comments are in order concerning $P_l$. It is interesting to notice that it introduces the signature of fine oscillations of period $\frac{L}{n_0}$. These oscillations were already observed in [@Fox-Choi2000] when the wave packet is near the boundaries of the well. They allow the wave packet to get nearer to the walls than it could without deforming in that way. We see here that they arise because of $P_l$: the latter acts as a border correction on $P_0$, which embodies the dominant resemblance with a Gaussian probability density. Strangely, the proof did not yield the border contribution $$P_r(n_0,\sigma_0,\phi_0;x,t)\equiv-e^{-\frac{(L-X)^2}{2s^2}}\Pi(X,s,\frac{2\pi n_0}{L};x),\nonumber$$ which we expect based on the obvious requirement that the solution must behave symmetrically about the middle of the well.
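The erf approximation used above is also easy to test numerically. The following sketch (ours; the parameter values are arbitrary but satisfy the stated conditions) relies only on the standard library:

```python
import cmath
import math

def f(X, s, alpha, beta, x):
    """Integrand f(X, s, alpha, beta; x) of this appendix."""
    return cmath.exp(-(x - X)**2 / (alpha * s**2) + 1j * beta * x)

def integral_exact(X, s, alpha, beta, L, N=20000):
    """Midpoint-rule evaluation of the integral of f over [0, L]."""
    h = L / N
    return sum(f(X, s, alpha, beta, (k + 0.5) * h) for k in range(N)) * h

def integral_approx(X, s, alpha, beta, L):
    """erf formula, valid when s^2|beta| << (2/alpha)X and (2/alpha)(L - X)."""
    r = math.sqrt(alpha) * s
    return (math.sqrt(math.pi * alpha) * s / 2
            * cmath.exp(1j * beta * X - alpha * beta**2 * s**2 / 4)
            * (math.erf((L - X) / r) + math.erf(X / r)))

X, s, alpha, beta, L = 1.5, 0.1, 2.0, 3.0, math.pi
print(abs(integral_exact(X, s, alpha, beta, L)
          - integral_approx(X, s, alpha, beta, L)))   # tiny
```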
Appendix 2 {#appendix-2 .unnumbered}
==========
Let us Fourier-expand the odd $2L$-periodic extension of the Gaussian function $$(\sqrt{2\pi}s)^{-1/2}f(X,s,4,\frac{P}{\hbar};x)=\sum_{n=1}^\infty b_n\sin{\frac{n\pi x}{L}}\label{FourierOdd}$$
with $f(X,s,\alpha,\beta;x)$ as defined by . The coefficient $b_n$ is given by $$b_n=\frac{(\sqrt{2\pi}s)^{-1/2}}{iL}\int_0^L f(X,s,4,\frac{P}{\hbar}+\frac{n\pi}{L};x)-f(X,s,4,\frac{P}{\hbar}-\frac{n\pi}{L};x) dx.\nonumber$$
The first integral is negligible for $n_0\gg \sigma_0$ and $t\ll\tau$. Referring to , this expression becomes $$b_n\simeq\frac{i(\sqrt{8\pi}s)^{1/2}}{L}e^{i\left(\frac{P}{\hbar}-\frac{n\pi}{L}\right)X-s^2\left(\frac{P}{\hbar}-\frac{n\pi}{L}\right)^2}.\nonumber$$
Up to re-indexing the sum and dropping irrelevant phase factors, becomes $$(\sqrt{2\pi}s)^{-1/2}f(X,s,4,\frac{P}{\hbar};x)=\frac{1}{(\sqrt{2\pi}\sigma)^{1/2}}\sum_{n=0}^\infty e^{-in\phi_0-i\omega t(n+1)(n_0+1)-\frac{1}{4\sigma^2}(n-n_0)^2}\psi_n(x).$$
The similarity with is now clearer. The proof is completed by using again the fact that the relevant $n$’s are close to $n_0$, which leads to the identification $\hbar\omega(n+1)(n_0+1)\simeq E(n)$. The expansion finally leads to $N_{G}(n_0,\sigma_0)\simeq\sqrt{2\pi}\sigma$ for $t\ll\tau$, which completes the proof.
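As a cross-check of this appendix (ours, not part of the original proof), one can project the Gaussian wave function of proposition 2 directly onto the well eigenfunctions and verify that the moduli of the coefficients follow the Gaussian profile $e^{-(n-n_0)^2/(4\sigma_0^2)}$ of the GCS:

```python
import numpy as np

# Packet of proposition 2 at t = 0, centred mid-well (phi0 = pi/2 parameters).
hbar, L = 1.0, np.pi
n0, sigma0 = 500, 5.0
P = (n0 + 1) * np.pi * hbar / L
X, s = L / 2, L / (2 * np.pi * sigma0)

x = np.linspace(0.0, L, 8001)
dx = x[1] - x[0]
g = (np.sqrt(2 * np.pi) * s)**-0.5 * np.exp(-(x - X)**2 / (4 * s**2)
                                            + 1j * P * x / hbar)

def overlap(n):
    """Projection of the Gaussian wave function on the n-th well eigenfunction."""
    psi_n = np.sqrt(2 / L) * np.sin((n + 1) * np.pi * x / L)
    return np.sum(psi_n * g) * dx

# The moduli of the coefficients should follow the Gaussian profile of the GCS.
ratios = [abs(overlap(n0 + d)) / abs(overlap(n0)) for d in (2, 5, 10)]
print(ratios)   # compare with exp(-d^2/(4*sigma0^2)) = 0.96, 0.78, 0.37
```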
[99]{} Gazeau J.P. and Klauder J.R., “Coherent states for systems with discrete and continuous spectrum”, [*J. Phys. A*]{}, [**32**]{}, 123-132 (1999).
Gazeau J.P., “Coherent States in Quantum Physics”, Wiley-VCH, Weinheim, 2009.
Dong S.H., “Factorization Method in Quantum Mechanics”, Springer, Dordrecht, 2007.
Fox R.F. and Choi M.H., “Generalized coherent states and quantum-classical correspondence”, [*Phys. Rev. A*]{}, [**61**]{}, 032107 (2000).
Arfken G.B. and Weber H.J., “Mathematical Methods for Physicists”, Elsevier Academic Press, Burlington, 2005.
Aronstein D.L. and Strout Jr, C.R., “Fractional wave-function revivals in the infinite square well”, [*Phys. Rev. A*]{}, [**55**]{}, 4526 (1997).
Cohen-Tannoudji C., Diu B. and Laloe F., “Mécanique quantique I”, Hermann, Paris, 1997.
---
abstract: 'In this paper we present several multipartite quantum systems featuring the same type of genuine (tripartite) entanglement. Based on a geometric interpretation of the so-called $|W\rangle$ and $|GHZ\rangle$ states, we show that the classification of all multipartite systems featuring those and only those two classes of genuine entanglement can be deduced from earlier work of algebraic geometers. This classification corresponds in fact to the classification of fundamental subadjoint varieties and establishes a connection between those systems, well known in Quantum Information Theory, and the fundamental simple Lie algebras.'
author:
- 'Frédéric Holweck[^1], Péter Lévay[^2][^3]'
title: 'Classification of multipartite systems featuring only $|W\rangle$ and $|GHZ\rangle$ genuine entangled states'
---
Introduction
============
Entanglement is a key resource of quantum information. It corresponds to a form of correlation between subsystems of a given composite system which is stronger than any correlation arising from classical communication[@bell]. Since the advent of Quantum Information a large amount of experimental and theoretical evidence demonstrates that quantum protocols featuring this phenomenon exist that can outperform their classical counterparts. Beyond this, entanglement is of basic importance for obtaining new communication protocols such as quantum teleportation, quantum superdense coding or quantum cryptography. It is also acknowledged that quantum entanglement plays a central role in quantum algorithms and quantum computation[@nielsen; @KSV].
Quantum entanglement is a consequence of the superposition principle. Let us illustrate this for a bipartite system. Given two copies of two-state systems (qubits) represented by the vectors $|\psi\rangle_A\in \mathcal{H}_A$ and $|\psi\rangle_B\in \mathcal{H}_B$ with $\mathcal{H}_A\simeq \mathcal{H}_B\simeq \CC^2$, we define the composite system of two qubits as the one represented by the tensor product $\mathcal{H}=\mathcal{H_A}\otimes\mathcal{H}_B$. Then a canonical basis of $\mathcal{H}$ is $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ and the superposition principle tells us that one possible state for the composite system is $$|\psi\rangle_{AB}=\dfrac{1}{\sqrt{2}}(|00\rangle+|11\rangle).$$ However, $\vert\psi\rangle_{AB}$ cannot be created from an initial state of type $\vert\varphi\rangle_A\otimes \vert\chi\rangle_B$ by applying only local operations (i.e. operations acting on $|\varphi\rangle_A$ and $|\chi\rangle_B$ separately). Such states are called entangled. Since entangled states cannot be generated locally they correspond to a global resource shared by the actors of the protocol. On the other hand a state is said to be separable if it can be generated locally from a state of the form $\vert\varphi\rangle_A\otimes \vert\chi\rangle_B$.
The multipartite generalization of our example provides the basic resource for Quantum Information. However, as a resource entanglement needs to be classified. One possible classification scheme is obtained by finding equivalence classes of entangled states under Stochastic Local Operations and Classical Communication (SLOCC). SLOCC transformations are the mathematical representatives of certain physical manipulations allowed to be performed on our composite system. These manipulations consist of local reversible operations on each component of the multipartite system assisted by classical communication (i.e. the local operations may be coordinated). The word stochastic refers to the possibility of converting a particular state of the system to another one and vice versa with some (generally different) probability of success. The mathematical representative of such SLOCC transformations turns out to be a group acting on the Hilbert space of the composite system. The precise form of this group will depend on the observables characterizing this system.
The classification of entanglement classes of multipartite systems under SLOCC has been investigated in the last 10 years by many authors[@Dur; @brody2; @BDDER; @My; @My2; @My3; @VDMV; @HLT; @HLT2; @LV; @VL; @BDD; @DF; @DF2; @SL]. Interestingly, under SLOCC some[@brody2; @Dur; @LV; @VL; @SL; @DF; @BDDER] of these entangled systems feature two genuine types of entanglement. The aim of this paper is to provide for these systems a unified approach based on recent results of algebraic geometry.
The paper is organized as follows. In Section \[3qubit\] we introduce a geometric interpretation[@HLT; @HLT2] of the entanglement classes $|W\rangle$ and $|GHZ\rangle$, which correspond to the two genuine entangled classes in Dür, Vidal and Cirac[@Dur]’s classification of entanglement classes of three qubits. Thanks to this geometric interpretation we can use, in Section \[tripartite\], classical results of invariant theory and algebraic geometry to classify all Hilbert spaces and quantum systems which feature those two, and exactly those two, types of genuine entangled classes. In this process we recover different quantum systems investigated in the quantum information literature as well as three new cases. The corresponding Hilbert spaces have a similar SLOCC orbit structure (except for the case of three bosonic qubits; see Remark \[bosons\]). We also classify quantum systems with two and exactly two entanglement classes (not necessarily of type $|W\rangle$ and $|GHZ\rangle$). The connection of those quantum systems with classification results from algebraic geometry allows us to give a uniform description of those systems and establishes a link between them and the simple Lie algebras. In particular we collect some geometrical information about such systems in Appendix \[app\]. [**Notations.**]{} In the text $V$ (resp. $\mathcal{H}$) will denote a vector (resp. Hilbert) space over the field of complex numbers $\CC$, and $\PP(V)$ (resp. $\PP(\mathcal{H})$) will denote the corresponding projective space. A vector $v\in V$ will be projectivized to a point $[v]\in \PP(V)$. A projective algebraic variety $X\subset \PP(V)$ will be defined as the zero locus of a collection of homogeneous polynomials. A point $[x]\in X$ will be said to be smooth if and only if the partial derivatives of the defining equations do not simultaneously vanish at $[x]$. If $[x]\in X$ is smooth, one can define $\tilde{T}_x X\subset \PP(V)$, the embedded tangent space of $X$ at $[x]$.
In this article we only consider pure quantum systems, i.e. a state $|\psi\rangle$ of such systems will always be considered as a (normalized) vector of $\mathcal{H}$.
The three qubit classification revisited {#3qubit}
========================================
Starting from the paper of Dür, Vidal and Cirac [@Dur] three-qubit entanglement has given rise to a number of interesting applications [@CKW; @KL; @BDDER2; @BDL; @Levay1]. Let us denote by $\mathcal{H}_A$, $\mathcal{H}_B$ and $\mathcal{H}_C$ the three Hilbert spaces isomorphic to $\CC^2$ corresponding to qubits $A$, $B$ and $C$, then the Hilbert space of the composite system is $\mathcal{H}=\mathcal{H}_A\otimes \mathcal{H}_B\otimes \mathcal{H}_C$. In this section for simplicity let us adopt the notation $\vert\psi\rangle_A\equiv \psi_A$. If we forget about scalar normalization the relevant SLOCC group turns out to be $GL_2(\CC)\times GL_2(\CC)\times GL_2(\CC)$ and the result established in Ref[@Dur] states that under SLOCC action three qubits can be organized into six orbits i.e. SLOCC entanglement classes (Table \[table\_3q\]).
Name Normal form
--------------- ------------------------------------------------------------
Separable $|000\rangle$
Biseparable $\dfrac{1}{\sqrt{2}}(|000\rangle+|011\rangle)$
Biseparable $\dfrac{1}{\sqrt{2}}(|000\rangle+|101\rangle)$
Biseparable $\dfrac{1}{\sqrt{2}}(|000\rangle+|110\rangle)$
$|W\rangle$ $\dfrac{1}{\sqrt{3}}(|100\rangle+|010\rangle+|001\rangle)$
$|GHZ\rangle$ $\dfrac{1}{\sqrt{2}}(|000\rangle+|111\rangle)$
: Three qubit classification[]{data-label="table_3q"}
The three qubit classification features the interesting property of having exactly two classes of genuine entanglement, called the $|W\rangle$ and $|GHZ\rangle$ classes. It should also be clear that, for instance, for the biseparable state $|\psi\rangle=\dfrac{1}{\sqrt{2}}(|000\rangle+|101\rangle)$, particles $A$ and $C$ are entangled while $B$ is not. Note that from the projective point of view multiplication by a nonzero scalar does not change the nature of the state, and thus we can instead consider the $SL_2(\CC)\times SL_2(\CC)\times SL_2(\CC)$ orbits of $\PP(\mathcal{H})$. It turns out that this classification was known, from a mathematical perspective, since the work of Le Paige[@LePai]. Motivated by this example we can address the basic question of our paper: which other types of quantum systems have two and only two types of genuine non-equivalent entangled states?
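Before rephrasing the classification geometrically, let us note that the two genuine classes can be told apart by an invariant: Cayley's $2\times2\times2$ hyperdeterminant, the $SL_2(\CC)^{\times 3}$-invariant quartic which vanishes on the $|W\rangle$ class but not on the $|GHZ\rangle$ class (its zero locus is precisely the tangential variety discussed below). A minimal check in Python (our sketch; the overall normalization of the hyperdeterminant is a matter of convention):

```python
def hyperdet(a):
    """Cayley's 2x2x2 hyperdeterminant of the amplitude tensor a[i][j][k]."""
    (a000, a001), (a010, a011) = a[0]
    (a100, a101), (a110, a111) = a[1]
    return (a000**2 * a111**2 + a001**2 * a110**2
            + a010**2 * a101**2 + a011**2 * a100**2
            - 2 * (a000 * a001 * a110 * a111 + a000 * a010 * a101 * a111
                   + a000 * a011 * a100 * a111 + a001 * a010 * a101 * a110
                   + a001 * a011 * a100 * a110 + a010 * a011 * a100 * a101)
            + 4 * (a000 * a011 * a101 * a110 + a001 * a010 * a100 * a111))

ghz = [[[2**-0.5, 0], [0, 0]], [[0, 0], [0, 2**-0.5]]]      # (|000>+|111>)/sqrt(2)
w = [[[0, 3**-0.5], [3**-0.5, 0]], [[3**-0.5, 0], [0, 0]]]  # (|001>+|010>+|100>)/sqrt(3)
print(hyperdet(ghz))   # nonzero (1/4): GHZ class
print(hyperdet(w))     # zero: W class
```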
Following Ref[@HLT] let us rephrase the classification of three qubit entanglement classes by means of algebraic geometry. In the projectivized Hilbert space $\PP(\mathcal{H})$ we consider the image of the following map: $$\begin{array}{cccc}
\phi :& \PP(\mathcal{H}_A)\times\PP(\mathcal{H}_B)\times\PP(\mathcal{H}_C) & \to& \PP(\mathcal{H})\\
& ([\psi_A],[\psi_B],[\psi_C]) & \mapsto & [\psi_A\otimes \psi_B\otimes \psi_C]
\end{array}$$
The map $\phi$ is well-known as the Segre map[@Ha; @Lan]. Let $X=\phi(\PP(\mathcal{H}_A)\times\PP(\mathcal{H}_B)\times \PP(\mathcal{H}_C))\subset \PP(\mathcal{H})$; in what follows we will denote this image simply by $X=\PP^1\times\PP^1\times\PP^1$ because the projectivization of the Hilbert space of each particle is a projective line ($\PP(\mathcal{H}_A)=\PP(\CC^2)=\PP^1$). It can easily be shown that $X$ is a smooth projective algebraic variety[@Ha]. It is also clear that the variety $X$ is the $G=SL_2(\CC)\times SL_2(\CC)\times SL_2(\CC)$ orbit of any rank one tensor in $\PP(\mathcal{H})$. Indeed given $[\psi_A\otimes \psi_B\otimes\psi_C]$, then for any $[\tilde{\psi}_A\otimes\tilde{\psi}_B\otimes\tilde{\psi}_C]$, there exists $g=(g_1,g_2,g_3)\in G$ such that $[\tilde{\psi}_A\otimes\tilde{\psi}_B\otimes\tilde{\psi}_C]=[(g_1\cdot\psi_A)\otimes (g_2\cdot\psi_B)\otimes (g_3\cdot\psi_C)]$. In terms of quantum entanglement it follows from this description that the variety $X$ represents the set of separable states and can be described as the projectivized orbit of $\psi=|000\rangle$, i.e. $X=G\cdot[|000\rangle]\subset \PP(\mathcal{H})$.
Similarly to what was done in Ref[@HLT; @HLT2] our goal is now to build from the algebraic variety of separable states some auxiliary varieties which will encode different type of entanglement classes. Consider $Y^n\subset\PP(V^{n+a+1})$ a projective algebraic variety of dimension $n$ embedded in a projective space of dimension $n+a$, such that $Y$ is smooth and not contained in a hyperplane. Then one defines the secant variety[@Z2] of $Y$, denoted by $\sigma(Y)$, as the algebraic closure, for the Zariski topology, of the union of secant lines of $Y$ (Eq. (\[secant\])) $$\label{secant}
\sigma(Y)=\overline{\bigcup_{x,y\in Y} \PP^1 _{xy}}$$ Another interesting auxiliary variety is the tangential variety of $Y$, which is defined as the union of embedded tangent spaces, $\tilde{T}_y Y$ of $Y$ (Eq. (\[tangent\])) $$\label{tangent}
\tau(Y)={\bigcup_{y\in Y} \tilde{T}_y Y}$$ One point of importance is the following: if the variety $Y$ is $G$-invariant for the action of a group $G$ (i.e. if $y\in Y$ then $g.y\in Y$ for all $g\in G$) then so are the varieties $\tau(Y)$ and $\sigma(Y)$. This property follows from the definitions of the two auxiliary varieties $\sigma(Y)$ and $\tau(Y)$, which are built from points of $Y$.
Clearly $\tau(Y)\subset \sigma(Y)$, as tangent lines can be seen as limits of secant lines, and the expected dimensions of those varieties are $\text{min}\{2n,n+a\}$ for $\tau(Y)$ and $\text{min}\{2n+1,n+a\}$ for $\sigma(Y)$. A consequence of the Fulton-Hansen connectedness Theorem[@FHa] is the following corollary which will be central to what follows.
\[dim\]\[Corollary 4 of Ref[@FHa]\] One of the following must hold
1. $\text{dim}(\tau(Y))=2n$ and $\text{dim}(\sigma(Y))=2n+1$, or
2. $\tau(Y)=\sigma(Y)$.
If we go back to the case where $X=\PP^1\times\PP^1\times\PP^1\subset \PP^7$, a standard calculation[^4] shows that $\text{dim}(\sigma(X))=7$, i.e. the secant variety is of the expected dimension and fills the ambient space. Therefore one automatically knows from Corollary \[dim\] that $\tau(X)$ is of dimension $6$, i.e. of codimension one in $\PP^7$. Moreover given a general pair of points $(x,y)\in X\times X$ denoted by $x=[\psi_A\otimes \psi_B\otimes \psi_C]$ and $y=[\tilde{\psi}_A\otimes \tilde{\psi}_B\otimes \tilde{\psi}_C]$ it is not difficult to see that there exists $g=(g_1,g_2,g_3)\in SL_2(\CC)\times SL_2(\CC)\times SL_2(\CC)$ such that $[g.(|000\rangle+|111\rangle)]=[(g_1.|0\rangle)\otimes (g_2.|0\rangle)\otimes (g_3.|0\rangle)+(g_1.|1\rangle)\otimes (g_2.|1\rangle)\otimes (g_3.|1\rangle)]=[\psi_A\otimes \psi_B\otimes \psi_C+\tilde{\psi}_A\otimes \tilde{\psi}_B\otimes \tilde{\psi}_C]$. In other words we have for $G=SL_2(\CC)\times SL_2(\CC)\times SL_2(\CC)$ $$\label{secantGHZ}
\sigma(X)=\sigma(G.[|000\rangle])=\overline{G.[|000\rangle+|111\rangle]}$$ Similarly one can provide an orbit description of $\tau(X)$: Let $\gamma(t)=[(\psi_A+t\tilde{\psi}_A)\otimes (\psi_B+t\tilde{\psi}_B)\otimes(\psi_C+ t\tilde{\psi}_C)]$ be a general curve of $X$ passing through $[\psi_A\otimes \psi_B\otimes \psi_C]$ such that $\psi_i$ and $\tilde{\psi}_i$ are not colinear. Then a straightforward calculation shows that after differentiation we get $\gamma'(0)=[\tilde{\psi}_A\otimes\psi_B\otimes\psi_C+\psi_A\otimes\tilde{\psi}_B\otimes\psi_C+\psi_A\otimes\psi_B\otimes\tilde{\psi}_C]\in \tilde{T}_{[\psi_A\otimes \psi_B\otimes \psi_C]}X$. Again under the group action $G$ this calculation tells us that $$\label{tangentW}
\tau(X)=\tau(G.[|000\rangle])=\overline{G.[|100\rangle+|010\rangle+|001\rangle]}$$
Equations (\[secantGHZ\]) and (\[tangentW\]) say that the $|GHZ\rangle$ and $|W\rangle$ states form open subsets of the secant and tangential varieties of the set of separable states, respectively. Moreover, the fact that the secant variety has the expected dimension implies that those two states are non-equivalent. Considering the biseparable states, the geometric picture can be completed as follows:
$$\xymatrix{&\sigma(X)= \PP^7& \\
&\tau(\PP^1\times \PP^1 \times \PP^1){\ar@{^{}-}}[dr]{\ar@{^{}-}}[u] & \\
\sigma(\PP^1\times \PP^1)\times \PP^1 {\ar@{^{}-}}[ru] {\ar@{^{}-}}[rd] & \PP^1\times\sigma(\PP^1\times \PP^1){\ar@{^{}-}}[u] {\ar@{^{}-}}[d]& \sigma(\PP^1\times\underline{\PP^1}\times \PP^1)\times \PP^1 \\
& X=\PP^1\times \PP^1 \times \PP^1 {\ar@{^{}-}}[ur]& \\
}$$
In Figure \[222onion\] the lines represent inclusions as algebraic varieties and $\sigma(\PP^1\times\underline{\PP^1}\times \PP^1)\times \PP^1$ is a notation introduced in Ref[@HLT] to denote the variety of the closure of secant lines of $X$ between points of type $[u\otimes v\otimes w]$ and $[\tilde{u}\otimes v\otimes \tilde{w}]$.
Based on the previous analysis we see that an alternative way of saying:
[*<< Three qubits have two non equivalent classes of genuine entanglement, one of type $|GHZ\rangle$ and the other of type $|W\rangle$>>*]{}
would be, in geometrical terms,
[*<<The secant variety of the set of separable states $\PP^1\times\PP^1\times\PP^1$ has the expected dimension and fills the ambient space>>.*]{}
In Section \[tripartite\] we show what this last geometric formulation tells us about other types of multipartite systems.
The geometry of tripartite entanglement {#tripartite}
=======================================
Let us now consider a semi-simple complex Lie group $G$ and an irreducible $G$-module $\mathcal{H}$, i.e. a vector space on which $G$ acts and which does not contain any nontrivial submodule. We call $\mathcal{H}$ an irreducible representation of $G$. Taking the projective space $\PP(\mathcal{H})$, there exists a unique smooth orbit $X=G.[v]\subset \PP(\mathcal{H})$ called the highest weight orbit[^5]. In the case of three qubits we have $\mathcal{H}=\CC^2\otimes\CC^2\otimes\CC^2$, $G=SL_2(\CC)\times SL_2(\CC)\times SL_2(\CC)$ and $v=|000\rangle$, i.e. $X=G\cdot[|000\rangle]=\PP(\{|\psi\rangle \text{ separable}\})$. Let us look at another standard example.
\[grass\] We consider $G=SL_6(\CC)$ and $\mathcal{H}=\Lambda^3 \CC^6$. The vector space $\mathcal{H}$ is an irreducible representation of $G$ (more generally the $\Lambda^k V$ are irreducible representations of $SL(V)$ called the fundamental representations, see Ref[@F-H] page 221). Given $(e_1,e_2,\dots,e_6)$ a basis of $\CC^6$, a highest weight vector can be chosen to be $v=e_1\wedge e_2\wedge e_3$. Then in this case $X=SL_6(\CC)\cdot [e_1\wedge e_2\wedge e_3]\subset \PP(\Lambda^3 \CC^6)=\PP^{19}$. The variety $X$ represents the set of 3-dimensional planes in $\CC^6$, also known as the Grassmannian $G(3,6)$. Given a three-plane of $\CC^6$ spanned by $u$, $v$ and $w$ we can always find $g\in SL_6(\CC)$ such that $[g\cdot (e_1\wedge e_2\wedge e_3)]=[g\cdot e_1\wedge g\cdot e_2\wedge g\cdot e_3]=[u\wedge v\wedge w]$. In terms of skew-symmetric tensors, the variety $X$ is the set of rank one tensors in $\mathcal{H}$. Now let us recall that, in quantum information theory, $\mathcal{H}=\Lambda^3\CC^6$ is the Hilbert space describing systems made of three fermions with six single-particle states. Then, from the above description, it follows that for three fermions with six single-particle states, $X=G(3,6)$ is the set of separable states and $G=SL_6(\CC)$ is the corresponding SLOCC group.
Let $P\subset G$ denote the stabilizer of the highest weight vector $v\in \mathcal{H}$; then $X=G/P$ and $X=G/P\subset \PP(\mathcal{H})$ realizes the minimal embedding of the homogeneous variety $G/P$. The subgroup $P\subset G$ is called a parabolic subgroup of $G$[@F-H].
Based on our geometric interpretation of the classes $|GHZ\rangle$ and $|W\rangle$ in the three qubit case, let us ask a more general question: what are the semi-simple Lie groups $G$ and the corresponding irreducible representations $\mathcal{H}$ such that $$\tau(X)\subsetneq\sigma(X)=\PP(\mathcal{H})$$ where $X$ arises as the projectivization of the highest weight orbit?
First it should be noticed that $\sigma(X)=\sigma(G/P)=\PP(\mathcal{H})$ and $\tau(X)$ being a hypersurface of $\PP(\mathcal{H})$ imply that the ring of $G$-invariant polynomials on $\mathcal{H}$ is generated by the $G$-invariant irreducible polynomial vanishing on $\tau(X)$, i.e. $\CC[\mathcal{H}]^G=\CC[F]$ where $F$ is the irreducible (unique up to scale) homogeneous polynomial defining $\tau(X)$. Indeed the fact that $\sigma(G/P)=\PP(\mathcal{H})$ says there is a dense orbit (because the secant variety is always the closure of the orbit $G.[u+v]$ where $(u,v)$ is a general pair of points of $X$, see Ref[@Z2]). Therefore either there are no invariants, or the ring of invariants is generated by a single polynomial. The fact that $\tau(X)$ is a $G$-invariant hypersurface tells us we are in the second case. The representations such that $\CC[\mathcal{H}]^G=\CC[F]$ have been classified by Kac, Popov and Vinberg[@KPV]. One can deduce from this classification which representations satisfy $\sigma(G.[v])=\PP(\mathcal{H})=\PP^{2n+1}$ where the highest weight orbit $G.[v]$ has dimension $n$. This is in fact done explicitly in the book of F. Zak[@Z2], pages 51 and 53, where the author studies in detail homogeneous varieties of small codimension in order to understand a special class of them called the Severi varieties. We summarize Zak's result in Table \[classification\], together with the corresponding systems in quantum information theory and the references where those cases have been separately investigated.
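In the three-qubit case the generator $F$ is Cayley's $2\times2\times2$ hyperdeterminant, the quartic whose zero locus is $\tau(X)$: it vanishes on the $|W\rangle$ class and is nonzero on $|GHZ\rangle$. A numerical sketch using the standard expansion of the hyperdeterminant (illustrative, not part of the classification argument):

```python
import numpy as np

def hyperdet(t):
    """Cayley's 2x2x2 hyperdeterminant of a three-qubit tensor a[i,j,k]:
    the degree-4 SLOCC invariant generating C[H]^G, whose zero locus is
    the tangential variety tau(X)."""
    a = np.asarray(t).reshape(2, 2, 2)
    d = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
         + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d -= 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
            + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
            + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1]
            + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
            + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0]
            + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
    d += 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
            + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1])
    return d

ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)     # (|000>+|111>)/sqrt(2)
w   = np.zeros(8); w[1] = w[2] = w[4] = 1/np.sqrt(3)  # (|001>+|010>+|100>)/sqrt(3)
```

One finds $F(|GHZ\rangle)=1/4\neq 0$ while $F(|W\rangle)=0$, in agreement with $[|W\rangle]\in\tau(X)$ and $[|GHZ\rangle]\notin\tau(X)$.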
$\begin{array}{c|c|c|c|c}
G & \mathcal{H} & \text{\scriptsize Highest weight orbit} & \text{\scriptsize QIT interpretation} & \text{\scriptsize References} \\
\hline
SL_2(\CC) & Sym^3(\CC^2) & v_3(\PP^1)\subset \PP^3 & \text{\scriptsize Three bosonic} &\text{\scriptsize Brody, Gustavsson, Hughston.}\cite{brody2} \\
& & &\text{\scriptsize qubits} &\text{\scriptsize Vrana and L\'evay}\cite{VL} \\
\hline
SL_2(\CC)\times SO(m) & & & &\\
\hline
m=3 & \CC^2\otimes Sym^2(\CC^2) & \PP^1\times v_2(\PP^1)\subset \PP^5 & \text{\scriptsize 1 distinguished qubit} & \text{\scriptsize Vrana and L\'evay}\cite{VL} \\
& & & \text{\scriptsize and 2 bosonic qubits} &\\
\hline
m=4 & \CC^2\otimes\CC^2\otimes\CC^2 & \PP^1\times\PP^1\times\PP^1\subset \PP^7& \text{\scriptsize 3 qubits} & \text{\scriptsize D\"ur, Vidal, Cirac}\cite{Dur}\\
\hline
m=5 & \CC^2\otimes \Lambda^{<2>} \CC^4 & \PP^1\times LG(2,4)\subset \PP^9 & \text{\scriptsize 1 distinguished qubit} & \text{\scriptsize New} \\
& & & \text{\scriptsize and two fermions with} & \\
& & & \text{\scriptsize 4 single-particle states} & \\
& & & \text{\scriptsize and a symplectic form condition} & \\
\hline
m=6 & \CC^2\otimes \Lambda^2\CC^4 & \PP^1\times G(2,4)\subset \PP^{11} &\text{\scriptsize 1 qubit and two fermions} & \text{\scriptsize Vrana and L\'evay\cite{VL}}\\
& & &\text{\scriptsize with 4 single-particle states} &\\
\hline
m>6 & \CC^2\otimes \CC^{m} & \PP^1\times Q^{m-2}\subset \PP^{2m-1} & \text{\scriptsize 1 qubit and 1 isotropic (m-1)-dit} & \text{\scriptsize New} \\
\hline
Sp_6(\CC) & \Lambda^{<3>}\CC^6 & LG(3,6)\subset \PP^{13} & \text{\scriptsize Three fermions } & \text{\scriptsize New} \\
& & & \text{\scriptsize with six single-particle states} & \\
& & &\text{\scriptsize and a symplectic form condition} & \\
\hline
SL_6(\CC) & \Lambda^3\CC^6 & G(3,6)\subset \PP^{19} & \text{\scriptsize Three fermions} & \text{\scriptsize L\'evay and Vrana}\cite{LV}\\
& & & \text{\scriptsize with six single-particle states} & \\
\hline
Spin_{12} & \Delta_{12} & \SS_6\subset \PP^{31} & \text{\scriptsize Particles in } & \text{\scriptsize S\'arosi and L\'evay}\cite{SL}\\
& & & \text{\scriptsize Fermionic Fock spaces}\\
\hline
E_7 & V_{56} & E_7/P_1\subset \PP^{55} & \text{\scriptsize Tripartite entanglement} & \text{\scriptsize Duff and Ferrara\cite{DF}}\\
& & & \text{\scriptsize of seven qubits} & \\
\hline
\end{array}$
The notations of Table \[classification\] are as follows:
- $Sym^n V$ and $\Lambda^n V$ denote respectively the symmetric and skew-symmetric parts of $V^{\otimes n}$.
- $v_k:\PP(V)\to \PP(Sym^k V)$ is the Veronese map defined by $v_k([v])=[v\circ v\circ \dots \circ v]$ and $v_2(\PP^1)$ and $v_3(\PP^1)$ are curves corresponding to the images of $\PP^1$ by $v_2$ and $v_3$ also known as the conic and the twisted cubic[@Ha].
- $Q^{n-1}\subset \PP^n$ denotes a smooth quadric in $\PP^n$.
- The variety $LG(k,n)\subset \PP(\Lambda^{<k>}\CC^n)$ is the so-called Lagrangian Grassmannian. Given a non degenerate symplectic form $\omega$ on $\CC^n$, $LG(k,n)$ is the variety of isotropic $k$-planes in $\CC^n$ with respect to $\omega$.
- As already mentioned in Example \[grass\], the variety $G(k,n)\subset \PP(\Lambda^k \CC^n)$ is the Grassmannian variety of $k$-planes in $\CC^n$.
- The vector space $\Delta_{12}$ is the spin representation of the group $Spin_{12}$, i.e. the double covering of $SO(12)$, see Ref[@F-H]. The variety $\SS_6\subset \PP(\Delta_{12})$ is the corresponding highest weight orbit, called the spinor variety. It is the variety of pure spinors[@chevalley].
- The vector space $V_{56}$ is the standard representation of the Lie group $E_7$ and $E_7/P_1$ denotes the corresponding highest weight orbit (in terms of parabolic groups $P_1$ corresponds to the parabolic group defined by the root $\alpha_1$).
Table \[classification\] provides a classification of quantum systems featuring two and only two classes of genuine entanglement, of types $|W\rangle$ and $|GHZ\rangle$. Although most of these systems have been studied independently by various authors of the quantum information theory community, it is interesting to point out that, thanks to the work of F. Zak, a purely geometric approach now allows us to present them in a single classification scheme. As we will discuss in Appendix \[app\], this classification also corresponds to the classification of Freudenthal varieties. The role of the Freudenthal construction in the study of those quantum systems, in particular the role of Freudenthal triple systems (FTS), has already been understood and used by different authors[@BDDER; @BDFMR; @VL]. The Hilbert spaces and quantum systems of our Table \[classification\], obtained by geometric arguments, are the same Hilbert spaces and SLOCC groups as in Table II of Ref[@BDDER] built from FTS. However the FTS construction does not show that this Table provides a complete classification of quantum systems featuring this peculiar entanglement behavior.
Let us also point out that three new types of quantum systems with entanglement classes similar to the three-qubit system appear in this classification. Their sets of separable states correspond to the following three algebraic varieties:
- $X=\PP^1\times LG(2,4)\subset \PP^9$,
- $X=LG(3,6)\subset \PP^{13}$
- $X=\PP^1\times Q^{m-2}\subset \PP^{2m-1}$, $m>6$.
As mentioned in Table \[classification\], the first system is made of a distinguished qubit and two fermions with four single-particle states satisfying a symplectic condition, and the second system corresponds to three fermions with six single-particle states satisfying a symplectic condition. The last new case corresponds to a system made of a qubit and an $(m-1)$-dit ($m>6$) satisfying an isotropy condition given by a quadratic form.
\[bosons\] The orbit structure of the projectivized Hilbert spaces $\PP(\mathcal{H})$ with the SLOCC groups $G$ of Table \[classification\] is fully provided by Ref[@LM1]. In particular the authors show that, except for $G=SL_2(\CC)$ and $\mathcal{H}=Sym^3(\CC^2)$ (three bosonic qubits), there are exactly $4$ orbits. The Zariski closures of those orbits can be described as follows: $$\underbrace{X}_{\text{Separable}}\subset \underbrace{\sigma_{+}(X)}_{\text{Biseparable}}\subset \underbrace{\tau(X)}_{|W\rangle}\subset \underbrace{\sigma(X)}_{|GHZ\rangle}=\PP(\mathcal{H})$$ The variety $\sigma_{+}(X)$ is the closure of the set of points of type $|\psi\rangle+|\chi\rangle$ where $|\psi\rangle$ and $|\chi\rangle$ are two separable states which do not form a generic pair (see Ref[@LM1] for the description of the isotropy condition satisfied by such a pair $(|\psi\rangle,|\chi\rangle)$). The smooth points of $\sigma_{+}(X)$ are therefore identified with biseparable states. This variety is irreducible except in the case of three qubits, where $\sigma_{+}(\PP^1\times \PP^1\times \PP^1)$ splits into three irreducible components (see Figure \[222onion\]).
For three bosonic qubits, $X=v_3(\PP^1)$, the orbit structure is slightly different. It consists of only three orbits, as there is no variety analogous to $\sigma_{+}(X)$.
We conclude this Section with a variation of our initial problem. Instead of classifying systems with two and only two classes of genuine entanglement of types $|W\rangle$ and $|GHZ\rangle$, let us consider systems having two and only two types of genuine entanglement (but not necessarily featuring $|W\rangle$ and $|GHZ\rangle$).
Let $\mathcal{H}=\CC^3\otimes\CC^3$, $G=SL_3(\CC)\times SL_3(\CC)$, and $X=G.[|00\rangle]=\PP^2\times\PP^2\subset \PP^8=\PP(\mathcal{H})$, i.e. $X$ is the set of separable states of two qutrits. The variety $X$ can also be identified with the projectivization of the rank one $3\times 3$ matrices, and $\PP(\mathcal{H})$ with the projectivization of the space of $3\times 3$ matrices. Under the action of the SLOCC group $G$, it is well known that there are only three orbits: $$X=\PP\{\text{Matrices of rank } 1\}\subset \PP\{\text{Matrices of rank }\leq 2\}\subset \PP\{\text{Matrices of rank }\leq 3\}=\PP^8$$ The variety of matrices of rank at most two is the secant variety of $X$, and general points of this variety correspond to the orbit of the state $|\psi\rangle=|00\rangle+|11\rangle$. But in this example there is no separate tangential variety, for dimension reasons. Indeed $dim(\sigma(X))=7<2\times 4+1$ and by Proposition \[dim\] we have $\sigma(X)=\tau(X)$. Thus this is an example of a multipartite system with two and only two types of non-equivalent entangled states but no entangled class of type $|W\rangle$.
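The three orbits can be detected numerically. A sketch (illustrative only): reshape the nine amplitudes into a $3\times 3$ matrix, whose rank is a complete SLOCC invariant for two qutrits.

```python
import numpy as np

def slocc_class(psi, tol=1e-10):
    """SLOCC class of a two-qutrit state: reshape the 9 amplitudes into a
    3x3 matrix; under SL3 x SL3 the matrix rank (1, 2 or 3) is a complete
    invariant, labelling the three orbit closures X c sigma(X) c P^8."""
    return int(np.linalg.matrix_rank(np.asarray(psi).reshape(3, 3), tol=tol))

sep  = np.zeros(9); sep[0] = 1.0                                # |00>
bell = np.zeros(9); bell[0] = bell[4] = 1/np.sqrt(2)            # |00>+|11>
gen  = np.zeros(9); gen[0] = gen[4] = gen[8] = 1/np.sqrt(3)     # generic state
```

The separable state has rank 1, the state $|00\rangle+|11\rangle$ rank 2 (a general point of $\sigma(X)$), and $|00\rangle+|11\rangle+|22\rangle$ rank 3 (the dense orbit).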
It is clear from the previous example that quantum systems with only two types of genuine entangled classes which are not considered in Table \[classification\] should correspond to systems whose set of separable states $X\subset \PP(\mathcal{H})$ satisfies the following geometric conditions: $$\label{condition2}
dim(\sigma(X))<2dim(X)+1 \text{ and there is a SLOCC orbit corresponding to } \PP(\mathcal{H})\backslash \sigma(X)$$
It turns out that the classification of homogeneous varieties $X=G/P$ under the conditions of Eq. (\[condition2\]) can also be deduced from Zak's work (see Ref[@Z2], pages 54 and 59). We summarize this result in Table \[classification2\].
$\begin{array}{c|c|c|c}
G & \mathcal{H} & \text{\scriptsize Highest weight orbit} & \text{\scriptsize QIT interpretation} \\
\hline
SL_2(\CC) & Sym^2(\CC^3) & v_2(\PP^2)\subset \PP^5 & \text{\scriptsize Two bosons} \\
& & &\text{\scriptsize with 3 single-particle states} \\
\hline
SL_3(\CC)\times SL_3(\CC)& \CC^3\otimes \CC^{3} & \PP^2\times \PP^2\subset \PP^{8} & \text{\scriptsize Two qutrits} \\
\hline
SL_5(\CC) & \Lambda^{2}\CC^5 & G(2,5)\subset \PP^{14} & \text{\scriptsize Two fermions } \\
& & & \text{\scriptsize with five single-particle states} \\
\hline
E_6 & V_{27} & E_6/P_1\subset \PP^{26} & \text{\scriptsize Bipartite entanglement of three qutrits\cite{DF2}} \\
& & & \\
\hline
SL_3(\CC)\times SL_4(\CC) & \CC^3\otimes \CC^4 & \PP^2\times\PP^3\subset \PP^{11} & \text{\scriptsize One qutrit and one 4-qudit } \\
\hline
SL_7(\CC) & \Lambda^2\CC^7 & G(2,7)\subset \PP^{20} & \text{\scriptsize Two fermions} \\
& & & \text{\scriptsize with 7 single-particle states} \\
\hline
\end{array}$
The notations for Table \[classification2\] are as follows:
- $V_{27}$ is the standard representation of $E_6$ and $E_6/P_1$ is the highest weight orbit.
The first four varieties of Table \[classification2\] are the so-called Severi varieties studied by F. Zak[@Z2]. In terms of entanglement Tables \[classification\] and \[classification2\] lead to the following result.
The pure quantum systems having two and only two types of genuine entanglement classes are classified by Tables \[classification\] and \[classification2\].
It should be pointed out that the composite quantum systems of Table \[classification\] are all tripartite systems (except in the case of $\mathcal{H}=\CC^2\otimes \CC^m$ with $G=SL_2(\CC)\times SO(m)$ for $m>6$), while the composite systems of Table \[classification2\] are all bipartite systems. This will be emphasized in Appendix \[appendix\] when we refer to a uniform geometric parametrization of the varieties of separable states given by Ref[@LM1].
Conclusion
==========
By means of algebraic geometry, in this paper we intended to provide a uniform description of pure quantum systems featuring a classification of entanglement types similar to the famous case of three qubits. More precisely, we explained how a geometric interpretation of what the $|W\rangle$ and $|GHZ\rangle$ states are allows us to use results of algebraic geometry and invariant theory to give an explicit list (Table \[classification\]) of all Hilbert spaces, with the corresponding SLOCC groups, such that the only types of genuine entangled states are the exact analogues of the $|W\rangle$ and $|GHZ\rangle$ states. It turns out that the list of varieties of separable states for those Hilbert spaces corresponds to the list of subexceptional Freudenthal varieties. Those varieties have a strong connection with exceptional simple Lie algebras (as fundamental subadjoint varieties). They also admit a uniform description as images of the same rational map (Plücker embedding) over different composition algebras. This map, found in Ref[@LM1], is described in Appendix \[appendix\]. The translation into quantum information theory language of the work of algebraic geometers[@LM1; @LM2; @LM3; @Z2] that we carried out could be summarized in the following sentence: <<[*Three fermions with $6$ single-particle states over composition algebras can be entangled in two different ways*]{}>>. This sentence covers all known cases of tripartite systems having a similar orbit structure to the three-qubit case.
The tripartite entanglement and the Freudenthal varieties {#app}
=========================================================
The algebraic varieties of Table \[classification\] have been studied in the mathematics literature as the fundamental subadjoint varieties or the Freudenthal varieties. In the early 2000s, Landsberg and Manivel investigated the geometry of those varieties in a series of papers[@LM1; @LM2; @LM3]. Their goal was to establish new connections between representation theory and algebraic geometry. In this Appendix we collect some results and descriptions of this sequence of varieties which we believe to be relevant for quantum information theory.
The subadjoint varieties
------------------------
Let us consider $\mathfrak{g}$, a complex simple Lie algebra of type $B_n, D_n, G_2, F_4, E_6, E_7$ or $E_8$ (i.e. any complex simple Lie algebra except those of type $A_n$ and $C_n$). These are the fundamental simple Lie algebras, i.e. Lie algebras whose adjoint representation is fundamental[@LM1]. Let $X_G\subset \PP(\mathfrak{g})$ be the highest weight orbit for the adjoint representation of the corresponding Lie group $G$, and consider $\tilde{T}_x X_{G}$, the embedded tangent space at a point $[x]$ of the homogeneous variety $X_G$. Then $Y=X_G\cap \tilde{T}_{x} X_{G}$ is a homogeneous variety. Table \[subadjoint\] gives the correspondence between the Lie algebras $\mathfrak{g}$ and the homogeneous varieties $Y$.
$$\begin{array}{c|c}
Y\subset \PP^{n} & \mathfrak{g} \\
\hline
v_3(\PP^1)\subset \PP^3 & \mathfrak{g}_2\\
\PP^1\times \QQ^{m-4} \subset \PP^{2m-5} & \mathfrak{so}_m\\
LG(3,6)\subset \PP^{13} & \mathfrak{f}_4\\
G(3,6)\subset \PP^{19} & \mathfrak{e}_6\\
\SS_6\subset \PP^{31} & \mathfrak{e}_7\\
E_7/P_1\subset \PP^{55} & \mathfrak{e}_8
\end{array}$$
The sequence of algebraic varieties corresponding to quantum multipartite systems featuring only the two types of genuine entanglement $|W\rangle$ and $|GHZ\rangle$ is connected to fundamental adjoint representations of Lie algebras by this construction. Moreover in Ref[@LM2] Landsberg and Manivel prove the existence of a rational map of degree $4$ which allows one to reconstruct, from the knowledge of $Y$, the adjoint variety $X_G\subset \PP(\mathfrak{g})$ and thus to recover the structure of the Lie algebra $\mathfrak{g}$. To illustrate the construction of this rational map, let us detail one example.
\[rationnalmap\] Let $Y=G(3,6)\subset \PP^{19}=\PP(V)$ be the variety of separable states for a system made of three fermions with six single-particle states. Let us denote by $\{x_1,\dots, x_{20}\}$ a dual basis of $V$. Then embed $\PP(V)=\PP^{19}$ linearly as $\PP^{19}\subset_{\{x_0=0\}} \PP^{20}\subset_{\{x_{21}=0\}} \PP^{21}$ and consider the rational map $\phi: \PP^{21} \to \PP^{77}$ defined by $$\phi([x_0,\dots,x_{21}])=[x_0^4,x_0^3x_{21}, x_0^3 x_i,x_0^2 I_2(Y), x_0x_{21}x_i-x_0 I_3(\tau(Y)_{sing}),x_0^2x_{21}^2-I_4(\tau(Y))]$$
where $1\leq i\leq 20$, $I_k(Z)$ denotes a set of generators of the ideal of degree $k$ polynomials defining $Z$, and $\tau(Y)_{sing}$ is the subvariety of singular points of $\tau(Y)$. Then $\phi(G(3,6))=X_{E_6}$, i.e. $\phi$ maps the set of separable states of three fermions with six single-particle states to the $E_6$ adjoint variety. The $E_6$ adjoint variety contains the information defining the Lie algebra $\mathfrak{e}_6$: we have $\langle X_{E_6}\rangle=\PP(\mathfrak{e}_6)$, i.e. the linear span fills the full space, and the algebraic structure can be recovered[@LM2] from the geometry of $X_{E_6}$.
One sees from the previous example that the Lie algebra $\mathfrak{e}_6$ can be reconstructed from the defining equation of $\tau(G(3,6))$, i.e. the unique (up to multiplication by a scalar) SLOCC-invariant irreducible quartic on $\mathcal{H}=\Lambda^3(\CC^6)$. Indeed the ideal of degree three polynomials vanishing on the singular locus of $\tau(G(3,6))$ is generated by the derivatives of the quartic invariant, and the ideal of degree two polynomials defining $G(3,6)$ is spanned by the second derivatives of the quartic invariant. But this quartic invariant is known in the context of entanglement as the analogue for three fermions of the $3$-tangle[@VL].
Therefore, in the context of entanglement, Landsberg and Manivel's construction tells us that Table \[subadjoint\] can be read as follows: consider a fundamental Lie algebra $\mathfrak{g}$ and the corresponding multipartite quantum system $Y$; then $\mathfrak{g}$ can be reconstructed from the knowledge of the unique irreducible SLOCC invariant of degree $4$ (i.e. the generalization of the $3$-tangle). This is another approach to constructing Lie algebras from qubits[@CB].
The Freudenthal subexceptional series {#appendix}
-------------------------------------
The subadjoint varieties also appeared in the work of Landsberg and Manivel in their geometric investigation of the so-called Freudenthal magic square. Let us recall that the Freudenthal magic square is a square of semi-simple Lie algebras due to Freudenthal[@Freu] and Tits[@T], obtained from a pair of composition algebras $(\AA,\BB)$ (where $\AA$ and $\BB$ are the complexifications of $\RR$, $\CC$, $\HH$ the quaternions, or $\OO$ the octonions) by the following construction: $$\mathfrak{g}=Der(\AA)\oplus(\AA_0\otimes J_3(\BB)_0)\oplus Der(J_3(\BB))$$
where $\AA_0$ denotes the space of imaginary elements, $J_3(\BB)$ denotes the Jordan algebra of $3\times 3$ Hermitian matrices over $\BB$ and $J_3(\BB)_0$ is the subspace of traceless matrices of $J_3(\BB)$. For an algebra $A$, $Der(A)$ is the derivation algebra of $A$, i.e. the Lie algebra of the automorphism group of $A$.
The Freudenthal magic square is thus given by:
$\RR$ $\CC$ $\HH$ $\OO$
------- ------------------------------ --------------------------------------------------------------- --------------------------------- ------------------
$\RR$ $\mathfrak{s}\mathfrak{o}_3$ $\mathfrak{s}\mathfrak{l}_3$ $\mathfrak{s}\mathfrak{p}_6$ $\mathfrak{f}_4$
$\CC$ $\mathfrak{s}\mathfrak{l}_3$ $\mathfrak{s}\mathfrak{l}_3\times \mathfrak{s}\mathfrak{l}_3$ $\mathfrak{s}\mathfrak{l}_6$ $\mathfrak{e}_6$
$\HH$ $\mathfrak{s}\mathfrak{p}_6$ $\mathfrak{s}\mathfrak{l}_6$ $\mathfrak{s}\mathfrak{o}_{12}$ $\mathfrak{e}_7$
$\OO$ $\mathfrak{f}_4$ $\mathfrak{e}_6$ $\mathfrak{e}_7$ $\mathfrak{e}_8$
: The Freudenthal magic square
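Tits' formula can be checked at the level of dimensions. The following sketch (illustrative; the ingredient dimensions are standard facts, e.g. $dim\, Der(\OO)=dim\,\mathfrak{g}_2=14$ and $dim\, Der(J_3(\OO))=dim\,\mathfrak{f}_4=52$) recovers the dimensions of all sixteen Lie algebras of the magic square:

```python
# Dimension check of the Tits construction
#   g = Der(A) + (Im(A) x J3(B)0) + Der(J3(B)).
# Ingredient dimensions (standard facts, stated here as assumptions):
#   dim Der(R,C,H,O) = 0, 0, 3, 14;  dim Im(R,C,H,O) = 0, 1, 3, 7;
#   dim J3(B)0 = 5, 8, 14, 26;  dim Der(J3(B)) = 3, 8, 21, 52 (so3, sl3, sp6, f4).
der_A   = {'R': 0, 'C': 0, 'H': 3, 'O': 14}
im_A    = {'R': 0, 'C': 1, 'H': 3, 'O': 7}
j3B0    = {'R': 5, 'C': 8, 'H': 14, 'O': 26}
der_J3B = {'R': 3, 'C': 8, 'H': 21, 'O': 52}

def magic_dim(A, B):
    """Dimension of the magic square entry g(A, B)."""
    return der_A[A] + im_A[A] * j3B0[B] + der_J3B[B]

# Dimensions of the Lie algebras in the square, row by row as in the table
# (so3 = 3, sl3 = 8, sp6 = 21, f4 = 52, sl3 x sl3 = 16, sl6 = 35, so12 = 66,
#  e6 = 78, e7 = 133, e8 = 248):
expected = {('R','R'): 3,   ('R','C'): 8,   ('R','H'): 21,  ('R','O'): 52,
            ('C','R'): 8,   ('C','C'): 16,  ('C','H'): 35,  ('C','O'): 78,
            ('H','R'): 21,  ('H','C'): 35,  ('H','H'): 66,  ('H','O'): 133,
            ('O','R'): 52,  ('O','C'): 78,  ('O','H'): 133, ('O','O'): 248}
```

Note that the numbers come out symmetric, $dim\,\mathfrak{g}(\AA,\BB)=dim\,\mathfrak{g}(\BB,\AA)$, which is not obvious from the formula itself.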
The relevance of the Freudenthal construction to the study of entanglement has been pointed out by various authors[@BDDER; @VL]. However the geometric side of the story has not been completely explained so far in the context of quantum information theory. The geometric version of the Freudenthal magic square given in Ref[@LM2; @LM3] is the following square of homogeneous varieties:
$\RR$ $\CC$ $\HH$ $\OO$
------- -------------- -------------------- ------------ -----------------
$\RR$ $v_2(Q^1)$ $\PP(T(\PP^2))$ $ LG(2,6)$ $E_6/P_1\cap H$
$\CC$ $v_2(\PP^2)$ $\PP^2\times\PP^2$ $G(2,6)$ $E_6/P_1$
$\HH$ $LG(3,6)$ $G(3,6)$ $\SS_6$ $ E_7/P_7$
$\OO$ $X_{F_4}$ $X_{E_6}$ $ X_{E_7}$ $X_{E_8}$
: The Geometric magic square
The geometric magic square has the property that each homogeneous variety of the square is homogeneous for the corresponding Lie group in the Freudenthal magic square. Moreover each variety of a given row can be recovered as a section (tangential or linear) of the next one. The connection with composition algebras led Landsberg and Manivel to formulate a uniform geometric description of the varieties of the third row (the one relevant for the classification of Table \[classification\]) as Grassmannians over the composition algebras.
It is well known that the variety $G(3,6)$ of Example \[rationnalmap\] can be parametrized by the so-called Plücker map[@Ha]. Let $v_1,v_2$ and $v_3$ be three complex vectors defining a three-plane in $\CC^6$. The coordinates can be chosen so that $v_1=[1:0:0:0:0:0]$, $v_2=[0:1:0:0:0:0]$ and $v_3=[0:0:1:0:0:0]$. Let $[\tilde{v}_1\wedge \tilde{v}_2\wedge \tilde{v}_3]$ be a three-plane in the neighborhood of $[v_1\wedge v_2\wedge v_3]$. One can choose $\tilde{v}_1=[1:0:0:a_{11}:a_{12} :a_{13}]$, $\tilde{v}_2=[0:1:0:a_{21}:a_{22} :a_{23}]$ and $\tilde{v}_3=[0:0:1:a_{31}:a_{32} :a_{33}]$. Locally the variety $G(3,6)$ is parametrized in the neighborhood of $[v_1\wedge v_2\wedge v_3]$ by $$\label{plucker}
\phi(1,P)=(1,P,com(P),det(P))$$
where $P$ is the matrix $P=(a_{ij})$ and $com(P)$ is its comatrix. The map $\phi$ is the Plücker map.
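The structure of the Plücker map of Eq. (\[plucker\]) can be verified numerically: the $20$ maximal minors of $(I_3|P)$ are, up to sign, exactly $1$, the entries of $P$, the entries of $com(P)$ and $det(P)$. A sketch (illustrative only):

```python
import numpy as np
from itertools import combinations

def maximal_minors(M):
    """All 3x3 minors of a 3x6 matrix: the Plucker coordinates of its
    row span, a point of G(3,6) in P^19."""
    return [np.linalg.det(M[:, c]) for c in combinations(range(6), 3)]

def comatrix(P):
    """Comatrix (matrix of cofactors) of a 3x3 matrix."""
    C = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            m = np.delete(np.delete(P, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(m)
    return C

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))
minors = maximal_minors(np.hstack([np.eye(3), P]))
# up to signs, the minors are exactly the entries of phi(1, P):
phi = np.concatenate([[1.0], P.ravel(), comatrix(P).ravel(),
                      [np.linalg.det(P)]])
```

Indeed, a minor using two identity columns picks out an entry of $P$, one using a single identity column picks out a $2\times 2$ minor of $P$, and the two extreme minors are $1$ and $det(P)$.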
An alternative description of $G(3,6)$ can be given by considering $P\in J_3(\AA)$ where $\AA=\CC\oplus \CC$ is the complexification of $\CC$, i.e. $P=\begin{pmatrix}
\alpha & x_1 & x_2\\
\overline{x_1} & \beta & x_3\\
\overline{x_2} & \overline{x_3} & \gamma
\end{pmatrix}$ with $\alpha, \beta, \gamma\in \CC$ and $x_1, x_2, x_3\in \CC\oplus \CC$. Then to recover the same parametrization one needs to require that the three row vectors defining the matrix $(I_3|P)$ are orthogonal with respect to the symplectic form $\omega=\begin{pmatrix}
0 & I_3\\
-I_3 & 0
\end{pmatrix}$, i.e. the corresponding $3$-plane is isotropic for $\omega$.
Under the symplectic condition, one has $G(3,6)=LG_{\CC\oplus \CC}(3,6)$.
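The symplectic condition itself is easy to make explicit: for the rows $r_i$ of $(I_3|P)$ a short computation gives $\omega(r_i,r_j)=P_{ji}-P_{ij}$, so the $3$-plane is isotropic exactly when $P$ is symmetric (this is the plain matrix bookkeeping over $\CC$; the Hermitian condition over $\CC\oplus\CC$ is the analogous statement). A numerical sketch:

```python
import numpy as np

def is_isotropic(P, tol=1e-12):
    """Check that the 3-plane spanned by the rows of (I3 | P) is isotropic
    for the standard symplectic form omega = [[0, I3], [-I3, 0]].
    Since R @ omega @ R.T = P.T - P, this holds iff P is symmetric."""
    omega = np.block([[np.zeros((3, 3)), np.eye(3)],
                      [-np.eye(3), np.zeros((3, 3))]])
    R = np.hstack([np.eye(3), P])   # rows span the 3-plane
    return bool(np.allclose(R @ omega @ R.T, 0, atol=tol))

S = np.array([[1., 2., 3.], [2., 4., 5.], [3., 5., 6.]])   # symmetric
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])   # not symmetric
```

In other words, in this chart the Lagrangian Grassmannian is cut out of the Grassmannian by the symmetry of $P$.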
Similarly the Plücker map of Eq. (\[plucker\]) can be defined for $P\in J_3(\AA)$, with $\AA$ one of the three other complex composition algebras. Then if we denote by $\AA=\CC$, $M_2(\CC), \OO_\CC$ the complexifications of $\RR, \HH, \OO$, Landsberg and Manivel proved[@LM2] that the varieties of the third row can all be interpreted as $LG_{\AA}(3,6)$, i.e. $$\left.\begin{array}{c}
LG(3,6)=LG_\CC(3,6)\\
G(3,6)=LG_{\CC\oplus \CC}(3,6)\\
\SS_{6}=LG_{M_2(\CC)}(3,6) \\
E_7/P_7=LG_{\OO_{\CC}}(3,6)
\end{array}\right\}\begin{array}{c}
\text{Three $\AA$-fermions with six single-particle}\\
\text{states satisfying a symplectic condition}
\end{array}$$
Moreover if we consider the case $P\in J_3(\underline{-1})=\{\begin{pmatrix}
\alpha & 0 & 0\\
0 & \alpha & 0\\
0 & 0& \alpha
\end{pmatrix}, \alpha\in \CC\}$ (notations of Ref[@LM2]) and the case $P\in J_3(\underline{0})=\{\begin{pmatrix}
\alpha & 0 & 0\\
0 & \beta & 0\\
0 & 0 & \gamma
\end{pmatrix}, \alpha, \beta,\gamma\in \CC\}$, one obtains a similar Plücker parametrization of $v_3(\PP^1)=LG_{\underline{-1}}(3,6)$ and $\PP^1\times \PP^1\times \PP^1=LG_{\underline{0}}(3,6)$.
An important consequence for quantum information theory is that this geometric interpretation of the varieties of the extended third row as Lagrangian Grassmannians over $\AA$ says that all quantum systems which feature only the states $|W\rangle$ and $|GHZ\rangle$ as their genuine entangled classes are [*tripartite systems of indistinguishable particles with six single-particle states with coefficients in a complex composition algebra, satisfying a symplectic condition*]{}.
Similarly a description of the first four varieties of separable states of Table \[classification2\] as Lagrangian Grassmannians $LG(\AA^2,\AA^6)$ is given in Ref[@LM2]. Those varieties correspond to the second row of the geometric magic square.
[^1]: [email protected], Laboratoire IRTES-M3M, Université de Technologie de Belfort-Montbéliard, 90010 Belfort Cedex, France
[^2]: [email protected], Department of Theoretical Physics, Institute of Physics, Budapest University of Technology and Economics , H-1521 Budapest, Hungary
[^3]: Hungarian Academy of Sciences
[^4]: The dimension of the secant variety can be calculated via Terracini's Lemma. The case we are interested in is, for instance, explicitly worked out in Ref[@Lan3], example 5.3.1.5 page 123. Calculations involving Terracini's Lemma in the context of QIT can also be found in Refs[@HLT; @HLT2].
[^5]: Highest weight vectors are defined after a choice of an ordering of the roots of the Lie algebra $\mathfrak{g}=Lie(G)$ which defines for each irreducible representation a unique highest weight[@F-H]. There is a bijection between the highest weights (up to a choice of an ordering of the root system) and the irreducible representations of $G$.
---
abstract: 'The recent submission of Google TPU-v3 Pods to the industry-wide MLPerf v0.6 training benchmark demonstrates the scalability of a suite of industry-relevant ML models. MLPerf defines a suite of models, datasets and rules to follow when benchmarking to ensure results are comparable across hardware, frameworks and companies. Using this suite of models, we discuss the optimizations and techniques, including choice of optimizer, spatial partitioning and weight update sharding, necessary to scale to 1024 TPU chips. Furthermore, we identify properties of models that make scaling them challenging, such as limited data parallelism and unscaled weights. These optimizations contribute to record performance in the Transformer, ResNet-50 and SSD models in the Google MLPerf-0.6 submission.'
author:
- |
Sameer Kumar, Victor Bittorf, Dehao Chen, Chiachen Chou, Blake Hechtman, HyoukJoong Lee,\
**[Naveen Kumar, Peter Mattson, Shibo Wang, Tao Wang, Yuanzhong Xu, Zongwei Zhou]{}\
Google Research, Brain Team \
`{sameerkm, vbittorf, dehao}@google.com` \
**
bibliography:
- 'bibliography.bib'
date: September 6 2019
title: 'Scale MLPerf-0.6 models on Google TPU-v3 Pods'
---
Introduction
============
MLPerf [@mlperf] is a machine learning benchmark suite that has gained industry-wide support and recognition. Recently, in July 2019, the second round of results for the training benchmarks, MLPerf v0.6, was published, including submissions from NVIDIA, Intel, Google, Fujitsu and Alibaba. Submissions ranged in size from a machine with 8 accelerators to clusters with over 1000 accelerators, using ML frameworks including TensorFlow, PyTorch, MXNet and others. [^1] Like the systems benchmark suites which have come before it, the MLPerf benchmark suite is pushing performance forward, and our v0.6 MLPerf submission on Google TPU-v3 accelerators showcases the large scale we are able to achieve. MLPerf follows in the footsteps of SPEC [@DBLP:journals/pc/Dixit91] and TPC-H [@tpch] to create an industry standard benchmark suite for ML systems including accelerators, frameworks and modeling on state of the art ML training tasks. Not only does MLPerf allow for comparisons across frameworks and hardware, but it fundamentally drives understanding and development of ML systems and methodology.
An MLPerf training benchmark involves training a model (e.g. ResNet-50) on a specific dataset (e.g. Imagenet) while following a specific methodology for parameters, optimizations, and timing. For v0.6, the MLPerf rules were expanded to enable larger-scale systems to submit to the benchmark. Particular changes included allowing the LARS optimizer for ResNet-50 and a time budget allowing large scale systems to initialize, while also increasing the accuracy requirements for the trained models. MLPerf is still challenging to run at scale; for example, the rules require implementations to context switch between training and evaluation every few seconds at large scales, which incurs significant overhead not seen in production use cases. MLPerf-0.6 accuracy targets present a significant challenge at scale, as increasing the global batch size can reduce the accuracy that can be achieved.
In this paper, we present techniques used to optimize MLPerf benchmark results on the third generation Google Tensor Processing Units (TPU-v3) shown in Figure \[fig:tpuv3\]. The Google TPU-v3 is an ML accelerator designed to accelerate neural network workloads by enabling significant matrix-matrix and matrix-vector compute acceleration on each TPU-v3 chip coupled with 32 GB of high bandwidth memory and 32 MB of scratchpad memory for storing weights and activations, respectively. Each TPU chip has two separate cores. In a TPU-v3 pod (Figure \[fig:tpupod\]), 1024 TPU-v3 chips are interconnected by a custom high throughput 2-D torus interconnect to accelerate remote DMA and global summation operations.
![Google TPU-v3.[]{data-label="fig:tpuv3"}](tpuv3.png){width="100.00000%"}
![Google TPU-v3 pod with 1024 chips, 107 PetaFlops and 32 TB of HBM interconnected by a 2-D torus network.[]{data-label="fig:tpupod"}](tpupod.png){width="100.00000%"}
Methods
=======
We present performance optimization techniques used to optimize MLPerf 0.6 training time on TPU-v3 pods. We use TensorFlow [@tensorflow2015-whitepaper; @dean2012large] for all the MLPerf 0.6 benchmarks. The TensorFlow graphs are lowered by the XLA compiler [@xla] to the cloud TPU-v3 pods. The XLA compiler enables various optimizations, like unrolling and pipelining loops and fusing compute kernels, to maximize the execution throughput of the matrix unit [@Jouppi_17] on cloud TPU-v3 accelerator cores. We use mixed precision with the bfloat16 format in all our benchmark runs [@bf16]. To maintain accuracy comparable with 32-bit floating point networks, all non-convolutional operations (e.g. batch normalization, loss computation, gradient summation) use 32-bit floating point numbers. Since the majority of the computational and memory access overheads in the MLPerf models are in the convolutional operations, the use of bfloat16 enables higher training throughput with minimal or no loss in model accuracy. When the number of examples per TPU accelerator is below a threshold, we use the distributed normalization technique presented in [@resnet_18]. The TensorFlow runtime on TPU-v3 pods executes the input pipeline to pre-process inputs on host CPUs. We use caching, host-to-device offload of select TF ops and prefetching [@resnet_18] to optimize the host input pipeline throughput. In addition, we explore the following optimization techniques to achieve peak scaling on TPU-v3 pods.
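The mixed-precision arithmetic can be sketched on the host (an illustrative simulation, not the TPU implementation; it models the bfloat16 cast by truncation, whereas the hardware rounds to nearest even). bfloat16 keeps float32's 8 exponent bits but only 7 mantissa bits, so a cast amounts to keeping the top 16 bits of each float32 word:

```python
import numpy as np

def to_bfloat16(x):
    """Simulate a cast to bfloat16: keep the sign bit, the 8 exponent bits
    and the top 7 mantissa bits of each float32, zeroing the rest.
    (Truncation for simplicity; real hardware rounds to nearest even.)"""
    bits = np.atleast_1d(np.asarray(x, dtype=np.float32)).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def mixed_precision_matmul(a, b):
    """Matmul with bfloat16 inputs and float32 accumulation, mimicking the
    precision policy for convolutions described above."""
    return to_bfloat16(a) @ to_bfloat16(b)

x = np.array([1.001, -3.14159, 1000.5], dtype=np.float32)
y = to_bfloat16(x)   # mantissa spacing is 2**-7, so 1.001 collapses to 1.0
rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32)).astype(np.float32)
b = rng.standard_normal((32, 32)).astype(np.float32)
rel_err = (np.linalg.norm(mixed_precision_matmul(a, b) - a @ b)
           / np.linalg.norm(a @ b))
```

The per-element error is bounded by one bfloat16 ulp ($2^{-7}$ relative), which is why the convolutions tolerate the reduced precision while normalization and loss math stay in float32.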
[**Distribute evaluation computation**]{}: in a traditional TensorFlow model trained on a cloud TPU-v3 pod, the evaluation job is executed separately on a side card with additional TPU chips. In the MLPerf models, the execution of the evaluation metric can become an Amdahl bottleneck limiting the scalability of the benchmark. We designed a new tight train-and-evaluation loop that is executed on the TPU accelerators. Both training and evaluation are distributed over all the TPU-v3 pod accelerator cores. The output evaluation metric tensor is computed at the epochs specified in the MLPerf rules. For example, in ResNet-50, the eval metric tensors are computed every 4 epochs. The evaluation metric tensors are used to compute the top-1 accuracy published in the training job’s standard output. The evaluation dataset is padded with zeros when the number of evaluation examples is not a multiple of the evaluation batch size. Only output tensors from the TPU cores that hold real examples are considered when computing the top-1 accuracy metric.
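The padding-and-masking bookkeeping described above can be sketched as follows (an illustrative host-side simulation; names and shapes are ours, not from the actual implementation):

```python
import numpy as np

def padded_eval_accuracy(logits, labels, cores, per_core_batch):
    """Pad the eval set with zero examples so that every core receives
    full batches, then mask the padding out of the top-1 accuracy."""
    n = len(labels)
    global_batch = cores * per_core_batch
    pad = (-n) % global_batch
    logits = np.concatenate([logits, np.zeros((pad,) + logits.shape[1:])])
    labels = np.concatenate([labels, np.zeros(pad, dtype=labels.dtype)])
    mask = np.concatenate([np.ones(n, dtype=bool), np.zeros(pad, dtype=bool)])
    correct = (logits.argmax(axis=-1) == labels) & mask
    # each "core" reduces its own shard; the host sums the partial counts
    per_core = correct.reshape(cores, -1).sum(axis=1)
    return per_core.sum() / n

labels = np.array([0, 1, 2, 0, 1])
logits = np.eye(3)[labels]
logits[-1] = [1.0, 0.0, 0.0]   # make the fifth example a miss
acc = padded_eval_accuracy(logits, labels, cores=2, per_core_batch=2)
```

Note that the zero-padded examples would spuriously "match" label 0 via `argmax`, which is exactly why the mask is required before the reduction.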
[**Optimize gradient summation**]{}: we use the 2-D gradient summation technique presented in [@resnet_18] to aggregate gradients on the TPU-v3 torus network. We observed that MLPerf TensorFlow benchmarks with non-contiguous gradient tensors had limited gradient summation throughput. We optimized the 2-D scheme by pipelining gathers of non-contiguous tensors from HBM to on-device memory with the summation of network packets in the reduction operation. In the broadcast phase, the scatters of the result buffers to non-contiguous storage are pipelined with data transfer on the network. This aggressive pipelining results in over 1.5x speedup of gradient summation throughput in the ResNet-50 model on TPU-v3 pods.
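The core idea of the 2-D scheme, factoring one large all-reduce into two smaller collectives over the rows and columns of the torus, can be simulated as follows. The pipelining of HBM gathers and scatters is omitted; this sketch (our own construction) only demonstrates the algebra of the two-phase reduction.

```python
import numpy as np

def allreduce_2d(grads):
    """Sum identically shaped gradient shards held by an R x C mesh of workers.

    grads[r][c] is worker (r, c)'s local gradient. Phase 1 reduces within each
    row of the mesh; phase 2 reduces the row results within each column, so
    every worker ends up with the global sum via two smaller collectives
    instead of one large one.
    """
    rows, cols = len(grads), len(grads[0])
    # Phase 1: all-reduce within each mesh row.
    row_sums = [sum(grads[r][c] for c in range(cols)) for r in range(rows)]
    # Phase 2: all-reduce the row results within each mesh column.
    total = sum(row_sums)
    # Every worker now holds the full sum.
    return [[total.copy() for _ in range(cols)] for _ in range(rows)]

# 2 x 2 mesh of workers, each holding a small gradient vector.
mesh = [[np.array([1.0, 2.0]), np.array([3.0, 4.0])],
        [np.array([5.0, 6.0]), np.array([7.0, 8.0])]]
out = allreduce_2d(mesh)
```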
![Spatial partitioning on TPUv3 pods[]{data-label="fig:spatial"}](spatial.png){width="100.00000%"}
![Weight update sharding on TPUv3 pods[]{data-label="fig:weight_update"}](weight-update1.png){width="100.00000%"}
[**Model parallelism**]{}: as the batch sizes are small in some of the MLPerf models, we use model parallelism to enable higher parallelism in those benchmarks. We use the following two model parallelism techniques to achieve higher scaling in the MLPerf benchmarks:
- Spatial Partitioning. In this technique MLPerf computation kernels are partitioned along both batch and spatial dimensions to increase parallelism and enable execution on a larger number of TPU-v3 accelerator cores. Halo exchange communication operations are added to synchronize TPU-v3 cores that execute spatially partitioned workloads (Figure \[fig:spatial\]).
- Weight update sharding. When the number of examples per TPU-v3 accelerator core is small, we observe that the optimizer weight update computation incurs significant overhead. For example, with ResNet-50 on 2048 TPU-v3 cores, the LARS optimizer weight update accounts for about 6% of the total device step time. In the MLPerf Transformer model, the ADAM optimizer weight update takes about 45% of the step time. We therefore distribute the weight update computation across TPU-v3 cores, and then use an optimized all-gather to broadcast the new weights to all the TPU-v3 cores (Figure \[fig:weight\_update\]).
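Weight update sharding can be illustrated with a minimal NumPy sketch; plain SGD stands in for the LARS/Adam updates used in the benchmarks, and the all-gather is simulated by a concatenation.

```python
import numpy as np

def sharded_weight_update(w, g, lr, num_cores):
    """Each core applies the optimizer to its own shard of the weights,
    then the shards are all-gathered so every core again holds the full
    weight vector. Plain SGD stands in for the real optimizer.
    """
    shards = np.array_split(w, num_cores)
    grad_shards = np.array_split(g, num_cores)
    # Per-core update on a 1/num_cores slice of the weights.
    updated = [ws - lr * gs for ws, gs in zip(shards, grad_shards)]
    # All-gather: concatenate the updated shards back into the full weights.
    return np.concatenate(updated)

w = np.ones(8)
g = np.full(8, 0.5)
w_new = sharded_weight_update(w, g, lr=0.1, num_cores=4)
```

The sharded result is identical to the unsharded update `w - lr * g`; only the work of computing it is divided across cores.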
Benchmark Analysis
==================
In this section, we present case studies for five MLPerf-0.6 benchmarks. In addition to the techniques presented above, we also explore specialized optimizations for these MLPerf models.
[**ResNet-50**]{}: MLPerf uses the ResNet-50 model [@DBLP:journals/corr/HeZRS15] on the ImageNet-1K [@ILSVRC15] dataset to benchmark image classification. ResNet-50 is one of the most widely used models for benchmarking ML, and MLPerf uses a specific variant of ResNet-50 termed “version 1.5” [@DBLP:journals/corr/GoyalDGNWKTJH17] to indicate a slight, commonly used modification to the original model architecture. In order to scale the ResNet-50 MLPerf benchmark to the 2048-core TPU-v3 pod system, we used batch parallelism along with the distributed evaluation, distributed batch normalization, weight update sharding and gradient summation optimizations.
$$\lambda = \epsilon \times ||w|| / (||g|| + \beta \times ||w||)$$ $$v = m \times v + (g + \beta \times w)$$ $$w = w - \lambda \times v$$
$$\lambda = \epsilon \times ||w|| / (||g|| + \beta \times ||w||)$$ $$v = m \times v + \lambda \times (g + \beta \times w)$$ $$w = w - v$$
The MLPerf-0.6 reference for ResNet-50 uses the adaptive learning-rate-scaling LARS optimizer [@lars_32k]. It enables training to the target accuracy in 72 epochs at batch size 32768. The reference LARS optimizer uses the weight update equation shown in Figure \[eqn:tf\_ref\]. Here, $\lambda$ is the learning rate, $g$ is the gradient tensor, $w$ is the weight tensor, $\beta$ is the weight decay, $m$ is the momentum hyperparameter and $\epsilon$ is the LARS coefficient. The LARS optimizer presented in the literature [@lars_32k] uses the weight update equation shown in Figure \[eqn:lars\_paper\]. The notable difference is that the momentum term is scaled by the learning rate in the MLPerf reference. A systematic study of the LARS optimizer is beyond the scope of this paper. However, we find the MLPerf ResNet-50 model converges in 70.6 epochs with the optimizer update equation shown in Figure \[eqn:lars\_paper\]. Further, tuning the momentum hyperparameter enables training in only 64 epochs with a [**record benchmark time of 67.1 seconds**]{}. Table 1 summarizes the benchmark times for the MLPerf-0.6 ResNet-50 experiments. Note that tuning the momentum parameter is not permitted by the MLPerf-0.6 submission rules in the closed division category.
------------------- --------- --------------- ---------- -------------- ---------------
Optimizer           Base LR   Warmup Epochs   Momentum   Train Epochs   Benchmark (s)
Scaled momentum     31.2      25              0.9        72.8           76.9 [^2]
Unscaled momentum   31.2      25              0.9        70.6           72.4
Unscaled momentum   29.0      18              0.929      64             67.1
------------------- --------- --------------- ---------- -------------- ---------------
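For concreteness, the two LARS update variants can be written out as a short NumPy routine. This is our transcription of the equations above, not MLPerf reference code; with $v=0$ the two variants coincide on the first step and diverge only as the trust ratio $\lambda$ evolves.

```python
import numpy as np

def lars_step(w, g, v, beta, m, eps, scaled_momentum):
    """One LARS update for a single weight tensor.

    lam = eps * ||w|| / (||g|| + beta * ||w||) as in both figures.
    scaled_momentum=True  : v = m*v + (g + beta*w);      w = w - lam*v
    scaled_momentum=False : v = m*v + lam*(g + beta*w);  w = w - v
    """
    lam = eps * np.linalg.norm(w) / (np.linalg.norm(g) + beta * np.linalg.norm(w))
    if scaled_momentum:
        v = m * v + (g + beta * w)   # momentum buffer scaled by lam at apply time
        w = w - lam * v
    else:
        v = m * v + lam * (g + beta * w)  # only the fresh update is scaled
        w = w - v
    return w, v

w0 = np.array([1.0, 2.0])
g = np.array([0.1, -0.2])
v0 = np.zeros(2)
w1a, v1a = lars_step(w0, g, v0, beta=1e-4, m=0.9, eps=0.001, scaled_momentum=True)
w1b, v1b = lars_step(w0, g, v0, beta=1e-4, m=0.9, eps=0.001, scaled_momentum=False)
```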
[**SSD:**]{} Single Shot Detection [@ssd_15] is one of the two object detection models in the MLPerf benchmark; SSD is intended to reflect a simpler, lower-latency model for interactive use cases such as end-point devices and other non-server deployments. Notably, SSD uses a pre-trained ResNet-34 backbone as part of the architecture. SSD is trained and evaluated on the COCO dataset [@microsoft_coco].
The computational cost of the SSD model is small compared with ResNet-50, so we explore both data and model parallelism to scale SSD to TPU-v3 pods. We use spatial partitioning to parallelize SSD across up to 4 TPU accelerator cores. Achieving high speedup from spatial partitioning is challenging for the following reasons:
- Higher communication overheads: spatial partitioning incurs communication overheads from halo exchange between spatially partitioned neighbors. In addition, it adds all-reduce calls for distributed batch normalization executed on a large number of workers.
- Load imbalance: in our current XLA implementation of spatial partitioning, some TF operations are not sharded and are executed on spatial worker 0, resulting in load imbalance.
- Relatively small spatial dimensions: the spatial dimensions in SSD decrease from 300x300 in the first layer to 1x1 in the last. The deeper layers of SSD have smaller spatial dimensions and larger feature dimensions, which limits the parallelism obtainable from spatially partitioning them.
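Halo exchange is easiest to see in one spatial dimension: each shard borrows one boundary element from each neighbor so that a 3-tap convolution computed shard-by-shard reproduces the unpartitioned result. This toy sketch is our own construction and ignores the batch and feature dimensions.

```python
import numpy as np

def conv1d_same(x, k):
    """Reference: naive same-size 3-tap convolution with zero padding."""
    xp = np.pad(x, 1)
    return np.array([xp[i:i + 3] @ k for i in range(len(x))])

def spatially_partitioned_conv(x, k, num_shards):
    """Split the spatial axis across shards, exchange one-element halos
    with each neighbor, convolve each shard, and stitch the results."""
    shards = np.array_split(x, num_shards)
    out = []
    for i, s in enumerate(shards):
        left = shards[i - 1][-1:] if i > 0 else np.zeros(1)               # halo from left neighbor
        right = shards[i + 1][:1] if i < num_shards - 1 else np.zeros(1)  # halo from right neighbor
        padded = np.concatenate([left, s, right])
        # Valid convolution over the halo-padded shard yields len(s) outputs.
        out.append(np.array([padded[j:j + 3] @ k for j in range(len(s))]))
    return np.concatenate(out)

x = np.arange(8.0)
k = np.array([1.0, 2.0, 1.0])
ref = conv1d_same(x, k)
sharded = spatially_partitioned_conv(x, k, num_shards=4)
```

The boundary shards substitute zeros for the missing halo, matching the zero padding of the unpartitioned convolution.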
[**Mask-RCNN**]{} [@DBLP:journals/corr/HeGDG17] is the more complex of the two object detection benchmarks in MLPerf. Besides object detection, Mask-RCNN also performs instance segmentation, which assigns a semantic label as well as an instance index to each pixel in the image. Unlike SSD, which is a one-stage detector, Mask-RCNN has two stages: one for proposing instance candidates and one for fine-tuning the proposals. Mask-RCNN also uses a larger image size than SSD even though both train on the COCO dataset. Furthermore, Mask-RCNN uses a ResNet-50 backbone plus a Feature Pyramid Network, in contrast to SSD’s ResNet-34. Scaling Mask-RCNN is particularly challenging because the model did not converge to the target evaluation accuracy at global batch sizes larger than 128, which prevents scaling beyond 128 cores by simply reducing the per-core batch size. We use a combination of data and model parallelism to scale Mask-RCNN beyond 64 TPU cores: spatial partitioning parallelizes the first stage, and in the second stage we apply graph partitioning by placing independent ops on up to four different cores.
[**Transformer**]{} [@DBLP:journals/corr/VaswaniSPUJGKP17] represents state-of-the-art language translation in the MLPerf suite and is one of its two translation models. Trained on the WMT English-to-German dataset [@wmt_17], Transformer uses an attention-based model, which differentiates it from the other translation model in MLPerf, GNMT.
To scale Transformer to a full TPU-v3 pod, we used data parallelism along with the distributed and in-memory evaluation, weight update sharding, and gradient summation optimizations. We use a global batch size of 2048 (batch size 1 per core), which is dramatically higher than the reference default batch size. To enable large batch training [@DBLP:journals/corr/KeskarMNST16], we tuned hyperparameters to reduce the number of epochs to convergence. We found that increasing the learning rate and tuning the warmup steps was insufficient to train the Transformer model at this batch size. In addition, the beta1 and beta2 hyperparameters of the Adam optimizer had to be tuned, along with a lower learning rate, to converge the MLPerf Transformer model to the target accuracy.
As Transformers typically have attention layers that are large fully connected layers, they have a significantly larger number of weight parameters. Consequently, the overhead of weight updates in distributed training is significant. The weight update sharding technique in the XLA compiler addresses this by reducing the cost of the weight update operation. The fast 2-D gradient summation technique optimizes gradient aggregation throughput on the TPU-v3 pods.
As training time shrinks on large TPU pod slices, we observed that evaluation and infrastructure overheads dominate the end-to-end convergence time. To reduce infrastructure overheads, distributed and in-memory evaluation and a nested train-and-eval loop are adopted. Further, redundant gather operations are removed from the model. Bfloat16 mixed precision is used to reduce the memory pressure from matrix multiplication operations. In addition, the maximum sequence length is reduced from 256 to 97 to reduce evaluation overheads on TPU cores; 97 is the length of the longest example in the evaluation dataset.
[**GNMT**]{} [@GNMT_16] is the other language translation benchmark in MLPerf, differentiated by its use of a recurrent neural network (RNN). While GNMT targets a lower accuracy than Transformer, the use of an RNN means its performance insights may carry over to other RNN models in general use by the machine learning community. Like Transformer, GNMT uses the WMT English-to-German dataset for training and evaluation.
The most expensive computation in GNMT is the gate computation in the cell function of the RNN loop. GNMT uses standard LSTM cells, which concatenate the input feature with the hidden state of the previous step and apply a dot-product to the concatenated feature to produce the 4096 output features. For the first uni-directional layer in the encoder, the outputs of the bidirectional layers are concatenated to form the input. For the decoder layers, the attention feature is also concatenated with the previous layer’s output to form the input.
Each RNN layer iterates until all non-padded tokens of every sequence in the batch have been processed. Because of synchronous training, each training step waits for the longest sequence to finish before the gradients can be accumulated across all workers. To achieve good load balance, we use a window-based bucketization scheme to ensure that the sequences in each batch have similar length. For multi-host training, global bucketization is enabled by using a single host to produce the input for all workers. This is only possible because the GNMT inputs are small and preprocessing is inexpensive. However, when scaling to very large systems with 1024 workers, the single-host input pipeline becomes the bottleneck. We use a round-robin algorithm to distribute the input pipeline across multiple hosts, parallelizing the workload while maintaining good load balance.
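A minimal version of length bucketization, with a full sort standing in for the windowed scheme, shows how grouping similar lengths cuts the padding waste that synchronous steps must otherwise absorb. The data and batch size are illustrative.

```python
def bucketize_by_length(sequences, batch_size):
    """Group sequences of similar length into batches so that synchronous
    workers waste little time padding to the longest sequence in a batch.
    (A full sort approximates the window-based scheme in the text.)
    """
    ordered = sorted(sequences, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

# Sequences of lengths 9, 2, 7, 3, 8, 1; batch size 2.
seqs = [[1] * n for n in (9, 2, 7, 3, 8, 1)]
batches = bucketize_by_length(seqs, batch_size=2)
# Padding wasted: tokens added to bring each sequence up to its batch maximum.
padding = sum(len(b[-1]) - len(s) for b in batches for s in b)
```

Bucketized batches here pair lengths (1,2), (3,7), (8,9), wasting 6 padded tokens; batching in arrival order would waste 18.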
When the per-core batch\_size is small, the LSTM cell computation is memory bound. As the largest converging global batch\_size is fixed, the per-core batch\_size is small on a large-scale system. Minimizing the input-feature work is an effective way to reduce the memory bandwidth requirements of this model. In an LSTM-based RNN loop, the previous step’s hidden state is the next step’s input, forming a loop-carried dependency, but the projection of the input features can happen in parallel. We therefore hoisted the input feature projection out of the RNN loop so that many steps’ input features can be processed in parallel, maximizing the effective batch size. Inside the RNN loop, we project only the hidden state, and the result is added to the projected input to derive the output. This optimization is mathematically equivalent to the traditional LSTM but much more efficient at small per-core batch\_size. For the backward pass, we apply a similar optimization to move the gradient computation out of the RNN loop. Instead of computing the gradient at every time step and accumulating it inside the loop, we save the inputs to an array spanning the full time range and only update this array inside the RNN loop. After the RNN loop finishes, we compute the accumulated gradients all at once to maximize the effective batch size.
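The hoisting transformation can be checked numerically with a simplified recurrent cell: a single tanh cell stands in for the LSTM gates, and the names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, B, F, H = 5, 2, 4, 3          # time steps, batch, input features, hidden size
X = rng.normal(size=(T, B, F))
W_x = rng.normal(size=(F, H))    # input-feature projection
W_h = rng.normal(size=(H, H))    # hidden-state projection

# Baseline: project the input inside the recurrence, one step at a time.
h = np.zeros((B, H))
for t in range(T):
    h = np.tanh(X[t] @ W_x + h @ W_h)
h_loop = h

# Hoisted: all T input projections in one large matmul (effective batch T*B);
# the loop keeps only the hidden-state projection, the true loop-carried
# dependency.
x_proj = (X.reshape(T * B, F) @ W_x).reshape(T, B, H)
h = np.zeros((B, H))
for t in range(T):
    h = np.tanh(x_proj[t] + h @ W_h)
h_hoisted = h
```

The two computations are mathematically identical; the hoisted form simply feeds the matrix unit a `T*B`-row matmul instead of `T` small ones.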
Results
=======
![Batch sizes used in the Google MLPerf-0.6 submissions.[]{data-label="fig:batch_size"}](batch-size.png){width="100.00000%"}
![Training epochs to converge when scaling to a larger batch size.[]{data-label="fig:epochs"}](epochs.png){width="100.00000%"}
![MLPerf-0.6 benchmark completion times.[]{data-label="fig:benchmark"}](benchmark.png){width="100.00000%"}
![Speedup with model parallelism[]{data-label="fig:ssd_speedup"}](model-speedup.png){width="100.00000%"}
Figure \[fig:batch\_size\] shows the batch sizes used in the Google MLPerf-0.6 submissions. Note that, with the exception of ResNet-50, the batch size in the MLPerf-0.6 models increases by a factor of two or less. In the absence of batch parallelism, it is challenging to scale ML workloads to a large number of accelerator cores. In addition, we find that the number of epochs needed to converge to the target accuracy increases at larger batch sizes. A comparison of epochs to convergence versus batch size for the MLPerf models is presented in Figure \[fig:epochs\]. For example, SSD needs 22% more epochs to reach the target accuracy (mAP 0.23) when the batch size increases from 256 to 1024, and an additional 27% more epochs at batch size 2048. Figure \[fig:benchmark\] presents completion times for the five MLPerf benchmarks. In ResNet-50, GNMT and Transformer we use data parallelism, while SSD and Mask-RCNN use both data and model parallelism to achieve the largest scale. With the SSD model, we achieve a speedup of 1.6x on 4 TPU accelerator cores with model parallelism (Figure \[fig:ssd\_speedup\]), enabling scaling to 2048 TPU cores. With Mask-RCNN on 128 and 256 cores, model parallelism is enabled across 2 and 4 cores, respectively. The speedup from model parallelism in Mask-RCNN is also shown in Figure \[fig:ssd\_speedup\].
Although the MLPerf benchmarks are batch limited, the techniques presented in this paper enable strong scaling to 2048 TPU-v3 cores. The Google MLPerf-0.6 submissions report [**record performance**]{} for the ResNet-50, SSD and Transformer benchmarks in the closed division category.
Future Work
===========
Given that MLPerf is a recent benchmark suite (less than 2 years old) and the Google TPU is still a relatively new hardware accelerator, we believe significant work remains in this space. MLPerf will continue to evolve and grow as a benchmark to reflect the state of the art in the industry. There will still be significant work to understand large-scale models using TPU-v3 pods by refining model parallelism techniques and continuing to leverage compiler-based optimizers such as XLA.
MLPerf will continue to see significant evolution in models and datasets. While a recommendation task, such as Neural Collaborative Filtering (NCF), was absent from MLPerf-0.6, there is ongoing work to bring a recommendation model into the MLPerf suite. Furthermore, a speech model and dataset, such as speech-to-text, is a likely future addition to MLPerf. We look forward to showing TPU’s scalability on an even more diverse set of models in the future.
[^1]: MLPerf also benchmarks ML inference performance and the first inference submission is expected in late 2019.
[^2]: Google MLPerf-0.6 Submission.
---
author:
- 'W. J. Fischer'
- 'S. T. Megeath'
- Babar Ali
- 'J. J. Tobin'
- 'M. Osorio'
- 'L. E. Allen'
- 'E. Kryukova'
- 'T. Stanke'
- 'A. M. Stutz'
- 'E. Bergin'
- 'N. Calvet'
- 'J. Di Francesco'
- 'E. Furlan'
- 'L. Hartmann'
- 'T. Henning'
- 'O. Krause'
- 'P. Manoj'
- 'S. Maret'
- 'J. Muzerolle'
- 'P. Myers'
- 'D. Neufeld'
- 'K. Pontoppidan'
- 'C. A. Poteet'
- 'D. M. Watson'
- 'T. Wilson'
title: |
[*Herschel*]{}/PACS Imaging of Protostars in the\
HH 1–2 Outflow Complex[^1]
---
Introduction
============
The Orion molecular clouds are the most active region of star formation within 500 pc of the Sun, where the [*Spitzer Space Telescope*]{} identified over 400 likely protostars in the Orion A and B clouds (Megeath et al., in prep). The region is home to both clustered and distributed star formation and hosts both high- and low-mass protostars. The [*Herschel Space Observatory*]{}’s capabilities in the far infrared are crucial for sampling the expected peak of the spectral energy distributions (SEDs) of protostars, which are dominated by thermal emission from a cold ($\sim$10 K) envelope. Measuring the peak of the SED allows firm estimates of the bolometric luminosities and envelope densities of the protostellar systems.
With the Photodetector Array Camera and Spectrometer (PACS; @pog10) aboard [*Herschel*]{} [@pil10], we have obtained 70 and 160 $\mu$m images of a field in the Lynds 1641 region. The field contains the intermediate-mass Herbig B9e star V380 Ori [@hil92], 28 infrared excess sources identified by observations with [*Spitzer*]{} (Megeath et al., in prep.), and a variety of outflow phenomena including the well-known HH 1 and 2 [e.g., @bal02] and $\ge8$ protostellar outflows [e.g., @sta02]. This is the science demonstration field for the [*Herschel*]{} Orion Protostar Survey (HOPS), a 200-hour open-time key program that will obtain PACS imaging of 133 fields, $5'$ to $8'$ in diameter, containing 278 protostars and PACS spectroscopy of a subset of 37 protostars.
Here, we present photometry of the four protostars in the [*Herschel*]{} field at 70 and 160 $\mu$m and combine these data with [*Spitzer*]{} and ground-based data. With the radiative transfer code of @whi03, we generate model SEDs and find that the four protostars exhibit a large range of luminosities ($12<L/L_{\sun}<84$) and envelope densities spanning over two orders of magnitude. This implies that two protostars have dense, infalling envelopes, while the other two have only residual envelopes.
Observations and Data Reduction
===============================
An $8'$ square field with central coordinates $\alpha=5^h36^m22^s.05$, $\delta=-6^\circ45'41''.23$ (J2000) was observed on 2009 October 9 (observing day 148; observation IDs 1342185551 and 1342185552) in the 70 $\mu$m (“blue”) and 160 $\mu$m (“red”) bands available with PACS, which have angular resolutions of $5.2''$ and $12''$, respectively. We observed our target field with homogeneous coverage using two orthogonal scanning directions and a scan speed of $20''$/s. Each scan was repeated 5 times for a total observation time of 1468 s per scan direction. The effective sampling rate of the detectors is 10 Hz. The data were processed from raw telemetry to final images with the [*Herschel*]{} Common Software System (HCSS) version 3.0 build 919, using version 4 of the flux calibration files.[^2] We followed the standard processing steps for PACS data described by @pog10 with these exceptions: we identified cosmic rays for each spatial sky pixel as those values more than 10 standard deviations from the mean signal. Several extraneous calibration measurements were interspersed with the HOPS target observations; these were masked and removed from the data cube, along with an additional 430 readouts following each calibration measurement, to mask signal drifts induced by the calibration source.
After the initial processing, the two orthogonal scan observations were combined for the final mapmaking step. Two different mapping approaches are used for this purpose: Method 1 is used exclusively for point source photometry, while Method 2 is used to display images.
Method 1: Mapping with local sky subtraction: First, we remove the signal drifts (whether correlated or due to the $1/f$ noise) by subtracting a local “sky” value from each readout of each bolometer pixel. The local sky is estimated as the median value within a window of size $\pm20$ readouts. The final mosaic is then created by spatially averaging all overlapping bolometer pixels using the HCSS routine “photProject.” To protect the integrity of the point source PSF, all readouts within $20''$ of a point source are ignored during the sky median calculation. This processing preserves all point and compact sources in the image and provides the proper photometry comparison between HOPS target objects and the flux calibration standards, which use the same reduction scheme. However, this processing removes all emission at spatial scales larger than the median window size.
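The Method 1 sky estimate for a single bolometer time stream can be sketched as follows, assuming the window and masking semantics described above; the variable names and toy data are ours, not HCSS code.

```python
import numpy as np

def local_sky_subtract(signal, half_window=20, source_mask=None):
    """Subtract a running local-sky estimate from one bolometer time stream.

    The sky at readout i is the median of readouts within +/- half_window;
    readouts flagged in source_mask (e.g. near a point source) are excluded
    from the median so they do not bias the sky level.
    """
    n = len(signal)
    if source_mask is None:
        source_mask = np.zeros(n, dtype=bool)
    sky = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = signal[lo:hi][~source_mask[lo:hi]]
        sky[i] = np.median(window) if window.size else 0.0
    return signal - sky

# Toy stream: constant sky level 5 with a bright "source" at readouts 50-52.
stream = np.full(100, 5.0)
stream[50:53] += 40.0
mask = np.zeros(100, dtype=bool)
mask[48:55] = True  # readouts near the source, excluded from the sky median
clean = local_sky_subtract(stream, half_window=20, source_mask=mask)
```

The masked source survives intact at 40 above zero while the constant sky level is removed everywhere; any structure wider than the median window would be removed along with the sky, as the text notes.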
Method 2: Mapping without local sky subtraction: Method 1 is necessary only for accurate photometry of point sources. We also create maps by removing only the pixel-to-pixel electronic offsets in PACS images, using the median value of the entire time stream of a single pixel to estimate its offset signal value. Unlike Method 1, this approach does not remove the (spatially) extended emission. However, it also does not mitigate the $1/f$ drifts, which add so-called “striping” or “banding” in the final maps. As for Method 1, we use the “photProject” HCSS routine to spatially coadd individual array readouts for mapmaking.
A $K_s$ image of the field was acquired with NEWFIRM, the NOAO Extremely Wide Field Infrared Imager, on the KPNO 4 m telescope, and the data were reduced with the NOAO NEWFIRM Pipeline [@swa09]. The on-source time was 11 minutes over most of the field of view. Images at 350 and 870 $\mu$m were acquired at APEX with SABOCA and LABOCA, respectively. The observing and data reduction procedures for the APEX images are described in @sta10.
Results
=======
Imaging and Photometry
----------------------
Figure \[f.3color\] shows a composite of the final map created using Method 2 for the 70 and 160 $\mu$m PACS channels. Figures \[f.blue\] and \[f.red\], available in electronic form only, show the separate 70 and 160 $\mu$m images and are annotated with source names.
The bright blue source in the north of the field is the reflection nebula NGC 1999; the dark tri-lobed feature seen toward this nebula is discussed in @sta10. In the center of the image is a triangular arrangement of protostars. Here we use their designations for the HOPS program: 165, 166, 168, and 203. HOPS 166 (HH 147 MMS; Chini et al. 2001) is the relatively isolated source at the northeastern corner of the triangle, HOPS 168 (HH 1–2 MMS 2) is at the western corner, and HOPS 165 and 203 are the pair of overlapping sources (separated by $13''$) at the southern corner. HOPS 203 (HH 1–2 MMS 1), the brighter of the pair in the PACS bands, is the source of the HH 1–2 outflow and contains the radio sources VLA 1 and 2 [@rod90]. @chi01 report an additional source HH 1–2 MMS 3, $22''$ southwest of HOPS 168, that corresponds to extended emission at 160 $\mu$m with no apparent point source at 70 $\mu$m. Falling nearly along the line between HOPS 168 and 203 is the C-S star, a classical T Tauri star [@coh79]. To the southeast of HOPS 203 at $\alpha=5^h36^m25^s.3$, $\delta=-6^\circ47'18''$ is a knot of emission presumably shock heated by the HH 2 outflow. At 160 $\mu$m, only HOPS 166, 168, and 203 appear, while HOPS 165 is not detected. The 160 $\mu$m band also traces cold dust in the surrounding cloud material, showing an irregular, filamentary structure.
PACS photometry of the four protostars appears in columns 9 and 10 of Table \[t.photo\]. We obtained simple aperture photometry for the relatively isolated protostars: HOPS 166 and 168 in the blue and red bands and HOPS 203 in the red band. In these cases, we used a $16''$ aperture with subtraction of the median signal in a background annulus extending from $18''$ to $22''$. The results were corrected according to measurements of the encircled energy fraction provided by the PACS consortium (priv. comm.).
For the HOPS 165/203 pair at 70 $\mu$m, point-spread function (PSF) fitting was required to separate the fluxes of the two protostars. We fit the fainter HOPS 165 with a PSF constructed from observations of Vesta (PACS consortium, priv. comm.). Aperture photometry for HOPS 165 was performed on the best-fit PSF, and aperture photometry for HOPS 203 was performed on the data after subtraction of the HOPS 165 model. At 160 $\mu$m, we report an upper limit for HOPS 165; this is the largest flux density for which a model PSF can be added at the source position before it appears as an asymmetry in the HOPS 203 image.
According to @pog10, the calibration accuracy for PACS is within 10% in the blue band and better than 20% in the red. The formal uncertainties associated with each source (i.e., the RMS of the signal in the sky annulus) are much less, $\le1$%, except for the case of HOPS 165, where fitting a point-spread function to a faint source yields a 10% uncertainty.
The PACS photometry data are supplemented by [*Spitzer*]{} IRAC and MIPS photometry (Megeath et al., in prep.), [*Spitzer*]{} IRS spectroscopy, and APEX SABOCA and LABOCA sub-mm photometry [@sta10]. For HOPS 166, near-infrared J/H/K photometry was available from the Two Micron All Sky Survey.[^3] The [*Spitzer*]{} positions and 3.6 – 870 $\mu$m photometry for the HOPS protostars appear in Table \[t.photo\]. Systematic uncertainties are given in a note to the table.
-------- ------------ -------------- --------- --------- --------- --------- -------- -------- --------- --------- ---------
HOPS RA (J2000) Dec. (J2000) \[3.6\] \[4.5\] \[5.8\] \[8.0\] \[24\] \[70\] \[160\] \[350\] \[870\]
Source (h m s) ($^\circ$ ) (Jy) (Jy) (Jy) (Jy) (Jy) (Jy) (Jy) (Jy) (Jy)
165 5 36 23.54 $-$6 46 14.6 0.014 0.052 0.11 0.13 0.64 1.1 $<$0.3 $<$3 ...
166 5 36 25.13 $-$6 44 41.8 0.66 0.83 0.97 1.17 4.65 10.9 11.1 4.6 0.33
168 5 36 18.93 $-$6 45 22.7 0.0077 0.030 0.041 0.038 3.66 87.3 87.7 24 0.94
203 5 36 22.84 $-$6 46 06.2 ... ... ... 0.0091 0.54 26.6 75.7 28 1.2
-------- ------------ -------------- --------- --------- --------- --------- -------- -------- --------- --------- ---------
SED Modeling
------------
We use a Monte Carlo radiative transfer code [@whi03] to calculate model SEDs for the four protostars. The code features a central star and flared disk, which emit photons that can then be scattered or absorbed and re-emitted by dust in either the disk or an envelope. The envelope density is defined by the rotating collapse solution of @ter84, plus a bipolar, evacuated cavity.
We use the same dust model as @tob08, which contains larger dust grains than a standard ISM dust model. The grain size distribution is defined by a power law $n\left(a\right)\propto a^{-3.5}$, with $0.005~\mu{\rm m} \le a \le 1~\mu{\rm m}$. We use dust grains composed of graphite $\zeta_{\rm graph}=0.0025$, silicates $\zeta_{\rm sil}=0.004$, and water ice $\zeta_{\rm ice}=0.0005$; abundances ($\zeta$) are relative to gas and imply a gas to dust ratio of 133. Our sub-mm opacities exceed those of the well-known Milky Way Case B ($R_V=5.5$) mixture of @wei01 by a factor that reaches a maximum of 5 at 600 $\mu$m.
The model parameters are set to typical values for low-mass protostars; we fit the SEDs by varying seven of them: the system luminosity $L$, the reference envelope density $\rho_1$ [@ken93], the outer radius of the envelope $R_{\rm env}$, the opening angle of the envelope cavity $\theta_{\rm cav}$, the mass of the disk $M_{\rm disk}$, the inclination angle $i$, and the foreground extinction $A_V$. (Foreground extinction is applied with the laws of @mcc09, suitable for star-forming regions.) In fitting the sources, we emphasize the mid to far IR over the near IR, since the near IR is highly dependent on the scattering properties of the dust, the geometry of the inner disk, and the geometry of the outflow cavity. Thus, when fitting a source we first adjust the luminosity and density to get the best fit to the mid to far IR, then we find the best combination of cavity opening angle, inclination, and (if necessary) foreground reddening to fit the 10 $\mu$m absorption feature and the near-IR emission. In general, the fits are insensitive to $R_{\rm env}$ and $M_{\rm disk}$. However, for HOPS 165, it was necessary to adjust these two parameters. The best-fit parameters were determined by visual comparison of the models to the observed SEDs and are listed in Table \[t.model\].
The models are compared to the photometry and spectra in Figure \[f.seds\]. For these models, $4\times10^7$ photons were run through the Monte Carlo code. The code generates output for apertures ranging from $1''$ to $16''$ in one-arcsecond steps, and the choice of aperture for the plotted SED varies with wavelength, as given in the note to Table \[t.photo\]. An interpolation scheme bridges the gaps between disparate apertures. We assume a distance of 420 pc [@men07].
\[t.model\]
-------- -------------- ----------------------- --------------- -------------------- ------------------ ------------ -------
HOPS $L$ $\rho_1^{\mathrm{a}}$ $R_{\rm env}$ $\theta_{\rm cav}$ $M_{\rm disk}$ $i$ $A_V$
Source ($L_{\sun}$) (g cm$^{-3}$) (AU) ($^\circ$) ($M_{\sun}$) ($^\circ$)
165 12 $7.5\times10^{-16}$ $10^3$ 30 $1\times10^{-4}$ 20 35
166 23 $1.5\times10^{-15}$ $10^4$ 25 $5\times10^{-2}$ 40 4
168 84 $3.0\times10^{-13}$ $10^4$ 40 $5\times10^{-2}$ 75 0
203 23 $2.6\times10^{-13}$ $10^4$ 40 $5\times10^{-2}$ 75 0
-------- -------------- ----------------------- --------------- -------------------- ------------------ ------------ -------
: Adopted Model Parameters
The envelope density at 1 AU in the limit of no rotation.
Discussion
==========
The [*Herschel*]{} observations have detected four protostars in the HH 1–2 region. All were previously identified with [*Spitzer*]{} (Megeath et al., in prep), but [*Herschel*]{} has cleanly separated the sources for the first time in the far IR, allowing accurate photometry. From the luminosities and densities in Table \[t.model\], we estimate the infall rate from the envelope onto the central star-disk system, the luminosity due to accretion onto the star, and the evolutionary state for each protostar. @ken93 show that for a protostellar envelope, the infall rate is $\dot{M}_{\rm env}=1.9\times10^{-7}~(\rho_1/10^{-15}~{\rm g~cm}^{-3})~(M_*/M_{\sun})^{1/2}~M_{\sun}~{\rm yr}^{-1}$. The accretion luminosity can then be written as $L_{\rm acc}=GM_*\dot{M}_{\rm disk}/R_*=5.9~(\rho_1/10^{-15}~{\rm g~cm}^{-3})~(M_*/M_{\sun})^{3/2}~(R_*/R_{\sun})^{-1}~(\dot{M}_{\rm disk}/\dot{M}_{\rm env})~L_{\sun}$, where $\dot{M}_{\rm disk}$ is the accretion rate from the disk onto the star. To estimate $\dot{M}_{\rm env}$ and $L_{\rm acc}$, we (initially) assume $\dot{M}_{\rm disk} = \dot{M}_{\rm env}$, and we adopt stellar radii, luminosities, and masses from the @sie00 online models of pre–main-sequence stars at an age of $3\times10^5~{\rm yr}$. The final conclusions are not sensitive to the exact stellar parameters chosen.
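As a numerical check, the @ken93 infall-rate scaling can be evaluated directly for the model densities of Table \[t.model\] and the stellar masses adopted in this section; the function below is simply a transcription of the formula, with illustrative inputs.

```python
def mdot_env(rho1, mstar):
    """Envelope infall rate in Msun/yr:
    Mdot_env = 1.9e-7 * (rho1 / 1e-15 g cm^-3) * (M*/Msun)^(1/2).
    rho1 is in g cm^-3; mstar in solar masses.
    """
    return 1.9e-7 * (rho1 / 1e-15) * mstar ** 0.5

# HOPS 166: rho1 = 1.5e-15 g cm^-3, adopted M* = 2.2 Msun
m166 = mdot_env(1.5e-15, 2.2)   # ~4e-7 Msun/yr, as quoted for HOPS 166
# HOPS 168: rho1 = 3.0e-13 g cm^-3, adopted M* = 0.3 Msun
m168 = mdot_env(3.0e-13, 0.3)   # ~3e-5 Msun/yr, as quoted for HOPS 168
```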
HOPS 166 is modeled as a luminous star-disk system with a low-density envelope seen through a few magnitudes of visual extinction. (The modeled inclination of $40^\circ$ is considered a lower limit; @eis94 find that the outflow associated with this source is close to the plane of the sky.) The best-fitting central star from the Siess et al. models has a mass of $2.2~M_{\sun}$, implying an envelope infall rate of $4\times10^{-7}~M_{\sun}~{\rm yr}^{-1}$ and an accretion luminosity that is 20% of the total luminosity. The low accretion luminosity and low envelope mass (inferred from the model parameters) of only $0.02~M_{\sun}$ imply that HOPS 166 is in the late stages of protostellar evolution and that the central star has accreted most of its mass. (@chi01 classified this source as a deeply embedded Class 0 object based on SCUBA and IRAM mapping at 450, 850, and 1300 $\mu$m.)
In contrast, HOPS 168 is much more embedded and luminous than HOPS 166. Its envelope mass is 2.7 $M_{\sun}$. The implied stellar mass is $0.3~M_{\sun}$, the envelope infall rate is $3\times10^{-5}~M_{\sun}~{\rm yr}^{-1}$, and the accretion luminosity is more than 95% of the total. A star more massive than $0.3~M_{\sun}$ is possible if $\dot{M}_{\rm disk}<\dot{M}_{\rm env}$, meaning infalling matter is piling up on the disk, leading to episodic accretion [e.g., @vor05]. For example, the central star could have a mass as high as $1.8~M_{\sun}$ if $\dot{M}_{\rm disk}=0.1~\dot{M}_{\rm env}$.
The two remaining protostars, HOPS 165 and HOPS 203, are separated by only 13″, or a projected 5500 AU. The SED of HOPS 165 drops off precipitously beyond 30 $\mu$m. This requires a very small, tenuous envelope and a low-mass disk. The flux from the moderately luminous star-disk system is seen behind 35 magnitudes of visual extinction. Our interpretation is that HOPS 165 must be seen through the dense envelope of the nearby HOPS 203 ($M_{\rm env}=2.4~M_{\sun}$). HOPS 203 itself is a 3″ binary [@rod90]. If the proximity of HOPS 165 is not due to chance, this region is home to a hierarchical multiple system of three protostars within a projected radius of 5500 AU. Accordingly, the small envelope size of the HOPS 165 model may result from its proximity to HOPS 203. The implied stellar mass of HOPS 165 is $1.4~M_{\sun}$, the envelope infall rate is $2\times10^{-7}~M_{\sun}~{\rm yr}^{-1}$, and the accretion luminosity is 10% of the total. On the other hand, the implied stellar mass of HOPS 203 is $0.1~M_{\sun}$, the envelope infall rate is $2\times10^{-5}~M_{\sun}~{\rm yr}^{-1}$, and the accretion luminosity is more than 95% of the total. Again, the central star may have a higher mass if $\dot{M}_{\rm disk}<\dot{M}_{\rm env}$. We assume that the accretion is dominated by one member of the 3″ binary, but the results will not change significantly if both accrete equally. The 160 $\mu$m PACS measurement for HOPS 203 exceeds the fit by a factor of 2; this may be due to cold envelope material in our aperture that is not accounted for in our models.
We conclude that two of the protostars (HOPS 168 and 203) are in an active state of mass infall and accretion, while the other two (HOPS 165 and 166) have only residual envelopes. This finding demonstrates [*Herschel*]{}’s unique and critical contribution to the audit of the flow of mass from the outer protostellar envelope onto the central protostar.
This work is based on observations made with the [*Herschel Space Observatory*]{}, a European Space Agency Cornerstone Mission with significant participation by NASA, and with the [*Spitzer Space Telescope*]{}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Support for the [*Herschel*]{} and [*Spitzer*]{} analysis was provided by NASA through awards issued by JPL/Caltech. We are grateful to Barbara Whitney and her collaborators for making their radiative transfer code available to the community.
Bally, J., Heathcote, S., Reipurth, B., et al. 2002, , 123, 2627
Chini, R., Ward-Thompson, D., Kirk, J. M., et al. 2001, , 369, 155
Cohen, M., & Schwartz, R. D. 1979, , 233, L77
Eislöffel, J., Mundt, R., & Böhm, K.-H. 1994, , 108, 1042
Hillenbrand, L. A., Strom, S. E., Vrba, F. J., & Keene, J. 1992, , 397, 613
Kenyon, S. J., Calvet, N., & Hartmann, L. 1993, , 414, 676
McClure, M. 2009, , 693, L81
Menten, K. M., Reid, M. J., Forbrich, J., & Brunthaler, A. 2007, , 474, 515
Pilbratt, G., et al. 2010, , this volume
Poglitsch, A., et al. 2010, , this volume
Rodriguez, L. F., Ho, P. T. P., Torrelles, J. M., Curiel, S., & Canto, J. 1990, , 352, 645
Siess, L., Dufour, E., & Forestini, M. 2000, , 358, 593
Stanke, T., McCaughrean, M. J., & Zinnecker, H. 2002, , 392, 239
Stanke, T., Stutz, A., Tobin, J. J., et al. 2010, , this volume
Swaters, R. A., Valdes, F., & Dickinson, M. E. 2009, in ASP Conf. Ser. 411, Astronomical Data Analysis Software and Systems XVIII, ed. D. A. Bohlender, D. Durand, & P. Dowler (San Francisco, CA: ASP), 506
Terebey, S., Shu, F. H., & Cassen, P. 1984, , 286, 529
Tobin, J. J., Hartmann, L., Calvet, N., & D’Alessio, P. 2008, , 679, 1364
Vorobyov, E. I., & Basu, S. 2005, , 633, L137
Weingartner, J. C., & Draine, B. T. 2001, , 548, 296
Whitney, B. A., Wood, K., Bjorkman, J. E., & Wolff, M. J. 2003, , 591, 1049
[^1]: [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This work includes data acquired with the Atacama Pathfinder Experiment (APEX; E-082.F-9807, E-284.C-5015). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
[^2]: HCSS is a joint development by the [*Herschel*]{} Science Ground Segment Consortium, consisting of ESA, the NASA [*Herschel*]{} Science Center, and the HIFI, PACS, and SPIRE consortia.
[^3]: The Two Micron All Sky Survey (2MASS) is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the National Science Foundation.
|
---
abstract: |
We discuss constraints on some nonminimally coupled (NMC) curvature-matter models of gravity by means of Solar System experiments.
First we discuss a NMC gravity model which constitutes a natural extension of $1/R^n$ gravity to the nonminimally coupled case. Such a NMC gravity model is able to predict the observed accelerated expansion of the Universe. Unlike the $f(R)=1/R^n$ gravity case, which is not compatible with Solar System observations, this NMC model turns out to be a viable theory of gravity.
Then we consider a further NMC gravity model which admits Minkowski spacetime as a background, and we derive the $1/c$ expansion of the metric. The nonrelativistic limit of the model is not Newtonian, but contains a Yukawa correction. We look for trajectories around a static, spherically symmetric body. Since in NMC gravity the energy-momentum tensor of matter is not conserved, the trajectories deviate from geodesics. We use the NMC gravity model to compute the perihelion precession of planets, and we constrain the parameters of the model from radar observations of Mercury.
address:
- |
Istituto per le Applicazioni del Calcolo, CNR, Via dei Taurini 19,\
Roma, 00185, Italy\
E-mail: [email protected]
- |
Departamento de Física e Astronomia, Universidade do Porto, Rua do Campo Alegre 687,\
Porto, 4169-007, Portugal\
$^*$E-mail: [email protected]
- |
INFN, Laboratori Nazionali di Frascati (LNF), Via E. Fermi 40,\
Frascati, 00044 Roma, Italy\
E-mail: [email protected]
author:
- Riccardo March
- 'Orfeu Bertolami$^*$ and Jorge Páramos'
- 'Simone Dell’Agnello'
title: |
Nonminimally coupled curvature-matter gravity models\
and Solar System constraints
---
Introduction
============
We consider the possibility of constraining some nonminimally coupled (NMC) curvature-matter models of gravity [@BBHL] by means of Solar System experiments. The action functional involves two functions $f^1(R)$ and $f^2(R)$ of the Ricci curvature $R$. The function $f^1(R)$ is a nonlinear term which is analogous to $f(R)$ gravity, and the function $f^2(R)$ yields a NMC between the matter Lagrangian density and curvature. For other NMC gravity theories and their applications, see for instance [@PO1; @PO2; @PO3].
NMC gravity has been applied to several astrophysical and cosmological problems such as dark matter [@dm1BP; @dm2BFP], cosmological perturbations [@pertBFP], post-inflationary reheating [@reheating] or the current accelerated expansion of the Universe [@BFP].
First we discuss the application of a perturbative method due to Chiba, Smith and Erickcek [@CSE] to the NMC gravity model of Bertolami, Frazão and Páramos [@BFP], which constitutes a natural extension of $1/R^n$ gravity to the nonminimally coupled case. Such a NMC gravity model is able to predict the observed accelerated expansion of the Universe. Unlike the $f(R)=R+1/R^n$ gravity case, which predicts the value $\gamma=1/2$ for the PPN parameter and is therefore not compatible with Solar System observations, it turns out [@BMP] that the NMC gravity model cannot be constrained, for specific choices of the functions $f^1(R)$ and $f^2(R)$, by the perturbative method of Chiba [*et al.*]{} [@CSE], so that it remains, in this respect, a viable theory of gravity.
Then we consider a further NMC gravity model [@CPM; @MPBDeA], which admits Minkowski spacetime as a background, and we derive the $1/c$ expansion of the metric assuming the functions $f^1(R)$ and $f^2(R)$ to be analytic at $R=0$. The nonrelativistic limit of the model is not Newtonian, but contains a Yukawa correction. A parameterized post-Newtonian plus Yukawa (PPNY) approximation of the NMC model of gravity can be computed. We consider the metric around a static, spherically symmetric body and look for trajectories of a test body around it. Since in NMC gravity the energy-momentum tensor of matter is not conserved, the trajectories deviate from geodesics. We use the NMC gravity model to compute the perihelion precession of planets. Finally, we constrain the parameters of the model from radar observations of Mercury, including data from the NASA MESSENGER (MErcury Surface, Space ENvironment, GEochemistry and Ranging) orbiter.
The NMC gravity action functional
=================================
The action functional of NMC gravity is given by [@BBHL] $$S = \int \left[\frac{1}{2}f^1(R) + [1 + f^2(R)] \mathcal{L}_m \right]\sqrt{-g} d^4x,$$ where $f^1(R),f^2(R)$ are functions of the spacetime curvature $R$, $g$ is the metric determinant, $\mathcal{L}_m=-\rho c^2$ is the Lagrangian density of matter, and $\rho$ is the mass density.
The function $f^2(R)$ yields a NMC between geometry and matter, and the class of $f(R)$ gravity theories is recovered in the case $f^2(R)=0$. General Relativity (GR) is recovered by taking: $$f^1(R) = 2\kappa(R-2\Lambda), \quad f^2(R) = 0, \quad \kappa = c^4/16\pi G,$$ where $G$ is Newton’s gravitational constant and $\Lambda$ is the Cosmological Constant.
The first variation of the action functional with respect to the metric yields the field equations $$\label{field-equations}
\left(f^1_R + 2f^2_R \mathcal{L}_m \right) R_{\mu\nu} - \frac{1}{2} f^1 g_{\mu\nu} =
\nabla_{\mu\nu} \left(f^1_R + 2f^2_R \mathcal{L}_m \right)
+ \left(1 + f^2 \right) T_{\mu\nu},$$ where $f^i_R = df^i\slash dR$ and $\nabla_{\mu\nu}=\nabla_\mu \nabla_\nu -g_{\mu\nu}g^{\sigma\eta}\nabla_\sigma\nabla_\eta$. Such equations will be solved by perturbative methods.
A model for the accelerated expansion of the Universe
=====================================================
We consider the NMC gravity model proposed by Bertolami, Frazao and Paramos [@BFP] to account for the observed accelerated expansion of the Universe: $$\label{NMC-model1}
f^1(R) = 2\kappa R, \qquad f^2(R) = \left( \frac{R}{R_n} \right)^{-n}, \quad n>0,$$ where $n$ is an integer and $R_n$ is a constant. This NMC gravity model constitutes a natural extension to the non-minimally coupled case of the $1/R^n$ model proposed by Carroll [*et al.*]{} [@CDTT] as an instance of $f(R)$ model.
Matter is described as a perfect fluid with negligible pressure [@BLP] with Lagrangian density $\mathcal{L}_m = -\rho c^2$. We assume that the metric, which describes the spacetime around the Sun, is a perturbation of a flat Friedmann-Robertson-Walker (FRW) metric with scale factor $a(t)$: $$\label{metric}
ds^2 = -\left[1 + 2\Psi(r,t) \right] dt^2 + a^2(t)\left(\left[1 + 2\Phi(r,t)\right] dr^2
+ r^2 d\Omega^2 \right),$$ where $|\Psi(r,t)| \ll 1$ and $|\Phi(r,t)| \ll 1$. The NMC gravity model Eq. (\[NMC-model1\]) yields a cosmological solution with a negative deceleration parameter $q <0$, and the scale factor $a(t)$ of the background metric follows the temporal evolution $a(t) = a_0 \left( t \slash t_0 \right)^{2(1+n)/3}$, where $t_0$ is the current age of the Universe [@BFP].
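The accelerated expansion quoted above is a routine consequence of the power-law scale factor: for $a(t)\propto t^{p}$ with $p=2(1+n)/3$, the deceleration parameter $q=-\ddot{a}a/\dot{a}^2$ equals $(1-p)/p$, which is negative for $n>1/2$. A short SymPy verification (a sketch of ours, not part of the original analysis):

```python
import sympy as sp

t, t0, a0, n = sp.symbols('t t0 a0 n', positive=True)
p = sp.Rational(2, 3) * (1 + n)          # a(t) ~ t^p with p = 2(1+n)/3
a = a0 * (t / t0)**p

# Deceleration parameter q = -a*a''/a'^2
q = sp.simplify(-a * sp.diff(a, t, 2) / sp.diff(a, t)**2)
assert sp.simplify(q - (1 - p) / p) == 0  # q = (1 - 2n)/(2 + 2n)

# q < 0 (accelerated expansion) whenever n > 1/2:
assert float(q.subs({n: 2, t: 1, t0: 1, a0: 1})) < 0
assert float(q.subs({n: sp.Rational(1, 4), t: 1, t0: 1, a0: 1})) > 0
```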
In the perturbative approach developed by Chiba [*et al.*]{} [@CSE] for $f(R)$ gravity, the Ricci curvature of the perturbed spacetime is expressed as the sum $$R(r,t) = R_0(t) + R_1(r,t),$$ where $R_0$ denotes the scalar curvature of the background FRW spacetime and $R_1$ is the perturbation due to the Sun. The extension of the perturbative method of Chiba [*et al.*]{} [@CSE] to NMC gravity consists in the following steps [@BMP]. We assume that functions $f^1(R)$ and $f^2(R)$ admit a Taylor expansion around $R=R_0$, and we linearize the field equations (\[field-equations\]) under two conditions:
- (i) terms nonlinear in $R_1$ can be neglected in the Taylor expansion of $f^1,f^2$;
- (ii) the following inequality $$\label{R1-cond}
\left\vert R_1(r,t) \right\vert \ll R_0(t),$$ is satisfied both around and inside the Sun.
We compute the functions $\Psi$ and $\Phi$ of the metric Eq. (\[metric\]), then we find an expression for the parameter $\gamma$ of the PPN (Parameterized Post-Newtonian) formalism [@Will]. Finally, the validity of condition (\[R1-cond\]) is checked a posteriori.
The condition (\[R1-cond\]) means that the curvature $R$ of the perturbed spacetime remains close to the cosmological value $R_0$ inside the Sun. In GR such a property of the curvature is not satisfied inside the Sun. However, for some $f(R)$ theories condition (\[R1-cond\]) can be satisfied, and this leads to a violation of the constraint on the PPN parameter $\gamma$ from Solar System tests of gravity. For instance, the $1\slash R^n$ ($n>0$) gravity model [@CDTT] satisfies condition (\[R1-cond\]) [@CSE; @HMV].
The perturbative solution of the field equations (\[field-equations\]) yields the following expression for the PPN parameter $\gamma=-\Phi(r)/\Psi(r)$ [@BMP]: $$\gamma = \frac{1}{2} \, \left[\frac{1 + f^2_0 + 4 f^2_{R0}R_0 + 12 \square f^2_{R0}}
{1 + f^2_0 + f^2_{R0}R_0 + 3\square f^2_{R0}} \right],$$ where $f^2_0=f^2(R_0)$ and $f^2_{R0} = df^2/dR(R_0)$. When $f^2(R)=0$ we recover the known result $\gamma = 1\slash 2$, which holds for $f(R)$ gravity theories satisfying the condition $\left\vert R_1 \right\vert \ll R_0$ [@CSE]. The $1\slash R^n$ ($n>0$) gravity theory [@CDTT], where $f(R)$ is proportional to $\left( R + {\rm constant}\slash R^n \right)$, is one such theory and, consequently, has to be ruled out by the Cassini measurement [@Cassini].
For the NMC gravity model (\[NMC-model1\]), though $\left\vert R_1 \right\vert \ll R_0$ for $n \gg 1$, the solution for $R_1$ inside the Sun shows that non-linear terms in the Taylor expansion of $f^2(R)$ cannot be neglected [@BMP]: $$f^2(R) = f_0^2\bigg[ 1 - n \frac{ R_1}{R_0} + \frac{n(n+1)}{2} \left(\frac{R_1}{R_0}\right)^2 - \frac{1}{6}n(n+1)(n+2) \left(\frac{R_1}{R_0}\right)^3 \bigg] + O\left( \left( \frac{R_1}{R_0} \right)^4 \right).$$ Hence assumption (i) is contradicted, implying the lack of validity of the perturbative regime. By this contradiction argument, the model (\[NMC-model1\]) cannot be constrained by the extension to NMC gravity of the perturbative method of Chiba [*et al.*]{} [@CSE], and it therefore remains, in this respect, a viable theory of gravity [@BMP].
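The expansion displayed above is just the binomial series of $f^2_0\,(1+R_1/R_0)^{-n}$; the quoted coefficients can be checked symbolically (a SymPy sketch of ours):

```python
import sympy as sp

x, n = sp.symbols('x n')   # x stands for R_1/R_0

series = sp.series((1 + x)**(-n), x, 0, 4).removeO()
expected = (1 - n*x + sp.Rational(1, 2)*n*(n + 1)*x**2
            - sp.Rational(1, 6)*n*(n + 1)*(n + 2)*x**3)
assert sp.simplify(sp.expand(series - expected)) == 0
```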
Planetary precession
====================
We now consider a NMC gravity model where the functions $f^1(R)$ and $f^2(R)$ are assumed analytic at $R=0$ [@MPBDeA], so that they admit the Taylor expansions: $$f^1(R) = 2\kappa \sum_{i=1}^\infty a_i R^i, \qquad a_1=1,
\qquad
f^2(R) = \sum_{j=1}^\infty q_j R^j.$$ If $a_i=0$ for any $i>1$ and $q_j=0$ for any $j$, then the action of GR is recovered.
The model admits Minkowski spacetime as a background, and the $1/c$ expansion of the metric can be computed [@MPBDeA], assuming a general distribution of matter with mass density, pressure and velocity. The nonrelativistic limit of the model turns out not to be Newtonian: it contains a Yukawa correction. The coefficients $a_2,a_3,q_1,q_2$ are used to compute the metric at order $O(1/c^4)$ in the $0-0$ component, and are considered as parameters of the NMC gravity model. A parameterized post-Newtonian plus Yukawa (PPNY) approximation of the NMC model of gravity can then be computed [@MPBDeA].
Here we report the result [@MPBDeA] for the metric in vacuum around a static, spherically symmetric body (Sun) with uniform mass density ($g_{0i}=0$): $$\begin{aligned}
\label{metric-Sun}
g_{00} &=& -1 + 2 \frac{GM_S}{rc^2}\left( 1 + \alpha e^{-r/\lambda} \right)
+ \frac{2}{c^4}F(r), \nonumber\\
g_{ij} &=& \left[ 1 + 2 \frac{GM_S}{rc^2}\left( 1 - \alpha e^{-r/\lambda} \right) \right] \delta_{ij},\end{aligned}$$ where $M_S$ is the mass of the spherical body, $F(r)$ is a radial potential, and $\lambda,\alpha$ are the range and strength of the Yukawa potential which depend on the parameters of the NMC gravity model [@MPBDeA]: $$\label{Yukawa}
\lambda=\sqrt{6a_2}, \qquad
\alpha=\frac{1}{3}(1-\theta)+\frac{GM_S}{c^2R_S}\theta\left[ \theta\left(\frac{\mu}{2}-1\right)-
\frac{2}{3}\nu \right]\left(\frac{\lambda}{R_S}\right)^2 + \dots,$$ where $R_S$ is the radius of the spherical body, $\theta,\mu,\nu$ are the following dimensionless ratios: $\theta=q_1/a_2$, $\mu=a_3/a_2^2,\nu=q_2/a_2^2$, and dots $\dots$ denote smaller contributions [@MPBDeA]. Formula (\[Yukawa\]) has been obtained for $\lambda\gg R_S$.
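To get a feel for Eq. (\[Yukawa\]), the sketch below evaluates $\lambda$ and the displayed terms of $\alpha$ for a solar-type body. The parameter values $a_2,\theta,\mu,\nu$ are arbitrary illustrative choices (not fitted values), picked so that $\lambda\gg R_S$ as required:

```python
# Illustrative evaluation of the Yukawa range and strength of Eq. (Yukawa).
# a2, theta, mu, nu below are arbitrary sample values, not fitted parameters.
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_S, R_S = 1.989e30, 6.957e8       # solar mass [kg] and radius [m]

def yukawa(a2, theta, mu, nu):
    lam = math.sqrt(6.0 * a2)      # lambda = sqrt(6 a2), metres
    alpha = ((1.0 - theta) / 3.0
             + (G * M_S / (c**2 * R_S)) * theta
               * (theta * (mu / 2.0 - 1.0) - 2.0 * nu / 3.0)
               * (lam / R_S)**2)
    return lam, alpha

lam, alpha = yukawa(a2=1.0e20, theta=0.99, mu=1.0, nu=0.5)
print(lam, alpha)                  # lam ~ 2.4e10 m >> R_S
```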
Using the metric (\[metric-Sun\]) the effect of NMC gravity on the orbit of a planet is computed. In NMC gravity the energy-momentum tensor is not covariantly conserved [@BBHL]: $$\nabla_\mu T^{\mu\nu} = \frac{f^2_R}{1 + f^2} ( g^{\mu\nu} \mathcal{L}_m - T^{\mu\nu} ) \nabla_\mu R
\neq 0 \qquad\mbox{if }f^2(R)\neq 0,$$ consequently, the trajectories deviate from geodesics: $$\label{geodesic}
\frac{d^2 x^\alpha}{ds^2} + \Gamma^\alpha_{\mu\nu} \frac{dx^\mu}{ds} \frac{dx^\nu}{ds} = \frac{f^2_R(R)}{ 1+f^2(R)} g^{\alpha\beta} R_{,\beta}.$$ Moreover, even the geodesics differ from those of GR. The formula for the perihelion precession of a planet has been computed [@MPBDeA] for $\lambda\gg L$, where $L$ is the [*semilatus rectum*]{} of the unperturbed orbit. Here we report the leading terms in the formula [@MPBDeA]: $$\begin{aligned}
\label{precession}
\delta\phi_P &=&\frac{6\pi GM_S}{Lc^2}+(1-\theta)^2\frac{\pi}{3}\left(\frac{L}{\lambda}\right)^2 e^{-L/\lambda} \nonumber\\
&+& (1-\theta)\frac{\pi GM_S}{3Lc^2}\theta\left[3\theta\left(\frac{\mu}{2}-1\right)-2\nu\right]
\left(1-\frac{L}{\lambda}\right)\left(\frac{L}{R_S}\right)^3+\dots,\end{aligned}$$ where the terms in the first row are the GR precession and the nonrelativistic Yukawa precession, respectively, and the term in the second row is the leading contribution from the NMC relativistic correction. Dots $\dots$ denote smaller contributions [@MPBDeA]. Eq. (\[precession\]) reduces to the GR expression if $\theta=1$.
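As a sanity check, the leading (GR) term of Eq. (\[precession\]) can be evaluated for Mercury. The sketch below uses standard textbook orbital constants (not values taken from the paper) and recovers the familiar $\sim 43''$ per century:

```python
# GR term 6*pi*G*M_S/(L*c^2) of Eq. (precession) evaluated for Mercury;
# orbital constants are standard textbook values, not from the paper.
import math

G, c = 6.674e-11, 2.998e8                  # SI units
M_S = 1.989e30                             # solar mass [kg]
a_merc, e_merc = 5.791e10, 0.2056          # semi-major axis [m], eccentricity
L = a_merc * (1.0 - e_merc**2)             # semilatus rectum

dphi_orbit = 6.0 * math.pi * G * M_S / (L * c**2)   # radians per orbit
orbits_per_century = 36525.0 / 87.969               # Mercury period: 87.969 d
arcsec_century = dphi_orbit * (180 / math.pi) * 3600 * orbits_per_century
print(round(arcsec_century, 1))            # ~43 arcsec per century
```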
Using Eq. (\[precession\]), bounds on PPN parameters from the Cassini experiment [@Cassini], and fits to planetary data, including data from the MESSENGER spacecraft [@Messenger] orbiting Mercury, it follows that the additional perihelion precession due to NMC deviations from GR, in the case of the orbit of Mercury, is bounded by [@MPBDeA] $$- 5.87537 \times 10^{-4} < \delta \phi_P - 42.98'' < 2.96635 \times 10^{-3}.$$ These inequalities define an admissible region in the four-dimensional parameter space with dimensionless coordinates $\theta,\mu,\nu,R_S/\lambda$. Exclusion plots obtained by slicing the admissible region with two-dimensional planes can be drawn [@MPBDeA].
The admissible region in the three-dimensional parameter subspace with coordinates $(\theta,\mu,\nu)$, for $0<|1-\theta|\ll 1$ and a given $\lambda\gg L$, can be approximated by the region enclosed within the degenerate quadric surfaces $$\nu=\frac{3}{4}\mu - \frac{3}{2} -9\left(\frac{R_S}{L}\right)^3 \frac{\varepsilon_i}{\left(1-L/\lambda\right)(1-\theta)}, \qquad i=1,2,$$ where $$\varepsilon_1\,\frac{6\pi GM_S}{Lc^2} = - 5.87537 \times 10^{-4}, \qquad \varepsilon_2\,\frac{6\pi GM_S}{Lc^2} = 2.96635 \times 10^{-3}.$$ The intersection of the three-dimensional admissible subregion with a plane $\theta=\mathrm{constant}$, with $0<|1-\theta|\ll 1$, is a strip enclosed between two lines in the $(\mu,\nu)$ plane. The intersections with the planes $\mu=\mathrm{constant}$ and $\nu=\mathrm{constant}$ are regions enclosed by pairs of hyperbolae [@MPBDeA].
Finally, the BepiColombo mission to Mercury should allow for a reduction of the above bounds by approximately one order of magnitude [@BepiColombo].
Acknowledgments {#acknowledgments .unnumbered}
===============
The work of R.M. and S.D'A. is, respectively, partially and fully supported by INFN (Istituto Nazionale di Fisica Nucleare, Italy), as part of the MoonLIGHT-2 experiment in the framework of the research activities of the Commissione Scientifica Nazionale n. 2 (CSN2).
[0]{}
O. Bertolami, C.G. Böhmer, T. Harko and F.S.N. Lobo, [*Phys. Rev. D*]{} [**75**]{}, 104016 (2007).
D. Puetzfeld and Y.N. Obukhov, [*Phys. Rev. D*]{} [**87**]{}, 044045 (2013).
D. Puetzfeld and Y.N. Obukhov, [*Phys. Lett. A*]{} [**377**]{}, 2447 (2013).
D. Puetzfeld and Y.N. Obukhov, [*Phys. Rev. D*]{} [**88**]{}, 064025 (2013).
O. Bertolami and J. Páramos, [*JCAP*]{} [**03**]{}, 009 (2010).
O. Bertolami, P. Frazão and J. Páramos, [*Phys. Rev. D*]{} [**86**]{}, 044034 (2012).
O. Bertolami, P. Frazão and J. Páramos, [*JCAP*]{} [**05**]{}, 029 (2013).
O. Bertolami, P. Frazão and J. Páramos, [*Phys. Rev. D*]{} [**83**]{}, 044010 (2011).
O. Bertolami, P. Frazão and J. Páramos, [*Phys. Rev. D*]{} [**81**]{}, 104046 (2010).
T. Chiba, T.L. Smith and A.L. Erickcek, [*Phys. Rev. D*]{} [**75**]{}, 124014 (2007).
O. Bertolami, R. March and J. Páramos, [*Phys. Rev. D*]{} [**88**]{}, 064019 (2013).
N. Castel-Branco, J. Páramos and R. March, [*Phys. Lett. B*]{} [**735**]{}, 25 (2014).
R. March, J. Páramos, O. Bertolami and S. Dell’Agnello, [*Phys. Rev. D*]{} [**95**]{}, 024017 (2017).
S.M. Carroll, V. Duvvuri, M. Trodden and M.S. Turner, [*Phys. Rev. D*]{} [**70**]{}, 043528 (2004).
O. Bertolami, F.S.N. Lobo and J. Páramos, [*Phys. Rev. D*]{} [**78**]{}, 064036 (2008).
C.M. Will, [*Theory and Experiment in Gravitational Physics, Revised Ed.*]{} (Cambridge University Press, 1993).
K. Henttunen, T. Multamäki and I. Vilja, [*Phys. Rev. D*]{} [**77**]{}, 024040 (2008).
B. Bertotti, L. Iess and P. Tortora, [*Nature*]{} [**425**]{}, 374 (2003).
A. Fienga [*et al.*]{}, [*Celest. Mech. Dyn. Astr.*]{} [**111**]{}, 363 (2011).
F. De Marchi, G. Tommei, A. Milani and G. Schettino, [*Phys. Rev. D*]{} [**93**]{}, 123014 (2016).
|
---
abstract: 'We investigate mathematically a nonlinear approximation type approach recently introduced in [@ammar-mokdad-chinesta-keunings-06] to solve high dimensional partial differential equations. We show the link between the approach and the *greedy algorithms* of approximation theory studied [*e.g.*]{} in [@devore-temlyakov-96]. On the prototypical case of the Poisson equation, we show that a variational version of the approach, based on minimization of energies, converges. On the other hand, we show various theoretical and numerical difficulties arising with the non variational version of the approach, consisting of simply solving the first order optimality equations of the problem. Several unsolved issues are indicated in order to motivate further research.'
author:
- |
C. Le Bris$^1$, T. Lelièvre $^1$ & Y. Maday $^2$\
[$^1$ CERMICS, École des Ponts,]{}\
[6 & 8, avenue Blaise Pascal, 77455 Marne-La-Vallée Cedex 2, FRANCE, and]{}\
[INRIA Rocquencourt, MICMAC project-team,]{}\
[Domaine de Voluceau, B.P. 105, 78153 Le Chesnay Cedex, FRANCE]{}\
[{lebris,lelievre}@cermics.enpc.fr]{}\
[$^2$ Laboratoire J.L.-Lions, Université Pierre et Marie Curie,]{}\
[Boite courrier 187, F-75252 Paris, FRANCE]{}\
[[email protected]]{}
title: 'Results and questions on a nonlinear approximation approach for solving high-dimensional partial differential equations'
---
Introduction
============
Our purpose here is to investigate mathematically a numerical approach recently introduced in [@ammar-mokdad-chinesta-keunings-06] to solve high dimensional partial differential equations.
The approach is a nonlinear approximation type approach that consists in expanding the solution of the equation in tensor products of functions sequentially determined as the iterations of the algorithm proceed. The original motivation of the approach is the wish of its authors to solve high-dimensional Fokker-Planck type equations arising in the modelling of complex fluids. Reportedly, the approach performs well in this case, and, in addition, extends to a large variety of partial differential equations, static or time-dependent, linear or nonlinear, elliptic or parabolic, involving self-adjoint or non-self-adjoint operators, provided the data enjoy some appropriate separation property with respect to the different coordinates (this property is made precise in Remark \[rem:RHS\] below). We refer the reader to [@ammar-mokdad-chinesta-keunings-06] for more details.
In the present contribution focused on mathematical analysis, we restrict ourselves to the simplest possible case, namely the solution of the Poisson equation set with Dirichlet homogeneous boundary conditions on a two dimensional parallelepipedic domain $\Omega=\Omega_x
\times \Omega_y$ with $\Omega_x \subset
\R$ and $\Omega_y \subset
\R$ bounded. In short, the approach under consideration then determines the solution $u$ to $$\label{eq:11}
-\Delta u(x,y)=f(x,y)$$ as a sum $$\label{eq:22}
u(x,y)=\sum_{n\geq 1}r_n(x)\,s_n(y),$$ by iteratively determining functions $r_n(x)$, $s_n(y)$, $n\geq 1$ such that for all $n$, $r_n(x)\,s_n(y)$ is the best approximation (in a sense to be made precise below) of the solution $v(x,y)$ to $ -\Delta
v(x,y)=f(x,y)+\Delta \left(\sum_{k\leq n-1}r_k(x)\,s_k(y)\right)
$ in terms of a single tensor product $r(x)s(y)$. We show that it is possible to give a sound mathematical grounding to the approach *provided* we consider a variational form of the approach that manipulates minimizers of energies instead of solutions to equations. In order to reformulate the approach in such a variational setting, our arguments thus crucially exploit the fact that the Laplace operator is self-adjoint. It should be emphasized at the outset that, because of the nonlinearity of the tensor product expansion (\[eq:22\]), the variational form of the approach is *not* equivalent to the form (\[eq:11\])-(\[eq:22\]) (which is exactly the set of Euler-Lagrange equations associated to the energy considered in the variational approach). Our analysis therefore does not apply to the actual implementation of the method as described in [@ammar-mokdad-chinesta-keunings-06]. At the present time, we do not know how to extend our arguments to cover the practical situation, even in the simple case of the Poisson problem. The consideration of some particular pathological cases, both theoretical and numerical, shows that the appropriate mathematical setting is unclear. Likewise, it is unclear to us how to provide a mathematical foundation for the approach in non variational situations, such as an equation involving a differential operator that is not self-adjoint.
On the other hand, the analysis provided here straightforwardly extends to the case of a $N$-dimensional Poisson problem with $N\geq 3$ (unless explicitly mentioned). Likewise, our analysis extends to the case of elliptic linear partial differential equations set on a cylinder in $\R^N$, with appropriate boundary conditions. The only, although substantial, difficulty that may appear when the dimension $N$ grows is the algorithmic complexity of the approach, since a set of $N$ coupled non-linear equations has to be solved (see Remark \[rem:HD\]). At least, the number of unknowns involved in the systems to be solved does not grow exponentially, as would be the case for a naive approach (like for a finite differences method on a tensorized grid). It is not the purpose of the present article to elaborate further on this.
Our article is organized as follows. Section \[sec:presentation\] introduces the approach. The variational version of the approach (along with a relaxed variant of it) is described in Section \[sec:algo\]. Elementary properties follow in Sections \[sec:well\] and \[sec:EL\]. The non variational version is presented in Section \[sec:nonvar\]. In Section \[sec:convergence\] we show the convergence of the variational approach and give an estimate of the rate of convergence. Our arguments immediately follow from standard arguments of the literature of *nonlinear approximation theory*, and especially from those of [@devore-temlyakov-96]. The particular approach under consideration is indeed closely related to the so-called *greedy algorithms* introduced in approximation theory. We refer to [@barron-cohen-dahmen-devore-08; @davis-mallat-avellaneda-97; @temlyakov-08] for some relevant contributions, among many. The purpose of Section \[sec:discussion\] is to return to the original non variational formulation of the approach. For illustration, we first consider the case when the Laplace operator $-\Delta$ in (\[eq:11\]) is replaced by the identity operator. The approach then reduces to the determination of the *Singular Value Decomposition* (also called *rank-one decomposition*) of the right-hand side $f$. This simple situation allows one to understand various difficulties inherent to the non variational formulation of the approach. We then discuss the actual case of the Laplace operator, and present some intriguing numerical experiments, in particular when a non-symmetric term (namely there an advection term) is added.
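The connection with the Singular Value Decomposition mentioned above can be illustrated numerically: when the operator is the identity, greedy rank-one deflation reproduces the truncated SVD of the (discretized) right-hand side. A minimal NumPy sketch of ours, not the implementation of [@ammar-mokdad-chinesta-keunings-06]:

```python
# Greedy rank-one deflation vs. truncated SVD for the identity operator.
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((40, 40))         # discrete "right-hand side"

def best_rank_one(A, iters=300):
    """Alternating minimisation for the best rank-one approximation r s^T."""
    s = np.ones(A.shape[1])
    for _ in range(iters):
        r = A @ s / (s @ s)
        s = A.T @ r / (r @ r)
    return r, s

residual = F.copy()
for _ in range(5):                        # five greedy steps
    r, s = best_rank_one(residual)
    residual = residual - np.outer(r, s)

greedy_err = np.linalg.norm(residual)
sv = np.linalg.svd(F, compute_uv=False)
svd_err = np.sqrt((sv[5:]**2).sum())      # optimal rank-5 error
print(greedy_err, svd_err)                # the two errors coincide (up to tolerance)
```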
As will be clear from the sequel, our current mathematical understanding of the numerical approach is rather incomplete. Our results do not cover real practice. Some ingredients from the literature of nonlinear approximation theory nevertheless already allow for understanding some basics of the approach. It is the hope of the authors that, by laying some groundwork, the present contribution will spark some interest among the experts, and allow in a not too distant future for a complete understanding of the mathematical nature of the approach. Should the need arise, it will also indicate possible improvements of the approach so that it is rigorously founded mathematically and, eventually, performs even better than the currently existing reports seemingly show.
[**Acknowledgments**]{}: The authors wish to thank A. Ammar and F. Chinesta for introducing them to their series of works initiated in [@ammar-mokdad-chinesta-keunings-06], A. Cohen for stimulating discussions, and A. Lozinski for pointing out reference [@devore-temlyakov-96]. This work was completed while the first author (CLB) was a long-term visitor at the Institute for Mathematics and its Applications, Minneapolis. The hospitality of this institution is gratefully acknowledged.
Presentation of the algorithms {#sec:presentation}
==============================
Consider a function $f \in L^2(\Omega)$ where $\Omega=\Omega_x
\times \Omega_y$ with $\Omega_x \subset
\R$ and $\Omega_y \subset
\R$ two bounded domains. To fix ideas, one may take $\Omega_x=\Omega_y=(0,1)$. Consider on $\Omega$ the following homogeneous Dirichlet problem: $$\label{eq:lapl}
\text{Find $g \in H^1_0(\Omega) $ such that }\left\{
\begin{array}{rl}
-\Delta g = f &\text{ in $\Omega$},\\
g=0& \text{ on $\partial \Omega$}.
\end{array}
\right.$$ It is well known that solving (\[eq:lapl\]) is equivalent to solving the variational problem: $$\label{eq:lapl_var}
\text{Find $g \in H^1_0(\Omega)$ such that } g=\arg\min_{u \in H^1_0(\Omega)} \left(
\frac{1}{2}\int_\Omega |\nabla u|^2 - \int_\Omega f u \right).$$ In the following, for any function $u \in H^1_0(\Omega)$, we denote $$\label{eq:ener}
{\mathcal E}(u)=\frac{1}{2}\int_\Omega |\nabla u|^2 - \int_\Omega f u.$$ Notice that $$\label{eq:ener_polar}
{\mathcal E}(u)=\frac{1}{2}\int_\Omega |\nabla (u - g)|^2 -
\frac{1}{2}\int_\Omega |\nabla g|^2$$ where $g$ is defined by (\[eq:lapl\]), so that minimizing ${\mathcal E}$ is equivalent to minimizing $\int_\Omega |\nabla (u - g)|^2$ with respect to $u$. We endow the functional space $H^1_0(\Omega)$ with the scalar product $$\langle u , v \rangle = \int_\Omega \nabla u \cdot \nabla v,$$ and the associated norm $$\| u \|^2= \langle u , u \rangle = \int_\Omega |\nabla u|^2.$$
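For the reader who wishes to experiment numerically, the discrete counterpart of this setting can be sketched as follows. All discretization choices below (uniform grid, P1 stiffness matrix, lumped-mass quadrature) are ours, not prescribed by the text; the snippet uses the separation of $\int_\Omega |\nabla(r\otimes s)|^2$ into one-dimensional quantities established in Lemma \[lem:H10\] below.

```python
import numpy as np

def stiffness(N):
    """1D stiffness matrix: r.(K r) ~ int_0^1 |r'|^2 for a P1 function with
    values r on the N-1 interior nodes of a uniform grid of (0,1)."""
    h = 1.0 / N
    K = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h
    return K, h

def energy_rank_one(r, s, F, K, h):
    """Discrete E(r x s), using the separation of the H^1 semi-norm of a tensor
    product and lumped quadrature: int u^2 ~ h u.u, int f (r x s) ~ h^2 r.F.s."""
    grad2 = (r @ K @ r) * (h * s @ s) + (h * r @ r) * (s @ K @ s)
    return 0.5 * grad2 - h * h * (r @ F @ s)

N = 100
K, h = stiffness(N)
x = h * np.arange(1, N)                       # interior nodes
# Toy data: f = 2 pi^2 sin(pi x) sin(pi y), whose exact solution is
# g = sin(pi x) sin(pi y), so that E(g) = -1/2 |g|^2 = -pi^2/4.
F = 2.0 * np.pi**2 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
r = np.sin(np.pi * x)
E_at_g = energy_rank_one(r, r, F, K, h)       # close to -pi^2/4
```

The value of the discrete energy at the (rank-one) minimizer approaches the continuous value $-\frac{1}{2}\int_\Omega|\nabla g|^2=-\pi^2/4$ as the grid is refined.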
Two algorithms {#sec:algo}
--------------
We now introduce two algorithms to solve (\[eq:lapl\]). The first algorithm is the *Pure Greedy Algorithm*:
set $f_0=f$, and at iteration $n \ge 1$,
1. Find $r_n \in H^1_0(\Omega_x)$ and $s_n \in H^1_0(\Omega_y)$ such that $$\label{eq:LRA_var}
(r_n,s_n)=\arg \min_{(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)}
\left(
\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega f_{n-1}
\,r \otimes s \right).$$
2. Set $f_{n}=f_{n-1} + \Delta (r_n \otimes s_n)$.
3. If $\|f_{n}\|_{H^{-1}(\Omega)} \ge \varepsilon$, proceed to iteration $n+1$. Otherwise, stop.
Throughout this article, we denote by $r \otimes s$ the tensor product: $r \otimes s(x,y)=r(x)
s(y)$. Notice that $$f_n=f + \Delta \left( \sum_{k=1}^{n} r_k \otimes s_k \right).$$ The function $f_n$ belongs to $H^{-1}(\Omega)$ and the tensor product $r \otimes s$ is in $H^1_0(\Omega)$ if $r \in H^1_0(\Omega_x)$ and $s \in H^1_0(\Omega_y)$ (see Lemma \[lem:H10\] below), so that the integral $\int_\Omega f_{n-1} \,r \otimes s$ in (\[eq:LRA\_var\]) is well defined.
A variant of this algorithm is the *Orthogonal Greedy Algorithm*:
set $f_0^o=f$, and at iteration $n \ge 1$,
1. Find $r_n^o \in H^1_0(\Omega_x)$ and $s_n^o \in H^1_0(\Omega_y)$ such that $$\label{eq:LRA_varo}
(r_n^o,s_n^o)=\arg\min_{(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)} \left(
\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega f_{n-1}^o \,r \otimes s \right).$$
2. Solve the following Galerkin problem on the basis $(r_1^o \otimes
s_1^o, \ldots, r_n^o \otimes s_n^o )$: find $(\alpha_1, \ldots,
\alpha_n) \in \R^n$ such that $$\label{eq:LRA_galo}
(\alpha_1, \ldots, \alpha_n)=\arg\min_{(\beta_1, \ldots,
\beta_n) \in \R^n} \left(
\frac{1}{2}\int_\Omega \left|\nabla \left( \sum_{k=1}^n \beta_k r_k^o \otimes
s_k^o \right) \right|^2 - \int_\Omega f \,\sum_{k=1}^n \beta_k r_k^o
\otimes
s_k^o \right) .$$
3. Set $f_{n}^o=f + \Delta \left( \sum_{k=1}^n \alpha_k r_k^o \otimes s_k^o \right)$.
4. If $\|f_{n}^o\|_{H^{-1}(\Omega)} \ge \varepsilon$, proceed to iteration $n+1$. Otherwise, stop.
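To illustrate step 2, here is a hypothetical discrete version of the Galerkin problem, with the same finite-difference conventions as our other snippets: the coefficients $(\alpha_1, \ldots, \alpha_n)$ solve the $n\times n$ linear system whose matrix is the Gram matrix of the rank-one functions in the $H^1_0$ scalar product. The rank-one directions below are chosen by hand for the test; in the algorithm they would come from step 1.

```python
import numpy as np

N = 100
h = 1.0 / N
x = h * np.arange(1, N)
K = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h
X, Y = np.meshgrid(x, x, indexing="ij")
# Data whose exact solution is g = sin(pi x) sin(pi y) + sin(pi x) sin(2 pi y):
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y) \
    + 5 * np.pi**2 * np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
# Hand-chosen rank-one directions r_k x s_k:
R = np.column_stack([np.sin(np.pi * x), np.sin(np.pi * x)])
S = np.column_stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
# Gram matrix G_jk = <r_j x s_j, r_k x s_k> and load b_j = int f (r_j x s_j):
G = (R.T @ K @ R) * (h * S.T @ S) + (h * R.T @ R) * (S.T @ K @ S)
b = h * h * np.einsum("ij,ik,jk->k", f, R, S)
alpha = np.linalg.solve(G, b)   # both coefficients close to 1 here
```

Since the two directions are $H^1_0$-orthogonal in this toy case, the Gram matrix is diagonal and the Galerkin coefficients recover the exact weights up to discretization error.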
Let us also introduce $g_n$ satisfying the Dirichlet problem: $$\label{eq:g_n}
\left\{
\begin{array}{rl}
-\Delta g_n = f_n & \text{ in $\Omega$},\\
g_n=0 & \text{ on $\partial \Omega$}.
\end{array}
\right.$$ Notice that $$\label{eq:g_np1}
g_{n}=g_{n-1} - r_n \otimes s_n,$$ so that $g_n=g - \sum_{k=1}^{n} r_k \otimes s_k$. Likewise, we introduce $g_n^o=g - \sum_{k=1}^{n} r_k^o \otimes s_k^o$, which satisfies $-\Delta g_n^o = f_n^o$ in $\Omega$ and $g_n^o=0$ on $\partial \Omega$. Proving the convergence of the algorithms amounts to proving that $g_n$ and $g_n^o$ converge to $0$.
The terminology *Pure Greedy Algorithm* and *Orthogonal Greedy Algorithm* is borrowed from approximation theory (see [@barron-cohen-dahmen-devore-08; @davis-mallat-avellaneda-97; @devore-temlyakov-96; @temlyakov-08]). Such algorithms have been introduced in a more general framework, namely for an arbitrary Hilbert space and an arbitrary set of functions (not only tensor products). Recall, for consistency, that the purpose of such nonlinear approximation techniques is, in short, to find the best possible approximation of a given function as a sum of elements of a prescribed *dictionary*. The latter need not have a vectorial structure. In the present case, the dictionary is the set of simple products $r(x)s(y)$ for $r$ varying in $H^1_0(\Omega_x)$ and $s$ varying in $H^1_0(\Omega_y)$ (all this will be formalized with the introduction of the space ${\mathcal L}^1$ in Section \[sec:convergence\] below). The metric chosen to define the approximation is the natural metric induced by the differential operator, here the $H^1$ norm. The algorithm proposed by Ammar [*et al*]{}. [@ammar-mokdad-chinesta-keunings-06] is actually related to the Orthogonal Greedy Algorithm: it consists in replacing the optimization procedure by the associated Euler-Lagrange equations. We shall give details on this in Section \[sec:EL\] below. For the moment, we concentrate on the variational algorithms above.
The iterations are well defined {#sec:well}
-------------------------------
We will need the following three lemmas.
\[lem:H10\] For any measurable functions $r : \Omega_x \to \R$ and $s : \Omega_y \to
\R$ such that $r \otimes s \neq 0$ $$r \otimes s \in H^1_0(\Omega) \iff r \in H^1_0(\Omega_x) \text{ and }
s \in H^1_0(\Omega_y).$$
\[lem:distrib\] Let $T \in {\mathcal D}'(\Omega)$ be a distribution such that, for any functions $(\phi,\psi) \in {\mathcal C}^\infty_c(\Omega_x) \times {\mathcal C}^\infty_c(\Omega_y)$, $$(T, \phi \otimes \psi)_{({\mathcal D}'(\Omega),{\mathcal
D}(\Omega))}=0$$ then $T=0$ in ${\mathcal D}'(\Omega)$. Moreover, for any two sequences of distributions $R_n \in {\mathcal D}'(\Omega_x)$ and $S_n\in {\mathcal D}'(\Omega_y)$ such that $\lim_{n \to \infty} R_n = R$ in ${\mathcal D}'(\Omega_x)$ and $\lim_{n \to \infty} S_n=S$ in ${\mathcal D}'(\Omega_y)$, $\lim_{n \to \infty} R_n \otimes S_n = R
\otimes S$ in ${\mathcal D}'(\Omega)$.
\[lem:E\_neg\] Let us consider a function $f \in L^2(\Omega)$. If $f \neq 0$, then $\exists (r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)$ such that $${\mathcal E}(r \otimes s)< 0,$$ where ${\mathcal E}$ is defined by .
Lemma \[lem:distrib\] is well-known in distribution theory. For completeness, we now provide short proofs of Lemmas \[lem:H10\] and \[lem:E\_neg\], respectively.
[**Proof of Lemma \[lem:H10\]**]{} Notice that $$\int_\Omega |\nabla (r \otimes s) |^2=\int_{\Omega_x} |r '|^2 \int_{\Omega_y} |s|^2 + \int_{\Omega_x}
|r|^2 \int_{\Omega_y} |s'|^2$$ where $'$ henceforth denotes differentiation with respect to the one-dimensional argument. Thus, it is clear that if $r \in
H^1_0(\Omega_x)$ and $s \in H^1_0(\Omega_y)$, then $r \otimes s \in
H^1_0(\Omega)$. Now, when $r \otimes s \in
H^1_0(\Omega)$, we have $\int_{\Omega_x} |r '|^2 \int_{\Omega_y}
|s|^2 < \infty$ and $\int_{\Omega_x}
|r|^2 \int_{\Omega_y} |s'|^2 < \infty$. This implies $r \in
H^1_0(\Omega_x)$ and $s \in H^1_0(\Omega_y)$, since $r \neq 0$ and $s
\neq 0$. $\diamondsuit$
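The separation identity at the heart of this proof is easy to check numerically. In the sketch below (a finite-difference verification of ours), we use the eigenmodes $\phi_k(x)=\sqrt{2}\sin(k\pi x)$, for which $\|\phi_k \otimes \phi_l\| = \pi\sqrt{k^2+l^2}$, a value used again in Section \[sec:convergence\].

```python
import numpy as np

N = 2000
h = 1.0 / N
x = h * np.arange(1, N)                          # interior nodes of (0,1)
k, l = 2, 3
r = np.sqrt(2.0) * np.sin(k * np.pi * x)         # phi_k
s = np.sqrt(2.0) * np.sin(l * np.pi * x)         # phi_l

def dirichlet_energy(u):      # int_0^1 |u'|^2, with u = 0 at both endpoints
    du = np.diff(np.concatenate(([0.0], u, [0.0])))
    return np.sum(du**2) / h

def l2_sq(u):                 # int_0^1 u^2, lumped quadrature
    return h * np.sum(u**2)

# Right-hand side of the identity, from one-dimensional quantities only:
separated = dirichlet_energy(r) * l2_sq(s) + l2_sq(r) * dirichlet_energy(s)

# Left-hand side, computed directly on the two-dimensional grid:
G = np.outer(r, s)
zrow = np.zeros((1, N - 1))
zcol = np.zeros((N - 1, 1))
direct = (np.sum(np.diff(np.vstack((zrow, G, zrow)), axis=0) ** 2)
          + np.sum(np.diff(np.hstack((zcol, G, zcol)), axis=1) ** 2))
# (each squared difference carries the quadrature factor (1/h)^2 * h * h = 1)
```

Both quantities agree, and both approach $\|\phi_k\otimes\phi_l\|^2 = \pi^2(k^2+l^2)$ as the grid is refined.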
[**Proof of Lemma \[lem:E\_neg\]**]{} Fix $f \in L^2(\Omega)$ and assume that for all $(r,s) \in
H^1_0(\Omega_x) \times H^1_0(\Omega_y)$, ${\mathcal E}(r \otimes s)\ge
0$. Then, for a fixed $(r,s) \in H^1_0(\Omega_x) \times
H^1_0(\Omega_y)$, we have, for all $\epsilon \in \R$, $$\frac{\epsilon^2}{2} \int | \nabla (r \otimes s) |^2 \ge \epsilon \int
f r \otimes
s.$$ Dividing by $\epsilon$ and letting $\epsilon \to 0^+$ and $\epsilon \to 0^-$ successively, this shows that $\int_\Omega f \, r \otimes s = 0$, i.e. $f \in \{r \otimes s, \, (r,s)\in L^2(\Omega_x)\times L^2(\Omega_y)\}^\perp$, which implies $f=0$ (by Lemma \[lem:distrib\]) and concludes the proof. $\diamondsuit$
The above lemmas allow us to prove the following.
\[prop:LRA\_var\_well\_posed\] For each $n$, there exists a solution to problems (\[eq:LRA\_var\]) and (\[eq:LRA\_varo\]).
Without loss of generality, we may argue on problem (\[eq:LRA\_var\]) only and assume that $n=1$ and $f_0=f \neq 0$. First, using (\[eq:ener\_polar\]), it is clear that $$\begin{aligned}
\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega f \,r \otimes s
&= \frac{1}{2}\int_\Omega |\nabla (r
\otimes s - g)|^2 -
\frac{1}{2}\int_\Omega |\nabla g|^2\\
&\ge - \frac{1}{2}\int_\Omega |\nabla g|^2.\end{aligned}$$ Thus, we can introduce $m=\inf_{(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)} \left(
\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega f \,r
\otimes s \right)$ and a minimizing sequence $(r^k,s^k)$ such that $\lim_{k \to \infty} {\mathcal E}(r^k \otimes s^k) = m$. Notice that we may suppose, again without loss of generality (up to replacing $(r^k,s^k)$ by $(c^{-1}r^k, c\,s^k)$ for a suitable constant $c$), that $$\int_{\Omega_x} |r^k|^2 = 1.$$
Since ${\mathcal E}(u) \ge \frac{1}{4} \int_\Omega |\nabla u|^2 -
\int_\Omega |\nabla g|^2$, the sequence $(r^k \otimes s^k)$ is bounded in $H^1_0(\Omega)$: there exists some $C >0$ such that, for all $k \ge 1$, $$\label{eq:subseq_bounded}
\int_{\Omega_x} |(r^k) '|^2 \int_{\Omega_y} |s^k|^2 + \int_{\Omega_x}
|r^k|^2 \int_{\Omega_y} |(s^k)'|^2\le C.$$ From this we deduce the existence of $w \in H^1_0(\Omega)$, $r \in
L^2(\Omega_x)$ and $s \in H^1_0(\Omega_y)$ such that (up to the extraction of a subsequence):
- $r^k \otimes s^k$ converges to $w$ weakly in $H^1_0(\Omega)$, and strongly in $L^2(\Omega)$,
- $r^k$ converges to $r$ weakly in $L^2(\Omega_x)$,
- $s^k$ converges to $s$ weakly in $H^1_0(\Omega_y)$, and strongly in $L^2(\Omega_y)$.
Since $r^k \otimes s^k$ converges to $w$ weakly in $H^1_0(\Omega)$ and ${\mathcal E}$ is convex and continuous, we have ${\mathcal E}(w) \le
\liminf_{k \to \infty} {\mathcal E} (r^k \otimes s^k)$. This yields ${\mathcal E}(w) \le m$. Moreover, by Lemma \[lem:E\_neg\], we know $m<0$. Therefore, $$\label{eq:w_neg}
{\mathcal E}(w) < 0.$$
The convergences $r^k \to r$ and $s^k \to s$ in the distributional sense imply the convergence $r^k \otimes s^k \to r \otimes s$ in the distributional sense (see Lemma \[lem:distrib\]), and therefore $w = r \otimes s$. Thus, if $w
\neq 0$, Lemma \[lem:H10\] concludes the proof, showing that indeed $r \in H^1_0(\Omega_x)$. Now, we cannot have $w=0$, since this would imply ${\mathcal E}(w) = 0$, which would contradict (\[eq:w\_neg\]). This concludes the proof.
The optimization step (\[eq:LRA\_galo\]) also admits a solution, by standard arguments, and we have therefore proven:
\[lem:algo\_well\_defined\] At each iteration of the Pure Greedy Algorithm, problem (\[eq:LRA\_var\]) admits (at least) a minimizer $(r_n,s_n)$. Likewise, at each iteration of the Orthogonal Greedy Algorithm, problem (\[eq:LRA\_varo\]) admits (at least) a minimizer $(r_n^o,s_n^o)$.
It is important to note that, in either case, uniqueness of the iterate is unclear. Throughout the text, we will thus be referring to [*the*]{} functions $(r_n,s_n)$ (resp. $(r_n^o,s_n^o)$) although we do not know whether they are unique. However, our arguments and results are valid for *any such* functions.
Euler-Lagrange equations {#sec:EL}
------------------------
Our purpose is now to derive the Euler-Lagrange equations of the problems considered, along with other important properties of the sequences $(r_n,s_n)$ and $(r_n^o,s_n^o)$. We only state the results for $(r_n,s_n)$. Similar properties hold for $(r_n^o,s_n^o)$, replacing $f_n$ and $g_n$ by $f_n^o$ and $g_n^o$.
The first-order optimality conditions read:
The functions $(r_n,s_n) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ satisfying (\[eq:LRA\_var\]) are such that: for any functions $(r,s) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ $$\label{eq:LRA_EL_FV}
\int_{\Omega} \nabla (r_n \otimes s_n) \cdot \nabla (r_n \otimes s + r \otimes
s_n) = \int_{\Omega} f_{n-1} (r_n \otimes s + r \otimes s_n).$$ This can be written equivalently as $$\label{eq:LRA_EL}
\left\{
\begin{array}{l}
\displaystyle - \left(\int_{\Omega_y} |s_n|^2\right)\, r_n'' + \left(\int_{\Omega_y}
|s_n'|^2\right)\, r_n = \int_{\Omega_y} f_{n-1}\, s_n,\\
\\
\displaystyle - \left(\int_{\Omega_x} |r_n|^2 \right)\, s_n'' + \left(\int_{\Omega_x}
|r_n'|^2\right)\, s_n = \int_{\Omega_x} f_{n-1}\, r_n,
\end{array}
\right.$$ or, in terms of $g_n$, as: $$\label{eq:ortho}
\langle g_{n} , (r \otimes s_n + r_n \otimes s) \rangle = 0.$$
Equation (\[eq:LRA\_EL\_FV\]) is obtained by differentiating (\[eq:LRA\_var\]). Namely, for any $(r,s) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ and any $\epsilon \in \R$, we have $$\begin{aligned}
\label{eq:eps}
&&\frac{1}{2}\int_\Omega |\nabla \left( (r_n + \epsilon r) \otimes (s_n +
\epsilon s) \right)|^2 - \int_\Omega f_{n-1} \, (r_n + \epsilon r)
\otimes (s_n + \epsilon s)\nonumber\\
&&\quad \quad \quad \quad \ge\frac{1}{2}\int_\Omega |\nabla (r_n
\otimes s_n) |^2 - \int_\Omega f_{n-1} \, r_n \otimes s_n.\end{aligned}$$ Expanding the left-hand side, we obtain $$\begin{aligned}
\frac{1}{2}&\int_\Omega |\nabla \left( (r_n + \epsilon r) \otimes (s_n +
\epsilon s) \right)|^2 - \int_\Omega f_{n-1} \,(r_n + \epsilon r)
\otimes (s_n + \epsilon s)\\
&=\frac{1}{2}\int_\Omega |\nabla (r_n \otimes s_n) + \epsilon \nabla (r
\otimes s_n + r_n \otimes s) + \epsilon^2 \nabla (r \otimes s)|^2 - \int_\Omega f_{n-1} \,(r_n + \epsilon r)
\otimes (s_n + \epsilon s)\\
&=\frac{1}{2}\int_\Omega |\nabla (r_n
\otimes s_n)|^2 - \int_\Omega f_{n-1} \, r_n \otimes s_n \\
&\quad + \epsilon \left( \int_\Omega \nabla( r_n \otimes s_n )\cdot \nabla (r
\otimes s_n + r_n \otimes s) - \int_\Omega f_{n-1} (r_n \otimes s + r
\otimes s_n )\, \right)\\
& \quad + \epsilon^2 \left(\frac{1}{2}\int_\Omega |\nabla (r
\otimes s_n + r_n \otimes s)|^2 + \int_\Omega \nabla (r_n \otimes s_n)
\cdot \nabla (r \otimes s) - \int_\Omega f_{n-1} r \otimes s \right)
+ O(\epsilon^3)\\
&= \frac{1}{2}\int_\Omega |\nabla (r_n
\otimes s_n)|^2 - \int_\Omega f_{n-1} \, r_n \otimes s_n + \epsilon I_1
+ \epsilon^2 I_2 + O(\epsilon^3).\end{aligned}$$ Using (\[eq:eps\]), we get, for any $\epsilon \in \R$, $$\label{eq:I1I2}
\epsilon I_1 + \epsilon^2 I_2 + O(\epsilon^3) \ge 0,$$ which implies that $I_1$ is zero, that is, (\[eq:LRA\_EL\_FV\]).
Equation (\[eq:LRA\_EL\]) is the strong formulation of (\[eq:LRA\_EL\_FV\]). On the other hand, (\[eq:ortho\]) is an immediate consequence of the following simple computation: $$\begin{aligned}
\langle g_{n} , (r \otimes s_n + r_n \otimes s) \rangle
&= \langle g_{n-1} - r_n \otimes s_n , (r \otimes s_n + r_n \otimes s)
\rangle\\
&= \int_\Omega \nabla(g_{n-1} - r_n \otimes s_n) \cdot \nabla(r \otimes s_n +
r_n \otimes s)\\
&= - \int_\Omega \Delta g_{n-1} (r \otimes s_n +
r_n \otimes s) -\int_\Omega \nabla (r_n \otimes s_n) \cdot \nabla(r \otimes s_n +
r_n \otimes s)\\
&=0,\end{aligned}$$ since $-\Delta g_{n-1} = f_{n-1}$ in $\Omega$, $g_{n-1}=0$ on $\partial \Omega$, and (\[eq:LRA\_EL\_FV\]) holds.
Note that taking $r=r_n$ and $s=0$ in the Euler-Lagrange equation (\[eq:ortho\]) yields $$\label{eq:ortho2}
\langle r_n \otimes s_n , g_{n-1} \rangle = \|r_n \otimes s_n\|^2,$$ since $g_n= g_{n-1} - r_n \otimes s_n$. This will be useful below.
Let us now state two other properties of $(r_n,s_n)$. The second-order optimality conditions read:
\[lem:EL2\] The functions $(r_n,s_n) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ satisfying (\[eq:LRA\_var\]) are such that: for any functions $(r,s) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ $$\label{eq:LRA_EL2_FV}
\frac{1}{2}\int_\Omega |\nabla (r
\otimes s_n + r_n \otimes s)|^2 + \int_\Omega \nabla (r_n
\otimes s_n) \cdot \nabla (r \otimes s) - \int_\Omega f_{n-1} r \otimes s \ge 0,$$ which is equivalent to: for any functions $(r,s) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ $$\label{eq:LRA_EL2_FV'}
\left(\int_\Omega \nabla ( r_n \otimes s_n - g_{n}) \cdot \nabla (r \otimes s) \right)^2 \le \int_\Omega |\nabla (r
\otimes s_n)|^2\ \int_\Omega |\nabla (r_n \otimes s)|^2.$$
Returning to Equation (\[eq:eps\]), we see that $I_1=0$ and $I_2 \ge 0$, which is exactly (\[eq:LRA\_EL2\_FV\]). For any $\lambda \in \R$, taking $(\lambda r, s)$ as a test function in (\[eq:LRA\_EL2\_FV\]) shows $$\frac{1}{2}\int_\Omega | \lambda \nabla ( r
\otimes s_n) + \nabla ( r_n \otimes s)|^2 + \int_\Omega \lambda \nabla (r_n
\otimes s_n) \cdot \nabla ( r \otimes s) - \int_\Omega f_{n-1} \lambda
r \otimes s
\ge 0.$$ This equivalently reads $$\begin{aligned}
\frac{\lambda^2}{2} &\int_\Omega |\nabla (r \otimes s_n)|^2 + \lambda
\left(\int_\Omega \left( \nabla (r
\otimes s_n) \cdot \nabla (r_n \otimes s) + \nabla (r_n
\otimes s_n) \cdot \nabla (r \otimes s )\right) - \int_\Omega f_{n-1}
r
\otimes s \right) \\
& + \frac{1}{2}\int_\Omega |\nabla (r_n \otimes s)|^2
\ge 0,\end{aligned}$$ hence $$\begin{aligned}
&\left(\int_\Omega \left( \nabla (r
\otimes s_n) \cdot \nabla (r_n \otimes s) + \nabla (r_n
\otimes s_n) \cdot \nabla (r \otimes s) \right) -
\int_\Omega f_{n-1} r \otimes s \right)^2 \\
&\quad \quad \le \int_\Omega |\nabla (r
\otimes s_n)|^2\ \int_\Omega |\nabla (r_n \otimes s)|^2.\end{aligned}$$ This yields (\[eq:LRA\_EL2\_FV'\]).
We will also need the following optimality property of $(r_n,s_n)$:
\[lem:ProdScal\] The functions $(r_n,s_n)$ satisfying (\[eq:LRA\_var\]) are such that: $\forall (r,s) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ $$\|r_n \otimes s_n\|=\frac{\langle r_n \otimes s_n , g_{n-1} \rangle}{\|r_n \otimes s_n\|}
\ge \frac{\langle r \otimes s , g_{n-1} \rangle}{\|r \otimes s\|}.$$
We may assume without loss of generality that $n=1$. The first equality is (\[eq:ortho2\]). To prove the inequality, let us introduce the supremum: $$M=\sup_{(u,v) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y),
\| u \otimes v \|=1} \langle u \otimes v , g \rangle.$$ Using (\[eq:ortho2\]), we have $$\label{eq:3}
\|r_1 \otimes s_1\| = \frac{\langle r_1 \otimes s_1 ,g \rangle}{\|r_1
\otimes s_1\|} \le M,$$ by definition of $M$. On the other hand, consider $(u^k,v^k)_{k \ge 0}$ a maximizing sequence associated to the supremum $M$: $\| u^k \otimes v^k \|=1$ and $\lim_{k \to \infty}
\langle u^k \otimes v^k , g \rangle= M$. We have, using (\[eq:LRA\_var\]), for all $k \ge 0$, $$\begin{aligned}
\|g-r_1 \otimes s_1\|^2
& \le \| g - \langle g , u^k \otimes v^k \rangle \, u^k \otimes v^k \|^2\\
&= \| g \|^2 - \langle g , u^k \otimes v^k \rangle^2,\end{aligned}$$ and, letting $k \to \infty$, $$\label{eq:4}
\|g-r_1 \otimes s_1\|^2 \le \| g \|^2 - M^2.$$ Combining (\[eq:3\]) and (\[eq:4\]), we get $$\begin{aligned}
\|g-r_1 \otimes s_1\|^2
& \le \| g \|^2 - M^2 \\
& \le \| g \|^2 - \|r_1 \otimes s_1 \|^2 \\
& = \| g \|^2 - 2 \langle g , r_1 \otimes s_1 \rangle + \|r_1 \otimes
s_1\|^2\\
&= \| g - r_1 \otimes s_1\|^2\end{aligned}$$ so that all the inequalities are actually equalities. Using the fact that, by (\[eq:3\]), $M \ge 0$, we thus have $$M=\|r_1 \otimes s_1 \|=\frac{\langle r_1 \otimes s_1 ,g \rangle}{\|r_1
\otimes s_1\|}.$$ This concludes the proof.
Some preliminary remarks on the non-variational approach implemented {#sec:nonvar}
--------------------------------------------------------------------
Before we get to the proof of convergence in the next section, let us conclude Section \[sec:presentation\] with some comments relating the theoretical framework developed here to practice.
It is important to already note, although we will return to this in Section \[sec:discussion\] below, that the Euler-Lagrange equations (\[eq:LRA\_EL\]) are indeed the form of the algorithm manipulated in practice by the authors of [@ammar-mokdad-chinesta-keunings-06]. The above variational setting is somewhat difficult to implement in practice. It requires solving for the minimizers of (\[eq:LRA\_var\]) (and (\[eq:LRA\_varo\]) respectively), which can be extremely demanding computationally. In their implementation of the approach (developed independently from the above nonlinear approximation theoretic framework), Ammar [*et al.*]{} therefore propose to search for the iterate $(r_n,s_n)$ (respectively $(r_n^o,s_n^o)$) not as a minimizer of the optimization problems (\[eq:LRA\_var\]) and (\[eq:LRA\_varo\]), but as a solution to the associated Euler-Lagrange equations (first-order optimality conditions). The Pure Greedy Algorithm is thus replaced by: set $f_0=f$, and at iteration $n \ge 1$,
1. Find $r_n \in H^1_0(\Omega_x)$ and $s_n \in H^1_0(\Omega_y)$ such that, for all functions $(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)$, (\[eq:LRA\_EL\_FV\]) (or its equivalent form (\[eq:LRA\_EL\])) holds.
2. Set $f_{n}=f_{n-1} +\Delta ( r_n \otimes s_n)$.
3. If $\|f_{n}\|_{H^{-1}(\Omega)} \ge \varepsilon$, proceed to iteration $n+1$. Otherwise, stop.
The Orthogonal Greedy Algorithm is modified likewise.
As already explained in the introduction, and in sharp contrast to the situation encountered for linear problems, being a solution to the Euler-Lagrange equation does not guarantee being a minimizer in this nonlinear framework. We will point out difficulties originating from this in Section \[sec:discussion\].
In addition to the above theoretical difficulty, and in fact somewhat entangled with it, we have to mention that, of course, the Euler-Lagrange equations (\[eq:LRA\_EL\]), as a nonlinear system, need to be solved iteratively. In [@ammar-mokdad-chinesta-keunings-06], a simple fixed-point procedure is employed: choose $(r_n^0,s_n^0) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ and, at iteration $k \ge 0$, compute $(r_n^{k+1},s_n^{k+1}) \in H^1_0(\Omega_x)\times H^1_0(\Omega_y)$ solution to: $$\label{eq:LRA_FP}
\left\{
\begin{array}{l}
\displaystyle - \int_{\Omega_y} |s_n^k|^2 (r_n^{k+1})'' + \int_{\Omega_y}
|(s_n^k)'|^2 r_n^{k+1} = \int_{\Omega_y} f_{n-1} s_n^k,\\
\\
\displaystyle - \int_{\Omega_x} |r_n^{k+1}|^2 (s_n^{k+1})'' + \int_{\Omega_x}
|(r_n^{k+1})'|^2 s_n^{k+1} = \int_{\Omega_x} f_{n-1} r_n^{k+1},
\end{array}
\right.$$ until convergence is reached. We will also discuss below the convergence properties of this procedure on simple examples.
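For illustration, the greedy outer loop combined with this fixed-point inner solver can be written in a few lines. The sketch below is ours, not the implementation of [@ammar-mokdad-chinesta-keunings-06]: all discretization choices (finite differences, lumped mass on the unit square), grid sizes, and iteration counts are our own, and the fixed point is simply run for a fixed number of sweeps, with no convergence guarantee in general. It solves $-\Delta g = f$ for a rank-one right-hand side.

```python
import numpy as np

N = 50
h = 1.0 / N
x = h * np.arange(1, N)                          # interior nodes
K = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h
M = h * np.eye(N - 1)                            # lumped mass matrix
f = 2 * np.pi**2 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
# exact solution of -Lap g = f with these data: g(x,y) = sin(pi x) sin(pi y)

R = np.zeros((N - 1, 0))                         # columns r_1, ..., r_{n-1}
S = np.zeros((N - 1, 0))                         # columns s_1, ..., s_{n-1}
for n in range(3):                               # greedy iterations
    U = R @ S.T
    res = h * h * f - K @ U @ M - M @ U @ K      # discrete residual (f_{n-1})
    if np.linalg.norm(res) < 1e-10:              # stopping test on f_{n-1}
        break
    s = np.ones(N - 1)                           # fixed-point initial guess
    for _ in range(30):                          # fixed-point sweeps (no guarantee)
        r = np.linalg.solve((s @ M @ s) * K + (s @ K @ s) * M, res @ s)
        s = np.linalg.solve((r @ M @ r) * K + (r @ K @ r) * M, res.T @ r)
    R = np.column_stack([R, r])
    S = np.column_stack([S, s])

g_greedy = R @ S.T
g_exact = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
err = np.max(np.abs(g_greedy - g_exact))         # small: one rank-one term suffices
```

Since the right-hand side is itself a tensor product here, the first greedy step already captures the solution, and the residual test stops the loop.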
\[rem:RHS\] In practice (bearing in mind that the approach has been designed to solve high-dimensional problems), in order for the right-hand side terms in (\[eq:LRA\_FP\]) to be computable, the function $f$ needs to be expressed as a sum of tensor products. Otherwise, computing high-dimensional integrals would be necessary, and this is a task of the same computational complexity as the original Poisson problem. The function $f$ thus needs to enjoy some appropriate separation property with respect to the different coordinates.
If $f$ is not given in such a form, it may be possible to first apply the Singular Value Decomposition algorithm to get a good estimate of $f$ as a sum of tensor products (see Section \[sec:SVD\]).
\[rem:HD\] In dimension $N \ge 2$ (on a parallelepipedic domain $\Omega=\Omega_{x_1} \times \ldots \times \Omega_{x_N}$), the Euler-Lagrange equations become: find functions $(r^1_n,\ldots,r^N_n) \in
H^1_0(\Omega_{x_1})\times \ldots \times H^1_0(\Omega_{x_N})$ such that: for any functions $(r^1,\ldots,r^N) \in H^1_0(\Omega_{x_1})\times \ldots \times H^1_0(\Omega_{x_N})$, $$\begin{aligned}
\int_{\Omega} &\nabla (r^1_n \otimes \ldots \otimes r^N_n) \cdot \sum _{k=1}^N \nabla (r^1_n \otimes \ldots \otimes r^{k-1}_n \otimes r^k \otimes r^{k+1}_n \otimes \ldots \otimes r^N_n) \nonumber\\
& = \int_{\Omega} f_{n-1} \sum _{k=1}^N (r^1_n \otimes \ldots \otimes r^{k-1}_n \otimes r^k \otimes r^{k+1}_n \otimes \ldots \otimes r^N_n).\label{eq:LRA_EL_FV_HD}\end{aligned}$$ This is a nonlinear system of $N$ equations, which only involves one-dimensional integrals, by Fubini's theorem, provided that the data $f$ is expressed as a sum of tensor products (see Remark \[rem:RHS\]).
\[rem:disc\] We presented the algorithms without space discretization, which is required for the practical implementation. In practice, finite element spaces $V^h_x$ (resp. $V^h_y$) are used to discretize $H^1_0(\Omega_x)$ (resp. $H^1_0(\Omega_y)$), where $h>0$ denotes a space discretization parameter. The space-discretized version of (\[eq:LRA\_EL\_FV\]) thus reads: find $(r^h_n,s^h_n) \in V^h_x \times V^h_y$ such that, for any functions $(r^h,s^h) \in V^h_x \times V^h_y$ $$\label{eq:LRA_EL_FV_disc}
\int_{\Omega} \nabla (r^h_n \otimes s^h_n) \cdot \nabla (r^h_n \otimes s^h + r^h \otimes
s^h_n) = \int_{\Omega} f^h_{n-1} (r^h_n \otimes s^h + r^h \otimes s^h_n).$$
Convergence {#sec:convergence}
===========
To start with, we prove that the approach converges. Then we will turn to the rate of convergence.
Convergence of the method
-------------------------
\[theo:CV\_PGA\] [**\[Pure Greedy Algorithm\]**]{}
Consider the Pure Greedy Algorithm, and assume first that $(r_n,s_n)$ satisfies the Euler-Lagrange equations (\[eq:LRA\_EL\_FV\]). Denote by $$E_n=\frac{1}{2}\int_\Omega |\nabla (r_n \otimes s_n)|^2 - \int_\Omega f_{n-1} \,r_n \otimes s_n$$ the energy at iteration $n$. We have $$\label{eq:serCV}
\displaystyle\sum_n \int_\Omega |\nabla (r_n \otimes
s_n)|^2 = - 2 \sum_n E_n < \infty.$$ Assume in addition that $(r_n,s_n)$ is a minimizer of (\[eq:LRA\_var\]). Then, $$\label{eq:gCV}
\lim_{n \to \infty} g_n = 0 \text{ in $H^1_0(\Omega)$.}$$
Immediate consequences of (\[eq:serCV\]) and (\[eq:gCV\]) are $$\lim_{n \to \infty} E_n = \lim_{n \to \infty} \| r_n \otimes
s_n\| = 0,$$ and $$\lim_{n \to \infty} f_n = 0 \text{ in $H^{-1}(\Omega)$}.$$
Let us first suppose that $(r_n,s_n)$ only satisfies the Euler-Lagrange equations (\[eq:LRA\_EL\_FV\]). We notice that, using (\[eq:ortho2\]), $$\begin{aligned}
\|g_{n-1}\|^2
&=\|g_{n} + r_n \otimes s_n\|^2 \nonumber\\
&=\|g_{n}\|^2 + \|r_n \otimes s_n\|^2. \label{eq:pyth}\end{aligned}$$ Thus, $\|g_n\|^2$ is a nonnegative nonincreasing sequence. Hence it converges. This implies that $\sum_n \int_\Omega |\nabla (r_n \otimes s_n)|^2 < \infty$.
Next, notice that $$\begin{aligned}
E_n
&= \frac{1}{2}\int_\Omega |\nabla (r_n \otimes s_n)|^2 - \int_\Omega f_{n-1}
\,r_n \otimes s_n\\
&=\frac{1}{2}\int_\Omega |\nabla (r_n \otimes s_n)|^2 - \int_\Omega \nabla
g_{n-1} \cdot
\nabla (r_n \otimes s_n)\\
&=- \frac{1}{2}\int_\Omega |\nabla (r_n \otimes s_n)|^2,\end{aligned}$$ since by , $\int_\Omega \nabla
g_{n-1} \cdot
\nabla (r_n \otimes s_n)=\int_\Omega |\nabla (r_n \otimes
s_n)|^2$. This proves the first part of the theorem. At this stage, we have only used that $(r_n,s_n)$ satisfies the Euler-Lagrange equations (\[eq:LRA\_EL\_FV\]).
To conclude that $\lim_{n \to \infty} f_n= 0$, we now need to assume that $(r_n,s_n)$ indeed satisfies the minimization problem (\[eq:LRA\_var\]). We know that $\|g_n\|^2$ is a bounded sequence, and therefore, we may assume (up to the extraction of a subsequence) that $g_n$ converges weakly in $H^1_0(\Omega)$ to some $g_\infty \in H^1_0(\Omega)$. For any $n \ge 1$ and for any functions $(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)$, $$\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega \nabla g_{n-1}
\cdot \nabla (r
\otimes s )\ge E_n.$$ Passing to the limit in this inequality (recall that $\lim_{n \to \infty} E_n = 0$), we have $$\frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2 - \int_\Omega \nabla g_\infty \cdot
\nabla (r \otimes s )\ge 0.$$ This implies that for any functions $(r,s) \in H^1_0(\Omega_x) \times
H^1_0(\Omega_y)$, $$\int_\Omega \nabla g_\infty \cdot \nabla (r \otimes s)= 0.$$ Thus, by Lemma \[lem:distrib\], $-\Delta g_\infty= 0$ in the distributional sense, which, since $g_\infty \in H^1_0(\Omega)$, implies $g_\infty=0$. This shows that there is only one possible limit for the subsequence $g_n$ and thus that the whole sequence itself weakly converges to $0$.
The convergence of $g_n$ to $0$ is actually strong in $H^1_0(\Omega)$. The argument we use here is taken from [@jones-87]. Using Lemma \[lem:ProdScal\], we have: for any $n \ge m \ge 0$ $$\begin{aligned}
\| g_{n} - g_m \|^2
&=\| g_{n} \|^2 + \| g_m \|^2 - 2 \left\langle g_n , \left(g_n + \sum_{k=m+1}^n r_k
\otimes s_k\right) \right\rangle\\
&=\| g_{n} \|^2 + \| g_m \|^2 - 2 \| g_n \|^2 - 2 \sum_{k=m+1}^n
\langle g_n, r_k \otimes s_k \rangle\\
&\le - \| g_{n} \|^2 + \| g_m \|^2 + 2 \sum_{k=m+1}^n
\|r_k \otimes s_k\| \|r_{n+1} \otimes s_{n+1}\|.\end{aligned}$$ Define $\phi(1)=1$, $\phi(2)=\min \{n>\phi(1), \ \|r_n \otimes s_n\| \le \|r_{\phi(1)} \otimes s_{\phi(1)}\| \}$, and, by induction, $$\phi(k+1)=\min \{n>\phi(k), \ \|r_n \otimes s_n\| \le \|r_{\phi(k)} \otimes s_{\phi(k)}\| \}.$$ Notice that $\lim_{k \to \infty} \phi(k)=\infty$ since $\lim_{k \to
\infty} \|r_k \otimes s_k\|=0$. For example, if $(\|r_k \otimes s_k\|)_{k \ge 1}$ is a decreasing sequence, then $\phi(k)=k$. Now, we have: for any $l \ge k \ge 0$ $$\begin{aligned}
\| g_{\phi(l)-1} - g_{\phi(k)-1} \|^2
&\le - \| g_{\phi(l)-1} \|^2 + \| g_{\phi(k)-1} \|^2 + 2
\sum_{i=\phi(k)}^{\phi(l)-1} \|r_i \otimes s_i\| \|r_{\phi(l)} \otimes
s_{\phi(l)}\|\\
&\le - \| g_{\phi(l)-1} \|^2 + \| g_{\phi(k)-1} \|^2 + 2
\sum_{i=\phi(k)}^{\phi(l)-1} \|r_i \otimes s_i\|^2.\end{aligned}$$ Since $\sum_{k \ge 1} \|r_k \otimes s_k\|^2 < \infty$ and $(\|g_n\|)_{n
\ge 1}$ is converging, the previous inequality shows that the subsequence $(g_{\phi(l)-1})_{l \ge 0}$ is a Cauchy sequence, and therefore strongly converges to $0$ (recall it is already known that $g_n$ weakly converges to $0$). Since $\|g_n\|$ is itself a converging sequence, this shows that $$\lim_{n \to \infty} \|g_n\| = 0.$$
A similar result holds for the Orthogonal Greedy Algorithm.
\[theo:CV\_OGA\] [**\[Orthogonal Greedy Algorithm\]**]{}
Consider the Orthogonal Greedy Algorithm, and assume first that $(r_n^o,s_n^o)$ only satisfies the Euler-Lagrange equations associated with (\[eq:LRA\_varo\]) (thus with $(r_n,s_n,f_{n-1})=(r_n^o,s_n^o,f_{n-1}^o)$ in (\[eq:LRA\_EL\_FV\])). Denote by $$E_n^o=\frac{1}{2}\int_\Omega |\nabla (r_n^o \otimes s_n^o)|^2 - \int_\Omega f_{n-1}^o \,r_n^o \otimes s_n^o$$ the energy at iteration $n$. We have $$\label{eq:serCVo}
\displaystyle\sum_n \int_\Omega |\nabla (r_n^o \otimes
s_n^o)|^2 = - 2 \sum_n E_n^o < \infty.$$ Assume in addition that $(r_n^o,s_n^o)$ is indeed a minimizer of the optimization problem (\[eq:LRA\_varo\]). Then, $$\label{eq:gCVo}
\lim_{n \to \infty} g_n^o = 0 \text{ in $H^1_0(\Omega)$.}$$
Immediate consequences of (\[eq:serCVo\]) and (\[eq:gCVo\]) are $$\lim_{n \to \infty} E_n^o = \lim_{n \to \infty} \| r_n^o \otimes
s_n^o\| = 0,$$ and $$\lim_{n \to \infty} f_n^o = 0 \text{ in $H^{-1}(\Omega)$}.$$
Let us first assume that $(r_n^o,s_n^o)$ only satisfies the Euler-Lagrange equations (with $(r_n,s_n,f_{n-1})=(r_n^o,s_n^o,f_{n-1}^o)$ in (\[eq:LRA\_EL\_FV\])). Notice that, by (\[eq:LRA\_galo\]) and the analogue of (\[eq:ortho2\]): $$\begin{aligned}
\|g_{n}^o\|^2
&=\bigg\|g - \sum_{k=1}^n \alpha_k r_k^o \otimes s_k^o \bigg\|^2\\
&\le \|g_{n-1}^o - r_n^o \otimes s_n^o \|^2\\
&=\|g_{n-1}^o\|^2 - \|r_n^o \otimes s_n^o \|^2.\end{aligned}$$ Thus, $\|g_{n}^o\|^2$ is a nonnegative non increasing sequence. Hence it converges. This implies that $\sum_{k \ge 1} \|r_k^o \otimes s_k^o \|^2 < \infty$, and proves the first part of the theorem, using the same arguments as in the proof of Theorem \[theo:CV\_PGA\].
Let us now assume in addition that $(r_n^o,s_n^o)$ is a minimizer of (\[eq:LRA\_varo\]). For fixed $r$ and $s$, we derive from (\[eq:LRA\_varo\]): $$- \frac{1}{2}\int_\Omega |\nabla( r_n^o \otimes s_n^o)|^2 = \frac{1}{2}\int_\Omega |\nabla (r_n^o \otimes s_n^o)|^2 - \int_\Omega f_{n-1}^o
\,r_n^o \otimes s_n^o \le \frac{1}{2}\int_\Omega |\nabla (r \otimes s)|^2
- \int_\Omega f_{n-1}^o \,r \otimes s.$$ Letting $n$ go to infinity, and using the same arguments as in the proof of Theorem \[theo:CV\_PGA\], this implies that $g_n^o$ weakly converges to $0$ in $H^1_0(\Omega)$. The proof of the strong convergence of $g_n^o$ to zero is then easy since, using the Euler-Lagrange equations associated with (\[eq:LRA\_galo\]): $$\|g_n^o\|^2 = \langle g_n^o , g \rangle,$$ and the right-hand side converges to $0$.
Rate of convergence of the method
---------------------------------
We now present an estimate of the rate of convergence for both the Pure and the Orthogonal Greedy Algorithms. These results are borrowed from [@devore-temlyakov-96]. We begin by only citing the result for the Pure Greedy Algorithm. On the other hand, with a view to showing the typical mathematical ingredients at play, we outline the convergence proof for the Orthogonal Greedy Algorithm, contained in the original article [@devore-temlyakov-96].
We first need to introduce a functional space adapted to the convergence analysis (see [@barron-cohen-dahmen-devore-08; @devore-temlyakov-96]).
We define the ${\mathcal L}^1$ space as $${\mathcal L}^1=\left\{g=\sum_{k \ge 0} c_k u_k \otimes v_k,
\text{ where $u_k \in H^1_0(\Omega_x)$, $v_k \in H^1_0(\Omega_y)$, $\|u_k \otimes v_k\|=1$ and $\sum_{k \ge 0} |c_k| < \infty$} \right\},$$ and we define the ${\mathcal L}^1$-norm as $$\|g\|_{{\mathcal L}^1}= \inf \left\{ \sum_{k \ge 0} |c_k|, g=\sum_{k \ge 0}
c_k u_k \otimes v_k, \text{ where $\|u_k \otimes v_k\|=1$} \right\},$$ for $g \in {\mathcal L}^1$.
The following properties may readily be established:
- The space ${\mathcal L}^1$ is a Banach space.
- The space ${\mathcal L}^1$ is continuously embedded in $H^1_0(\Omega)$.
Notice that, in the definition of ${\mathcal L}^1$, the function $g=\sum_{k \ge 0} c_k u_k \otimes v_k$ is indeed well defined in $H^1_0(\Omega)$ as a normally convergent series. This also shows that ${\mathcal L}^1 \subset H^1_0(\Omega)$, and this embedding is continuous by the triangle inequality $\|\sum_{k \ge 0} c_k u_k \otimes v_k\| \le \sum_{k \ge 0} |c_k|$.
We do not know if there exists a simple characterization of functions in ${\mathcal L}^1$. Let us however give simple examples of such functions.
For any $m>2$, $H^m(\Omega) \cap H^1_0(\Omega) \subset {\mathcal L}^1$.
Without loss of generality, consider the case $\Omega_x=\Omega_y=(0,1)$. Using the fact that $\left\lbrace \phi_k \otimes \phi_l, k,l \ge 1 \right \rbrace$, where $\phi_k(x) = \sqrt{2} \sin ( k \pi x)$, is an orthonormal basis of $L^2(\Omega)$, we can write any function $g \in L^2(\Omega)$ as the series $g=\sum_{k,l \ge 1} g_{k,l} \phi_k \otimes
\phi_l$, where $g_{k,l} = \int_{\Omega} g \, \phi_k \otimes \phi_l$. It is well known that $$g \in H^1_0(\Omega) \iff \sum_{k,l \ge 1} |g_{k,l}|^2 (k^2+l^2) < \infty$$ and, more generally, for any $m \ge 1$, $$g \in H^m(\Omega) \cap H^1_0(\Omega) \iff \sum_{k,l \ge 1} |g_{k,l}|^2
(k^2+l^2)^m < \infty.$$ On the other hand, $$\begin{aligned}
\|g\|_{\mathcal L^1}
&=\bigg\|\sum_{k,l \ge 1} g_{k,l} \phi_k \otimes \phi_l\bigg\|_{\mathcal L^1} \\
&=\bigg\|\sum_{k,l \ge 1} g_{k,l} \| \phi_k \otimes \phi_l\| \frac{\phi_k \otimes \phi_l}{\| \phi_k \otimes \phi_l\|}\bigg\|_{\mathcal L^1} \\
&\le \sum_{k,l \ge 1} |g_{k,l}| \pi \sqrt{k^2 + l^2},\end{aligned}$$ since $\| \phi_k \otimes \phi_l\|= \pi \sqrt{k^2 + l^2}$. Thus, by the Hölder inequality, we have, for any $m > 2$, if $g \in H^m(\Omega) \cap H^1_0(\Omega)$, $$\begin{aligned}
\|g\|_{\mathcal L^1}
&\le \pi \sum_{k,l \ge 1} |g_{k,l}| (k^2 + l^2)^{m/2} (k^2 + l^2)^{(1-m)/2}\\
&\le \pi \left( \sum_{k,l \ge 1} |g_{k,l}|^2 (k^2+l^2)^m \right)^{1/2} \left( \sum_{k,l \ge 1} (k^2+l^2)^{1-m} \right)^{1/2}\\
& < \infty,\end{aligned}$$ since $\sum_{k,l \ge 1} (k^2+l^2)^{1-m}< \infty$ as soon as $m>2$.
\[rem:L1\] More generally, in dimension $N \ge 2$, the same proof shows that: for any $m>1+N/2$, $H^m(\Omega) \cap H^1_0(\Omega) \subset {\mathcal L}^1$.
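The norm identity $\| \phi_k \otimes \phi_l\|= \pi \sqrt{k^2 + l^2}$ used in the computation above can be sanity-checked numerically. The following Python sketch (ours, purely illustrative; it only relies on numpy) evaluates the $H^1_0$ norm through one-dimensional quadratures and Fubini:

```python
import numpy as np

# Numerical check (illustrative) of ||phi_k (x) phi_l||_{H^1_0} = pi*sqrt(k^2+l^2),
# with phi_k(x) = sqrt(2) sin(k pi x) on Omega = (0,1)^2.
k, l = 3, 5
x = np.linspace(0.0, 1.0, 4001)
h = x[1] - x[0]
trap = lambda f: (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * h  # trapezoid rule
phi  = lambda m: np.sqrt(2.0) * np.sin(m * np.pi * x)
dphi = lambda m: np.sqrt(2.0) * m * np.pi * np.cos(m * np.pi * x)
# By Fubini, the squared H^1_0 norm splits into one-dimensional integrals:
norm2 = trap(dphi(k)**2) * trap(phi(l)**2) + trap(phi(k)**2) * trap(dphi(l)**2)
print(np.sqrt(norm2), np.pi * np.hypot(k, l))   # the two values agree
```

The trapezoid rule is essentially exact here, since the integrands are trigonometric polynomials sampled over whole periods.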
Let us now give the rate of convergence of the Pure Greedy Algorithm. For the details of the proof, we again refer to [@devore-temlyakov-96]. The proof is based on the following fundamental lemma:
\[lem:DVT\] Let us assume that $g \in {\mathcal L}^1$. Then, for any $n \ge 0$, $g_n
\in {\mathcal L}^1$ and we have: $$\|r_{n+1} \otimes s_{n+1}\| = \frac{\langle g_{n} , r_{n+1} \otimes s_{n+1} \rangle}{\| r_{n+1}
\otimes s_{n+1} \|} \ge \frac{\|g_n\|^2}{\|g_n\|_{{\mathcal L}^1}}.$$
The following technical result (easily obtained by induction) is also needed.
\[lem:suite\] Let $(a_n)_{n \ge 1}$ be a sequence of non-negative real numbers and $A$ a positive real number such that $a_1 \le A$ and $a_{n+1} \le a_n \left(1- \frac{a_n}{A}\right)$. Then, $\forall n \ge 1$, $$a_n \le \frac{A}{n}.$$
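The bound in this lemma can be sanity-checked numerically (this is of course no substitute for the induction proof). The following sketch takes equality in the recursion, which is the worst admissible case once $a_n \le A/2$:

```python
# Quick numerical check (not a proof) of the lemma: take equality in the
# recursion, a_{n+1} = a_n (1 - a_n / A), and verify the bound a_n <= A / n.
A = 3.0
a = 2.0                      # any a_1 <= A
for n in range(1, 10000):
    assert a <= A / n        # the bound of the lemma
    a *= 1.0 - a / A         # recursion (equality case)
print("a_n <= A/n checked for n < 10000")
```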
Using Lemma \[lem:DVT\] and Lemma \[lem:suite\], it is possible to show:
For $g \in {\mathcal L}^1$, we have $$\label{eq:RC}
\| g_n \| \le \|g\|^{2/3} \|g\|_{{\mathcal L}^1}^{1/3} n^{-1/6}.$$
A better rate of convergence can be proven for the Orthogonal Greedy Algorithm. For the Orthogonal Greedy Algorithm, the following Lemma plays the role of Lemma \[lem:DVT\].
\[lem:DVTo\] Assume that $g \in {\mathcal L}^1$. Then, for any $n \ge 0$, $g_n^o
\in {\mathcal L}^1$ and we have: $$\|r_{n+1}^o \otimes s_{n+1}^o\| = \frac{\langle g_{n}^o , r_{n+1}^o \otimes s_{n+1}^o \rangle}{\| r_{n+1}^o
\otimes s_{n+1}^o \|} \ge \frac{\|g_n^o\|^2}{\|g\|_{{\mathcal L}^1}}.$$
Since $g_n^o=g - \sum_{k=1}^{n} \alpha_k r_k^o \otimes s_k^o$, it is clear that $g_n^o
\in {\mathcal L}^1$. The equality $\|r_{n+1}^o \otimes s_{n+1}^o\|
=\frac{\langle g_{n}^o , r_{n+1}^o \otimes s_{n+1}^o \rangle}{\|
r_{n+1}^o \otimes s_{n+1}^o \|}$ is obtained as a consequence of the Euler-Lagrange equations associated to the optimization problem on $(r_{n+1}^o,s_{n+1}^o)$ (see ).
Since $g \in {\mathcal L}^1$, for any $\varepsilon >0$, we can write $g=\sum_{k \ge 0} c_k u_k
\otimes v_k$ with $\|u_k \otimes v_k\|=1$, and $\sum_{k \ge 0} |c_k| \le
\|g\|_{{\mathcal L}^1} + \varepsilon$. By , we have $\langle g- g_n^o , g_n^o \rangle=0$, and therefore, using Lemma \[lem:ProdScal\]: $$\begin{aligned}
\|g_n^o\|^2
&= \langle g_n^o , g \rangle \\
&= \left\langle g_n^o , \sum_{k \ge 0} c_k u_k \otimes v_k \right\rangle \\
&= \sum_{k \ge 0} c_k \langle g_n^o ,u_k \otimes v_k \rangle \\
& \le \sum_{k \ge 0} |c_k| \frac{\langle g_n^o ,r_{n+1}^o \otimes s_{n+1}^o \rangle}{\| r_{n+1}^o
\otimes s_{n+1}^o \|} \\
& \le (\|g\|_{{\mathcal L}^1} + \varepsilon) \frac{\langle g_n^o ,r_{n+1}^o \otimes s_{n+1}^o \rangle}{\| r_{n+1}^o
\otimes s_{n+1}^o \|},\end{aligned}$$ from which we conclude by letting $\varepsilon$ tend to zero.
For $g \in {\mathcal L}^1$, we have $$\label{eq:RCo}
\| g_n^o \| \le \|g\|_{{\mathcal L}^1} \, n^{-1/2}.$$
We have, using and Lemma \[lem:DVTo\]: $$\begin{aligned}
\|g_{n+1}^o \|^2
&=\bigg\|g - \sum_{k=1}^{n+1} \alpha_k r_k^o \otimes s_k^o \bigg\|^2\\
& \le \|g_{n}^o - r_{n+1}^o \otimes s_{n+1}^o\|^2 \nonumber \\
&=\|g_{n}^o\|^2 - \|r_{n+1}^o \otimes s_{n+1}^o\|^2 \nonumber \\
&=\|g_{n}^o\|^2 \left( 1 - \frac{\|r_{n+1}^o \otimes s_{n+1}^o\|^2 }{\|g_{n}^o\|^2} \right) \\
&\le \|g_{n}^o\|^2 \left( 1 - \frac{\|g_n^o\|^2}{\|g\|_{{\mathcal L}^1}^2}
\right).\end{aligned}$$ The conclusion follows by applying Lemma \[lem:suite\] with $a_{n}=\|g_{n-1}^o\|^2$ and $A=\|g\|_{{\mathcal L}^1}^2$.
The rate of convergence of the Pure Greedy Algorithm in may be improved to $n^{-11/62}$ [@konyagin-temlyakov-99]. For both algorithms, it is known that there exist dictionaries and right-hand sides $f$ (even simple ones, like a sum of only two elements of the dictionary) such that the rate of convergence $n^{-1/2}$ is attained (see [@livshitz-temlyakov-03; @devore-temlyakov-96; @barron-cohen-dahmen-devore-08]). In that sense, the Orthogonal Greedy Algorithm realizes the optimal rate of convergence. Notice that this rate of convergence does not depend on the dimension of the problem. However, the assumption $g
\in {\mathcal L}^1$ seems to be more and more demanding, in terms of regularity, as the dimension increases (see Remark \[rem:L1\]).
Discussion and open problems {#sec:discussion}
============================
We begin this section by considering the case when the Laplace operator is replaced by the identity operator. On this simplified case, we examine the discrepancy between the variational approach, which minimizes the energy, and the non-variational approach, which solves the Euler-Lagrange equations.
The Singular Value Decomposition case {#sec:SVD}
-------------------------------------
The algorithms we have presented above are closely related to the Singular Value Decomposition (SVD, also called [*rank one decomposition*]{}). More precisely, omitting the gradient in the optimization problem yields: find $r_n \in L^2(\Omega_x)$ and $s_n \in L^2(\Omega_y)$ such that $$\label{eq:SVD_var}
(r_n,s_n) = \arg\min_{(r,s) \in L^2(\Omega_x)\times L^2(\Omega_y)} \int_\Omega |g_{n-1} - r \otimes s|^2,$$ with the recursion relation $$g_n=g_{n-1} - r_n \otimes s_n,$$ and $g_0=g$.
By the exact same arguments as in the previous sections, the series $\sum_{n \ge 1} r_n \otimes s_n$ can be shown to converge to $g$ in $L^2(\Omega)$. This problem has a well-known companion discrete problem, namely the SVD of a matrix (see for example [@trefethen-bau-97]). This corresponds to the case $\Omega_x=\{1, \ldots ,p\}$, $\Omega_y=\{1, \ldots ,q\}$, where the integral $\int_\Omega$ is replaced by the discrete sum $\sum_{(i,j) \in \{1, \ldots ,p\}\times\{1, \ldots ,q\}}$, $G$ is a matrix in $\R^{p \times q}$ and $(R_n,S_n)$ are two (column) vectors in $\R^p \times \R^q$. In this case the tensor product $R_n \otimes S_n$ is simply the matrix $R_n (S_n)^T$. The matrices $G_n \in \R^{p \times q}$ are then defined by recursion: $G_0=G$ and $G_n=G_{n-1} - R_n (S_n)^T$.
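In the discrete case, each step of the algorithm extracts the best rank-one approximation of $G_{n-1}$ in the Frobenius norm, which (by the Eckart–Young theorem) is the dominant singular pair. The following numpy sketch (ours, for illustration only) runs the greedy decomposition on a small random matrix:

```python
import numpy as np

# Illustrative sketch (ours) of the discrete greedy algorithm in the SVD case:
# the minimizer of ||G_{n-1} - R S^T||_F is the dominant singular pair of
# G_{n-1}, and the residual loses one rank per iteration.
rng = np.random.default_rng(0)
p, q = 5, 3
G = rng.standard_normal((p, q))
Gn = G.copy()
R, S = [], []
for n in range(min(p, q)):          # finite termination in min(p,q) steps
    U, sig, Vt = np.linalg.svd(Gn)
    R.append(sig[0] * U[:, 0])
    S.append(Vt[0, :])
    Gn = Gn - np.outer(R[-1], S[-1])
print(np.linalg.norm(Gn))           # ~ machine precision: G = sum_n R_n S_n^T
```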
### Orthogonality property
An important property of the sequence $(r_n,s_n)$ generated by the algorithm in the SVD case is the orthogonality relation: if $n \neq m$ $$\label{eq:SVD_ortho}
\int_{\Omega_x} r_n r_m= \int_{\Omega_y} s_n s_m=0.$$ In order to check this, let us first write the Euler-Lagrange equations in the SVD case (compare with ): for any functions $(r,s) \in L^2(\Omega_x)\times L^2(\Omega_y)$, $$\label{eq:SVD_EL_FV}
\int_\Omega r_n \otimes s_n (r_n \otimes s + r \otimes s_n) = \int_\Omega g_{n-1} (r_n \otimes s + r \otimes s_n).$$ This also reads (compare with ): $$\left\{
\begin{array}{l}
\displaystyle\left(\int_{\Omega_y} |s_n|^2\right)\, r_n = \int_{\Omega_y}
g_{n-1}\, s_n,\\
\\
\displaystyle\left(\int_{\Omega_x}
|r_n|^2 \right) \, s_n =\int_{\Omega_x} g_{n-1}\, r_n.
\end{array}
\right.$$ It is immediate to see that, for $n=1$ and $n=2$, this implies $$\int_\Omega (r_2 \otimes s_2) (r_2 \otimes s_1) = \int_\Omega (r_2 \otimes s_2) (r_1 \otimes s_2) = 0.$$ Likewise, it can be shown, for any $n \ge 2$ and any $l \in \{2, \ldots
n \}$ $$\label{eq:SVD_vrai_ortho}
\int_\Omega \sum_{k=l}^n (r_k \otimes s_k)
\, (r_n \otimes s_{l-1})=\int_\Omega \sum_{k=l}^n (r_k \otimes s_k) \, (r_{l-1} \otimes s_n)=0.$$ The orthogonality property is then easy to check using the Fubini Theorem and arguing by induction.
A simple consequence of the orthogonality of the functions obtained by the algorithm is that, in the discrete version (SVD of a matrix $G \in
\R^{p \times q}$), the algorithm converges in a finite number of iterations (namely $\min(p,q)$, the maximal possible rank of $G$). As usual in this situation, practice may significantly deviate from the above theory when round-off errors due to floating-point computations are taken into account. This is especially true if the matrix is ill-conditioned.
### Consequences of the orthogonality property
The orthogonality property has several consequences: Assume the SVD to be nondegenerate in the sense $$\label{eq:nondeg}
g=\sum_{n \ge 1} \lambda_{n} \,u_n \otimes v_n,$$ with $$\label{eq:nondeg2}
\int_{\Omega_x} u_n u_m=\int_{\Omega_y} v_n v_m=\delta_{n,m},\forall n,m,\ \hbox{\rm and}\,
\left(\lambda_n\right)_{n \ge 1}\hbox{\rm positive, strictly decreasing,}$$ where $\delta_{n,m}$ is the Kronecker symbol. Then
- \(i) The Pure Greedy Algorithm and the Orthogonal Greedy Algorithm are equivalent to one another in the SVD case.
- \(ii) The SVD decomposition $g=\sum_{n \ge 1} r_n \otimes s_n$ is unique.
- \(iii) At iteration $n$, $\sum_{k=1}^n r_k \otimes s_k$ is the minimizer of $\int_\Omega |g - \sum_{k=1}^n \phi_k \otimes \psi_k|^2$ over all possible $(\phi_k,\psi_k)_{1 \le k \le n} \in \left( L^2(\Omega_x)\times L^2(\Omega_y) \right)^n$.
In addition, simple arguments show that,
- \(iv) The only solutions to the Euler Lagrange equations are the null solution $(0,0)$ and the tensor products $\lambda_n u_n
\otimes v_n$ (for all $n \ge 1$) in the SVD decomposition of $g$.
- \(v) The solutions to the Euler-Lagrange equations which maximize the $L^2$-norm $\left(\int_{\Omega} |r \otimes s|^2\right)^{1/2}$ are exactly the solutions to the variational problem .
- \(vi) In dimension $N=2$, the solutions to the Euler-Lagrange equation that satisfy the second order optimality conditions are exactly the solutions of the original variational problem .
Notice that there is no loss of generality in assuming $\lambda_n >0$, and $(\lambda_n)_{n \ge 1}$ decreasing in (\[eq:nondeg\]) (up to a change of the $(u_n,v_n)$). The fundamental nondegeneracy assumption is thus that $\lambda_n \neq \lambda_m$ if $n \neq m$. When the decomposition has some degeneracy ([*i.e.*]{} several $n$ correspond to the same $\lambda_n$ in (\[eq:nondeg\])) then properties (i)-(iii)-(v)-(vi) still hold true. On the other hand, in (ii) the SVD is only unique up to rotations within eigenspaces, and property (iv) must be modified accordingly. In short, the only other solutions beyond those mentioned above consist of tensor products of linear combinations of functions within a given eigenspace. We skip such technicalities. The degenerate case indeed does not differ much from the nondegenerate case above, in the sense that a complete understanding of the algorithm, both in its variational and in its non-variational forms, is at hand.
Let us briefly outline the proofs of assertions (iv)-(v)-(vi).
We first prove assertion (iv). It is sufficient to consider the first iteration of the algorithm. Using the SVD decomposition of $g$, the Euler-Lagrange equations read: for any functions $(r,s) \in L^2(\Omega_x)\times L^2(\Omega_y)$, $$\int_\Omega r_1 \otimes s_1 (r_1 \otimes s + r \otimes s_1) = \sum_{n \ge 1} \lambda_n \int_\Omega u_n \otimes v_n (r \otimes s_1 + r_1 \otimes
s).$$ Using the orthogonality property, and successively $(r,s)=(0,v_n)$ and $(r,s)=(u_n,0)$ as test functions, we get $$\left\{
\begin{array}{l}
\displaystyle \int_{\Omega_x} |r_1|^2 \int_{\Omega_y} s_1 v_n = \lambda_n \int_{\Omega_x} r_1 u_n,\\
\\
\displaystyle \int_{\Omega_y} |s_1|^2 \int_{\Omega_x} r_1 u_n = \lambda_n \int_{\Omega_y} s_1 v_n,
\end{array}
\right.$$ which yields: $\forall n \ge 1$ $$\int_{\Omega_y} s_1 v_n \int_{\Omega_x} r_1 u_n \left(\int_{\Omega_x} |r_1|^2 \int_{\Omega_y} |s_1|^2 - (\lambda_n)^2 \right)=0.$$ Since for $n \neq m$, $\lambda_n \neq \lambda_m$, this shows that either $r_1 \otimes s_1 = 0$, or there exists a unique $n_0$ such that $\lambda_{n_0} = \sqrt{ \int_{\Omega} |r_1 \otimes s_1|^2}$ and $\forall n \neq n_0$, $\int_{\Omega_y} s_1 v_n = \int_{\Omega_x} r_1 u_n=0$ (because the product $\int_{\Omega_y} s_1 v_n \int_{\Omega_x} r_1 u_n$ vanishes, and then each factor vanishes separately, by the Euler-Lagrange equations). Since, by the Euler-Lagrange equations, $r_1$ (resp. $s_1$) can be decomposed on the set of orthogonal functions $(u_n,\, n \ge 1)$ (resp. $(v_n,\, n \ge 1)$), we get $r_1 \otimes s_1 = \lambda_{n_0} u_{n_0} \otimes v_{n_0}$, which concludes the proof of assertion (iv). Assertion (v) is readily obtained using (iv) and the orthogonality property. Notice that assertion (ii) is a consequence of assertions (iv)-(v). To prove assertion (vi), we recall that the second order optimality condition reads (see Lemma \[lem:EL2\], adapted to the SVD case): $\forall (r,s) \in L^2(\Omega_x) \times L^2(\Omega_y)$, $$\label{eq:EL2_SVD}
\left(\int_\Omega ( r_n \otimes s_n - g_{n}) r \otimes s \right)^2 \le \int_\Omega | r
\otimes s_n|^2\ \int_\Omega |r_n \otimes s|^2.$$ It is again enough to consider the case $n=1$. Let us consider a solution of the Euler-Lagrange equation: $r_1 \otimes s_1 = \lambda_{n_0} u_{n_0} \otimes v_{n_0}$, and let us take as test functions in $(r,s)=(u_n,v_n)$, for all $n \ge 1$. We obtain that for all $n \ge 1$, $(\lambda_{n})^2 \le (\lambda_{n_0})^2$ which concludes the proof of assertion (vi). Notice that in dimension $N \ge 3$, assertion (vi) seemingly does not hold: the solutions to the Euler-Lagrange equation that satisfy the second order optimality conditions may not necessarily be global minimizers.
### Link between the Euler-Lagrange equations and the variational problem
Properties (iv)-(v)-(vi) above tend to indicate that, at least in the SVD case, considering the solutions to the Euler-Lagrange equations is somehow close to considering the minimization problems. Indeed, if we assume that at each iteration nonzero solutions of the Euler-Lagrange equations are obtained (of course under the assumption $g_{n-1} \neq 0$ in ), then the non-variational form of the algorithm, if it converges, will eventually provide the correct decomposition. We would, however, like to mention two practical difficulties.
First, it is not clear in practice how to compute the norm $\|g_n\|$ to check the convergence, since this is in general a high dimensional integral. A more realistic convergence criterion would read: $\|r_n \otimes s_n\|$ [*is small compared to*]{} $\left\|\sum_{k=1}^{n-1} r_k \otimes
s_k\right\|$. However, using this criterion, it is possible to erroneously conclude that the algorithm has converged, while a term with an arbitrarily large contribution has been missed. Indeed, consider again, to convey the idea, the case (\[eq:nondeg\])-(\[eq:nondeg2\]). Assume that the tensor product $\lambda_2u_2\otimes v_2$ is picked at the first iteration (*instead of* the tensor product $\lambda_1u_1\otimes v_1$ which would be selected by the *variational* version of the algorithm). Assume similarly that $\lambda_3u_3\otimes v_3$ is picked at the second iteration, and so on and so forth. In such a situation, one would then conclude that the series $\displaystyle\sum_{n\geq 2}\lambda_nu_n\otimes v_n$ solves the problem, while it obviously does not. We will show below (see Section \[sec:resol-EL\]) that, in the simple fixed-point procedure we have described above to solve the nonlinear Euler-Lagrange equations, the fact that $\lambda_1 u_1 \otimes v_1$ is missed, and never obtained as a solution, may indeed happen as soon as the initial condition of the iterative procedure has a zero component on the eigenspace associated to $\lambda_1$.
Second, without an additional assumption reminiscent of the minimizing character of the solution, iteratively solving the Euler-Lagrange equations may result in picking the tensor products $\lambda_n u_n
\otimes v_n$ in an order not appropriate for computational efficiency. Such an assumption is present in assertions (v) and (vi). As an illustration, let us indeed consider a SVD decomposition $$g=\sum_{n\ge 1}\,\lambda_n u_n\otimes v_n$$ for some functions $u_n$ and $v_n$ that become highly oscillatory as $n$ grows. It is clear that we may obtain an error in $H^1$ norm that is arbitrarily large at each iteration of the algorithm. In particular, it may happen (especially if smooth functions are chosen as initial guesses for the nonlinear iteration loop solving the Euler-Lagrange equation) that the highly oscillatory products are only selected in the latest iterations, although they contribute to the error in a major way. A poor efficiency of the algorithm follows. Reaching computational efficiency therefore inevitably requires accounting for some additional assumptions to select the appropriate solutions among the many solutions of the Euler-Lagrange equations.
In the spirit of the above discussion, one can notice that
- \(vii) The null solution $(0,0)$ to the Euler-Lagrange equation (\[eq:SVD\_EL\_FV\]) is generically not isolated within the set of all solutions.
Indeed, consider a SVD $\displaystyle
g=\sum_{n \ge 1} \lambda_n \,u_n\otimes v_n$, such that $u_n$ and $v_n$ are non-zero functions for all $n\ge 1$ (and $\lambda_n >0$). Then, any $(\lambda_n u_n,v_n)$ is a solution of the Euler Lagrange equation at the first iteration, and the norm of the $(\lambda_n u_n,v_n)$ which is selected may be arbitrarily small since the series $\sum_{n \ge 1} \lambda_n \,u_n\otimes v_n$ converges, and therefore $\|
\lambda_n u_n\otimes v_n \|$ goes to zero. A similar argument applies to all iterations of the algorithm. Therefore, a criterion of convergence of the type $\|r_n \otimes s_n\|$ [*is small compared to*]{} $\left\|\sum_{k=1}^{n-1} r_k \otimes s_k\right\|$ may again yield an erroneous conclusion and lead to a premature termination of the iterations.
Note of course that the relaxation step performed in the orthogonal version of the algorithm does not solve any of the above difficulties.
### Resolution of the Euler-Lagrange equations {#sec:resol-EL}
A last comment we would like to make on the SVD case concerns, again, the practical implementation of the solution procedure for the Euler-Lagrange equations. Consider the discrete case for clarity. The fixed-point procedure then simply reads (for a fixed $n$): at iteration $k \ge 0$, compute two vectors $(R_n^k,S_n^k) \in \R^p \times \R^q$ such that: $$\label{eq:SVD_FP}
\left\{
\begin{array}{l}
(S_n^k)^T S_n^k R_n^{k+1} = G_{n-1} S_n^k,\\[4pt]
(R_n^{k+1})^T R_n^{k+1} S_n^{k+1} = (G_{n-1})^T R_n^{k+1}.
\end{array}
\right.$$ One can check that this procedure is similar to the power method used to compute the largest eigenvalue (and an associated eigenvector) of the matrix $(G_{n-1})^T G_{n-1}$. Let us explain this. The recursion reads: $$S^{k+1}=(G^T G) S^k \frac{\|S^k\|^2}{\|GS^k\|^2},$$ where $\|\cdot\|$ here denotes the Euclidean norm and where we have omitted the subscripts $n$ and $n-1$ for clarity. To study the convergence of this algorithm, one can assume that $G$ is actually a diagonal matrix, up to a change of coordinates. Indeed, let us introduce the SVD decomposition of $G$: $G=U \Sigma V^T$ where $U$ and $V$ are two orthogonal matrices, and $\Sigma$ is a diagonal matrix with non-negative coefficients. Without loss of generality, we may assume that $q \le p$, $U \in
\R^{p\times q}$, $\Sigma \in \R^{q\times q}$, $V \in \R^{q\times q}$ and $\Sigma_{1,1} \ge \Sigma_{2,2} \ge \ldots \ge \Sigma_{q,q}$. For simplicity, assume that $\Sigma_{1,1} > \Sigma_{2,2} > 0$. Then, setting $\tilde{S}^k=V^TS^k$, the recursion reads $\tilde{S}^{k+1}=(\Sigma^T \Sigma) \tilde{S}^k \frac{\|\tilde{S}^k\|^2}{\|\Sigma
\tilde{S}^k\|^2}$ and the convergence is easy to study. One can check that if the initial condition $S^0$ has a non-zero component along the vector associated to the largest value $\Sigma_{1,1}$, then $S^k$ converges to this vector. The convergence is geometric, with a rate related to $\frac{\Sigma_{2,2}}{\Sigma_{1,1}}$ (at least if the initial condition $S^0$ has a non-zero component along the vector associated to $\Sigma_{2,2}$, otherwise $\Sigma_{2,2}$ should be replaced by the appropriate largest $\Sigma_{k,k}$, with $k > 1$). Of course, if the initial condition is not well chosen (namely, if $S^0$ has a zero component along the vector associated to $\Sigma_{1,1}$), then this algorithm cannot converge to the solution of the variational version of the algorithm.
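The convergence (in direction) of this recursion to the dominant right singular vector can be illustrated numerically. In this sketch (ours, on an arbitrary random matrix), the iterate aligns with the first right singular vector computed by a standard SVD:

```python
import numpy as np

# Illustrative sketch (ours) of the fixed-point recursion
# S^{k+1} = (G^T G) S^k ||S^k||^2 / ||G S^k||^2 and of its convergence,
# in direction, to the dominant right singular vector of G.
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 4))
S = rng.standard_normal(4)          # generic initial condition
for _ in range(500):
    GS = G @ S
    S = (G.T @ GS) * (S @ S) / (GS @ GS)
v1 = np.linalg.svd(G)[2][0]         # dominant right singular vector
cosine = abs(S @ v1) / np.linalg.norm(S)
print(cosine)                        # close to 1: S is aligned with v1
```

Initializing $S$ orthogonally to $v_1$ would instead make the iteration converge towards a subdominant singular vector, as discussed above.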
We would like to mention that this method to compute the SVD of a matrix is actually known to perform poorly in practice. More precisely, the approach is very sensitive to numerical perturbations (see [@trefethen-bau-97 Lecture 31]), since the condition number of $(G_{n-1})^T
G_{n-1}$ is typically large. Alternative methods exist that compute the SVD decomposition, and it would be interesting to use these techniques as guidelines to build more efficient procedures to solve the nonlinear Euler-Lagrange equations.
Euler-Lagrange approach for the Poisson problem
-----------------------------------------------
We now return to the solution of the Poisson problem. Our purpose is to see which of the above-mentioned difficulties survive in this case. We shall also see new difficulties appear.
We first observe, on a general note, that a property similar to holds in the Poisson case, namely: $$\label{eq:LRA_vrai_ortho}
\int_\Omega \nabla \left( \sum_{k=l}^n r_k \otimes s_k \right) \cdot
\nabla (r_n \otimes s_{l-1})=\int_\Omega \nabla \left( \sum_{k=l}^n
r_k \otimes s_k \right) \cdot \nabla (r_{l-1} \otimes s_n)=0.$$ This, however, does not seem to imply any simple orthogonality property as . In particular, in the Poisson case, it is in general false that, for $n \neq m$, $\int_{\Omega} \nabla (r_n
\otimes s_n) \cdot \nabla (r_m \otimes s_m)=0$.
Next, we remark that none of the properties (i)-(ii)-(iii) holds in the Poisson case. Likewise, we are not able to characterize the list of solutions to the Euler-Lagrange equations as we did in (iv)-(v)-(vi).
This is for the generic situation, but in order to better demonstrate the connections between the SVD case above and the Poisson case, let us show that, in fact, the Poisson case necessarily embeds all the difficulties of the SVD case. For this purpose, we consider the original algorithm (for the Poisson problem) performed for a particular right-hand side $f=-\Delta g$, namely $$\label{eq:hyp_ortho_g}
\left\{
\begin{array}{l}
\text{$g=\sum_{k=1}^N \alpha_k \phi_k \otimes \psi_k$ where $\alpha_k
\in \R$,}\\[4pt]
\text{$\phi_k$ (resp. $\psi_k$) are eigenfunctions of}\\[4pt]
\text{the homogeneous Dirichlet operator
$-\partial_{xx}$ (resp. $-\partial_{yy}$)}\\[4pt]
\text{and satisfy $\forall k, l, \, \int \phi_k \phi_l = \int
\psi_k \psi_l = \delta_{k,l}$,}
\end{array}
\right.$$ where $\delta_{k,l}$ is again the Kronecker symbol. Then, it can be shown that, as in the SVD case, $r_k \otimes s_k = \alpha_k \phi_k
\otimes \psi_k$ are indeed solutions to the Euler-Lagrange equations . This suffices to show the non-uniqueness of the solution. Furthermore, and in sharp contrast to (iv), there even exist solutions to the Euler-Lagrange equations that are not of the above form.
Here is an example of the latter claim. Consider the case $\phi_1=\psi_1$, associated with an eigenvalue $\lambda_1$, and $\phi_2=\psi_2$, associated with an eigenvalue $\lambda_2\neq \lambda_1$. We suppose $\alpha_k=0$ for $k \ge 3$. We are looking for $r$ and $s$ solutions to the Euler-Lagrange equations $$\left\{
\begin{array}{l}
\displaystyle - \int |s|^2 r'' + \int
|s'|^2 r =\int f s,\\
\\
\displaystyle- \int |r|^2 s'' + \int
|r'|^2 s = \int f r.
\end{array}
\right.$$ Then, it can be checked that $r=r_1 \phi_1 + r_2 \phi_2$ and $s=s_1 \psi_1 + s_2 \psi_2$ are a solution to the Euler-Lagrange equations, with the following set of parameters: $r_1=1$, $r_2=1/2$, $s_1=2$, $s_2=1$, $\alpha_1=\frac{9 \lambda_1 + \lambda_2}{4
\lambda_1}$ and $\alpha_2=\frac{2 \lambda_1 + 3 \lambda_2}{2
\lambda_2}$. Likewise, it is immediate to see that (vii) still holds. In view of the above remarks, it seems difficult to devise (and, even more difficult, to prove the convergence of) efficient iterative procedures to correctly solve the Euler-Lagrange equation.
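This claim can be verified by expanding the Euler-Lagrange equations on the eigenbasis $(\phi_k\otimes\psi_k)$ and matching coefficients. The following numpy sketch (ours, purely illustrative) checks the resulting coefficient identities for one arbitrary choice of $\lambda_1 \neq \lambda_2$:

```python
import numpy as np

# Verification (illustrative) of the claimed Euler-Lagrange solution with
# everything expanded on the eigenbasis: r = phi_1 + phi_2/2, s = 2 psi_1 +
# psi_2, and f = -Laplacian(g) = sum_k 2 lambda_k alpha_k phi_k (x) psi_k
# (phi_k = psi_k share the one-dimensional eigenvalue lambda_k).
lam = np.array([np.pi**2, 4 * np.pi**2])   # any distinct lambda_1, lambda_2
r = np.array([1.0, 0.5])
s = np.array([2.0, 1.0])
alpha = np.array([(9 * lam[0] + lam[1]) / (4 * lam[0]),
                  (2 * lam[0] + 3 * lam[1]) / (2 * lam[1])])
f = 2 * lam * alpha
# Coefficient of phi_k in the first equation: (s.s) lam_k r_k + (s.Lam s) r_k
lhs1 = (s @ s) * lam * r + ((s * lam) @ s) * r
rhs1 = f * s
lhs2 = (r @ r) * lam * s + ((r * lam) @ r) * s
rhs2 = f * r
print(np.allclose(lhs1, rhs1), np.allclose(lhs2, rhs2))   # True True
```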
Some numerical experiments and the non self-adjoint case
--------------------------------------------------------
We now show some numerical tests. Even though the algorithms presented above have been designed for solving problems in high dimension, we restrict ourselves to the two-dimensional case. For numerical results in higher dimension, we refer to [@ammar-mokdad-chinesta-keunings-06]. Moreover, we consider the discrete case mentioned in Section \[sec:SVD\], which reads (compare with ): for a given symmetric positive definite matrix $D \in \R^{d \times d}$ (which plays the role of the one-dimensional operator $-\partial_{xx}$), and a given matrix $F \in \R^{d \times d}$ (which plays the role of the right-hand side $f$): $$\label{eq:PbCont}
\text{Find $G \in \R^{d \times d}$ such that } DG + GD = F.$$ Here, the dimension $d$ typically corresponds to the number of points used to discretize the one-dimensional functions $r_n$ or $s_n$. To this problem is associated the variational problem (compare with ) $$\label{eq:FV}
\text{Find $G \in \R^{d \times d}$ such that } G= \arg\min_{U \in \R^{d
\times d}} \left(\frac{DU+UD}{2} -F \right):U,$$ where, for two matrices $A,B \in \R^{d \times d}$, $A:B = \sum_{1 \le i,j \le d} A_{i,j} B_{i,j}$. The matrix $G$ is built as a sum of rank one matrices $R_k S_k^T$ with $(R_k, S_k) \in (\R^{d})^2$, using the following Pure Greedy Algorithm (compare with the algorithm presented in Section \[sec:algo\]):
Set $F_0=F$ and at iteration $n \ge 1$,
1. Find $R_n$ and $S_n$ two vectors in $\R^d$ such that: $$\label{eq:FV_LRA}
(R_n,S_n)= \arg\min_{(R,S) \in (\R^{d})^2} \left(\frac{D (RS^T) +(RS^T)
D}{2} -F_{n-1} \right):(RS^T).$$
2. Set[^1] $F_{n}=F_{n-1}-(D R_n S_n^T+ R_n S_n^T D)$.
3. If $\|F_{n}\|>\varepsilon$, proceed to iteration $n+1$. Otherwise stop.
As explained in Section \[sec:EL\], Step 1 of the above algorithm is replaced in practice by the resolution of the associated Euler-Lagrange equations. This consists in finding two vectors $R_n$ and $S_n$ in $\R^d$ solution to the nonlinear equations: $$\label{eq:EL_LRA}
\left\{
\begin{array}{l}
\|S_n\|^2 \, D R_n + \|S_n\|_D^2 \, R_n =F_{n-1} S_n, \\[5pt]
\|R_n\|^2 \, D S_n + \|R_n\|_D^2 \, S_n =F_{n-1}^T R_n,
\end{array}
\right.$$ where, for any vectors $R \in \R^d$, we set $\|R\|_D^2=R^T D R$. This nonlinear problem is solved by a simple fixed point procedure (as ). We have observed in practice that choosing a random vector as an initial condition for the fixed point procedure is more efficient than taking a given deterministic vector (like $(1,\ldots,1)^T$). This is of course related to the convergence properties of the fixed point procedure we discussed in Section \[sec:resol-EL\].
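For concreteness, here is a minimal numpy sketch (ours, not the code used for the experiments reported below) of the complete loop: greedy iterations on $DG+GD=F$, each rank-one term computed by the fixed-point resolution of the Euler-Lagrange system with random initial guesses, and the residual recomputed from $U_n$ as in the footnote to avoid cancellation:

```python
import numpy as np

# Minimal sketch (illustrative) of the Pure Greedy Algorithm for DG + GD = F,
# with the Euler-Lagrange system solved by the fixed-point procedure.
rng = np.random.default_rng(2)
d = 10
D = np.diag(np.linspace(1.0, 2.0, d))
F = rng.standard_normal((d, d))
Id = np.eye(d)
U = np.zeros((d, d))
Fn = F.copy()
for n in range(200):
    R, S = rng.standard_normal(d), rng.standard_normal(d)  # random init
    for _ in range(50):            # fixed-point loop on the EL system
        R = np.linalg.solve((S @ S) * D + (S @ D @ S) * Id, Fn @ S)
        S = np.linalg.solve((R @ R) * D + (R @ D @ R) * Id, Fn.T @ R)
    U += np.outer(R, S)
    Fn = F - (D @ U + U @ D)       # recomputed from U, as in the footnote
    if np.linalg.norm(Fn) < 1e-6:
        break
print(n + 1, np.linalg.norm(Fn))   # greedy iteration count, final residual
```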
### Convergence of the method
In this section, we take $D$ diagonal, with $(1,2,\ldots,d)$ on the diagonal, and a random matrix $F$. The parameter $\varepsilon$ is $10^{-6}$. We observe that the algorithm always converges. This means that, in practice, the solutions of the Euler-Lagrange equations selected by the fixed point procedure are appropriate.
On Figure \[fig:NRJ\], we plot the energy $\left(\frac{DU_n + U_nD}{2}
-F\right) : U_n$, where $U_n=\sum_{k=1}^n R_k S_k^T$. We observe that the energy rapidly decreases and then reaches a plateau. This is a general feature that we observed on all the tests we performed.
![Evolution of the energy as a function of iterations ($d=10$, $D=diag([linspace(1,2,d)])$, $\varepsilon=10^{-6}$).[]{data-label="fig:NRJ"}](./NRJ.eps){width="10cm"}
In Table \[tab:d\], we give the number of iterations necessary for convergence, as a function of $d$. We observe a linear dependency, which unfortunately we are unable to explain theoretically.
$d$ 10 20 30
---------------------- ------- ------- -------
Number of iterations 22-23 45-46 69-70
: Number of iterations typically needed for convergence as a function of $d$, for various random matrices $F$ ($D=diag([linspace(1,2,d)])$, $\varepsilon=10^{-6}$).[]{data-label="tab:d"}
### The non self-adjoint case
In [@ammar-mokdad-chinesta-keunings-06], it is actually proposed to use the Orthogonal Greedy Algorithm for non-self-adjoint operators.
Consider, as a prototypical case, the advection-diffusion equation: $$\label{eq:adv_diff}
\text{Find $g \in H^1_0(\Omega) $ such that }\left\{
\begin{array}{rl}
a \cdot \nabla g -\Delta g =f &\text{ in $\Omega$},\\
g=0& \text{ on $\partial \Omega$},
\end{array}
\right.$$ where $a : \Omega \to \R^2$ is a given smooth velocity field. When $a=
\nabla V$ for some real-valued function $V$, problem (\[eq:adv\_diff\]) is equivalent to minimizing the energy $$\frac{1}{2} \int_\Omega |\nabla u|^2 \exp(-V) - \int f u \exp(-V).$$ When this is not the case, it is in general not possible to recast the problem in terms of a minimization. However, a variational formulation can be written as: Find $g \in H^1_0(\Omega)$ such that, for all $v \in H^1_0(\Omega)$, $$\int_\Omega (a \cdot \nabla g) v +\nabla g \cdot \nabla v= \int_\Omega f v.$$ It is proposed in [@ammar-mokdad-chinesta-keunings-06] to use this variational formulation in Steps 1 and 2 of the Orthogonal Greedy Algorithm. The iterations then read:
set $f_0=f$, and at iteration $n \ge 1$,
1. Find $r_n \in H^1_0(\Omega_x)$ and $s_n \in H^1_0(\Omega_y)$ such that, for all functions $(r,s) \in H^1_0(\Omega_x) \times H^1_0(\Omega_y)$, $$\label{eq:advdiff_EL}
\int_\Omega (a \cdot \nabla (r_n \otimes s_n)) (r_n \otimes s + r \otimes s_n) +\nabla (r_n \otimes s_n) \cdot \nabla (r_n \otimes s + r \otimes
s_n)= \int_\Omega f_{n-1} (r_n \otimes s + r \otimes s_n).$$
2. Find $u_n \in {\rm Vect}(r_1 \otimes s_1, \ldots, r_n \otimes s_n)$ such that for all $v \in {\rm Vect}(r_1 \otimes s_1, \ldots, r_n \otimes s_n)$ $$\label{eq:advdiff_gal}
\int_\Omega (a \cdot \nabla u_n) v +\nabla u_n \cdot \nabla v =\int_\Omega f v.$$
3. Set $f_{n}=f_{n-1} - (a \cdot \nabla u_n -\Delta u_n)$.
4. If $\|f_{n}\|_{H^{-1}(\Omega)} \ge \varepsilon$, proceed to iteration $n+1$. Otherwise, stop.
The corresponding discrete formulation reads: $$\label{eq:PbCont_NA}
\text{Find $G \in \R^{d \times d}$ such that } B G + G B^T = F,$$ where $B$ is not supposed to be symmetric here (compare to ). The numerical method reads:
Set $F_0=F$ and at iteration $n \ge
1$,
1. Find $R_n$ and $S_n$ two vectors in $\R^d$ such that: $$\label{eq:EL_LRA_NA}
\left\{
\begin{array}{l}
\|S_n\|^2 \, B R_n + \|S_n\|_B^2 \, R_n = F_{n-1} S_n, \\[5pt]
\|R_n\|^2 \, B S_n + \|R_n\|_B^2 \, S_n = F_{n-1}^T R_n.
\end{array}
\right.$$
2. Set $F_{n}=F_{n-1}-(B R_n S_n^T+ R_n S_n^T B^T)$.
3. If $\|F_{n}\|>\varepsilon$, proceed to iteration $n+1$. Otherwise stop.
We consider the case when $B= D + A$ with $D$ symmetric positive definite, and $A$ antisymmetric, so that we know there exists a unique solution to . In the numerical tests we have performed, the algorithm seems to converge. In the absence of any energy minimization principle, it is however unclear to us how to prove convergence of this algorithm.
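The discrete non-self-adjoint loop can be sketched as follows (ours, illustrative; we take a mildly non-symmetric $B=D+A$, and, in the absence of a convergence proof, this is only an experimental sketch):

```python
import numpy as np

# Sketch (illustrative) of the non-self-adjoint discrete case BG + GB^T = F,
# with B = D + A, D symmetric positive definite, A antisymmetric (small here).
# Note S^T B S = S^T D S > 0, since the antisymmetric part drops out.
rng = np.random.default_rng(3)
d = 10
D = np.diag(np.linspace(1.0, 2.0, d))
A = rng.standard_normal((d, d))
B = D + 0.1 * (A - A.T)            # mildly non-symmetric operator
F = rng.standard_normal((d, d))
Id = np.eye(d)
U = np.zeros((d, d))
Fn = F.copy()
for n in range(500):
    R, S = rng.standard_normal(d), rng.standard_normal(d)
    for _ in range(50):            # fixed point on the nonlinear system
        R = np.linalg.solve((S @ S) * B + (S @ B @ S) * Id, Fn @ S)
        S = np.linalg.solve((R @ R) * B + (R @ B @ R) * Id, Fn.T @ R)
    U += np.outer(R, S)
    Fn = F - (B @ U + U @ B.T)
    if np.linalg.norm(Fn) < 1e-6:
        break
print(n + 1, np.linalg.norm(Fn))   # iteration count, final residual
```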
[1]{}
A. Ammar, B. Mokdad, F. Chinesta, and R. Keunings. A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids. *Journal of Non-Newtonian Fluid Mechanics*, 139:153–176, 2006.
A.R. Barron, A. Cohen, W. Dahmen, and R.A. DeVore. Approximation and learning by greedy algorithms. *Annals of Statistics*, 36(1):64–94, 2008.
G. Davis, S. Mallat, and M. Avellaneda. Adaptive greedy approximations. *Constructive Approximation*, 13(1):57–98, 1997.
R.A. DeVore and V.N. Temlyakov. Some remarks on greedy algorithms. *Advances in Computational Mathematics*, 5:173–187, 1996.
L.K. Jones. On a conjecture of Huber concerning the convergence of projection pursuit regression. *Annals of Statistics*, 15(2):880–882, 1987.
S.V. Konyagin and V.N. Temlyakov. Rate of convergence of pure greedy algorithm. *East Journal on Approximations*, 5:493–499, 1999.
E.D. Livshitz and V.N. Temlyakov. Two lower estimates in greedy approximation. *Constructive Approximation*, 19:509–524, 2003.
V.N. Temlyakov. Greedy approximation. *Acta Numerica*, 17:235–409, 2008.
L.N. Trefethen and D. Bau. *Numerical Linear Algebra*. SIAM, 1997.
[^1]: In practice, to avoid numerical cancellation, we actually set $F_{n}=F-(D U_n+U_n D)$ where $U_n=\sum_{k=1}^n R_k S_k^T$.
---
abstract: |
I review the use of Type Ia supernovae (SNe Ia) for cosmological distance determinations. Low-redshift SNe Ia ($z \lesssim 0.1$) demonstrate that the Hubble expansion is linear, that $H_0 = 65 \pm 2$ (statistical) km s$^{-1}$ Mpc$^{-1}$, and that the properties of dust in other galaxies are similar to those of dust in the Milky Way. The light curves of high-redshift ($z =
0.3$–1) SNe Ia are stretched in a manner consistent with the expansion of space; similarly, their spectra exhibit slower temporal evolution (by a factor of $1 + z$) than those of nearby SNe Ia. The measured luminosity distances of SNe Ia as a function of redshift have shown that the expansion of the Universe is currently accelerating, probably due to the presence of repulsive dark energy such as Einstein’s cosmological constant ($\Lambda$). Combining our data with existing measurements of the cosmic microwave background (CMB) radiation and with the results of large-scale structure surveys, we find a best fit for $\Omega_m$ and $\Omega_\Lambda$ of about 0.3 and 0.7, respectively. Other studies (e.g., masses of clusters of galaxies) also suggest that $\Omega_m
\approx 0.3$. The sum of the densities, $\sim 1.0$, agrees with the value predicted by most inflationary models for the early Universe: the Universe is flat on large scales. A number of possible systematic effects (dust, supernova evolution) thus far do not seem to eliminate the need for $\Omega_\Lambda >
0$. Most recently, analyses of SNe Ia at $z = 1.0-1.7$ provide further support for current acceleration, and give tentative evidence for an early epoch of deceleration. Current projects include the search for additional SNe Ia at $z >
1$ to confirm the early deceleration, and the measurement of a few hundred SNe Ia at $z = 0.2-0.8$ to determine the equation of state of the dark energy, $w = P/(\rho c^2)$.
author:
- |
ALEXEI V. FILIPPENKO\
Department of Astronomy, University of California, Berkeley
---
Evidence from Type Ia Supernovae\
for an Accelerating Universe and\
Dark Energy
=================================
Introduction
------------
Supernovae (SNe) come in two main varieties (see Filippenko 1997b for a review). Those whose optical spectra exhibit hydrogen are classified as Type II, while hydrogen-deficient SNe are designated Type I. SNe I are further subdivided according to the appearance of the early-time spectrum: SNe Ia are characterized by strong absorption near 6150 Å (now attributed to Si II), SNe Ib lack this feature but instead show prominent He I lines, and SNe Ic have neither the Si II nor the He I lines. SNe Ia are believed to result from the thermonuclear disruption of carbon-oxygen white dwarfs, while SNe II come from core collapse in massive supergiant stars. The latter mechanism probably produces most SNe Ib/Ic as well, but the progenitor stars previously lost their outer layers of hydrogen or even helium.
It has long been recognized that SNe Ia may be very useful distance indicators for a number of reasons; see Branch & Tammann (1992), Branch (1998), and references therein. (1) They are exceedingly luminous, with peak $M_B$ averaging $-19.2$ mag if $H_0 = 65$ km s$^{-1}$ Mpc$^{-1}$. (2) “Normal” SNe Ia have small dispersion among their peak absolute magnitudes ($\sigma
\lesssim 0.3$ mag). (3) Our understanding of the progenitors and explosion mechanism of SNe Ia is on a reasonably firm physical basis. (4) Little cosmic evolution is expected in the peak luminosities of SNe Ia, and it can be modeled. This makes SNe Ia superior to galaxies as distance indicators. (5) One can perform [*local*]{} tests of various possible complications and evolutionary effects by comparing nearby SNe Ia in different environments.
Research on SNe Ia in the 1990s has demonstrated their enormous potential as cosmological distance indicators. Although there are subtle effects that must indeed be taken into account, it appears that SNe Ia provide among the most accurate values of $H_0$, $q_0$ (the deceleration parameter), $\Omega_m$ (the matter density), and $\Omega_\Lambda$ \[the cosmological constant, $\Lambda
c^2/(3H_0^2)$\].
There have been two major teams involved in the systematic investigation of high-redshift SNe Ia for cosmological purposes. The “Supernova Cosmology Project” (SCP) is led by Saul Perlmutter of the Lawrence Berkeley Laboratory, while the “High-Z Supernova Search Team” (HZT) is led by Brian Schmidt of the Mt. Stromlo and Siding Springs Observatories. I have been privileged to work with both teams (see Filippenko 2001 for a personal account), but my primary allegiance is now with the HZT.
Homogeneity and Heterogeneity
-----------------------------
Until the mid-1990s, the traditional way in which SNe Ia were used for cosmological distance determinations was to assume that they are perfect “standard candles” and to compare their observed peak brightness with those of SNe Ia in galaxies whose distances had been independently determined (e.g., with Cepheid variables). The rationale was that SNe Ia exhibit relatively little scatter in their peak blue luminosity ($\sigma_B \approx 0.4$–0.5 mag; Branch & Miller 1993), and even less if “peculiar” or highly reddened objects were eliminated from consideration by using a color cut. Moreover, the optical spectra of SNe Ia are usually rather homogeneous, if care is taken to compare objects at similar times relative to maximum brightness (Riess et al. 1997, and references therein). Over 80% of all SNe Ia discovered through the early 1990s were “normal” (Branch, Fisher, & Nugent 1993).
From a Hubble diagram constructed with unreddened, moderately distant SNe Ia ($z \lesssim 0.1$) for which peculiar motions should be small and relative distances (as given by ratios of redshifts) are accurate, Vaughan et al. (1995) find that $$\langle M_B({\rm max})\rangle \ = \ (-19.74 \pm 0.06) + 5\, {\rm log}\, (H_0/50)~{\rm mag}.$$ In a series of papers, Sandage et al. (1996) and Saha et al. (1997) combine similar relations with [*Hubble Space Telescope (HST)*]{} Cepheid distances to the host galaxies of seven SNe Ia to derive $H_0 = 57 \pm 4$ km s$^{-1}$ Mpc$^{-1}$.
Over the past two decades it has become clear, however, that SNe Ia do [*not*]{} constitute a perfectly homogeneous subclass (e.g., Filippenko 1997a,b). In retrospect this should have been obvious: the Hubble diagram for SNe Ia exhibits scatter larger than the photometric errors, the dispersion actually [*rises*]{} when reddening corrections are applied (under the assumption that all SNe Ia have uniform, very blue intrinsic colors at maximum; van den Bergh & Pazder 1992; Sandage & Tammann 1993), and there are some significant outliers whose anomalous magnitudes cannot possibly be explained by extinction alone.
Spectroscopic and photometric peculiarities have been noted with increasing frequency in well-observed SNe Ia. A striking case is SN 1991T; its pre-maximum spectrum did not exhibit Si II or Ca II absorption lines, yet two months past maximum brightness the spectrum was nearly indistinguishable from that of a classical SN Ia (Filippenko et al. 1992b; Phillips et al. 1993). The light curves of SN 1991T were slightly broader than the SN Ia template curves, and the object was probably somewhat more luminous than average at maximum. Another well-observed, peculiar SN Ia is SN 1991bg (Filippenko et al. 1992a; Leibundgut et al. 1993; Turatto et al. 1996). At maximum brightness it was subluminous by 1.6 mag in $V$ and 2.5 mag in $B$, its colors were intrinsically red, and its spectrum was peculiar (with a deep absorption trough due to Ti II). Moreover, the decline from maximum was very steep, the $I$-band light curve did not exhibit a secondary maximum like normal SNe Ia, and the velocity of the ejecta was unusually low. The photometric heterogeneity among SNe Ia is well demonstrated by Suntzeff (1996) with objects having excellent $BVRI$ light curves.
Cosmological Uses: Low Redshifts
--------------------------------
Although SNe Ia can no longer be considered perfect “standard candles,” they are still exceptionally useful for cosmological distance determinations. Excluding those of low luminosity (which are hard to find, especially at large distances), most of the nearby SNe Ia that had been discovered through the early 1990s were [*nearly*]{} standard (Branch et al. 1993; but see Li et al. 2001b for more recent evidence of a higher intrinsic peculiarity rate). Also, after many tenuous suggestions (e.g., Pskovskii 1977, 1984; Branch 1981), Phillips (1993) found convincing evidence for a correlation between light curve shape and the luminosity at maximum brightness by quantifying the photometric differences among a set of nine well-observed SNe Ia, using a parameter \[$\Delta m_{15}(B)$\] that measures the total drop (in $B$ magnitudes) from $B$-band maximum to $t = 15$ days after $B$ maximum. In all cases the host galaxies of his SNe Ia have accurate relative distances from surface brightness fluctuations or from the Tully-Fisher relation. The intrinsically bright SNe Ia clearly decline more slowly than dim ones, but the correlation is stronger in $B$ than in $V$ or $I$.
Using SNe Ia discovered during the Calán/Tololo survey ($z \lesssim 0.1$), Hamuy et al. (1995, 1996b) confirm and refine the Phillips (1993) correlation between peak luminosity and $\Delta m_{15}(B)$. Apparently the slope is steep only at low luminosities; thus, objects such as SN 1991bg skew the slope of the best-fitting single straight line. Hamuy et al. reduce the scatter in the Hubble diagram of normal, unreddened SNe Ia to only 0.17 mag in $B$ and 0.14 mag in $V$; see also Tripp (1997). Yet another parameterization is the “stretch” method of Perlmutter et al. (1997) and Goldhaber et al. (2001): the $B$-band light curves of SNe Ia appear nearly identical when expanded or contracted temporally by a factor $(1+s)$, where the value of $s$ varies among objects. In a similar but distinct effort, Riess, Press, & Kirshner (1995) show that the luminosity of SNe Ia correlates with the detailed [*shape*]{} of the overall light curve.
By using light curve shapes measured through several different filters, Riess, Press, & Kirshner (1996a) extend their analysis and objectively eliminate the effects of interstellar extinction: a SN Ia that has an unusually red $B-V$ color at maximum brightness is assumed to be [*intrinsically*]{} subluminous if its light curves rise and decline quickly, or of normal luminosity but significantly [*reddened*]{} if its light curves rise and decline more slowly. With a set of 20 SNe Ia consisting of the Calán/Tololo sample and their own objects, they show that the dispersion decreases from 0.52 mag to 0.12 mag after application of this “multi-color light curve shape” (MLCS) method. The results from a recent, expanded set of nearly 50 SNe Ia indicate that the dispersion decreases from 0.44 mag to 0.15 mag (Fig. 1.1). The resulting Hubble constant is $65 \pm 2$ (statistical) km s$^{-1}$ Mpc$^{-1}$, with an additional systematic and zeropoint uncertainty of $\pm 5$ km s$^{-1}$ Mpc$^{-1}$. Riess et al. (1996a) also show that the Hubble flow is remarkably linear; indeed, SNe Ia now constitute the best evidence for linearity. Finally, they argue that the dust affecting SNe Ia is [*not*]{} of circumstellar origin, and show quantitatively that the extinction curve in external galaxies typically does not differ from that in the Milky Way (cf. Branch & Tammann 1992, but see Tripp 1998).
The advantage of systematically correcting the luminosities of SNe Ia at high redshifts rather than trying to isolate “normal” ones seems clear in view of evidence that the luminosity of SNe Ia may be a function of stellar population. If the most luminous SNe Ia occur in young stellar populations (e.g., Hamuy et al. 1996a, 2000; Branch, Baron, & Romanishin 1996; Ivanov, Hamuy, & Pinto 2000), then we might expect the mean peak luminosity of high-$z$ SNe Ia to differ from that of a local sample. Alternatively, the use of Cepheids (Population I objects) to calibrate local SNe Ia can lead to a zeropoint that is too luminous. On the other hand, as long as the physics of SNe Ia is essentially the same in young stellar populations locally and at high redshift, we should be able to adopt the luminosity correction methods (photometric and spectroscopic) found from detailed studies of low-$z$ SNe Ia.
Large numbers of nearby SNe Ia are now being found by my team’s Lick Observatory Supernova Search (LOSS) conducted with the 0.76-m Katzman Automatic Imaging Telescope (KAIT; Li et al. 2000; Filippenko et al. 2001; see http://astro.berkeley.edu/$\sim$bait/kait.html). CCD images are taken of $\sim
1000$ galaxies per night and compared with KAIT “template images” obtained earlier; the templates are automatically subtracted from the new images and analyzed with computer software. The system reobserves the best candidates the same night, to eliminate star-like cosmic rays, asteroids, and other sources of false alarms. The next day, undergraduate students at UC Berkeley examine all candidates, including weak ones, and they glance at all subtracted images to locate SNe that might be near bright, poorly subtracted stars or galactic nuclei. LOSS discovered 20 SNe (of all types) in 1998, 40 in 1999, 38 in 2000, 69 in 2001, and 82 in 2002, making it by far the world’s most successful search for nearby SNe. The most important objects were photometrically monitored through $BVRI$ (and sometimes $U$) filters (e.g., Li et al. 2001a, 2003; Modjaz et al. 2001; Leonard et al. 2002a,b), and unfiltered follow-up observations (e.g., Matheson et al. 2001) were made of most of them during the course of the SN search. This growing sample of well-observed SNe Ia should allow us to more precisely calibrate the MLCS method, as well as to look for correlations between the observed properties of the SNe and their environment (Hubble type of host galaxy, metallicity, stellar population, etc.).
Cosmological Uses: High Redshifts
---------------------------------
These same techniques can be applied to construct a Hubble diagram with high-redshift SNe Ia, from which the value of $q_0 = (\Omega_m/2) -
\Omega_\Lambda$ can be determined. With enough objects spanning a range of redshifts, we can determine $\Omega_m$ and $\Omega_\Lambda$ independently (e.g., Goobar & Perlmutter 1995). Contours of peak apparent $R$-band magnitude for SNe Ia at two redshifts have different slopes in the $\Omega_m$–$\Omega_\Lambda$ plane, and the regions of intersection provide the answers we seek.
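The definition of $q_0$ above admits a trivial arithmetic check (a helper of our own, not code from either team):

```python
def q0(omega_m, omega_lambda):
    # Deceleration parameter q0 = Omega_m / 2 - Omega_Lambda;
    # q0 < 0 corresponds to an accelerating expansion.
    return omega_m / 2.0 - omega_lambda
```

An Einstein-de Sitter universe ($\Omega_m = 1$, $\Omega_\Lambda = 0$) gives $q_0 = 0.5$, while the concordance-like values $\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$ give $q_0 \approx -0.55$: acceleration.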
### The Search
Based on the pioneering work of Norgaard-Nielsen et al. (1989), whose goal was to find SNe in moderate-redshift clusters of galaxies, the SCP (Perlmutter et al. 1995a, 1997) and the HZT (Schmidt et al. 1998) devised a strategy that almost guarantees the discovery of many faint, distant SNe Ia “on demand,” during a predetermined set of nights. This “batch” approach to studying distant SNe allows follow-up spectroscopy and photometry to be scheduled in advance, resulting in a systematic study not possible with random discoveries. Most of the searched fields are equatorial, permitting follow-up from both hemispheres. The SCP was the first group to convincingly demonstrate the ability to find SNe in batches.
Our approach is simple in principle. Pairs of first-epoch images are obtained with wide-field cameras on large telescopes (e.g., the Big Throughput Camera on the CTIO 4-m Blanco telescope) during the nights around new moon, followed by second-epoch images 3–4 weeks later. (Pairs of images permit removal of cosmic rays, asteroids, and distant Kuiper-belt objects.) These are compared immediately using well-tested software, and new SN candidates are identified in the second-epoch images (Fig. 1.2). Spectra are obtained as soon as possible after discovery to verify that the objects are SNe Ia and determine their redshifts. Each team has already found over 150 SNe in concentrated batches, as reported in numerous [*IAU Circulars*]{} (e.g., Perlmutter et al. 1995b, 11 SNe with $0.16 \lesssim z \lesssim 0.65$; Suntzeff et al. 1996, 17 SNe with $0.09 \lesssim z \lesssim 0.84$). The observed SN Ia rate at $z
\approx 0.5$ is consistent with the low-$z$ SN Ia rate together with plausible star-formation histories (Pain et al. 2002; Tonry et al. 2003), but the error bars on the high-$z$ rate are still quite large.
Intensive photometry of the SNe Ia commences within a few days after procurement of the second-epoch images; it is continued throughout the ensuing and subsequent dark runs. In a few cases [*HST*]{} images are obtained. As expected, most of the discoveries are [*on the rise or near maximum brightness*]{}. When possible, the SNe are observed in filters that closely match the redshifted $B$ and $V$ bands; this way, the K-corrections become only a second-order effect (Kim, Goobar, & Perlmutter 1996; Nugent, Kim, & Perlmutter 2002). We try to obtain excellent multi-color light curves, so that reddening and luminosity corrections can be applied (Riess et al. 1996a; Hamuy et al. 1996a,b).
Although SNe in the magnitude range 22–22.5 can sometimes be spectroscopically confirmed with 4-m class telescopes, the signal-to-noise ratios are low, even after several hours of integration. Certainly Keck or the VLT are required for the fainter objects (22.5–24.5 mag). With the largest telescopes, not only can we rapidly confirm a substantial number of candidate SNe, but we can search for peculiarities in the spectra that might indicate evolution of SNe Ia with redshift. Moreover, high-quality spectra allow us to measure the age of a SN: we have developed a method for automatically comparing the spectrum of a SN Ia with a library of spectra corresponding to many different epochs in the development of SNe Ia (Riess et al. 1997). Our technique also has great practical utility at the telescope: we can determine the age of a SN “on the fly,” within half an hour after obtaining its spectrum. This allows us to decide rapidly which SNe are best for subsequent photometric follow-up, and we immediately alert our collaborators on other telescopes.
### Results
First, we note that the light curves of high-redshift SNe Ia are broader than those of nearby SNe Ia; the initial indications (Leibundgut et al. 1996; Goldhaber et al. 1997), based on small numbers of SNe Ia, are amply confirmed with the larger samples (Goldhaber et al. 2001). Quantitatively, the amount by which the light curves are “stretched” is consistent with a factor of $1 +
z$, as expected if redshifts are produced by the expansion of space rather than by “tired light” and other non-expansion hypotheses for the redshifts of objects at cosmological distances. \[For non-standard cosmological interpretations of the SN Ia data, see Narlikar & Arp (1997) and Hoyle, Burbidge, & Narlikar (2000).\] We also demonstrate this [*spectroscopically*]{} at the $2\sigma$ confidence level for a single object: the spectrum of SN 1996bj ($z = 0.57$) evolved more slowly than those of nearby SNe Ia, by a factor consistent with $1 + z$ (Riess et al. 1997). Although one might be able to argue that something other than universal expansion could be the cause of the apparent stretching of SN Ia light curves at high redshifts, it is much more difficult to attribute apparently slower evolution of spectral details to an unknown effect.
The formal value of $\Omega_m$ derived from SNe Ia has changed with time. The SCP published the first result (Perlmutter et al. 1995a), based on a single object, SN 1992bi at $z = 0.458$: $\Omega_m = 0.2 \pm 0.6 \pm 1.1$ (assuming that $\Omega_\Lambda = 0$). The SCP’s analysis of their first seven objects (Perlmutter et al. 1997) suggested a much larger value of $\Omega_m = 0.88 \pm
0.6$ (if $\Omega_\Lambda = 0$) or $\Omega_m = 0.94 \pm 0.3$ (if $\Omega_{\rm
total} = 1$). Such a high-density universe seemed at odds with other, independent measurements of $\Omega_m$. However, with the subsequent inclusion of just one more object, SN 1997ap at $z = 0.83$ (the highest known for a SN Ia at the time; Perlmutter et al. 1998), their estimates were revised back down to $\Omega_m = 0.2 \pm 0.4$ if $\Omega_\Lambda = 0$, and $\Omega_m = 0.6 \pm 0.2$ if $\Omega_{\rm total} = 1$; the apparent brightness of SN 1997ap had been precisely measured with [*HST*]{}, so it substantially affected the best fits.
Meanwhile, the HZT published (Garnavich et al. 1998a) an analysis of four objects (three of them observed with [*HST*]{}), including SN 1997ck at $z =
0.97$, at that time a redshift record, although they could not be absolutely certain that the object was a SN Ia because the spectrum was too poor. From these data, the HZT derived that $\Omega_m = -0.1 \pm 0.5$ (assuming $\Omega_\Lambda = 0$) and $\Omega_m = 0.35 \pm 0.3$ (assuming $\Omega_{\rm total} = 1$), inconsistent with the high $\Omega_m$ initially found by Perlmutter et al. (1997) but consistent with the revised estimate in Perlmutter et al. (1998). An independent analysis of 10 SNe Ia using the “snapshot” distance method (with which conclusions are drawn from sparsely observed SNe Ia) gave quantitatively similar conclusions (Riess et al. 1998a). However, none of these early data sets carried the statistical discriminating power to detect cosmic acceleration.
The SCP’s next results were announced at the 1998 January AAS meeting in Washington, DC. A press conference was scheduled, with the stated purpose of presenting and discussing the then-current evidence for a low-$\Omega_m$ universe as published by Perlmutter et al. (1998; SCP) and Garnavich et al. (1998a; HZT). When showing the SCP’s Hubble diagram for SNe Ia, however, Perlmutter also pointed out tentative evidence for [*acceleration*]{}! He stated that the conclusion was uncertain, and that the data were equally consistent with no acceleration; the systematic errors had not yet been adequately assessed. Essentially the same conclusion was given by the SCP in their talks at a conference on dark matter, near Los Angeles, in February 1998 (Goldhaber & Perlmutter 1998).
Although it chose not to reveal them at the same 1998 January AAS meeting, the HZT already had similar, tentative evidence for acceleration in their own SN Ia data set. The HZT continued to perform numerous checks of their data analysis and interpretation, including fairly thorough consideration of various possible systematic effects. Unable to find any significant problems, even with the possible systematic effects, the HZT reported detection of a [*nonzero*]{} value for $\Omega_\Lambda$ (based on 16 high-$z$ SNe Ia) at the Los Angeles dark matter conference in February 1998 (Filippenko & Riess 1998), and soon thereafter submitted a formal paper that was published in September 1998 (Riess et al. 1998b). Their original Hubble diagram for the 10 best-observed high-$z$ SNe Ia is given in Figure 1.3 ([*left*]{}). With the MLCS method applied to the full set of 16 SNe Ia, the HZT’s formal results were $\Omega_m = 0.24 \pm
0.10$ if $\Omega_{\rm total} = 1$, or $\Omega_m = -0.35 \pm 0.18$ (unphysical) if $\Omega_\Lambda = 0$. If one demanded that $\Omega_m = 0.2$, then the best value for $\Omega_\Lambda$ was $0.66 \pm 0.21$. These conclusions did not change significantly when only the 10 best-observed SNe Ia were used (Fig. 1.3, [*left*]{}; $\Omega_m = 0.28 \pm 0.10$ if $\Omega_{\rm total} = 1$).
Another important constraint on the cosmological parameters could be obtained from measurements of the angular scale of the first acoustic peak of the CMB (e.g., Zaldarriaga, Spergel, & Seljak 1997; Eisenstein, Hu, & Tegmark 1998); the SN Ia and CMB techniques provide nearly complementary information. A stunning result was already available by mid-1998 from existing measurements (e.g., Hancock et al. 1998; Lineweaver & Barbosa 1998): the HZT’s analysis of the SN Ia data in Riess et al. (1998b) demonstrated that $\Omega_m + \Omega_\Lambda = 0.94 \pm 0.26$ (Fig. 1.3, [*right*]{}), when the SN and CMB constraints were combined (Garnavich et al. 1998b; see also Lineweaver 1998, Efstathiou et al. 1999, and others).
Somewhat later (June 1999), the SCP published almost identical results, implying an accelerating expansion of the Universe, based on an essentially independent set of 42 high-$z$ SNe Ia (Perlmutter et al. 1999). Their data, together with those of the HZT, are shown in Figure 1.4 ([*left*]{}), and the corresponding confidence contours in the $\Omega_\Lambda$ vs. $\Omega_m$ plane are given in Figure 1.4 ([*right*]{}). This incredible agreement suggested that neither group had made a large, simple blunder; if the result was wrong, the reason must be subtle. Had there been only one team working in this area, it is likely that far fewer astronomers and physicists throughout the world would have taken the result seriously.
Moreover, already in 1998–1999 there was tentative evidence that the “dark energy” driving the accelerated expansion was indeed consistent with the cosmological constant, $\Lambda$. If $\Lambda$ dominates, then the equation of state of the dark energy should have an index $w = -1$, where the pressure ($P$) and density ($\rho$) are related according to $w = P/(\rho c^2)$. Garnavich et al. (1998b) and Perlmutter et al. (1999) already set an interesting limit, $w \lesssim -0.60$ at the 95% confidence level. However, more high-quality data at $z \approx 0.5$ are needed to narrow the allowed range, in order to test other proposed candidates for dark energy such as various forms of “quintessence” (e.g., Caldwell, Davé, & Steinhardt 1998).
Although the CMB results appeared reasonably persuasive in 1998–1999, one could argue that fluctuations on different scales had been measured with different instruments, and that subtle systematic effects might lead to erroneous conclusions. These fears were dispelled only 1–2 years later, when the more accurate and precise results of the BOOMERANG collaboration were announced (de Bernardis et al. 2000, 2002). Shortly thereafter the MAXIMA collaboration distributed their very similar findings (Hanany et al. 2000; Balbi et al. 2000; Netterfield et al. 2002; see also the TOCO, DASI, and many other measurements). Figure 1.4 ([*right*]{}) illustrates that the CMB measurements tightly constrain $\Omega_{\rm total}$ to be close to unity; we appear to live in a flat universe, in agreement with most inflationary models for the early Universe! Combined with the SN Ia results, the evidence for nonzero $\Omega_\Lambda$ was fairly strong. Making the argument even more compelling was the fact that various studies of clusters of galaxies (see summary by Bahcall et al. 1999) showed that $\Omega_m \approx 0.3$, consistent with the results in Figures 1.3 and 1.4. Thus, a “concordance cosmology” had emerged: $\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$ — consistent with what had been suspected some years earlier by Ostriker & Steinhardt (1995; see also Carroll, Press, & Turner 1992).
Yet another piece of evidence for a nonzero value of $\Lambda$ was provided by the Two-Degree Field Galaxy Redshift Survey (2dFGRS; Peacock et al. 2001; Percival et al. 2001; Efstathiou et al. 2002). Combined with the CMB maps, their results are inconsistent with a universe dominated by gravitating dark matter. Again, the implication is that about 70% of the mass-energy density of the Universe consists of some sort of dark energy whose gravitational effect is repulsive. Just as this review was going to press, results from the [*Wilkinson Microwave Anisotropy Probe (WMAP)*]{} appeared; together with the 2dFGRS constraints, they confirm and refine the concordance cosmology ($\Omega_m =
0.27$, $\Omega_\Lambda = 0.73$, $\Omega_{\rm baryon} = 0.044$, $H_0 = 71 \pm 4$ km s$^{-1}$ Mpc$^{-1}$; Spergel et al. 2003).
The dynamical age of the Universe can be calculated from the cosmological parameters. In an empty Universe with no cosmological constant, the dynamical age is simply the “Hubble time” (i.e., the inverse of the Hubble constant); there is no deceleration. SNe Ia yield $H_0 = 65 \pm 2$ km s$^{-1}$ Mpc$^{-1}$ (statistical uncertainty only), and a Hubble time of $15.1 \pm 0.5$ Gyr. For a more complex cosmology, integrating the velocity of the expansion from the current epoch ($z=0$) to the beginning ($z=\infty$) yields an expression for the dynamical age. As shown in detail by Riess et al. (1998b), by mid-1998 the HZT had obtained a value of 14.2$^{+1.0}_{-0.8}$ Gyr using the likely range for $(\Omega_m, \Omega_\Lambda)$ that they measured. (The precision was so high because their experiment was sensitive to roughly the [*difference*]{} between $\Omega_m$ and $\Omega_\Lambda$, and the dynamical age also varies in approximately this way.) Including the [*systematic*]{} uncertainty of the Cepheid distance scale, which may be up to 10%, a reasonable estimate of the dynamical age was $14.2 \pm 1.7$ Gyr (Riess et al. 1998b). Again, the SCP’s result was very similar (Perlmutter et al. 1999), since it was based on nearly the same derived values for the cosmological parameters. This expansion age is consistent with ages determined from various other techniques such as the cooling of white dwarfs (Galactic disk $> 9.5$ Gyr; Oswalt et al. 1996), radioactive dating of stars via the thorium and europium abundances ($15.2 \pm
3.7$ Gyr; Cowan et al. 1997), and studies of globular clusters (10–15 Gyr, depending on whether [*Hipparcos*]{} parallaxes of Cepheids are adopted; Gratton et al. 1997; Chaboyer et al. 1998). By mid-1998, the ages of the oldest stars no longer seemed to exceed the expansion age of the Universe; the long-standing “age crisis” had evidently been resolved.
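The dynamical-age integral described above is easy to evaluate numerically. The sketch below uses standard Friedmann-model formulas and a simple midpoint rule; the function name and the conversion $1/H_0 = 977.79/H_0$ Gyr (for $H_0$ in km s$^{-1}$ Mpc$^{-1}$) are our additions for illustration, not the HZT's actual code.

```python
import numpy as np

def dynamical_age_gyr(h0, omega_m, omega_lambda, n=200_000):
    """Age t0 = (1/H0) * integral_0^1 da / (a E(a)), by a midpoint rule.

    h0 is in km/s/Mpc; the curvature term is fixed by the density sum.
    """
    omega_k = 1.0 - omega_m - omega_lambda
    a = (np.arange(n) + 0.5) / n  # midpoints of a uniform grid on [0, 1]
    # a * E(a) = sqrt(omega_m / a + omega_lambda * a**2 + omega_k)
    integrand = 1.0 / np.sqrt(omega_m / a + omega_lambda * a**2 + omega_k)
    hubble_time_gyr = 977.79 / h0  # 1/H0 expressed in Gyr
    return hubble_time_gyr * integrand.sum() / n
```

An empty universe returns exactly the Hubble time ($\approx 15.0$ Gyr for $H_0 = 65$), while $(\Omega_m, \Omega_\Lambda) = (0.3, 0.7)$ gives $\approx 14.5$ Gyr, consistent with the $14.2 \pm 1.7$ Gyr quoted above.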
Discussion
----------
Although the convergence of different methods on the same answer is reassuring, and suggests that the concordance cosmology is correct, it is important to vigorously test each method to make sure it is not leading us astray. Moreover, only through such detailed studies will the accuracy and precision of the methods improve, allowing us to eventually set better constraints on the equation of state parameter, $w$. Here I discuss the systematic effects that could adversely affect the SN Ia results.
High-redshift SNe Ia are observed to be dimmer than expected in an empty Universe (i.e., $\Omega_m = 0$) with no cosmological constant. At $z \approx
0.5$, where the SN Ia observations have their greatest leverage on $\Lambda$, the difference in apparent magnitude between an $\Omega_m = 0.3$ ($\Omega_\Lambda = 0$) universe and a flat universe with $\Omega_\Lambda = 0.7$ is only about 0.25 mag. Thus, we need to find out if chemical abundances, stellar populations, selection bias, gravitational lensing, or grey dust can have an effect this large. Although both the HZT and SCP had considered many of these potential systematic effects in their original discovery papers (Riess et al. 1998b; Perlmutter et al. 1999), and had shown with reasonable confidence that obvious ones were not greatly affecting their conclusions, it was of course possible that they were wrong, and that the data were being misinterpreted.
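The size of this effect can be reproduced from the standard Friedmann luminosity distance. The sketch below (textbook formulas with our own function names, not either team's analysis pipeline) recovers a difference of roughly 0.2 mag at $z = 0.5$ between the two models:

```python
import numpy as np

def lum_dist(z, omega_m, omega_lambda, n=100_000):
    """Dimensionless luminosity distance d_L * H0 / c (flat or open models)."""
    omega_k = 1.0 - omega_m - omega_lambda
    zz = (np.arange(n) + 0.5) * z / n  # midpoints of a uniform grid on [0, z]
    e = np.sqrt(omega_m * (1 + zz)**3 + omega_k * (1 + zz)**2 + omega_lambda)
    chi = (z / n) * np.sum(1.0 / e)    # dimensionless comoving distance
    if omega_k > 1e-12:                # open model: curvature correction
        chi = np.sinh(np.sqrt(omega_k) * chi) / np.sqrt(omega_k)
    return (1 + z) * chi

# Magnitude difference at z = 0.5 between a flat Lambda model and an
# open, matter-only model with the same Omega_m:
dm = 5 * np.log10(lum_dist(0.5, 0.3, 0.7) / lum_dist(0.5, 0.3, 0.0))
```

This gives $\Delta m \approx 0.2$ mag, of the same order as the quoted quarter magnitude, underscoring how small the total systematic error budget must be kept.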
### Evolution
Perhaps the most obvious possible culprit is [*evolution*]{} of SNe Ia over cosmic time, due to changes in metallicity, progenitor mass, or some other factor. If the peak luminosity of SNe Ia were lower at high redshift, then the case for $\Omega_\Lambda > 0$ would weaken. Conversely, if the distant explosions are more powerful, then the case for acceleration strengthens. Theorists are not yet sure what the sign of the effect will be, if it is present at all; different assumptions lead to different conclusions (Höflich et al. 1998; Umeda et al. 1999; Nomoto et al. 2000; Yungelson & Livio 2000).
Of course, it is extremely difficult, if not effectively impossible, to obtain an accurate, independent measure of the peak luminosity of high-$z$ SNe Ia, and hence to directly test for luminosity evolution. However, we can more easily determine whether [*other*]{} observable properties of low-$z$ and high-$z$ SNe Ia differ. If they are all the same, it is more probable that the peak luminosity is constant as well — but if they differ, then the peak luminosity might also be affected (e.g., Höflich et al. 1998). Drell, Loredo, & Wasserman (2000), for example, argue that there are reasons to suspect evolution, because the average properties of existing samples of high-$z$ and low-$z$ SNe Ia seem to differ (e.g., the high-$z$ SNe Ia are more uniform).
The local sample of SNe Ia displays a weak correlation between light curve shape (or peak luminosity) and host galaxy type, in the sense that the most luminous SNe Ia with the broadest light curves only occur in late-type galaxies. Both early-type and late-type galaxies provide hosts for dimmer SNe Ia with narrower light curves (Hamuy et al. 1996a). The mean luminosity difference for SNe Ia in late-type and early-type galaxies is $\sim 0.3$ mag. In addition, the SN Ia rate per unit luminosity is almost twice as high in late-type galaxies as in early-type galaxies at the present epoch (Cappellaro et al. 1997). These results may indicate an evolution of SNe Ia with progenitor age. Possibly relevant physical parameters are the mass, metallicity, and C/O ratio of the progenitor (Höflich et al. 1998).
We expect that the relation between light curve shape and peak luminosity that applies to the range of stellar populations and progenitor ages encountered in the late-type and early-type hosts in our nearby sample should also be applicable to the range we encounter in our distant sample. In fact, the range of age for SN Ia progenitors in the nearby sample is likely to be [*larger*]{} than the change in mean progenitor age over the 4–6 Gyr lookback time to the high-$z$ sample. Thus, to first order at least, our local sample should correct the distances for progenitor or age effects.
We can place empirical constraints on the effect that a change in the progenitor age would have on our SN Ia distances by comparing subsamples of low-redshift SNe Ia believed to arise from old and young progenitors. In the nearby sample, the mean difference between the distances for the early-type hosts (8 SNe Ia) and late-type hosts (19 SNe Ia), at a given redshift, is 0.04 $\pm$ 0.07 mag from the MLCS method. This difference is consistent with zero. Even if the SN Ia progenitors evolved from one population at low redshift to the other at high redshift, we still would not explain the surplus in mean distance of 0.25 mag over the $\Omega_\Lambda=0$ prediction. Moreover, in a major study of high-redshift SNe Ia as a function of galaxy morphology, the SCP found no clear differences (except for the amount of scatter; see §1.5.2) between the cosmological results obtained with SNe Ia in late-type and early-type galaxies (Sullivan et al. 2003).
It is also reassuring that, in initial comparisons, high-$z$ SN Ia spectra appear remarkably similar to those observed at low redshift. For example, the spectral characteristics of SN 1998ai ($z = 0.49$) appear to be essentially indistinguishable from those of normal low-$z$ SNe Ia; see Figure 1.5 ([*left*]{}). In fact, the most obviously discrepant spectrum in this figure is the second one from the top, that of SN 1994B ($z = 0.09$); it is intentionally included as a “decoy” that illustrates the degree to which even the spectra of nearby, relatively normal SNe Ia can vary. Nevertheless, it is important to note that a dispersion in luminosity (perhaps 0.2 mag) exists even among the other, more normal SNe Ia shown in Figure 1.5 ([*left*]{}); thus, our spectra of SN 1998ai and other high-$z$ SNe Ia are not yet sufficiently good for independent, [*precise*]{} determinations of peak luminosity from spectral features (Nugent et al. 1995). Many of them, however, are sufficient for ruling out other SN types (Fig. 1.5, [*right*]{}), or for identifying gross peculiarities such as those shown by SNe 1991T and 1991bg; see Coil et al. (2000).
We can help verify that the SNe at $z \approx 0.5$ being used for cosmology do not belong to a subluminous population of SNe Ia by examining restframe $I$-band light curves. Normal, nearby SNe Ia show a pronounced second maximum in the $I$ band about a month after the first maximum and typically about 0.5 mag fainter (e.g., Ford et al. 1993; Suntzeff 1996). Subluminous SNe Ia, in contrast, do not show this second maximum, but rather follow a linear decline or show a muted second maximum (Filippenko et al. 1992a). As discussed by Riess et al. (2000), tentative evidence for the second maximum is seen from the HZT’s existing $J$-band (restframe $I$-band) data on SN 1999Q ($z = 0.46$); see Figure 1.6 ([*left*]{}). Additional tests with spectra and near-infrared light curves are currently being conducted.
Another way of using light curves to test for possible evolution of SNe Ia is to see whether the rise time (from explosion to maximum brightness) is the same for high-redshift and low-redshift SNe Ia; a difference might indicate that the peak luminosities are also different (Höflich et al. 1998). Riess et al. (1999c) measured the risetime of nearby SNe Ia, using data from KAIT, the Beijing Astronomical Observatory (BAO) SN search, and a few amateur astronomers. Though the exact value of the risetime is a function of peak luminosity, for typical low-redshift SNe Ia it is $20.0 \pm 0.2$ days. Riess et al. (1999b) pointed out that this differs by $5.8\sigma$ from the [*preliminary*]{} risetime of $17.5 \pm 0.4$ days reported in conferences by the SCP (Goldhaber et al. 1998a,b; Groom 1998). However, more thorough analyses of the SCP data (Aldering, Knop, & Nugent 2000; Goldhaber et al. 2001) show that the high-redshift uncertainty of $\pm 0.4$ days that the SCP originally reported was much too small because it did not account for systematic effects. The revised discrepancy with the low-redshift risetime is about $2\sigma$ or less. Thus, the apparent difference in risetimes might be insignificant. Even if the difference is real, however, its relevance to the peak luminosity is unclear; the light curves may differ only in the first few days after the explosion, and this could be caused by small variations in conditions near the outer part of the exploding white dwarf that are inconsequential at the peak.
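The significance figures quoted here are simply the difference of two means in units of the quadrature-summed standard errors; a one-line check (our own illustration, not the HZT analysis code):

```python
import math

def sigma_diff(m1, e1, m2, e2):
    """Difference of two independent measurements, in units of the
    combined (quadrature-summed) standard error."""
    return abs(m1 - m2) / math.sqrt(e1 ** 2 + e2 ** 2)

# 20.0 +/- 0.2 d (low z) vs. the preliminary 17.5 +/- 0.4 d (high z):
# 2.5 / sqrt(0.2^2 + 0.4^2) ~ 5.6 sigma, close to the quoted 5.8 sigma
# (the exact figure depends on the errors used in the original comparison).
```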
### Extinction
Our SN Ia distances have the important advantage of including corrections for interstellar extinction occurring in the host galaxy and the Milky Way. Extinction corrections based on the relation between SN Ia colors and luminosity improve distance precision for a sample of nearby SNe Ia that includes objects with substantial extinction (Riess et al. 1996a); the scatter in the Hubble diagram is much reduced. Moreover, the consistency of the measured Hubble flow from SNe Ia with late-type and early-type hosts (see §1.5.1) shows that the extinction corrections applied to dusty SNe Ia at low redshift do not alter the expansion rate from its value measured from SNe Ia in low-dust environments.
In practice, the high-redshift SNe Ia generally appear to suffer very little extinction; their $B-V$ colors at maximum brightness are normal, suggesting little color excess due to reddening. The most detailed available study is that of the SCP (Sullivan et al. 2003): they found that the scatter in the Hubble diagram is minimal for SNe Ia in early-type host galaxies, but increases for SNe Ia in late-type galaxies. Moreover, on average the SNe in late-type galaxies are slightly fainter (by $0.14 \pm 0.09$ mag) than those in early-type galaxies. Finally, at peak brightness the colors of SNe Ia in late-type galaxies are marginally redder than those in early-type galaxies. Sullivan et al. (2003) conclude that extinction by dust in the host galaxies of SNe Ia is one of the major sources of scatter in the high-redshift Hubble diagram. By restricting their sample to SNe Ia in early-type host galaxies (presumably with minimal extinction), they obtain a very tight Hubble diagram that suggests a nonzero value for $\Omega_\Lambda$ at the $5\sigma$ confidence level, under the assumption that $\Omega_{\rm total} = 1$. In the absence of this assumption, SNe Ia in early-type hosts still imply that $\Omega_\Lambda > 0$ at nearly the 98% confidence level. The results for $\Omega_\Lambda$ with SNe Ia in late-type galaxies are quantitatively similar, but statistically less secure because of the larger scatter.
Riess, Press, & Kirshner (1996b) found indications that the Galactic ratios between selective absorption and color excess are similar for host galaxies in the nearby ($z \leq 0.1$) Hubble flow. Yet, what if these ratios changed with lookback time (e.g., Aguirre 1999a)? Could an evolution in dust-grain size descending from ancestral interstellar “pebbles” at higher redshifts cause us to underestimate the extinction? Large dust grains would not imprint the reddening signature of typical interstellar extinction upon which our corrections rely.
However, viewing our SNe through such gray interstellar grains would also induce a [*dispersion*]{} in the derived distances. Using the results of Hatano, Branch, & Deaton (1998), Riess et al. (1998b) estimate that the expected dispersion would be 0.40 mag if the mean gray extinction were 0.25 mag (the value required to explain the measured MLCS distances without a cosmological constant). This is significantly larger than the 0.21 mag dispersion observed in the high-redshift MLCS distances. Furthermore, most of the observed scatter is already consistent with the estimated [*statistical*]{} errors, leaving little to be caused by gray extinction. Nevertheless, if we assumed that [*all*]{} of the observed scatter were due to gray extinction, the mean shift in the SN Ia distances would be only 0.05 mag. With the existing observations, it is difficult to rule out this modest amount of gray interstellar extinction.
Gray [*intergalactic*]{} extinction could dim the SNe without either telltale reddening or dispersion, if all lines of sight to a given redshift had a similar column density of absorbing material. The component of the intergalactic medium with such uniform coverage corresponds to the gas clouds producing Lyman-$\alpha$ forest absorption at low redshifts. These clouds have individual H I column densities less than about $10^{15} \, {\rm cm^{-2}}$ (Bahcall et al. 1996). However, they display low metallicities, typically less than 10% of solar. Gray extinction would require large dust grains, which need a greater mass in heavy elements than typical interstellar grain-size distributions to achieve a given extinction. It is possible that large dust grains are blown out of galaxies by radiation pressure, and are therefore not associated with Lyman-$\alpha$ clouds (Aguirre 1999b).
But even the dust postulated by Aguirre (1999a,b) and Aguirre & Haiman (1999) is not [*completely*]{} gray, having a size of about 0.1 $\mu$m. We can test for such nearly gray dust by observing high-redshift SNe Ia over a wide wavelength range to measure the color excess it would introduce. If $A_V =
0.25$ mag, then $E(U-I)$ and $E(B-I)$ should be 0.12–0.16 mag (Aguirre 1999a,b). If, on the other hand, the 0.25 mag faintness is due to $\Lambda$, then no such reddening should be seen. This effect is measurable using proven techniques; so far, with just one SN Ia (SN 1999Q; Fig. 1.6, [*right*]{}), our results favor the no-dust hypothesis to better than 2$\sigma$ (Riess et al. 2000). More work along these lines is in progress.
### The Smoking Gun
Suppose, however, that for some reason the dust is [*very*]{} gray, or our color measurements are not sufficiently precise to rule out Aguirre’s (or other) dust. Or, perhaps some other astrophysical systematic effect is fooling us, such as possible evolution of the white dwarf progenitors (e.g., Höflich et al. 1998; Umeda et al. 1999), or gravitational lensing (Wambsganss, Cen, & Ostriker 1998). The most decisive test to distinguish between $\Lambda$ and cumulative systematic effects is to examine the [*deviation*]{} of the observed peak magnitude of SNe Ia from the magnitude expected in the low-$\Omega_m$, zero-$\Lambda$ model. If $\Lambda$ is positive, the deviation should actually begin to [*decrease*]{} at $z \approx 1$; we will be looking so far back in time that the $\Lambda$ effect becomes small compared with $\Omega_m$, and the Universe is decelerating at that epoch. If, on the other hand, a systematic bias such as gray dust or evolution of the white dwarf progenitors is the culprit, we expect that the deviation of the apparent magnitude will continue growing, unless the systematic bias is set up in such an unlikely way as to mimic the effects of $\Lambda$ (Drell et al. 2000). A turnover, or decrease of the deviation of apparent magnitude at high redshift, can be considered the “smoking gun” of $\Lambda$.
In a wonderful demonstration of good luck and hard work, Riess et al. (2001) report on [*HST*]{} observations of a probable SN Ia at $z \approx 1.7$ (SN 1997ff, the most distant SN ever observed) that suggest the expected turnover is indeed present, providing a tantalizing glimpse of the epoch of deceleration. (See also Benítez et al. 2002, which corrects the observed magnitude of SN 1997ff for gravitational lensing.) SN 1997ff was discovered by Gilliland & Phillips (1998) in a repeat [*HST*]{} observation of the Hubble Deep Field–North, and serendipitously monitored in the infrared with [*HST*]{}/NICMOS. The peak apparent SN brightness is consistent with that expected in the decelerating phase of the concordance cosmological model, $\Omega_m
\approx 0.3$, $\Omega_\Lambda \approx 0.7$ (Fig. 1.7). It is inconsistent with gray dust or simple luminosity evolution, when combined with the data for SNe Ia at $z \approx 0.5$. On the other hand, it is wise to remain cautious: the error bars are large, and it is always possible that we are being fooled by this one object. The HZT and SCP currently have programs to find and measure more SNe Ia at such high redshifts. For example, SN candidates at very high redshifts (e.g., Giavalisco et al. 2002) have been found by “piggybacking” on the Great Observatories Origins Deep Survey (GOODS) being conducted with the Advanced Camera for Surveys aboard [*HST*]{}.
Less ambitious programs, concentrating on SNe Ia at $z \gtrsim 0.8$, have already been completed (HZT; Tonry et al. 2003) or are nearing completion (SCP). Tonry et al. (2003) measured several SNe Ia at $z \approx 1$, and their deviation of apparent magnitude from the low-$\Omega_m$, zero-$\Lambda$ model is roughly the same as that at $z \approx 0.5$, in agreement with expectations based on the results of Riess et al. (2001). Moreover, the new sample of high-redshift SNe Ia presented by Tonry et al., analyzed with methods distinct from (but similar to) those used previously, confirms the result of Riess et al. (1998b) and Perlmutter et al. (1999) that the expansion of the Universe is accelerating. By combining all of the available data sets, Tonry et al. are able to use 230 SNe Ia, and they place the following constraints on cosmological quantities. (1) If the equation of state parameter of the dark energy is $w = -1$, then $H_0 t_0 = 0.96 \pm 0.04$, and $\Omega_\Lambda - 1.4
\Omega_m = 0.35 \pm 0.14.$ (2) Including the constraint of a flat universe, they find that $\Omega_m = 0.28 \pm 0.05$, independent of any large-scale structure measurements. (3) Adopting a prior based on the 2dFGRS constraint on $\Omega_m$ (Percival et al. 2001) and assuming a flat universe, they derive that $-1.48 < w < -0.72$ at 95% confidence. These constraints are similar in precision and in value to very recent conclusions reported using [*WMAP*]{} (Spergel et al. 2003), also in combination with the 2dFGRS. Complete details on the SN Ia results, as well as figures, can be found in Tonry et al. (2003).
### Measuring the Dark Energy Equation of State
Every energy component in the Universe can be parameterized by the way its density varies as the Universe expands (scale factor $a$), with $\rho \propto
a^{-3(1+w)}$, and $w$ reflects the component’s equation of state, $w = P/(\rho
c^2)$, where $P$ is the pressure exerted by the component. So for matter, $w=0$, while an energy component that does not vary with scale factor has $w=-1$, as in the cosmological constant $\Lambda$. Some really strange energies may have $w < -1$: their density increases with time (Carroll, Hoffman, & Trodden 2003)! Clearly, a good estimate of $w$ becomes the key to differentiating between models.
The CMB observations imply that the geometry of the universe is close to flat, so the energy density of the dark component is simply related to the matter density by $\Omega_x = 1 - \Omega_m$. This allows the luminosity distance as a function of redshift to be written as $$D_L(z)\; =\; \frac{c\,(1+z)}{H_0}\int_0^{z}\frac{\left[1+\Omega_x\left((1+z')^{3w}-1\right)\right]^{-1/2}}{(1+z')^{3/2}}\; dz' \; ,$$ showing that the dark energy density and equation of state directly influence the apparent brightness of standard candles. As demonstrated graphically in Figure 1.8 ([*left*]{}), SNe Ia observed over a wide range of redshifts can constrain the dark energy parameters to a cosmologically interesting accuracy.
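A minimal sketch of evaluating this luminosity-distance integral numerically (our own illustration; the names are ours, and a constant $w$ is assumed):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def lum_dist_mpc(z, h0, omega_x, w, n=20000):
    """Luminosity distance (Mpc) in a flat universe, from the integral in
    the text: D_L = c(1+z)/H0 * Int_0^z [1 + Omega_x((1+z')^{3w} - 1)]^{-1/2}
    / (1+z')^{3/2} dz'  (midpoint rule, constant w)."""
    total = 0.0
    for i in range(n):
        zp = (i + 0.5) * z / n
        total += ((1.0 + omega_x * ((1.0 + zp) ** (3.0 * w) - 1.0)) ** -0.5
                  / (1.0 + zp) ** 1.5) * (z / n)
    return C_KM_S * (1.0 + z) / h0 * total

# At z = 0.5 with Omega_x = 0.7, changing w from -1 to -0.7 shifts D_L by
# only ~5% (~0.1 mag), which is why precise measurements of w are so hard.
```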
But there are two major problems with using SNe Ia to measure $w$. First, systematic uncertainties in SN Ia peak luminosity limit how well $D_L(z)$ can be measured. While statistical uncertainty can be arbitrarily reduced by finding thousands of SNe Ia, intrinsic SN properties such as evolution and progenitor metallicity, and observational limits like photometric calibrations and K-corrections, create a systematic floor that cannot be decreased by sheer force of numbers. We expect that systematics can be controlled to at best 3%.
Second, SNe at $z > 1.0$ are very hard to discover and study from the ground. As discussed above, both the HZT and the SCP have found a few SNe Ia at $z > 1.0$, but the numbers and quality of these light curves are insufficient for a $w$ measurement. Large numbers of SNe Ia at $z > 1.0$ are best left to a wide-field optical/infrared imager in space, such as the proposed [*Supernova/ Acceleration Probe*]{} ([*SNAP*]{}; Nugent et al. 2000) satellite.
Fortunately, an interesting measurement of $w$ can be made at present. The current values of $\Omega_m$ from many methods (most recently [*WMAP*]{}: 0.27; Spergel et al. 2003) make an excellent substitute for those expensive SNe at $z
> 1.0$. Figure 1.8 ([*left*]{}) shows that a SN Ia sample with a maximum redshift of $z =
0.8$, combined with the current 10% error on $\Omega_m$, will do as well as a SN Ia sample at much higher redshifts. Within a few years, the Sloan Digital Sky Survey and [*WMAP*]{} will solidify the estimate of $\Omega_m$ and sharpen $w$ further.
Both the SCP and the HZT are involved in multi-year programs to discover and monitor hundreds of SNe Ia for the purpose of measuring $w$. For example, the HZT’s project, ESSENCE (Equation of State: SupErNovae trace Cosmic Expansion), is designed to discover 200 SNe Ia evenly distributed in the $0.2 < z < 0.8$ range. The CTIO 4-m telescope and mosaic camera are being used to find and follow the SNe by imaging on every other dark night for several consecutive months of the year. Keck and other large telescopes are being used to get the SN spectra and redshifts. Project ESSENCE will eventually provide an estimate of $w$ to an accuracy of $\sim$10% (Fig. 1.8, [*right*]{}).
Farther in the future, large numbers of SNe Ia to be found by the [*SNAP*]{} satellite and the Large-area Synoptic Survey Telescope (the “Dark Matter Telescope”; Tyson & Angel 2001) could reveal whether the value of $w$ depends on redshift, and hence should give additional constraints on the nature of the dark energy. High-redshift surveys of galaxies such as DEEP2 (Davis et al. 2001), as well as space-based missions to map the CMB ([*Planck*]{}), should provide additional evidence for (or against) $\Lambda$. Observational cosmology promises to remain exciting for quite some time!
I thank all of my HZT collaborators for their contributions to our team’s research, and members of the SCP for their seminal complementary work on the accelerating Universe. My group’s work at U.C. Berkeley has been supported by NSF grant AST–9987438, as well as by grants GO–7505, GO/DD–7588, GO–8177, GO–8641, GO–9118, and GO–9352 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. Many spectra of high-redshift SNe were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. KAIT has received donations from Sun Microsystems, Inc., the Hewlett-Packard Company, AutoScope Corporation, Lick Observatory, the National Science Foundation, the University of California, and the Sylvia and Jim Katzman Foundation.
Aguirre, A. N. 1999a, ApJ, 512, L19
——. 1999b, ApJ, 525, 583
Aguirre, A. N., & Haiman, Z. 1999, ApJ, 525, 583
Aldering, G., Knop, R., & Nugent, P. 2000, AJ, 119, 2110
Bahcall, J. N., et al. 1996, ApJ, 457, 19
Bahcall, N. A., Ostriker, J. P., Perlmutter, S., & Steinhardt, P. J. 1999, Science, 284, 1481
Balbi, A., et al. 2000, ApJ, 545, L1
Benítez, N., Riess, A., Nugent, P., Dickinson, M., Chornock, R., & Filippenko, A. V. 2002, ApJ, 577, L1
Branch, D. 1981, ApJ, 248, 1076
——. 1998, ARA&A, 36, 17
Branch, D., Fisher, A., & Nugent, P. 1993, AJ, 106, 2383
Branch, D., & Miller, D. L. 1993, ApJ, 405, L5
Branch, D., Romanishin, W., & Baron, E. 1996, ApJ, 465, 73 (erratum: 467, 473)
Branch, D., & Tammann, G. A. 1992, ARA&A, 30, 359
Caldwell, R. R., Davé, R., & Steinhardt, P. J. 1998, Ap&SS, 261, 303
Cappellaro, E., Turatto, M., Tsvetkov, D. Yu., Bartunov, O. S., Pollas, C., Evans, R., & Hamuy, M. 1997, A&A, 322, 431
Carroll, S. M., Hoffman, M., & Trodden, M. 2003, astro-ph/0301273
Carroll, S. M., Press, W. H., & Turner, E. L. 1992, ARA&A, 30, 499
Chaboyer, B., Demarque, P., Kernan, P. J., & Krauss, L. M. 1998, ApJ, 494, 96
Coil, A. L., et al. 2000, ApJ, 544, L111
Cowan, J. J., McWilliam, A., Sneden, C., & Burris, D. L. 1997, ApJ, 480, 246
Davis, M., Newman, J. A., Faber, S. M., & Phillips, A. C. 2001, in Deep Fields, ed. S. Cristiani, A. Renzini, & R. E. Williams (Berlin: Springer), 241
de Bernardis, P., et al. 2000, Nature, 404, 955
——. 2002, ApJ, 564, 559
Drell, P. S., Loredo, T. J., & Wasserman, I. 2000, ApJ, 530, 593
Efstathiou, G., et al. 1999, MNRAS, 303, L47
——. 2002, MNRAS, 330, L29
Eisenstein, D. J., Hu, W., & Tegmark, M. 1998, ApJ, 504, L57
Filippenko, A. V. 1997a, in Thermonuclear Supernovae, ed. P. Ruiz-Lapuente et al. (Dordrecht: Kluwer), 1
——. 1997b, ARA&A, 35, 309
——. 2001, PASP, 113, 1441
Filippenko, A. V., et al. 1992a, AJ, 104, 1543
——. 1992b, ApJ, 384, L15
Filippenko, A. V., Li, W. D., Treffers, R. R., & Modjaz, M. 2001, in Small-Telescope Astronomy on Global Scales, ed. W. P. Chen, C. Lemme, & B. Paczyński (San Francisco: ASP), 121
Filippenko, A. V., & Riess, A. G. 1998, Phys. Rep., 307, 31
Ford, C. H., et al. 1993, AJ, 106, 1101
Garnavich, P., et al. 1998a, ApJ, 493, L53
——. 1998b, ApJ, 509, 74
Giavalisco, M., et al. 2002, IAUC 7981
Gilliland, R. L., & Phillips, M. M. 1998, IAUC 6810
Goldhaber, G., et al. 1997, in Thermonuclear Supernovae, ed. P. Ruiz-Lapuente et al. (Dordrecht: Kluwer), 777
——. 1998a, BAAS, 30, 1325
——. 1998b, in Gravity: From the Hubble Length to the Planck Length, SLAC Summer Institute (Stanford, CA: SLAC)
——. 2001, ApJ, 558, 359
Goldhaber, G., & Perlmutter, S. 1998, Phys. Rep., 307, 325
Goobar, A., & Perlmutter, S. 1995, ApJ, 450, 14
Gratton, R. G., Fusi Pecci, F., Carretta, E., Clementini, G., Corsi, C. E., & Lattanzi, M. 1997, ApJ, 491, 749
Groom, D. E. 1998, BAAS, 30, 1419
Hamuy, M., Phillips, M. M., Maza, J., Suntzeff, N. B., Schommer, R. A., & Aviles, R. 1995, AJ, 109, 1
——. 1996a, AJ, 112, 2391
——. 1996b, AJ, 112, 2398
Hamuy, M., Trager, S. C., Pinto, P. A., Phillips, M. M., Schommer, R. A., Ivanov, V., & Suntzeff, N. B. 2000, AJ, 120, 1479
Hanany, S., et al. 2000, ApJ, 545, L5
Hancock, S., Rocha, G., Lazenby, A. N., & Gutiérrez, C. M. 1998, MNRAS, 294, L1
Hatano, K., Branch, D., & Deaton, J. 1998, ApJ, 502, 177
Höflich, P., Wheeler, J. C., & Thielemann, F. K. 1998, ApJ, 495, 617
Hoyle, F., Burbidge, G., & Narlikar, J. V. 2000, A Different Approach to Cosmology (Cambridge: Cambridge Univ. Press)
Ivanov, V. D., Hamuy, M., & Pinto, P. A. 2000, ApJ, 542, 588
Kim, A., Goobar, A., & Perlmutter, S. 1996, PASP, 108, 190
Leibundgut, B., et al. 1993, AJ, 105, 301
——. 1996, ApJ, 466, L21
Leonard, D. C., et al. 2002a, PASP, 114, 35 (erratum: 114, 1291)
——. 2002b, AJ, 124, 2490
Li, W., et al. 2000, in Cosmic Explosions, ed. S. S. Holt & W. W. Zhang (New York: AIP), 103
——. 2001a, PASP, 113, 1178
——. 2003, PASP, 115, 453
Li, W., Filippenko, A. V., Treffers, R. R., Riess, A. G., Hu, J., & Qiu, Y. 2001b, ApJ, 546, 734
Lineweaver, C. H. 1998, ApJ, 505, L69
Lineweaver, C. H., & Barbosa, D. 1998, ApJ, 496, 624
Matheson, T., Filippenko, A. V., Li, W., Leonard, D. C., & Shields, J. C. 2001, AJ, 121, 1648
Modjaz, M., Li, W., Filippenko, A. V., King, J. Y., Leonard, D. C., Matheson, T., Treffers, R. R., & Riess, A. G. 2001, PASP, 113, 308
Narlikar, J. V., & Arp, H. C. 1997, ApJ, 482, L119
Netterfield, C. B., et al. 2002, ApJ, 571, 604
Nomoto, K., Umeda, H., Hachisu, I., Kato, M., Kobayashi, C., & Tsujimoto, T. 2000, in Type Ia Supernovae: Theory and Cosmology, ed. J. C. Niemeyer & J. W. Truran (Cambridge: Cambridge Univ. Press), 63
Norgaard-Nielsen, H., et al. 1989, Nature, 339, 523
Nugent, P. 2000, in Particle Physics and Cosmology: Second Tropical Workshop, ed. J. F. Nieves (New York: AIP), 263
Nugent, P., Kim, A., & Perlmutter, S. 2002, PASP, 114, 803
Nugent, P., Phillips, M., Baron, E., Branch, D., & Hauschildt, P. 1995, ApJ, 455, L147
Ostriker, J. P., & Steinhardt, P. J. 1995, Nature, 377, 600
Oswalt, T. D., Smith, J. A., Wood, M. A., & Hintzen, P. 1996, Nature, 382, 692
Pain, R., et al. 2002, ApJ, 577, 120
Peacock, J. A., et al. 2001, Nature, 410, 169
Percival, W., et al. 2001, MNRAS, 327, 1297
Perlmutter, S., et al. 1995a, ApJ, 440, L41
——. 1995b, IAUC 6270
——. 1997, ApJ, 483, 565
——. 1998, Nature, 391, 51
——. 1999, ApJ, 517, 565
Phillips, M. M. 1993, ApJ, 413, L105
Phillips, M. M., et al. 1992, AJ, 103, 1632
Pskovskii, Yu. P. 1977, Sov. Astron., 21, 675
——. 1984, Sov. Astron., 28, 658
Riess, A. G., et al. 1997, AJ, 114, 722
——. 1998b, AJ, 116, 1009
——. 1999a, AJ, 117, 707
——. 1999c, AJ, 118, 2675
——. 2000, ApJ, 536, 62
——. 2001, ApJ, 560, 49
Riess, A. G., Filippenko, A. V., Li, W. D., & Schmidt, B. P. 1999b, AJ, 118, 2668
Riess, A. G., Nugent, P. E., Filippenko, A. V., Kirshner, R. P., & Perlmutter, S. 1998a, ApJ, 504, 935
Riess, A. G., Press, W. H., & Kirshner, R. P. 1995, ApJ, 438, L17
——. 1996a, ApJ, 473, 88
——. 1996b, ApJ, 473, 588
Saha, A., et al. 1997, ApJ, 486, 1
Sandage, A., et al. 1996, ApJ, 460, L15
Sandage, A., & Tammann, G. A. 1993, ApJ, 415, 1
Schmidt, B. P., et al. 1998, ApJ, 507, 46
Spergel, D. N., et al. 2003, ApJ, in press (astro-ph/0302209)
Sullivan, M., et al. 2003, MNRAS, in press (astro-ph/0211444)
Suntzeff, N. 1996, in Supernovae and Supernova Remnants, ed. R. McCray & Z. Wang (Cambridge: Cambridge Univ. Press), 41
Suntzeff, N., et al. 1996, IAUC 6490
Tonry, J. L., et al. 2003, ApJ, in press (astro-ph/0305008)
Tripp, R. 1997, A&A, 325, 871
——. 1998, A&A, 331, 815
Turatto, M., et al. 1996, MNRAS, 283, 1
Tyson, J. A., & Angel, R. 2001, in The New Era of Wide Field Astronomy, ed. R. Clowes et al. (San Francisco: ASP), 347
Umeda, H., et al. 1999, ApJ, 522, L43
van den Bergh, S., & Pazder, J. 1992, ApJ, 390, 34
Vaughan, T. E., Branch, D., Miller, D. L., & Perlmutter, S. 1995, ApJ, 439, 558
Wambsganss, J., Cen, R., & Ostriker, J. P. 1998, ApJ, 494, 29
Yungelson, L. R., & Livio, M. 2000, ApJ, 528, 108
Zaldarriaga, M., Spergel, D. N., & Seljak, U. 1997, ApJ, 488, 1
---
abstract: 'This letter aims at showing that the observation of evaporating black holes should allow the usual Hawking behavior to be distinguished from Loop Quantum Gravity (LQG) expectations. We present a full Monte Carlo simulation of the evaporation in LQG and statistical tests that discriminate between competing models. We conclude that contrary to what was commonly thought, the discreteness of the area in LQG leads to characteristic features that qualify evaporating black holes as objects that could reveal quantum gravity footprints.'
author:
- 'A. Barrau'
- 'X. Cao'
- 'J. Diaz-Polo'
- 'J. Grain'
- 'T. Cailleteau'
title: Probing Loop Quantum Gravity with Evaporating Black Holes
---
#### Introduction–
Loop Quantum Gravity (LQG) is a promising framework to nonperturbatively quantize General Relativity (GR) in a background-independent way (see [@lqg_review] for introductory reviews). Interestingly, it has now been demonstrated that different approaches, based either on quantizations (covariant or canonical) of GR or on a formal quantization of geometry, lead to the very same LQG theory. As for any tentative theory of quantum gravity, however, experimental tests are still missing. Trying to find possible observational signatures is obviously a key challenge. In this article we address the following question: could there be objects in the contemporary universe whose observation would lead to a clear signature of LQG? Fortunately, the answer turns out to be positive. Although small black holes have not yet been directly observed, they could have been formed by different mechanisms in the early universe (see, [*e.g.*]{}, [@carr] for a recent review) or even by particle collisions. We do not review here the well-known possible production mechanisms; instead, we focus on the evaporation of microscopic black holes and investigate the discriminating power of the emitted spectrum. Three different possible signatures will be suggested. Although one should be careful when pushing the LQG approach to black holes to the microscopic limit, our results rely on features of the area spectrum and are rather insensitive to small modifications in the theoretical framework.
#### Theoretical Framework–
The state counting for black holes in LQG relies on the isolated horizon framework (see, [*e.g.*]{}, [@diaz1] for an up-to-date detailed review). The isolated horizon is introduced as a boundary of the underlying manifold before quantization. For a given area $A$ of a Schwarzschild black hole horizon, the physical states arise from a punctured sphere whose punctures carry quantum labels (see, [*e.g.*]{}, [@diaz2] for a detailed analysis). Two labels $(j,m)$ are assigned to each puncture, $j$ being a spin half-integer carrying information about the area and $m$ being its corresponding projection carrying information about the curvature. They satisfy the conditions $$\label{eq1}
A-\Delta\leq 8\pi \gamma \ell_P^2\sum_{p=1}^N{\sqrt{j_p(j_p+1)}}\leq A+\Delta,$$ where $\gamma$ is the fundamental Barbero-Immirzi parameter of LQG, $\Delta$ is a “smearing” area and $p$ labels the different punctures, and $$\label{eq2}
\sum_{p=1}^N{m_p=0},$$ which corresponds to the requirement of a horizon with spherical topology. Many specific features of the entropy were derived in this framework [@diaz3]. Although the proportionality between the entropy and the area is indeed recovered in the classical limit (when the $\gamma$ parameter is chosen correctly), the quantum structure still leaves a clear footprint at microscopic scales.
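The counting behind Eqs. (\[eq1\])–(\[eq2\]) can be illustrated by brute force for a handful of punctures. The snippet below is only a sketch (it is not the optimized algorithm used in this work, and the value taken for $\gamma$ is merely indicative): it evaluates the area eigenvalue for a given spin configuration and enumerates the $m$-assignments satisfying the projection constraint.

```python
from itertools import product
from math import pi, sqrt

GAMMA = 0.274  # Barbero-Immirzi parameter; indicative value, assumed here

def area(js):
    """Horizon-area eigenvalue in Planck areas for punctures with spins js:
    8*pi*gamma*sum_p sqrt(j_p(j_p+1)) (Eq. (1) without the smearing Delta)."""
    return 8 * pi * GAMMA * sum(sqrt(j * (j + 1)) for j in js)

def m_values(j):
    """Allowed projections m = -j, -j+1, ..., j for a half-integer spin j."""
    k = int(round(2 * j))
    return [half / 2 for half in range(-k, k + 1, 2)]

def count_states(js):
    """Number of m-assignments with sum_p m_p = 0 (Eq. (2), spherical horizon)."""
    return sum(1 for ms in product(*(m_values(j) for j in js))
               if abs(sum(ms)) < 1e-9)
```

For example, two spin-$1/2$ punctures admit two projection-balanced states, while three spin-$1/2$ punctures admit none; the exponential growth of such counts with area is what produces the entropy.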
Long ago, Bekenstein and Mukhanov postulated that due to quantum-gravitational effects the area of a black hole should be proportional to a fundamental area of the order of the Planck area [@bek] (the argument has recently been updated in [@gia]). This led to the idea of possible exciting probes of quantum gravity through associated lines in the evaporation spectrum. However, following the pioneering work of Rovelli [@rovelli1], it was soon realized that the situation is drastically different in LQG, where the spacing of the energy levels decreases exponentially with the energy, therefore closing any hope for detection [@rovelli2]. In (the first paper of) [@diaz3] a possible observational effect was suggested, based on an exact computation of the entropy and the observation of an effective discretization of it. In this letter we readdress this issue and show that at least three different signatures can in fact be expected. Two of them are, as might be expected, related to “Planck scale” black holes, whereas the last one also works for larger black holes.\
![Spectrum of emitted particles in LQG, in the pure Hawking case, and in the Mukhanov-Bekenstein approach, from top to bottom.[]{data-label="fig1"}](speclin.eps)
#### Emission Lines in the Planck Regime–
We first consider the evaporation of a black hole in the deep quantum regime. To this aim, we have developed a dedicated and optimized algorithm. It is based on the ideas given in [@diaz1] and was enhanced with an efficient enumeration scheme using a breadth-first search. As the projection constraint is very time consuming, this improvement is mandatory to perform the computation up to high enough Planck areas. The evaporation is considered both according to the pure Hawking law and according to the LQG theory. In each case, we model the evaporation by expressing the transition probability as the exponential of the entropy difference multiplied by the graybody factor. Arguments for the reliability and generality of this approach are given in [@renaud]. As can be seen from Fig. \[fig1\], some specific lines associated with the transitions occurring in the very last stages of the evaporation can be identified in the LQG spectrum, whereas the pure Hawking spectrum is naturally smooth. Two subtle points have to be taken into account. First, the usually assumed “optical limit” of the graybody factors induces a heavy distortion of the spectrum. The use of exact graybody factors, obtained by solving the quantum wave equations in the curved background of the black hole, is in this case mandatory. To be maximally conservative, we have used the very same graybody factors in the Hawking case and in the LQG case. Any difference, as could possibly be expected due to an LQG-inspired metric modification (see, [*e.g.*]{}, [@alesci]), would only make the discrimination between models easier. We have also assumed that the Hawking evaporation stops at the same mass as expected in LQG (namely $0.4~M_{Pl}$), once again to be as conservative as possible. Second, even if one focuses on the “high energy” emission, say above $0.15~M_{Pl}$, the contribution from states with a lower temperature is far from being negligible.
We have therefore pushed the computation of the area states, together with their multiplicities, up to $200~A_{Pl}$ to ensure that the number of missed quanta remains below a few percent.\
Several Monte-Carlo simulations were carried out to estimate the circumstances under which the discrimination between LQG and the standard behavior is possible. At each step, the energy of the emitted particle is randomly drawn according to the relevant statistics and to the (spin-dependent) graybody factor. Most simple statistical tests fail to capture the intricate nature of the specific LQG features. We have therefore chosen to use a (slightly improved) Kolmogorov-Smirnov (K-S) test. The K-S statistic quantifies the distance between the cumulative distribution functions of the distributions and can be used for a systematic study of possible discrimination (see, [*e.g.*]{}, [@sho]). By investigating the K-S excess as a function of the energy, we have optimized the relevant interval for each relative error. As the latter is assumed to be known, it is meaningful to use it as an input for the statistical procedure. Figure \[fig2\] shows the number of black holes that should be observed, for different confidence levels and as a function of the relative error on the energy reconstruction, to discriminate the models. Clearly, with either enough black holes or a relatively small error, a discrimination is possible, therefore leading to a clear LQG signature. To remain maximally conservative, we have only considered emitted leptons. For a detector located near the black hole, and due to the huge Lorentz factors, the electrons, muons and taus can be considered as stable, whereas quarks do not have enough time to fragment into hadrons.
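The K-S statistic used above measures the maximum distance between empirical cumulative distribution functions. Our actual test is a slightly improved variant; the sketch below only implements the textbook two-sample statistic, with tie handling.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance between
    the empirical cumulative distribution functions of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] == x:  # advance through ties before comparing
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d
```

A sample-size-dependent threshold, e.g. rejecting equality when $d > c(\alpha)\sqrt{(n_a+n_b)/(n_a n_b)}$ with $c(0.05)\approx 1.36$, then sets the confidence level of the discrimination.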
For the sake of completeness, we have finally implemented a K-S test between the LQG spectrum and the Bekenstein-Mukhanov one. Once again, the discrimination is possible with an even smaller number of black holes as the lines are sitting at clearly different places.
Even if the Hawking and Mukhanov hypotheses are not expected to be reliable in the Planck era, this analysis shows that a discrimination between LQG and other tentative approaches is possible.\
![Number of evaporating black holes that have to be observed as a function of the relative error on the energy reconstruction of the emitted leptons for different confidence levels (the gray scale corresponds to the number of standard deviations). The first row corresponds to the discrimination between LQG and the Hawking hypothesis and the second row between LQG and the Mukhanov-Bekenstein hypothesis.[]{data-label="fig2"}](discrimination.eps)
#### Low-energy Emission in the Planck Regime–
There is a second specific feature associated with the end point of the evaporation process. In LQG, the last transitions take place at definite energies, of the order of the Planck scale, associated with the final lines of the mass spectrum. On the other hand, in the usual Hawking picture, the most natural way to implement a minimal mass is to assume a truncation of the standard spectrum ensuring energy conservation. Even if no minimal mass is assumed, the spectrum has to be truncated to ensure that the black hole does not emit more energy than it has. This is also the case in some string gravity models [@alex]. This leads to the important consequence that the energy of the emitted quanta will progressively decrease and asymptotically tend to zero. It is possible to distinguish this “low-energy” emission associated with the end point from the (much more numerous) “low-energy” particles emitted before (when the black hole temperature was lower) thanks to the dynamics of the process. For example, as soon as one considers $\gamma$-rays with energies lower than $8\times 10^5$ GeV, the “end point” emission will take place at least 100 $\mu$s after the initial emission, making both signals easily distinguishable. Those “relic” quanta will be emitted with mean energies decreased by a factor 1/4 at each step (for scalars and fermions). The time interval between consecutive emissions will typically increase with decreasing energies as $E^{-3}$. At 100 TeV, the mean interval is around 1 s. This feature of the “standard” spectrum is therefore very different from the absence of low-energy particles expected in the LQG case.
This probe should however be considered with care as it is less reliable than the two other ones suggested in this work, being dependent on the specific assumption made for the evaporation end point in the Hawking case.
#### Peaks in the Higher-Mass Regime–
Up to now, the analysis was mostly focused on lines associated with the discreteness of the area, as can be seen in Fig. \[fig1\]. However, LQG-specific features also lead to broader peaks in the spectrum, with a clear pseudoperiodicity, as shown in Fig. \[fig3\]. Those peaks are associated with the “large scale” structure of the area spectrum. This periodicity has been discussed in much detail (see [@diaz1] and references therein). We have observed this behavior up to 200 $A_{Pl}$ with an exact computation of the area eigenvalues and we have checked it up to 400 $A_{Pl}$ with a dedicated Markov chain Monte Carlo (MCMC) algorithm. Although some recent arguments suggest that this periodicity is damped for higher masses [@barbero2], they cannot rule out the possibility of a “revival” of the periodicity at larger areas (or even in the asymptotic limit), so it is relevant to study the possible observational effects that this periodicity would have in the macroscopic regime, in agreement with the assumption made in most of the literature on the subject. We here assume that it remains valid up to arbitrarily large masses. This is not an unavoidable prediction of LQG, but it is clearly a possibility that arises, to the best of our knowledge, only in this framework. This makes it a potentially interesting probe. The mean area gap $dA$ between peaks can be shown to be independent of the scale. As, for a Schwarzschild black hole, $dA=32\pi M\,dM$ and $T=1/(8\pi M)$ (in Planck units), this straightforwardly means that $dM/T=\mathrm{const}$, where $dM$ refers to the mass gap between peaks. This is the key point for detection: in units of temperature, which is the natural energy scale for the emitted quanta, the mass gap does [*not*]{} decrease for increasing masses. Any observable feature associated with this pseudoperiodicity can therefore be searched for through larger black holes.
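In Planck units ($G=\hbar=c=1$), the constancy of $dM/T$ follows in one line from the Schwarzschild relations used above:

```latex
A = 16\pi M^{2} \;\Rightarrow\; dA = 32\pi M\, dM,
\qquad
T = \frac{1}{8\pi M} \;\Rightarrow\;
\frac{dM}{T} = 8\pi M \cdot \frac{dA}{32\pi M} = \frac{dA}{4},
```

which is indeed scale independent, since the mean area gap $dA$ between peaks does not depend on the mass.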
![Instantaneous spectrum of a $\sim 100~{\rm keV}$ black hole taking into account the LQG modulation of the entropy.[]{data-label="fig3"}](spon.eps)
This opens up the question of a possible detection of LQG effects with evaporating primordial black holes (PBHs) in astrophysical circumstances. If PBHs were formed with a continuous mass spectrum $n_i(M_i)$, where the subscript $i$ stands for initial values, it is now deformed according to $n(M)\propto M^2$ for $M<M_*$ and $n(M)\approx n_i(M)$ for $M>M_*$ where $M_*\approx 10^{15}$g is the initial mass of a black hole whose lifetime is of the order of the age of the Universe. This is just due to the Hawking evaporation leading to ${\rm
d}M/{\rm d}t\propto M^{-2}$. In such a case, it is easy to show that the peak structure of the instantaneous spectrum will be immediately washed out. The convolution of the individual spectra with the mass distribution will lead to a Hawking-like $E^{-3}$ integrated spectrum. We have checked this expected behavior with a Monte Carlo simulation. It should also be pointed out that the peak structure of the “end-of-the-life” spectrum, which is superimposed with the lines, is [*not*]{} due to the pseudoperiodic structure of the entropy but to transitions to the last states, [*i.e.*]{}, to the discreteness of the area eigenvalues.
However, this does not at all close the issue of observing LQG features with astrophysical PBHs. The continuous mass spectrum (typically scaling as $M^{-5/2}$) was a hypothesis historically associated with a possible high normalization of the primordial power spectrum (or a very blue tilt) which is ruled out by CMB observations. Realistic models for PBH formation are now associated with phase transitions (see, [*e.g.*]{}, [@kar]) or other phenomena leading to black holes formed at a given mass $M_c$. If this mass is smaller than $M_*$, those black holes have already disappeared. If $M_c>M_*$, that is if the horizon mass at the formation time was larger than $10^{15}$g, those black holes are evaporating so slowly that their mass has hardly changed. As not only the mass loss rate but also the area loss rate decreases with the mass (${\rm d}A/{\rm d}t\propto 1/M$), the peak structure exhibited in Fig. \[fig3\] should be observed from such black holes. In this case, the instantaneous spectrum, together with its peak structure, can directly be probed. If the mass is higher than typically $10^{17}$g, the black hole will emit only massless particles, that is photons ($\sim 12\%$) and neutrinos ($\sim 88\%$). The electromagnetic signal is no longer contaminated by $\gamma$-rays from the decay of neutral pions, as quarks cannot be emitted. Although the redshift integration will slightly smear out the structures, a very clean signature can therefore be expected, as no mass integration is involved in the observed signal. In addition, one can show that the total number of photons received per second by a detector of area $S$ can be written as $\Phi\sim 10^4\times\frac{\rho_{PBH}}{\rho_c}\times\left(\frac{10^{17}~{\rm g}}{M}\right)^2\times S$ where $\rho_c=3H^2/8\pi G$ is the “cosmological” critical density and $\rho_{PBH}$ is the density of primordial black holes. This leads to a macroscopic signal for quite a large range of masses and densities.
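The flux estimate can be evaluated directly. The helper below simply encodes the quoted formula; the overall $10^4$ normalization and the units of the detector area $S$ are taken from the text as-is and not rederived here.

```python
def photon_flux(rho_ratio, mass_g, detector_area):
    """Photon count rate from the estimate in the text:
    Phi ~ 1e4 * (rho_PBH / rho_c) * (1e17 g / M)^2 * S.
    rho_ratio = rho_PBH / rho_c, mass_g in grams; units of S follow the text."""
    return 1e4 * rho_ratio * (1e17 / mass_g) ** 2 * detector_area
```

For instance, a PBH population at $10^{-4}$ of the critical density with $M = 10^{17}$ g already yields of order one photon per second per unit detector area, illustrating the claimed macroscopic signal.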
#### Conclusion–
In this letter, we have shown that the specific features of the area of black holes in loop quantum gravity can lead to observational signatures. Although detecting evaporating black holes is in itself a challenge, we have established that footprints of the underlying quantum gravity theory might indeed be observed in this way. This opens a possible new window to probe LQG.
#### Acknowledgements–
We would like to thank Adeline Choyer with whom this study was initiated. This work was partially funded by the NSF Grants No. PHY0854743 and No. PHY0968871, the Eberly research funds of Penn State, and Spanish MICINN Grant No. ESP2007-66542-C04-01.
[99]{}
C. Rovelli, arXiv:1102.3660v5; C. Rovelli, [*Quantum Gravity*]{}, Cambridge, Cambridge University Press, 2004; C. Rovelli, Living Rev. Relativity, 1, 1 (1998); L. Smolin, arXiv:hep-th/0408048v3; T. Thiemann, Lect. Notes Phys., 631, 41 (2003); T. Thiemann, [*Modern Canonical Quantum General Relativity*]{}, Cambridge, Cambridge University Press, 2007; A. Perez, arXiv:gr-qc/0409061v3; P. Dona and S. Speziale, arXiv:1007.0402v1
B.J. Carr [*et al.*]{}, Phys. Rev. D, 81, 104019 (2010)
I. Agullo [*et al.*]{}, Phys. Rev. D, 82, 084029 (2010)
A. Ashtekar, J.C. Baez, A. Corichi, and K. Krasnov, Phys. Rev. Lett, 80, 904 (1998); A. Ashtekar, J.C. Baez, K. Krasnov, Adv. Theor. Math. Phys. 4, 1 (2000); A. Corichi, J. Diaz-Polo, and E. Fernandez-Borja, Class. Quantum Grav. 24, 243 (2007); A. Corichi, J. Diaz-Polo and E. Fernandez-Borja, Phys. Rev. Lett. 98, 181301 (2007)
J. Diaz-Polo and E. Fernandez-Borja, Class. Quantum Grav., 25, 105007 (2008); I. Agullo, J. Diaz-Polo, and E. Fernandez-Borja, Phys. Rev. D, 77, 105024 (2008); I. Agullo [*et al.*]{}, Phys. Rev. D, 80, 084006 (2009)
J. Bekenstein and V. Mukhanov, Phys. Lett. B, 360, 7 (1995)
G. Dvali, C. Gomez & S. Mukhanov, arXiv:1107.0870v1
C. Rovelli, Phys. Rev. Lett., 77, 3288 (1996)
C. Rovelli, Helv. Phys. Acta., 69, 582 (1996)
S. Massar and R. Parentani, Nucl. Phys. B, 575, 333 (2000)
E. Alesci and L. Modesto, arXiv:1101.5792v1 \[gr-qc\]
G.R. Shorack & J.A. Wellner, [*Empirical Processes With Application to Statistics*]{}, Philadelphia, Society for Industrial & Applied Mathematics, 2009.
S. Alexeyev [*et al.*]{}, Class. Quantum Grav., 19, 4431 (2002)
J.F. Barbero, E.J. Villasenor, Phys. Rev. D, 77, 121502 (2008); J.F. Barbero, G. Eduardo, E.J. Villasenor, Phys. Rev. D, 83, 104013 (2011); X. Cao and A. Barrau, arXiv:1111.1975v1
K. Jedamzik & J.C. Niemeyer, Phys. Rev. D, 59, 124014 (1999)
|
---
author:
- Maoxin Liu
- Jingfang Fan
- Liangsheng Li
- 'Xiaosong Chen[^1]'
date: 'Received: date / Revised version: date'
title: 'Continuous Percolation Phase Transitions of Two-dimensional Lattice Networks under a Generalized Achlioptas Process'
---
Introduction {#Introduction}
============
The percolation phase transition concerns the formation of a macroscopic component in systems on both lattices and networks [@StaufferBook]. It provides a model for the onset of a macroscopic component in random media [@StaufferBook] and social networks [@Solomon]. It was widely believed that the percolation transition is a typical continuous phase transition for various networks [@Dorogovtsev]. However, Achlioptas, D’Souza, and Spencer [@Achlioptas] recently found that the percolation phase transition in random networks becomes discontinuous (first-order) under the Achlioptas process (AP), where, of two randomly chosen unoccupied edges, the one with the minimum product of cluster masses is connected. The Achlioptas process suppresses the appearance of larger clusters and discourages the formation of a giant component, whose size is comparable with the number of vertices $N$. The percolation phase transition is delayed by this Achlioptas process and becomes sharper. It was argued by them [@Achlioptas] that the phase transition is discontinuous, and it was named explosive percolation. Later, the Achlioptas process was introduced to two-dimensional regular lattice networks [@ZiffPRL09] and scale-free networks [@ChoPRL09; @RadicchiPRL09]. It was claimed that explosive percolation was found both in lattices [@ZiffPRL09] and in scale-free networks [@ChoPRL09; @RadicchiPRL09].
In the article of Achlioptas et al. [@Achlioptas], the step interval $\Delta$ between the sizes of the largest component $S_1=N^{1/2}$ and $S_1=0.5N$ is used as the criterion for a continuous or discontinuous phase transition. Later, Ziff applied this criterion to two-dimensional regular lattice networks [@ZiffPRL09; @ZiffPRE10]. It was found that the size dependence of $\Delta$ in lattice networks is quite different from that in random networks. It is not well established that a first-order phase transition can be distinguished from a continuous phase transition by the size dependence of $\Delta$. It was argued by da Costa et al. [@CostaPRL10] that the explosive percolation transition, under their modified Achlioptas process, is actually continuous. Recently, Riordan et al. [@RiordanArXiv] showed mathematically that all Achlioptas processes have continuous phase transitions. The finite-size behavior of the order-parameter distribution function has been used as evidence of both discontinuous [@TianArXiv] and continuous [@GrassbergerArXiv] phase transitions. It has also been argued from finite-size scaling that explosive percolation is continuous [@RadicchiPRE10; @FortunatoArXiv]. This controversy about the character of explosive percolation is ongoing and calls for further investigations.
In this paper, we investigate the percolation phase transition in two-dimensional lattice network under a generalized Achlioptas process (GAP), which will be introduced in the next section. The generalized Achlioptas process is characterized by a probability parameter $p$. The GAP becomes the random growth model at $p=1/2$ and the minority product rule at $p=1$. Using the finite-size scaling analysis, our Monte Carlo simulation results demonstrate clearly that the percolation phase transition in two-dimensional lattice networks under the GAP is continuous. It will be shown that the critical exponents and therefore the universality class of the continuous percolation phase transition depend on the probability parameter $p$.
Our paper is organized as follows. In the next section, we introduce a generalized Achlioptas process in two-dimensional lattice network. In Section 3, we investigate the critical points of two-dimensional lattice network under the GAP and their critical exponents with the use of finite-size scaling. In Section 4, the finite-size scaling function of the ratio $S_2/S_1$ is obtained at different probability parameter $p$, where $S_2$ and $S_1$ are the size of the second largest and the largest cluster in the network. The universality class of the critical points in our model is discussed in Section 5. Finally we make some conclusions in Section 6.
Two-dimensional lattice network under the GAP {#Model}
==============================================
We consider a two-dimensional square lattice with size $L\times L$ and periodic boundary conditions in both directions. There are $N=L^2$ vertices in this lattice. We introduce a generalized Achlioptas process for adding edges to this lattice. In the generalized Achlioptas process, two edges are picked randomly at each step. Each edge connects two clusters. The edge with the minimum product of the cluster sizes is chosen and added to the lattice with a probability $p$, where $0\le p \le 1$; correspondingly, the other edge is chosen with a probability $1-p$. At $p=0.5$, the GAP is equivalent to the classic Erdös-Rényi (ER) rule, where edges are picked randomly. The two-dimensional square lattice with the ER rule is actually the two-dimensional bond percolation (BP) model. The GAP at $p=1$ is the product rule (PR) of Ref. [@Achlioptas] and our model becomes the PR model on the two-dimensional regular lattice.
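A single GAP step can be sketched with a weighted union-find structure, in the spirit of the Newman-Ziff algorithm used below; the class and function names here are ours, for illustration only.

```python
import random

class UnionFind:
    """Weighted union-find with path halving, tracking cluster sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

def gap_step(uf, edge_a, edge_b, p, rng=random):
    """One GAP step: of two candidate edges, add the one with the smaller
    product of endpoint-cluster sizes with probability p, the other with 1-p."""
    def size_product(edge):
        u, v = edge
        return uf.size[uf.find(u)] * uf.size[uf.find(v)]
    lo, hi = sorted((edge_a, edge_b), key=size_product)
    chosen = lo if rng.random() < p else hi
    uf.union(*chosen)
    return chosen
```

At $p=1$ this reduces to the product rule (the minimum-product edge is always taken); at $p=1/2$ the choice is uniformly random, i.e. the ER/bond-percolation rule.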
In our Monte Carlo simulations, there are $N$ isolated vertices in a two-dimensional lattice at the beginning and then edges are added into the lattice through the GAP. With the edges added, we obtain a network in the lattice. The lattice network can be characterized by a reduced edge number $r\equiv N_r/N$, where $N_r$ is the number of the edges added.
We have used the algorithm of Newman and Ziff [@newmann1; @newmann2] in our Monte Carlo simulations. For the investigations related only to the largest cluster of lattice networks in Figs. 3, 5 and 7, the linear sizes $L=32,~64,~128,~256,~512$, and $1024$ are taken. When the second largest cluster in the lattice is taken into account in addition, only three linear sizes $L=64,~128$, and $256$ are taken in Figs. 2, 4 and 6. To get enough samples for the averages, different numbers of steps are taken for different system sizes. In our Monte Carlo simulations, we run from $10,000,000$ steps for $L=32$ down to $6,400,000$ steps for $L=1024$.
For the cluster ranked $R$ and with size $S_R (r,L;p)$, we define its reduced size as $$\label{ratio1} s_R(r,L;p)\equiv S_R (r,L;p)/N.$$ In Fig.\[s\], we show the reduced size $s_1 (r,L;p)$ of the largest cluster.
In the Monte Carlo simulations of these data, we take the lattice size $L=1024$ and the probability parameters $p=0.5,~0.6,~0.7,~0.8,~0.9$, and $1.0$. At small $r$, the reduced size of the largest cluster is nearly zero. When $r$ is large enough, the reduced size $s_1$ becomes finite. This indicates the formation of a macroscopic component. Therefore, there is a percolation phase transition in the lattice network. The transition value of the reduced edge number $r_c$ depends on the probability parameter $p$. It is shown in Fig.\[s\] that $r_c$ increases with the probability parameter $p$. This is plausible since a larger $p$ means stronger suppression of larger clusters and therefore a later appearance of a macroscopic component. In the following, we will examine whether these percolation phase transitions are continuous.
Critical points of two-dimensional lattice networks under GAP {#critical point}
=============================================================
If the percolation phase transitions above were continuous, the reduced size of the cluster ranked $R$ should follow the finite-size scaling form [@PF1984; @privman] $$\label{s-scale}
s_R(r,L;p)=L^{-\beta/\nu}\tilde{s}_R(tL^{1/\nu};p),$$ where $t=(r-r_c)/r_c$ characterizes the deviation from the critical point $r_c$ and $\nu$ is the critical exponent of the correlation length $\xi=\xi_0 |t|^{-\nu}$. This finite-size scaling form is supposed to be valid in the asymptotic critical region where $L\gg$ lattice spacing and $|t|\ll 1$. Outside the asymptotic region, additional correction terms should be taken into account.
For the largest cluster of lattice network, we have the finite-size scaling form $$\label{eq:S1}
s_1(r,L;p)=L^{-\beta/\nu}\tilde{s}_1(tL^{1/\nu};p).$$ In the bulk limit $L \to \infty$, the reduced size of the largest cluster becomes $$s_1(r,\infty;p)=0$$ for $r < r_c$ and $$s_1(r,\infty;p)=a_p\; t^{\beta}$$ for $r > r_c$. The emergent macroscopic component is characterized by the critical exponent $\beta$. The smaller the critical exponent $\beta$ is, the larger is the macroscopic component.
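The finite-size scaling form of Eq. (\[eq:S1\]) suggests the standard data-collapse check: plotting $s_1 L^{\beta/\nu}$ against $tL^{1/\nu}$ should merge the curves of different $L$ onto a single scaling function. A minimal sketch with synthetic data follows; the stand-in scaling function and the exponent values are illustrative, not fits.

```python
from math import exp

def collapse(data, r_c, beta_over_nu, one_over_nu):
    """Rescale curves s_1(r, L) into scaling coordinates x = t L^{1/nu},
    y = s_1 L^{beta/nu}, with t = (r - r_c)/r_c; `data` maps L -> [(r, s_1)]."""
    return {L: [(((r - r_c) / r_c) * L ** one_over_nu, s * L ** beta_over_nu)
                for r, s in pts]
            for L, pts in data.items()}

def s_tilde(x):
    """Stand-in scaling function for the synthetic check below."""
    return 1.0 / (1.0 + exp(-x))

# Build synthetic s_1(r, L) obeying the scaling form exactly, then collapse it:
# the rescaled points coincide for all L (exponents here are illustrative).
r_c, bnu, inu = 0.5, 0.104, 0.75
data = {L: [(r_c * (1 + x * L ** (-inu)), L ** (-bnu) * s_tilde(x))
            for x in (-2.0, 0.0, 2.0)]
        for L in (64, 128, 256)}
collapsed = collapse(data, r_c, bnu, inu)
```

With real simulation data the collapse is of course only approximate, and its quality degrades when wrong values of $r_c$, $\beta/\nu$, or $1/\nu$ are used, which is what makes it a diagnostic.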
Near the critical point, the reduced size of the second largest cluster can be written also in a finite-size scaling form $$\label{eq:S2}
s_2(r,L;p)=L^{-\beta/\nu}\tilde{s}_2(tL^{1/\nu};p).$$ Using Eqs. (\[ratio1\]), (\[eq:S1\]), and (\[eq:S2\]), we can obtain the finite-size scaling form of the ratio $$\label{eq:S2/S1}
S_2/S_1=\tilde{s}_2(tL^{1/\nu};p)/\tilde{s}_1(tL^{1/\nu};p)\equiv U(tL^{1/\nu};p).$$ At the critical point $t=0$, the ratio $$\label{ratioc}
\left.S_2/S_1\right|_{r=r_c}=U(0;p),$$ which is independent of the system size $L$. Therefore, the curves of $S_2/S_1$ at different system sizes $L$ have a cross-point at $r=r_c$. The critical point corresponds to the fixed point of $S_2/S_1$, which can be used to determine the critical point of our system.
The logarithm of Eq. (\[eq:S1\]) can be expressed as $$\label{eq:lnS}
\ln s_1(r,L;p)=-(\beta/\nu) \ln L+\ln \tilde{s}_1(tL^{1/\nu};p).$$ At the critical point $r =r_c$, we have $$\label{eq:lnS1}
\ln s_1(r_c,L;p)=-(\beta/\nu) \ln L+\ln \tilde{s}_1(0;p),$$ which is a straight line with respect to $\ln L$. We can use this property to determine the critical point $r_c$ of the lattice networks also. From the slope of this straight line, the critical exponent ratio $\beta/\nu$ can be determined.
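The slope extraction from Eq. (\[eq:lnS1\]) is an ordinary least-squares fit in log-log coordinates. A minimal sketch (the function name is ours):

```python
from math import log

def fit_beta_over_nu(Ls, s1s):
    """Least-squares slope of ln s_1 vs ln L; at r = r_c the scaling form
    gives ln s_1 = -(beta/nu) ln L + const, so beta/nu = -slope."""
    xs, ys = [log(L) for L in Ls], [log(s) for s in s1s]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope
```

Away from $r_c$ the points deviate systematically from a straight line, so the residuals of this fit also serve as the curvature criterion used in the text.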
In the following, we use both the fixed point of $S_2/S_1$ and the linear dependence of $\ln s_1$ on $\ln L$ as criteria to determine the critical point of the two-dimensional lattice networks under the generalized Achlioptas process. If we have reached the asymptotic critical region, both $s_1$ and $s_2$ satisfy the finite-size scaling forms in Eqs.(\[eq:S1\]) and (\[eq:S2\]), and the critical reduced edge numbers $r_c$ obtained from $S_2/S_1$ and from $\ln s_1$ should be equal. Conversely, the consistency of the $r_c$ obtained from the two different methods can be used as an indicator of the accuracy of our simulation results.
For the generalized Achlioptas process with probability parameter $p=0.5$, edges are added randomly into a two-dimensional lattice and our model becomes the so-called bond percolation model. It is well known that the bond percolation model has a continuous phase transition. We can determine the critical point of this model by the two methods described above. In Fig.\[ratiobp\], the ratio $S_2/S_1$ is shown as a function of the reduced edge number $r$ at different system size $L$. A fixed point between $r=0.4998$ and $r=0.5002$ is found and is in full agreement with $r_c=1/2$ of the bond percolation model. In Table \[tab1\], we denote the critical point obtained from $S_2/S_1$ as $r_c^{(1)}=0.5000 \pm 0.0002$.
As we have discussed above, a critical point can be determined alternatively by the linear relationship between $\ln s_1 $ and $\ln L$. In Fig.\[slopebp\], $\ln s_1 (r,L;p)$ at different reduced edge numbers is shown. At $r=0.4996$ and $r=0.5004$, the curves of $\ln s_1 (r,L;p)$ are curved and their curvatures have opposite signs. At $r=0.5$, the curve of $\ln s_1 (r,L;p)$ becomes a straight line. Therefore, we obtain the critical point $r_c^{(2)}=0.5000 \pm 0.0004$, which is in agreement with $r_c^{(1)}$. From the slope of $\ln s_1 (r,L;p)$ at $r_c$, we get the critical exponent ratio $\beta/\nu=0.108$. Our Monte Carlo simulation results agree very well with the exact results $r_c=1/2$ and $\beta/\nu=5/48$ of the two-dimensional bond percolation model; for a review, see Ref. [@essam].
For probability parameter $p=0.8$, the edge that minimizes the product of two connecting cluster sizes is added into the lattice with a probability $0.8$ from two randomly chosen edges. The connection of smaller clusters is favored. The critical point of this system is investigated by the fixed point of $S_2/S_1$ and the linear dependence of $\ln s_1$ on $\ln L$. In Fig.\[ratio08\], the ratio $S_2/S_1$ is plotted as a function of the reduced edge number $r$ at different system sizes $L$. A fixed point of $S_2/S_1$ is found. It is between $r=0.5207$ and $r=0.5209$. So there is a continuous phase transition in this system and the critical point is at $r_c^{(1)}=0.5208 \pm 0.0001$. Alternatively, this critical point can be determined from $\ln s_1 (r,L;p)$. In Fig.\[slope08\], $\ln s_1 (r,L;p)$ at $r=0.5205, 0.5207$, and $0.5209$ are shown. The curvature of $\ln s_1 (r,L;p)$ is negative at $r=0.5205$ and positive at $r=0.5209$. The function $\ln s_1 (r,L;p)$ at $r=0.5207$ can be described quite well by a straight line with zero curvature. So the critical point $r_c^{(2)}=0.5207 \pm 0.0002$, in agreement with $r_c^{(1)}$. The slope of the straight line at $r_c=0.5207$ gives the critical exponent ratio $\beta/\nu=0.081$.
At $p=1.0$, our model becomes the two-dimensional lattice network under the Achlioptas process. The ratio $S_2/S_1$ of this model is shown in Fig.\[ratiopr\] for different system sizes $L$. There is a cross-point between $r=0.5265$ and $r=0.5267$, which corresponds to the critical point of this system. Therefore its critical point is at $r_c^{(1)}=0.5266 \pm 0.0001$. The curves of $\ln s_1 (r,L;p)$ at $r=0.52651,0.52655$, and $0.52659$ are shown in Fig.\[slopepr\]. Their curvatures change with $r$ from negative to positive. The function becomes linear with respect to $\ln L$ around $r=0.52655$. So we get the critical reduced edge number $r_c^{(2)}=0.52655 \pm 0.00004$, which agrees with $r_c^{(1)}$ given above. From the slope of $\ln s_1 (r_c,L;p)$ with respect to $\ln L$, the critical exponent ratio $\beta/\nu=0.064$ is obtained. Our results for the critical point and the critical exponent ratio are in full agreement with the results of Refs. [@ZiffPRL09; @ChoPRL09; @RadicchiPRL09].
In Fig.\[rc\], we summarize the critical reduced edge numbers $r_c$ of two-dimensional lattice networks under the GAP with different $p$. It is found that $r_c$ increases with $p$. At a larger $p$, the continuous percolation phase transition appears at a larger critical reduced edge number $r_c$ and the formation of a giant component is delayed.
In Fig.\[beta\], the dependence of the critical exponent ratio $\beta/\nu$ on $p$ is shown. With the increase of $p$, the ratio $\beta/\nu$ decreases. We will show in the next section that $1/\nu$ increases with $p$. So it can be concluded that the critical exponent $\beta$ decreases with $p$. Smaller $\beta$ indicates the stronger emergence of a giant component in the networks after the percolation transition. Therefore, the increase of $p$ results in the delayed appearance of a continuous percolation phase transition and the formation of a larger giant component at the same time.
Finite-size scaling functions of $S_2/S_1$ {#finite-size scaling}
==========================================
In the last section, we have mentioned that the size ratio $S_2/S_1$ follows the finite-size scaling form in Eq.(\[eq:S2/S1\]) when $r$ is near the critical point $r_c$. For a given probability parameter $p$, the different curves of $S_2/S_1$ at different $L$ collapse into a finite-size scaling function after using the scaling variable $tL^{1/\nu}$, where $\nu$ is the critical exponent of correlation length. In the following, we will investigate the finite-size scaling function of $S_2/S_1$ for different $p$.
At $p=0.5$, we use the critical exponent of the two-dimensional bond percolation model, $\nu=4/3$ [@essam], for the finite-size scaling function of $S_2/S_1$. After defining the scaling variable $tL^{1/\nu}$ with this value of $\nu$, our Monte Carlo simulation results at $L=64,~128,~256$ collapse and we get the finite-size scaling function of $S_2/S_1$, which is shown in Fig.\[scalingbp\].
At $p=1.0$, the critical exponent $\nu$ is unknown. According to the finite-size scaling form in Eq.(\[eq:S2/S1\]), the curves of $S_2/S_1$ at different system sizes collapse only when the correct critical exponent $\nu$ is used in the scaling variable. This property can therefore also be used to determine the critical exponent $\nu$. At $1/\nu=0.93$, the curves of $S_2/S_1$ at $L=64,~128,~256$ collapse onto their finite-size scaling function, which is shown in Fig.\[scalingpr\].
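The collapse criterion described above can be turned into a small numerical procedure: rescale each curve with a trial exponent, interpolate all curves onto a common grid of the scaling variable, and pick the exponent that minimizes their spread. The sketch below runs on synthetic data generated from an assumed scaling function; the Gaussian form and all parameter values are illustrative stand-ins, not the scaling function of this model.

```python
import numpy as np

def collapse_spread(curves, rc, inv_nu):
    """Quality measure of a finite-size data collapse.

    curves: dict mapping system size L to (r, ratio) arrays.
    The scaling variable is x = (r - rc) * L**inv_nu; after
    rescaling, all curves are interpolated onto a common x grid
    and the mean variance between them is returned (smaller is
    a better collapse)."""
    rescaled = {L: ((r - rc) * L**inv_nu, y) for L, (r, y) in curves.items()}
    lo = max(x.min() for x, _ in rescaled.values())
    hi = min(x.max() for x, _ in rescaled.values())
    grid = np.linspace(lo, hi, 200)
    stack = np.array([np.interp(grid, x, y) for x, y in rescaled.values()])
    return np.mean(np.var(stack, axis=0))

# Synthetic data generated from a known scaling function with 1/nu = 0.93
rc, true_inv_nu = 0.5266, 0.93
def scaling_fn(x):
    return np.exp(-x**2)   # stand-in for the true scaling function
curves = {}
for L in (64, 128, 256):
    r = np.linspace(rc - 0.01, rc + 0.01, 400)
    curves[L] = (r, scaling_fn((r - rc) * L**true_inv_nu))

# The spread is minimal at the exponent used to generate the data
spreads = {inv_nu: collapse_spread(curves, rc, inv_nu)
           for inv_nu in (0.75, 0.93, 1.10)}
best = min(spreads, key=spreads.get)
```

On real Monte Carlo data one would scan $1/\nu$ on a fine grid and also let $r_c$ float, quoting the minimum of the spread as the best estimate.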
In Fig.\[scalingall\], we demonstrate the variation of the finite-size scaling function of $S_2/S_1$ with the probability parameter $p$. In the region before the percolation phase transition, the finite-size scaling function of $S_2/S_1$ increases with $p$: the second largest cluster in this region is more important at larger $p$. In the region after the percolation phase transition, the finite-size scaling function of $S_2/S_1$ decreases with $p$: the largest cluster in this region is more dominant at larger $p$. To get the finite-size scaling functions at different $p$, the corresponding exponents of the correlation length are determined and presented in Fig.\[nu\] and Table \[tab1\].
[lllll]{} $p$ & $r_c^{(1)}$ & $r_c^{(2)}$ & $\beta/\nu$ & $1/\nu$\
$0.5$ & 0.5000(2) & 0.5000(4) & 0.108(4) & 0.75\
$0.6$ & 0.5082(2) & 0.5082(4) & 0.102(3) & 0.77(1)\
$0.7$ & 0.5153(3) & 0.5153(2) & 0.092(4) & 0.79(1)\
$0.8$ & 0.5208(1) & 0.5207(2) & 0.081(5) & 0.83(1)\
$0.9$ & 0.5244(1) & 0.5244(1) & 0.070(3) & 0.88(1)\
$1.0$ & 0.5266(1) & 0.52655(4) & 0.064(3) & 0.93(1)\
Universality classes {#universality}
====================
The concept of universality plays a fundamental role in statistical and elementary-particle physics [@fisher98; @zinn-justin]. A universality class is characterized by the dimensionality $d$ of the system and by the number $n$ of components of the order parameter [@privman91]. Within a certain $(d,n)$ universality class, the critical exponents are independent of microscopic details and are universal. Universality also holds in a finite-size system near its critical point: for example, the Binder cumulant ratio of the magnetization at the critical point is universal. The ratio $S_2/S_1$ studied here is similar to the Binder cumulant ratio of the magnetization, so we may suppose that the ratio $S_2/S_1$ at the critical point is also universal and does not depend on microscopic details.
In our investigations above of the two-dimensional lattice networks under a generalized Achlioptas process, the dimensionality $d$ of the system is fixed and the macroscopic property of the order parameter is unchanged. However, we have found in Figs.\[beta\], \[nu\] and \[U\] that the critical exponents $\beta$ and $\nu$ and the ratio $S_2/S_1$ at $r_c$ depend on the probability parameter $p$. So the universality of the percolation phase transition in these networks is characterized in addition by the probability parameter $p$ of the GAP. A different probability parameter $p$ of the GAP generates a different probability distribution of configurations, so $p$ is actually related to the macroscopic character of the network. For a general classification of universality classes in complex networks, further investigations are needed.
Conclusions
===========
We have investigated the percolation phase transitions in two-dimensional lattice networks under a generalized Achlioptas process. In this GAP, two unoccupied edges of a two-dimensional lattice are chosen randomly, and the edge that minimizes the product of the sizes of the two clusters it connects is taken with probability $p$. Our model becomes the two-dimensional bond percolation model at $p=0.5$ and the two-dimensional lattice network under the minority product rule at $p=1$.
The size $S_1$ of the largest cluster in the lattice increases with the edge number $N_r$. When the reduced edge number $r=N_r/N$ is larger than a certain value $r_c$, $S_1$ becomes comparable with the lattice size $N=L^2$. At $r_c$, a giant component emerges and there is a percolation phase transition. From the finite-size scaling analysis of $S_1$ and of the ratio $S_2/S_1$, we conclude that this percolation phase transition is continuous for probability parameters $0.5\le p \le 1$. The critical exponent ratio $\beta/\nu$ can be determined from the power-law behavior of $S_1$ at $r_c$. The critical exponent of the correlation length, $\nu$, can be fixed by collapsing the Monte Carlo data of the ratio $S_2/S_1$ at different $L$ onto a finite-size scaling function of the scaling variable $tL^{1/\nu}$. We find that the critical reduced edge number $r_c$ increases with the probability parameter $p$, as shown in Fig.\[rc\]. The critical exponent ratio $\beta/\nu$ and the critical exponent $\nu$ decrease with the probability parameter $p$, as demonstrated in Figs.\[beta\] and \[nu\]. Under the GAP with $0.5 < p \le 1$, the formation of larger clusters is suppressed and this suppression increases with $p$, so the formation of a giant component is delayed at a larger probability parameter $p$. This delay is then accompanied by a stronger emergence of the giant component, which is characterized by a smaller $\beta$. It is thus plausible that $r_c$ increases and $\beta$ decreases with $p$. The finite-size scaling functions of the ratio $S_2/S_1$ are given for different $p$ in Fig.\[scalingall\].
Within a certain universality class, characterized by the dimensionality of the system and by the number of components of the order parameter, the universal quantities (critical exponents, amplitude ratios, and scaling functions) of different systems are identical. For the two-dimensional lattice networks under a GAP discussed here, the critical exponents $\beta$ and $\nu$ and the ratio $S_2/S_1$ at the critical point depend on the probability parameter $p$, as pointed out above. So the universality class of the percolation phase transition in this model should be characterized in addition by the probability parameter $p$. To understand the universality classes of critical phenomena in networks in general, further investigations are needed. For random networks, we have also introduced a generalized Achlioptas process and investigated the phase transitions in these systems [@Fan11].
This work is supported by the National Natural Science Foundation of China under grant 10835005.
D. Stauffer and A. Aharony, [*Introduction to Percolation Theory*]{} (Taylor & Francis, London, 1994).
S. Solomon, G. Weisbuch, L. de Arcangelis, N. Jan, and D. Stauffer, Physica A [**277**]{}, 239 (2000).
S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Rev. Mod. Phys. [**80**]{}, 1275 (2008).
D. Achlioptas, R. M. D’Souza, and J. Spencer, Science [**323**]{}, 1453 (2009).
R. M. Ziff, Phys. Rev. Lett. [**103**]{}, 045701 (2009).
Y. S. Cho, J. S. Kim, J. Park, B. Kahng, and D. Kim, Phys. Rev. Lett. [**103**]{}, 135702 (2009).
F. Radicchi and S. Fortunato, Phys. Rev. Lett. [**103**]{}, 168701 (2009).
R. M. Ziff, Phys. Rev. E [**82**]{}, 051105 (2010).
R. A. da Costa, S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. Lett. [**105**]{}, 255701 (2010).
O. Riordan and L. Warnke, Science [**333**]{}, 322 (2011).
L. Tian and A. N. Shi, [*arXiv:1010.5900*]{} (2010).
P. Grassberger, C. Christensen, G. Bizhani, S. W. Son, and M. Paczuski, [*arXiv:1103.3728v2*]{}.
F. Radicchi and S. Fortunato, Phys. Rev. E [**81**]{}, 036110 (2010).
S. Fortunato and F. Radicchi, [*arXiv:1101.3567v1*]{} (2011).
M. E. J. Newman and R. M. Ziff, Phys. Rev. Lett. [**85**]{}, 4104 (2000).
M. E. J. Newman and R. M. Ziff, Phys. Rev. E [**64**]{}, 016706 (2001).
V. Privman and M. E. Fisher, Phys. Rev. B [**30**]{}, 322 (1984).
V. Privman, [*Finite Size Scaling and Numerical Simulation of Statistical Systems*]{}, (World Scientific, Singapore, 1990).
J. W. Essam, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and M. S. Green (Academic Press, London, 1972), Vol. 2, p. 197.
M. E. Fisher, Rev. Mod. Phys. [**46**]{}, 597 (1974); [**70**]{}, 653 (1998).
J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{} (Clarendon Press, Oxford, 1996).
V. Privman, A. Aharony, and P. C. Hohenberg, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz (Academic, New York, 1991), Vol. 14, p.1.
Jingfang Fan, Maoxin Liu, Liangsheng Li, and Xiaosong Chen, to be published.
[^1]: *e-mail:[email protected]*
|
---
abstract: 'We study the excitonic recombination dynamics in an ensemble of (9,4) semiconducting single-wall carbon nanotubes by high-sensitivity time-resolved photoluminescence experiments. Measurements from cryogenic to room temperature allow us to identify two main contributions to the recombination dynamics. The initial fast decay is temperature independent and is attributed to the presence of small residual bundles that create external non-radiative relaxation channels. The slow component shows a strong temperature dependence and is dominated by non-radiative processes down to 40 K. We propose a quantitative phenomenological modeling of the variations of the integrated photoluminescence intensity over the whole temperature range. We show that the luminescence properties of carbon nanotubes at room temperature are not affected by the dark/bright excitonic state coupling.'
author:
- 'S. Berger'
- 'C. Voisin'
- 'G. Cassabois'
- 'C. Delalande'
- |
P. Roussignol\
Laboratoire Pierre Aigrain, École Normale Supérieure\
24, rue Lhomond, 75005 Paris, France\
- |
X. Marie\
Laboratoire de Nanophysique, Magnétisme et Optoélectronique, INSA\
135 avenue de Rangueil, 31077 Toulouse, France
title: 'Temperature dependence of exciton recombination in semiconducting single-wall carbon nanotubes'
---
Single-wall carbon nanotubes (SWCNT) are very promising nanoscale materials but, as expected for objects consisting only of surface atoms, they are highly sensitive to the coupling with their environment, which may dramatically alter their electronic and optical properties. In fact, the luminescence of semiconducting SWCNT is one of the most sensitive probes of such environment-induced effects. Most samples do not show any luminescence in bulk proportions, and one has to carefully isolate the nanotubes from their neighbors (and prevent the formation of bundles of nanotubes) in order to observe radiative recombination across the bandgap [@oconnel; @lauretphysicaE]. However, this effect is only one signature of a more general change in the electronic properties of a nano-object coupled to its environment. A better understanding of the nature of the recombination channels in SWCNT is required for any application in photonics or optoelectronics, especially for such a one-dimensional nano-structure where Coulomb interactions are very strong. Indeed, it is known from both experimental and theoretical works [@heinzscience; @maultzsch] that the photoexcited electron-hole pairs form excitons with high binding energies. Due to the symmetry of the nanotubes, the lowest-lying energy state is expected to be dipole forbidden (dark state) [@zhao], which could lead to an intrinsically low quantum yield, in agreement with estimates reported in the literature ($Q<10^{-3}$) [@oconnel; @lebedkin; @wangPRL].
Recently, progress in sample preparation and the use of powerful optical techniques gave new insights into the excitonic recombination processes. Time-resolved pump-probe measurements have shown that in light-emitting samples, where SWCNT are isolated in micelles, the recombination dynamics is at least one order of magnitude slower than in non-emitting samples consisting of SWCNT bundles [@lauretPRB; @ostojic]. Time-resolved photoluminescence (TR-PL) measurements on ensembles of isolated SWCNT [@wangPRL; @reich] revealed a non-exponential PL decay, with a fast component within the first few picoseconds and a long-lasting tail of tens of picoseconds. This latter observation led to the conclusion that the intrinsic radiative lifetime may be long in SWCNT. Recent TR-PL measurements of individual SWCNT have revealed a monoexponential decay within the experimental sensitivity, with a wide dispersion of lifetimes from one tube to another: statistics on (6,4) tubes at 87 K show values spreading from 10 to 180 ps, with however a small number of events above 60 ps [@hagen]. In that context, ensemble measurements may provide powerful statistical information to identify the recombination mechanisms and their temperature variations.
In this letter, we present high-sensitivity TR-PL measurements of nanotubes embedded in a gelatin matrix as a function of temperature. We propose a simple description of the heterogeneity of the sample which allows us to reproduce the non-exponential temporal response of the sample over three decades at any temperature between 10 and 300 K. The initial fast decay is temperature independent and is attributed to the presence of small residual bundles that create external non-radiative relaxation channels. The slow component shows a previously unresolved linear temperature dependence and is dominated by non-radiative processes down to 40 K. At low temperature, both the integrated PL intensity and the lifetime measurements indicate the existence of a regime where carriers are trapped in shallow non-emitting states. From a quantitative phenomenological modeling we deduce an estimate of the dark/bright state splitting in SWCNTs. We emphasize the strikingly weak variations of the PL intensity as a function of temperature and show that the dark/bright excitonic state coupling does not play a key role in the luminescence properties of carbon nanotubes at room temperature.
The sample consists of purified SWCNT obtained by the HiPCO method and embedded in a gelatin matrix. Following the process described in [@oconnel], we first prepare a suspension of isolated SWCNT by strong sonication in an aqueous solution of SDS (1% wt). After centrifugation at 200 000 g for 4 h, the supernatant is collected. In order to obtain a solid sample, we then heat the supernatant to 70 $^o$C and add commercial dehydrated gelatin of low gel point (40 $^o$C). After mixing, a small amount of the solution deposited on a substrate forms a homogeneous gel as it cools down to room temperature. The photoluminescence intensity of SWCNT embedded in such a gel is comparable to that of the initial suspension (without gelatin), whereas a deposit of the initial suspension shows a drop of the PL signal of at least one order of magnitude when the solvent evaporates. We believe that the high hydration level of the gel preserves the micelle structure based on the hydrophilic/hydrophobic competition, in contrast to the case of an evaporation of the suspension on a substrate, where the reaggregation of nanotubes as bundles is very likely to occur. Moreover, the SWCNT-doped gel is an easy-to-handle solid-state composite material that can be cooled down to 4 K and heated back to room temperature without any apparent damage, even for tens of cycles.
![Photoluminescence intensity at 10 K of isolated HiPCO SWCNT embedded in a gelatin matrix plotted as a function of emission and excitation energy. By comparison to the semi-empirical Kataura plot, PL spots are assigned to a given SWCNT chirality.[]{data-label="fig:PLEmap"}](Voisin_figure1.eps)
The excitation map of the luminescence of the sample at 10 K is displayed in Fig. \[fig:PLEmap\]. The excitation is provided by a cw Ti:sapphire laser and the detector consists of an InGaAs photodiode. Following the procedure introduced by Bachilo et al., we assign each peak to a given pair of chiral indices [@bachilo]. For the TR-PL experiments we focus on the (9,4) chirality by tuning the excitation energy to 1.7 eV, in resonance with the second excitonic transition. This emission line is centered at 1.13 eV with a width of about 40 meV.
Time-resolved photoluminescence measurements were performed using 1.4 ps pulses from a mode-locked Ti:sapphire laser ($\lambda = 730$ nm $\leftrightarrow$ 1.7 eV). The excitation fluence was kept constant at about 2 $\mu$J.cm$^{-2}$. The PL signal was spectrally dispersed in a 0.35 m monochromator and detected by a synchro-scan streak camera (S1 photocathode) with a mean overall time resolution of 25 ps. TR-PL temporal sections were obtained by spectrally integrating the full line of the (9,4) SWCNT, *i.e.* between 1.08 and 1.165 eV. In order to achieve the best temperature control, the SWCNT-doped gel is directly deposited on the cold finger of the cryostat.
![(a) Time-resolved photoluminescence of isolated (9,4) SWCNT in a gel on a semi-logarithmic scale for an excitation at 1.7 eV. System response function (dotted line) obtained from the elastic scattering of the laser: the temporal resolution is 26 ps. Normalized photoluminescence transients (open circles) are well reproduced by the convolution (solid lines) of the model $I(t)$ (eq. \[eq2\]) and the response function. (b) Normalized integrated PL signal as a function of the temperature (black dots); normalized c.w. PL signal (open circles). (c) Probability distribution of lifetimes at 20 K.[]{data-label="fig:PLfit"}](Voisin_figure2.eps)
TR-PL signals are displayed in Fig. \[fig:PLfit\] (a) for different temperatures. The high sensitivity of the setup allows us to detect the signal decay over three decades. We carefully checked that the signal profile remains unchanged when dividing the fluence by up to a factor of 8. Hence we exclude any hidden temperature effect due to laser heating, as well as any many-body mechanisms such as exciton-exciton annihilation [@wang_PRB; @ma_PRL; @huang_PRL]. Comparison between the temporal integration of these signals and measurements under cw detection (focus mode) shows that we do not miss any hypothetical long-lived component that could play an important role in the luminescence (Fig. \[fig:PLfit\] (b)).
At low temperature, the signal shows an initial fast component and, after 400 ps, a quasi-exponential long-lived tail. This overall non-exponential dynamics is systematically observed in ensemble measurements, whatever the technique [@lebedkin; @wangPRL; @lauretPRB; @ostojic]. We believe that it is the signature of the profound inhomogeneity of the sample, even within a given chirality.
In order to go further in the data analysis, we propose a simple model of the inhomogeneity of the sample. Let us first consider the exciton recombination in one single nanotube: in addition to an “internal” decay rate $\gamma_0$ (including both radiative and non radiative contributions), the exciton experiences a small number $j$ of “external” decay channels due to the coupling to the environment, each with a rate $\gamma_{ex}$. Thus the overall rate $\gamma=\gamma_0 + j \gamma_{ex}$ varies from one tube to another with the number $j$ of additional channels. However, the recombination dynamics remains mono-exponential for each tube, which is consistent with previous observations of mono-exponential exciton recombination in individual SWCNT [@hagen]. The probability $P_n (j)$ of having $j$ extrinsic channels in a nanotube is assumed to be Poissonian with a parameter $n$ describing the mean number of external decay channels: $P_n (j)= e^{-n} \frac{n^j}{j!}$.
For a large ensemble of nanotubes, the TR-PL signal reads:
$$\begin{aligned}
\label{eq1}
I(t) &=& A\sum_{j=0}^{\infty} P_n (j) \exp(-\gamma_0 t - j \gamma_{ex}t) \\
&=& A \exp \lbrack-\gamma_0 t - n (1-e^{-\gamma_{ex} t})\rbrack
\label{eq2}\end{aligned}$$
where $A$ is the overall amplitude of the signal.
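The resummation leading from eq. (\[eq1\]) to eq. (\[eq2\]) uses the identity $\sum_j P_n(j)\,e^{-j\gamma_{ex}t} = e^{-n(1-e^{-\gamma_{ex}t})}$. A quick numerical check of the equivalence of the two forms (with $n=2.8$ and $1/\gamma_{ex}=70$ ps of the order of the fitted values; $1/\gamma_0 = 300$ ps is merely an assumed internal decay time for the illustration):

```python
import math

def I_sum(t, A, n, g0, gex, jmax=60):
    """PL intensity as the explicit Poisson-weighted sum over the
    number j of external decay channels (the form of eq. (1))."""
    return A * sum(math.exp(-n) * n**j / math.factorial(j)
                   * math.exp(-g0 * t - j * gex * t)
                   for j in range(jmax))

def I_closed(t, A, n, g0, gex):
    """Closed form obtained by resumming the Poisson series (eq. (2))."""
    return A * math.exp(-g0 * t - n * (1.0 - math.exp(-gex * t)))

# Rates in ps^-1: n and 1/gex are of the order of the fitted values,
# while 1/g0 = 300 ps is an assumed internal decay time.
params = dict(A=1.0, n=2.8, g0=1.0 / 300.0, gex=1.0 / 70.0)
```

The truncation at `jmax=60` is far beyond the support of a Poisson distribution with mean 2.8, so the two forms agree to machine precision.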
We fit the PL transients with the convolution of $I(t)$ (eq. (\[eq2\])) and the system response function. The agreement is excellent over the whole time and temperature ranges, as shown in Fig. \[fig:PLfit\] (a). We find that $n$ and $\gamma_{ex}$ are almost independent of the temperature, with $n=2.8 \pm0.1$ and $1/\gamma_{ex} = 70 \pm 10$ ps. These values indicate that each tube experiences a small number of relatively efficient additional channels. We attribute this to the presence of remaining small bundles, the external process being a coupling between tubes within the bundle. Indeed, bundles of 2 or 3 nanotubes are hardly distinguishable from single nanotubes in an AFM inspection, especially when surrounded by surfactant molecules. Moreover, such an intertube coupling within a bundle has already been observed by means of time-resolved measurements and has been shown to be temperature independent [@lauret]. As a result, this method allows us to extract the response of genuine individual nanotubes (Fig. \[fig:DecayRate\]).
From this analysis, we deduce that only 6% ($P_n(0)$ with $n=2.8$) of the SWCNTs within the sample are effectively individual (and show a decay rate $\gamma_0$). Nevertheless, due to their much lower non radiative decay rate, these tubes provide more than 40% of the total PL intensity at low temperature and still 15% at 290 K. The profound inhomogeneity of the SWCNT ensemble is particularly striking when looking at the statistical distribution of exciton lifetimes at a given temperature (Fig. \[fig:PLfit\] (c)). At 20 K, the average decay time ($1/\gamma$) is close to 30 ps whereas the internal decay time ($1/\gamma_0$) is 10 times larger. This histogram of lifetimes is in good agreement with the experimental data of Hagen et al. [@hagen] especially concerning the presence of rare events at very large values (10 times the average of the distribution), which is a strong feature of our model.
Following this analysis of the inhomogeneity, we can then estimate the quantum yield $Q_i$ of an individual nanotube ($j=0$ component in eq.(\[eq1\])) from the average quantum yield $\overline{Q}$ of the sample : $\frac{\overline{Q}}{Q_i}=\sum_j \frac{\gamma_0 P_n(j)}{\gamma_0+j\gamma_{ex}}$. We find $\frac{Q_i}{\overline{Q}} \simeq 8$ at 40 K. From the values $\overline{Q} \simeq 10^{-3}$ reported in the literature for ensembles [@oconnel; @lebedkin; @wangPRL] (in agreement with our own estimate), we deduce that the quantum yield of an isolated tube can reach one percent. This striking result is in agreement with a recent report of high quantum yield for suspended nanotubes, reaching up to 7% [@lefebvre] and confirms the extreme sensitivity of nanotubes optical properties to their environment.
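The yield correction $\overline{Q}/Q_i=\sum_j P_n(j)\,\gamma_0/(\gamma_0+j\gamma_{ex})$ is straightforward to evaluate numerically. In the sketch below, $n=2.8$ and $1/\gamma_{ex}=70$ ps are the fitted values, while $1/\gamma_0$ is only an assumed internal decay time of a few hundred ps; with these inputs the enhancement $Q_i/\overline{Q}$ comes out of the same order as the factor $\simeq 8$ quoted above.

```python
import math

def poisson(j, n):
    """Poisson weight P_n(j) = e^{-n} n^j / j!."""
    return math.exp(-n) * n**j / math.factorial(j)

def avg_over_ind_yield(n, g0, gex, jmax=60):
    """Qbar / Qi = sum_j P_n(j) * g0 / (g0 + j * gex)."""
    return sum(poisson(j, n) * g0 / (g0 + j * gex) for j in range(jmax))

n, gex = 2.8, 1.0 / 70.0       # fitted values (gex in ps^-1)
g0 = 1.0 / 300.0               # assumed internal rate, ps^-1
fraction_individual = poisson(0, n)                 # ~6% of truly isolated tubes
enhancement = 1.0 / avg_over_ind_yield(n, g0, gex)  # Qi / Qbar
```

The result depends only weakly on the assumed $1/\gamma_0$ as long as it is several times longer than $1/\gamma_{ex}$.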
![(a): Internal decay rate $\gamma_0$ (open circles) of the (9,4) SWCNT as a function of temperature. The solid line is the computed internal decay time from the three-level model. (b): Normalized integrated photoluminescence signal (black squares) from isolated nanotubes (corresponding to the $j=0$ component in eq. \[eq1\]) and corresponding fraction in PL measurements under cw excitation (open circles). The solid line is the computed PL intensity of an individual nanotube in the three-level model. (Inset): Schematic of the three-level model.[]{data-label="fig:DecayRate"}](Voisin_figure3.eps)
From the fitting of the temperature-dependent data we deduce the evolution of the internal decay rate $\gamma_0$ with temperature (Fig. \[fig:DecayRate\] (a)). At low temperatures (between 10 and 40 K) the internal decay rate remains almost constant when heating up the sample. Then, at temperatures higher than 40 K and up to room temperature, the internal decay rate shows a linear increase which had never been reported before. The temperature dependence of the integrated PL intensity (Fig. \[fig:PLfit\] (b)) shows a similar two-regime behavior, with a steep increase of the quantum yield when heating up the sample from 10 to 50 K and then a soft decrease between 50 K and room temperature. The peak temperature of 40 K (which corresponds to a typical energy of about 4 meV) thus clearly separates two different regimes in the recombination dynamics.
The temperature evolution of both the integrated PL signal and the recombination time $1/\gamma_0$ in individual nanotubes can be quantitatively reproduced by means of a simple three-level model (inset of Fig. \[fig:DecayRate\]). The highest level (B) is coupled to the ground state (G) through radiative and non radiative recombination processes, with rates $\gamma_R$ and $\gamma_{NR}$ respectively. The radiative decay rate is assumed to be proportional to $T^{-1/2}$ (one-dimensional material [@andreani]) and much smaller than the non radiative decay rate. The latter is taken proportional to the Bose-Einstein occupation number of a phonon mode of energy $\hbar \omega_p$. The intermediate level (D) is coupled to the ground state only through non radiative processes, with the same rate $\gamma_{NR}$, and thus does not contribute to light emission. The two excited states are coupled to each other with rates $\gamma_{\uparrow}$ and $\gamma_{\downarrow}$.
For each temperature we have performed a numerical computation of the time evolution of the populations and of the integrated PL signal of an isolated nanotube. We find that the coupling rates $\gamma_{\uparrow}$ and $\gamma_{\downarrow}$ between states (B) and (D) have to be much faster than both the radiative and non radiative decays in order to reproduce the experimental data, which means that the populations are almost in thermal equilibrium. The linear variation of the internal decay rate $\gamma_0$ as a function of temperature (above 40 K) indicates that the recombination is dominated by phonon-assisted processes and is well reproduced in our model by taking a phonon energy of 5 $\pm1$ meV (Fig. \[fig:DecayRate\] (a)). This linear behavior is typical of quasi-elastic phonon-assisted scattering, for which the Bose occupation factor becomes linear at temperatures well above the phonon energy.
We compare the computed PL signal with the PL signal of one isolated nanotube extracted from the ensemble experimental data. This is achieved by selecting the term corresponding to $j=0$ (eq. \[eq1\]) in the fit of the experimental TR-PL signal and integrating it over time. Numerical simulations are in excellent agreement with experiments for an energy splitting between states (B) and (D) of 3.5 $\pm0.5$ meV (Fig. \[fig:DecayRate\] (b)). We find that the quenching of the PL at low temperature reflects an accumulation of carriers in the intermediate state (D). When the sample is heated up, the occupation probability of the bright state increases while the non radiative rate remains almost constant, resulting in an increasing PL signal. In the time domain, the decay rate $\gamma_0$ is almost constant since it is dominated by the non radiative contribution ($kT < \hbar \omega_p$). Above 40 K the increase of the non radiative decay rate results in a decrease of the PL signal and an increase of the overall decay rate.
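In the fast-interconversion limit the three-level model reduces to thermal occupations of (B) and (D): the effective decay rate is $\gamma_0 = f_B(\gamma_R+\gamma_{NR}) + (1-f_B)\gamma_{NR}$ and the quantum yield is $f_B\gamma_R/\gamma_0$, with $f_B = [1+\exp(\Delta/kT)]^{-1}$ the bright-state occupation. The sketch below implements this reduced model; the splitting and phonon energy are the 3.5 and 5 meV values deduced above, but the absolute rates $\gamma_R$ and $\gamma_{NR}$ are illustrative guesses, not fitted values, so only the qualitative shape (low-temperature quenching, intermediate maximum, quasi-linear rate increase at high temperature) should be compared with the data.

```python
import numpy as np

KB = 0.08617  # Boltzmann constant in meV/K

def three_level(T, delta=3.5, phonon=5.0, gR300=1.0e-4, gNR0=3.0e-3):
    """Quasi-equilibrium limit of the three-level model (sketch).

    delta  : bright-dark splitting in meV (B above D)
    phonon : energy of the assisting phonon mode in meV
    gR300  : radiative rate of the bright state at 300 K in ps^-1,
             scaled as T**-0.5 (1D behavior); assumed value
    gNR0   : prefactor of the non radiative rate in ps^-1; assumed

    Returns (effective decay rate, integrated PL intensity),
    the latter in arbitrary units."""
    T = np.asarray(T, dtype=float)
    fB = 1.0 / (1.0 + np.exp(delta / (KB * T)))   # bright-state occupation
    gR = gR300 * np.sqrt(300.0 / T)               # radiative, 1D scaling
    nbose = 1.0 / np.expm1(phonon / (KB * T))     # phonon occupation number
    gNR = gNR0 * (1.0 + nbose)                    # phonon-assisted rate
    g0 = fB * (gR + gNR) + (1.0 - fB) * gNR       # thermally averaged decay
    pl = fB * gR / g0                             # quantum yield ~ PL signal
    return g0, pl

g0_arr, pl_arr = three_level(np.array([10., 20., 40., 100., 200., 300.]))
```

With these inputs the PL rises between 10 and 40 K, passes through a maximum, and decreases towards room temperature, while $\gamma_0$ grows roughly linearly at high temperature, reproducing the two-regime behavior discussed above.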
While our model gives a phenomenological explanation of the data, the nature of states (B) and (D) is not elucidated. The need for a one-dimensional law ($1/\sqrt{T}$) for the radiative decay rate in order to reproduce the temperature dependence of the PL intensity suggests that the level (B) is the delocalized bright excitonic state. On the other hand, the level (D) can either be the dark state in the one-dimensional exciton picture or a localized shallow defect level. However, the main point is that the splitting between states (B) and (D) is of the order of 3.5 meV (which compares with the lowest theoretical estimates of the one-dimensional excitonic splitting [@zhao; @perebeinos]). This explains why the temperature variations of the PL signal are strikingly weak as compared to the regular behavior of semiconducting nanostructures (for which variations of the PL intensity of several orders of magnitude are commonly observed over the same temperature range [@gurioli]). Thus, at room temperature both states are equally populated and the presence of a dark state lying at lower energy does not play a significant role in the luminescence properties of carbon nanotubes.
In summary, we have demonstrated that the inhomogeneity of ensembles of carbon nanotubes, and most probably the presence of remaining small bundles, is responsible for their non-exponential response. We propose a simple modeling to access the internal dynamics of genuine isolated nanotubes. We show that over a large temperature range above 40 K the variation of the quantum yield is moderate (less than 50%). This is a consequence of a very small splitting between dark and bright states, which means that the room-temperature PL properties (and especially the low average quantum yield) hardly depend on the presence of the dark state. On the other hand, we have shown that the inhomogeneity of the sample may hide much larger quantum yields for genuine individual nanotubes. The direct measurement of the quantum yield of one individual nanotube, although challenging, would be of highest interest for future investigations of SWCNT as light emitters.
The authors are grateful to the whole team of LNMO for technical support and to A. Filoramo and L. Capes for helping in sample preparation. LPA de l’ENS is “Unité Mixte de Recherche associée au CNRS (UMR 8551) et aux universités Paris 6 et 7.” This work has been done in the framework of the GDRE n$^{o}$ 2756 ’Science and applications of the nanotubes - NANO-E’. S.B. is funded by a DGA grant.
M.J. O’Connell, S.M. Bachilo, C.B. Huffman, V.C. Moore, M.S. Strano, E.H. Haroz, K.L. Rialon, P.J. Boul, W.H. Noon, C. Kittrell, J.P. Ma, R.H. Hauge, R.B. Weisman, R.E. Smalley, Science [**297**]{}, 593 (2002).
J-S. Lauret, C. Voisin, G. Cassabois, P. Roussignol, C. Delalande, A. Filoramo, L. Capes, E. Valentin, O. Jost, Physica E [**21**]{}, 1057 (2004).
F. Wang, G. Dukovic, L.E. Brus, T.F. Heinz, Science [**308**]{}, 838 (2005).
J. Maultzsch, R. Pomraenke, S. Reich, E. Chang, D. Prezzi, A. Ruini, E. Molinari, M.S. Strano, C. Thomsen, C. Lienau, Phys. Rev. B [**72**]{}, 241402(R) (2005).
H. Zhao and S. Mazumdar, Phys. Rev. Lett. [**93**]{}, 157402 (2004).
S. Lebedkin, F. Hennrich, T. Skipa and M. Kappes, J. Phys. Chem. B [**107**]{}, 1949 (2003).
F. Wang, G. Dukovic, L.E. Brus, T.F. Heinz, Phys. Rev. Lett. [**92**]{}, 177401 (2004).
J-S. Lauret, C. Voisin, S. Berger, G. Cassabois, C. Delalande, P. Roussignol, L. Goux-Capes, A. Filoramo, Phys. Rev. B [**72**]{}, 113413 (2005).
G.N. Ostojic, S. Zaric, J. Kono, M.S. Strano, V.C. Moore, R.H. Hauge, R.E. Smalley, Phys. Rev. Lett. [**92**]{}, 117402 (2004).
S. Reich, M. Dworzak, A. Hoffmann, C. Thomsen, M.S. Strano, Phys. Rev. B [**71**]{}, 033402 (2005).
A. Hagen, M. Steiner, M.B. Raschke, C. Lienau, T. Hertel, H. Qian, A.J. Meixner, A. Hartschuh, Phys. Rev. Lett. [**95**]{}, 197401 (2005).
S.M. Bachilo, M.S. Strano, C. Kittrell, R.H. Hauge, R.E. Smalley, R.B. Weisman, Science [**298**]{}, 2361 (2002).
F. Wang, G. Dukovic, E. Knoesel, L.E. Brus and T.F. Heinz, Phys. Rev. B. [**70**]{}, 241403(R) (2004).
Y-Z. Ma, L. Valkunas, S.L. Dexheimer, S. M. Bachilo and G. R. Fleming, Phys. Rev. Lett. [**94**]{}, 157402 (2005).
L. Huang and T. D. Krauss, Phys. Rev. Lett. [**96**]{}, 057407 (2006).
J-S. Lauret, C. Voisin, G. Cassabois, C. Delalande, P. Roussignol, O. Jost and L. Capes, Phys. Rev. Lett. [**90**]{}, 057404 (2003).
J. Lefebvre, D. G. Austing, J. Bond and P. Finnie, Nano Lett. [**6**]{}, 1603 (2006).
L. C. Andreani, “Optical transitions, excitons, and polaritons in bulk and low-dimensional semiconductor structures”, NATO ASI Series B: Physics, Vol. 340, Plenum Press, New York, 1995.
V. Perebeinos, J. Tersoff, P. Avouris, Nano Lett. [**5**]{}, 2495 (2005).
M. Gurioli, A. Vinattieri, M. Colocci, C. Deparis, J. Massies, G. Neu, A. Bosacchi, S. Franchi, Phys. Rev. B [**44**]{}, 3115 (1991).
|
---
abstract: 'Let $G=(V,E)$ be an undirected graph, $L_G\in \mathbb{R}^{V \times V}$ be the associated Laplacian matrix, and ${\bm b} \in \mathbb{R}^V$ be a vector. Solving the Laplacian system $L_G {\bm x} = {\bm b}$ has numerous applications in theoretical computer science, machine learning, and network analysis. Recently, the notion of the Laplacian operator $L_F:\mathbb{R}^V \to 2^{\mathbb{R}^V}$ for a submodular transformation $F:2^V \to \mathbb{R}_+^E$ was introduced, which can handle undirected graphs, directed graphs, hypergraphs, and joint distributions in a unified manner. In this study, we show that the submodular Laplacian system $L_F({\bm x}) \ni {\bm b}$ can be solved in polynomial time. Furthermore, we also prove that even when the submodular Laplacian system has no solution, we can solve its regression form in polynomial time. Finally, we discuss potential applications of submodular Laplacian systems in machine learning and network analysis.'
author:
- |
Kaito Fujii kaito\[email protected] Tasuku Soma tasuku\[email protected]\
Graduate School of Information Science and Technology\
The University of Tokyo,\
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan Yuichi Yoshida [email protected]\
National Institute of Informatics,\
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
bibliography:
- 'main.bib'
title: 'Polynomial-Time Algorithms for Submodular Laplacian Systems '
---
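For the classical graph case mentioned in the abstract, a minimal numerical sketch of solving $L_G {\bm x} = {\bm b}$ (not the paper's algorithm; the graph, the vector ${\bm b}$, and the use of the pseudo-inverse are illustrative assumptions):

```python
import numpy as np

# Hypothetical example: build the Laplacian of a 4-vertex path graph and solve
# L_G x = b.  L_G is singular (constants span its kernel), so the system is
# solvable only when b is orthogonal to the all-ones vector; we then use the
# Moore-Penrose pseudo-inverse, which returns the minimum-norm solution.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1.0
    L[v, v] += 1.0
    L[u, v] -= 1.0
    L[v, u] -= 1.0

b = np.array([1.0, 0.0, 0.0, -1.0])  # sums to zero, hence in the range of L
x = np.linalg.pinv(L) @ b
print(np.allclose(L @ x, b))
```

The pseudo-inverse solution is orthogonal to the kernel of $L_G$, so its entries sum to zero.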
---
author:
- 'Jakub M. Tomczak'
- 'Ewelina Węglarz-Tomczak'
- 'Agoston E. Eiben'
bibliography:
- 'main.bib'
title: Differential Evolution with Reversible Linear Transformations
---
<ccs2012>
  <concept>
    <concept_id>10003752.10003809.10003716.10011138.10011803</concept_id>
    <concept_desc>Theory of computation Bio-inspired optimization</concept_desc>
    <concept_significance>500</concept_significance>
  </concept>
</ccs2012>
Introduction
============
Background
==========
Our Approach
============
Experiments
===========
Conclusion
==========
Appendix {#appendix .unnumbered}
========
EW-T is financed by a grant within Mobilnosc Plus V from the Polish Ministry of Science and Higher Education (Grant No. 1639/MOB/V/2017/0).
---
abstract: 'It is well-known in [Kähler ]{}geometry that the infinite dimensional symmetric space ${\mathcal{H}}$ of smooth [Kähler ]{}metrics in a fixed [Kähler ]{}class on a polarized [Kähler ]{}manifold is well approximated by finite dimensional submanifolds ${\mathcal{B}}_k \subset {\mathcal{H}}$ of Bergman metrics of height $k$. It is then natural to ask whether geodesics in ${\mathcal{H}}$ can be approximated by Bergman geodesics in ${\mathcal{B}}_k$. For any polarized [Kähler ]{}manifold, the approximation holds in the $C^0$ topology, while Song-Zelditch proved $C^2$ convergence for torus-invariant metrics over toric varieties. In this article, we show that a $C^{\infty}$ approximation exists, together with a complete asymptotic expansion, for principally polarized Abelian varieties. We also obtain a $C^\infty$ complete asymptotic expansion for harmonic maps into ${\mathcal{B}}_k$, which generalizes the work of Rubinstein-Zelditch on toric varieties.'
address: 'Department of Mathematics, Northwestern University, USA'
author:
- Renjie Feng
title: Bergman metrics and geodesics in the space of Kähler metrics on principally polarized Abelian varieties
---
Introduction
============
Let $(M,\omega)$ be an $m$-dimensional polarized [Kähler ]{}manifold. Then the space ${\mathcal{H}}$ of smooth [Kähler ]{}metrics in a fixed [Kähler ]{}class will be an infinite dimensional Riemannian manifold under the natural $L^2$ metric. At the level of individual metrics $\omega \in {\mathcal{H}}$, there exists a well-developed approximation theory [@T; @Z2]: Given $\omega$, one can define a canonical sequence of Bergman metrics $\omega_k \in {\mathcal{B}}_k$ which approximates $\omega$ in the $C^\infty$ topology. The approximation theory is based on microlocal analysis in the complex domain, specifically Bergman kernel asymptotics on and off the diagonal. Our principal aim is to study the approximation of certain global aspects of the geometry, such as the approximation of the harmonic maps or geodesics in ${\mathcal{H}}$ by the corresponding objects in ${\mathcal{B}}_k$.
The geodesic equation for the [Kähler ]{}potentials $\phi_t$ of $\omega_t$ is a complex homogeneous Monge-Ampère equation [@D; @S]. Concerning the solution of this Dirichlet problem, we have the following regularity theorem: $\phi_t\in C^{1,\alpha}([0, T] \times M)$ for all $\alpha< 1$ if the endpoint metrics are smooth [@C]. It is therefore natural to study the approximation of Monge-Ampère geodesics $\phi_t$ in ${\mathcal{H}}$ by the much simpler geodesics $\phi_k(t, z)$ in ${\mathcal{B}}_k$, which are defined by one-parameter subgroups of $GL(d_k + 1)$. The problem of approximating geodesic segments in ${\mathcal{H}}$ between two smooth endpoints by geodesic segments in ${\mathcal{B}}_k$ was raised by Arezzo-Tian, Donaldson and Phong-Sturm [@AT; @D; @PS]. Phong-Sturm proved that $\phi_k(t, z)\rightarrow \phi_t$ in a weak $C^0$ sense on $[0, 1]\times M$; a $C^0$ result with a remainder estimate was later proved by Berndtsson [@B].
To understand the approximation of ${\mathcal{H}}$-geodesics by ${\mathcal{B}}_k$-geodesics better, e.g., the rate of the approximation, we can test some special varieties and expect a better result. For example, in the case of toric varieties, when one restricts to torus-invariant metrics, the geodesic equation becomes the real homogeneous Monge-Ampère equation and thus can be linearized by the Legendre transform [@S]. Thus the geodesic will be smooth if the endpoints are two smooth metrics. For such geodesics, Song-Zelditch proved a profound $C^2$ convergence in space-time derivatives with remainder estimates. In a subsequent paper by Rubinstein-Zelditch [@RZ], it was proved that the harmonic map equation can be linearized and thus can be solved and that harmonic maps into ${\mathcal{H}}$ are also the $C^2$ limit of the corresponding ones into ${\mathcal{B}}_k$.
Our motivation in this article is to test the convergence of geodesics and more general harmonic maps over the principally polarized Abelian varieties by applying the method developed in [@RZ; @SoZ]. Our main result is that $\phi_k(t,z)\rightarrow \phi_t(z)$ in the $C^\infty$ topology in this Abelian case. Moreover, $\phi_k(t,z)$ has a complete asymptotic expansion in $k$ with the leading term $\phi_t(z)$ and the second term $\log (k^m R_\infty)$ where $R_\infty$ is the ratio of the norming constants (\[uyee\]). We also test the convergence of the harmonic maps into ${\mathcal{H}}_0^\Gamma$ of $(S^1)^m$-invariant metrics by the corresponding ones into ${\mathcal{B}}_k$ and the convergence is still in the $C^\infty$ topology.
Background
==========
Geodesics in ${\mathcal{H}}$ and ${\mathcal{B}}_k$
--------------------------------------------------
Let $M$ be an $m$-dimensional compact [Kähler ]{}manifold, $L\rightarrow M$ an ample holomorphic line bundle. Let $h$ be a smooth hermitian metric on $L$, then $h^k$ will be the induced metric on $L^k$. The curvature of $h$ is the $(1,1)$-form on $M$ defined locally by the formula $R(h) =-\frac{\sqrt{-1}}{2} \partial \bar \partial \log |s(z)|^2_h$, where $s(z)$ is a local, nowhere vanishing holomorphic section [@GH]. If we fix a hermitian metric $h_0$ and let $\omega_0 = R(h_0)$, then we define ${\mathcal{H}}$ as the space of [Kähler ]{}metrics in the fixed class of $[\omega_0]$: $$\label{HCALDEF} {\mathcal{H}}\ = \
\{\phi\in C^{\infty} (M) : \omega_\phi\ = \ \omega_0+ \frac{\sqrt{-1}}{2}\partial \bar \partial \phi>0\
\},$$where $\phi$ is identified with $h = h_0e^{-\phi}$ so that $R(h)=\omega_{\phi}$. If we define the metric $g_{{\mathcal{H}}}$ on ${\mathcal{H}}$ as $$\label{metric} ||\psi||^2_{g_{{\mathcal{H}}}, \phi}\ = \ \int_M |\psi|^2\
\omega_{\phi}^m,\ \, \;\; {\rm ~ where~} \phi \in {\mathcal{H}}{\rm~ and ~} \psi \in T_{\phi} {\mathcal{H}}\simeq C^{\infty}(M).$$ Then formally $({\mathcal{H}}, g_{{\mathcal{H}}})$ is an infinite dimensional non-positively curved symmetric Riemannian manifold [@D; @M; @S]. Furthermore, the geodesics of ${\mathcal{H}}$ in this metric are the paths $\phi_t$ which satisfy the partial differential equation: $$\label{eeew} \ddot{\phi}-|\partial \dot{\phi}|^2_{\omega_{\phi}}=0.$$
The space ${\mathcal{H}}$ contains a family of finite-dimensional non-positively curved symmetric spaces ${\mathcal{B}}_k$ which are defined as follows: Let $H^0(M, L^k)$ be the space of holomorphic sections of $L^k \to M$ and let $d_k + 1 =
\dim H^0(M, L^k)$. For large $k$ and for $\underline{s} = (s_0, ...., s_{d_k})$ an ordered basis of $H^0(M, L^k)$, let $$\iota_{\underline{s}}: M \rightarrow \mathbb{CP}^{d_k},\;\;z \rightarrow [s_0(z),
\dots, s_{d_k}(z)]$$ be the Kodaira embedding. Then we have a canonical isomorphism $L^k = \iota_{\underline{s}}^* O(1)$. We then define a Bergman metric of height $k$ to be a metric of the form: $$\label{FSDEFa} FS_k(\underline{s}):= (\iota_{\underline{s}}^*
h_{FS})^{1/k} = \frac{h_0}{\left( \sum_{j = 0}^{d_k}
|s_j(z)|^2_{h_0^k} \right)^{1/k}},$$ where $h_{FS}$ is the Fubini-Study Hermitian metric on ${\mathcal{O}}(1) \to {\mathbb{CP}}^{d_k}$. Note that the right side of (\[FSDEFa\]) is independent of the choice of $h_0$. We define the space of Bergman metrics as: $$\label{FSDEFaww}{\mathcal{B}}_k = \ \{FS_k(\underline{s}): \underline{s} \hbox{\ a basis
of $H^0(M, L^k) $\}. }\ $$ Then ${\mathcal{B}}_k=GL(d_k+1)/U(d_k+1)$ is a finite-dimensional negatively curved symmetric space. It’s proved in [@T; @Z2] that the union ${\mathcal B} = \bigcup_{k=1}^{\infty} {\mathcal{B}}_k$ is dense in ${\mathcal{H}}$ in the $C^{\infty}$ topology : If $h \in {\mathcal{H}}$, then there exists $h(k) \in {\mathcal{B}}_k$ such that $h(k) \rightarrow h$ in $C^{\infty}$ topology.
In fact, there is a canonical choice of the approximating sequence $h(k)$ [@T] which is used throughout the article. The hermitian metric $h$ on $L$ induces a natural inner product $Hilb_k(h)$ on ${H^0(M,L^k)}$ defined by: $$\label{dsldd} \langle s_1, s_2\rangle_{h^k}=\int_M (s_1(z),s_2(z))_{h^k}\frac{\omega_h^m}{m!} ,\; \;\mbox{where}\; \omega_h=R(h),$$ for any $s_1,s_2\in {H^0(M,L^k)}$. In particular, the norm square of the holomorphic section is: $$\label{dsldddd} \|s\|^2_{h^k}=\int_M |s|^2_{h^k}\frac{\omega_h^m}{m!}$$ Now choose $\underline{s}(k)$ as an orthonormal basis of ${H^0(M,L^k)}$ with respect to the inner product $Hilb_k(h)$, then we have the following $C^{\infty}$ asymptotics for the Bergman kernel as $k\rightarrow \infty$ [@Z2] (see also [@BBS; @BS]):$$\label{gbvgg}\sum_{j = 0}^{d_k}
|s_j(z)|^2_{h^k}=k^m+a_1(z)k^{m-1}+\cdots ,$$where $a_1(z)$ is the scalar curvature of $h$. Now let $\underline{\hat{s}}(k)=k^{-\frac{m}{2}}\underline{s}(k)$. Then the Bergman metric $h(k)=FS_k\circ Hilb_k(h):=FS_k(\underline{\hat{s}}(k))$ will be an approximating sequence of $h$; to be more precise, (\[FSDEFa\]) and (\[gbvgg\]) imply that for each $r>0$,$$\left \|\frac{h(k)}{h}-1\right\|=O(\frac{1}{k^2})\,\,,\left \|\omega(k)-\omega\right\|=O(\frac{1}{k^2})\,\,,\left \|\phi(k)-\phi\right\|=O(\frac{1}{k^2}),$$ where the norms are taken with respect to $C^r(\omega_0)$. Here, as before, $\omega=R(h)$, $\omega(k)=R(h(k))$, $h=h_0e^{-\phi}$, $h(k)=h_0e^{-\phi(k)}$.
Now we can compare geodesics in ${\mathcal{H}}$ and Bergman geodesics in ${\mathcal{B}}_k$. Let $h_0,h_1\in{\mathcal{H}}$. Then there will be a unique $C^{1,\alpha}$ Monge-Ampère geodesic $h_t=h_0e^{-\phi_t(z)}: [0,1]\rightarrow {\mathcal{H}}$ joining $h_0$ to $h_1$ for all $\alpha\in (0,1)$ [@C]. Assume $h_0(k)=FS_k(\hat {\underline{s}}^{(0)}(k))$ and $h_1(k)=FS_k(\hat {\underline{s}}^{(1)}(k))$ are two sequences in ${\mathcal{B}}_k$ obtained by the canonical construction approximating $h_0$ and $h_1$. Then the geodesic joining $h_0(k)$ and $h_1(k)$ in the space ${\mathcal{B}}_k=GL(d_k+1)/U(d_k+1)$ is constructed in [@PS] as follows: Let $\sigma_k \in GL(d_k+1)$ be the change of basis matrix defined by $\sigma_k \cdot \hat{\underline{s}}^{(0)}(k)=\hat {\underline{s}}^{(1)}(k)$. Without loss of generality, we may assume that $\sigma_k$ is diagonal with entries $e^{\lambda_0},...,e^{\lambda_{d_k}}$ for some $\lambda_j\in{\mathbb{R}}$. Let $\hat {\underline{s}}^{(t)}(k)=\sigma_k^t\cdot\hat {\underline{s}}^{(0)}(k)$ where $\sigma_k^t$ is diagonal with entries $e^{\lambda_jt}$. Define $$\label{phdfcid}
h_k(t,z)=FS_k(\hat {\underline{s}}^{(t)}(k))=h_0e^{-\phi_k(t,z)}.$$ Then $h_k(t,z)$ is the smooth geodesic in $GL(d_k+1)/U(d_k+1)$ joining $h_0(k)$ to $h_1(k)$. Explicitly, using identity (\[FSDEFa\]) again, we have: $$\label{phid}
\phi_k(t,z)\ = \frac{1}{k}
\log \left(\sum_{j=0}^{d_k}e^{2\lambda_jt}|\hat s_j^{(0)}(k)|^2_{h_0^k}
\right).$$ Then the main result of Phong-Sturm [@PS] is that the Monge-Ampère geodesic $\phi _t(z)$ is approximated by Bergman geodesic $\phi_k(t,z)$ in a weak $C^0$ sense on $[0, 1]\times M$; a $C^0$ result with a remainder estimate was later proved by Berndtsson [@B].
For special varieties, one expects better results. The first evidence is in [@SoZ]: Song-Zelditch proved the convergence of $\phi_k(t, z) \rightarrow \phi_t(z)$ is much stronger for toric hermitian metrics on the torus-invariant line bundle over the smooth toric [Kähler ]{}manifold. To be more precise, define the space of toric Hermitian metrics: $${\mathcal{H}}(\mathbb{T}^m) = \{\phi \in {\mathcal{H}}: (e^{i \theta})^* \phi = \phi, \;\; {\rm ~for~all~} e^{i \theta}
\in \mathbb{T}^m\}$$ Then for the smooth geodesic in ${\mathcal{H}}(\mathbb{T}^m)$ with endpoints $h_0$ and $h_1 \in {\mathcal{H}}(\mathbb{T}^m)$, they proved: $$\lim_{k\rightarrow \infty}\phi_k(t,z)=\phi(t,z) \;\; \mbox{in} \; C^2([0,1]\times M)$$ They also obtained the rate of convergence and remainder estimates. In fact, their method can be applied to principally polarized Abelian varieties. In this article, we consider the Abelian case and prove $C^\infty$ convergence; moreover, we expand $\phi_k(t,z)$ completely in $k$ with leading term $\phi_t$.
$\Gamma$-invariant space ${\mathcal{H}}_0^\Gamma$
-------------------------------------------------
Throughout the article, we will use the following notation: denote $\Gamma=(S^1)^m\cong ({\mathbb{R}}/{\mathbb{Z}})^m$, the isomorphism is given by $e^{2\pi i \theta}\rightarrow \theta \mod {\mathbb{Z}}^m$; thus we can identify a periodic function on ${\mathbb{R}}^m$ with period 1 in each variable with a function defined on $\Gamma$; denote $y^2=y_1^2+\cdots+y^2_m$ and $x \cdot y=x_1y_1+\cdots+x_my_m$ for $x, y\in {\mathbb{R}}^m$.
By performing an affine transformation, it suffices to consider the principally polarized Abelian variety $M={\mathbb{C}}^m/\Lambda$, where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$. We will prove our main result for this model case first and in section \[general\], we will sketch how to extend our argument to the general lattice. Now for $M={\mathbb{C}}^m/\Lambda$, where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$, we can write each point in $M$ as $z=x+iy$, where $x, y \in {\mathbb{R}}^m$ and they can be considered as the period coordinates in $M$. There is a natural action on $M$: the group $\Gamma=(S^1)^m$ acts on $M$ via translations in the Lagrangian subspace ${\mathbb{R}}^m \subset {\mathbb{C}}^m$, i.e., the translation of $x$ in the universal covering space.
Let $L \rightarrow M$ be a principal polarization of $M$; then there exists a hermitian metric defined on $L$ [@GH]:$$h=e^{-2\pi y^2}$$ The curvature of $h$ is given by $R(h)=\frac{ \sqrt{-1}}{2}\pi \sum_{\alpha=1}^m dz_{\alpha} \wedge d\bar z_{\alpha}$ which is in the class $[\pi c_1(L)]$. Now fix $\omega_0=R(h)$ a flat metric on $M$ with associated [Kähler ]{}potential $2\pi y^2$, denote ${\mathcal{H}}_0^{\Gamma}$ as the space of $\Gamma$-invariant [Kähler ]{}metrics in the fixed class $[\omega_0]$, then: $${\mathcal{H}}_0^{\Gamma}\ = \
\{\psi\in C^{\infty} _{\Gamma}(M) : \omega_\psi\ = \ \omega_0+ \frac{\sqrt{-1}}{2}\partial \bar \partial \psi>0\}.$$ Note that a smooth function $\psi(x,y)$ defined on $M$ invariant under the $\Gamma$ action must be independent of the $x$ variable; thus in fact it induces a smooth function on $M/\Gamma$, i.e., $\psi$ can be considered as a smooth and periodic function on the universal covering space $y\in {\mathbb{R}}^m$.
All hermitian metrics $h$ on $L$ such that $R(h)=\omega_{\psi}\in {\mathcal{H}}_0^{\Gamma}$ are of the form: $$h=e^{-2\pi y^2-4\pi \psi(y)}.$$ In section \[jhnmb\], we will see that such an $h$ is a well-defined hermitian metric on $L$. The corresponding [Kähler ]{}potential is: $$\label{dhgdg}\varphi(y)=2\pi y^2+ 4\pi \psi(y),$$ where $\psi(y)$ is a smooth and periodic function with period $1$ and ${\nabla^2}\varphi(y)>0$.
The following fact about the space ${\mathcal{H}}_0^{\Gamma}$ is crucial [@D; @S]: Given any $\varphi_0$ and $\varphi_1$ $\in {\mathcal{H}}_0^{\Gamma}$, we can join them by a smooth geodesic $\varphi_t \in {\mathcal{H}}_0^{\Gamma}$. Thus throughout the article, we will consider the geodesic in the form $\varphi_t(y)=2\pi y^2+4 \pi \psi_t(y)$. In section \[general\], we show how to get our main results for the case of a general lattice.
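The linearization behind this smoothness is the same one used in the toric case [@S]: for the convex potentials $\varphi_t(y)$ the geodesic equation reduces to the real homogeneous Monge-Ampère equation, which the Legendre transform turns into a straight line. Schematically (a sketch, up to the normalization conventions used for $u$ later in the article): $$u_t(x)\ =\ \sup_{y\in {\mathbb{R}}^m}\bigl(\langle x,y\rangle-\varphi_t(y)\bigr), \qquad \varphi_t \;\hbox{is a geodesic}\ \Longleftrightarrow\ u_t=(1-t)u_0+t\,u_1,$$ so the geodesic joining $\varphi_0$ and $\varphi_1$ is recovered by inverting the Legendre transform of the affine path of dual potentials.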
Main results
============
Complete asymptotics of geodesics
---------------------------------
Our main task in this article is to prove the following theorem:
\[ghjiuyum\]Let $M$ be a principally polarized Abelian variety and let $L \rightarrow M $ be a principal polarization of $ M$. Given $h_0$ and $h_1$ in the space ${\mathcal{H}}_0^{\Gamma}$ of $\Gamma$-invariant [Kähler ]{}metrics, let $h_t \in {\mathcal{H}}_0^{\Gamma}$ be the smooth geodesic between them. Let $h_k(t)$ be the Bergman geodesic between $h_0(k)$ and $h_1(k)$ in ${\mathcal{B}}_k$. Write $h_k(t)=e^{-\phi_k(t,z)}h_0$ and $h_t=e^{-\phi_t(z)}h_0$; then $$\lim_{k\rightarrow \infty}\phi_k(t,z)=\phi_t(z)$$ in the $C^{\infty}([0,1] \times M)$ topology. Moreover, we have the following $C^{\infty}$ complete asymptotics: $$\phi_k(t,z)=\phi_t(z)+mk^{-1}\log k+k^{-1}a_1(t,\mu_t)+k^{-2}a_2(t,\mu_t)+\cdots$$ for $k$ large enough, where $\mu_t(y)=\nabla \varphi_t(y)$, with $y$ defined in (\[jhgui\]), and each $a_n$ is a smooth function of $\mu_t$ and $t$. In particular, $a_1=\log R_{\infty}$ where $R_{\infty}$ is defined by (\[uyee\]).
We now sketch the proof of our main result for the model case: we define the inner product on ${H^0(M,L^k)}$ induced by $h_t^k$ in the sense of (\[dsldd\]); then in Proposition \[dfghhg\] we first prove that, for any fixed $t$, the following theta functions of level $k$: $$\label {a}\theta_j(z)=\sum_{n \in {\mathbb{Z}}^m}e^{-\pi \frac{(j+kn)^2}{k} +2\pi i(j+kn)\cdot z} \,\,, j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m$$form an orthogonal basis with respect to this inner product; in particular, $\dim {H^0(M,L^k)}=k^m$. Therefore, we can choose the orthonormal basis $\underline{s}^{(t)}(k)$ as $\theta_j$ normalized by $\|\theta_j\|_{h_t^k}$. Hence, if $\sigma_k \in GL(k^m)$ is such that $\sigma_k \cdot \underline{\hat{s}}^{(0)}(k)=\underline{\hat{s}}^{(1)}(k)$, then $\sigma_k$ can be chosen to be diagonal with entries $e^{\lambda_j}=\|\theta_j\|_{h^k_0}/ \|\theta_j\|_{h^k_1}$. Hence, the equation (\[phid\]) of the Bergman geodesic becomes: $$\label{phdsdid}
\phi_k(t,z)\ = \frac{1}{k}
\log \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} \left( \frac{\|\theta_j\|^2_{h_0^k}}{\|\theta_j\|^2_{h_1^k}}
\right)^t\frac{|\theta_j|^2_{h_0^k}}{\|\theta_j\|^2_{h_0^k}}.$$ Our main theorem asserts that this term converges to $\phi_t(z)$ in the $C^{\infty}([0,1]\times M)$ topology. But $$\phi_k(t,z)-\phi_t(z)=\frac{1}{k}
\log \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} \left( \frac{\|\theta_j\|^2_{h_0^k}}{\|\theta_j\|^2_{h_1^k}}
\right)^t\frac{|\theta_j|^2_{h_0^k}e^{-k\phi_t}}{\|\theta_j\|^2_{h_0^k}},$$ denote $\rho _{k}(j,t)=\|\theta_j\|^2_{h_t^k}$ as the norming constant and denote $$R_k(j,t)= \frac{\rho _{k}(j,t)}{(\rho _{k}(j,0))^{1-t}(\rho _{k}(j,1))^t},$$ and as usual $h_t = e^{-\phi_t}h_0$; then we can rewrite $$\phi_k(t,z)-\phi_t(z)=\frac{1}{k}\log \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}.$$ Our goal is thus equivalent to proving that this term goes to $0$ in the $C^{\infty}$ topology as $k \rightarrow \infty$. In fact, we prove the following result, which implies Theorem \[ghjiuyum\] immediately:
\[oiut\] With all assumptions and notations as above, we have: $$\frac{1}{k} \log\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}=mk^{-1}\log k+\log R_{\infty}(\mu_t,t)+ k^{-2}c_1(\mu_t,t)+\cdots$$ where $\mu_t(y)=\nabla \varphi_t(y)$, $c_n(\mu_t,t)\in C^\infty(M\times [0,1])$ and periodic in $y$ variables for any fixed $t$ and $R_{\infty}$ is defined by (\[uyee\]). Furthermore, this expansion can be differentiated any number of times on both sides with respect to $t$ and $y$ (or $z$).
The proof of Lemma \[oiut\] is a consequence of the following two facts:
- Regularity: $R_k(j,t)$ admits the complete asymptotics with the leading term given by $R_{\infty}(x,t)$ evaluated at the point $x_0=-\frac{4\pi j}{k}$ (Lemma \[jhgf\]).
- The generalized Bernstein Polynomial $\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}f(-\frac{j}{k})\frac{|\theta_j|^2_{h^k}}{\|\theta_j\|^2_{h^k}}$ admits complete asymptotics for any periodic function $f$ defined on ${\mathbb{R}}^m$ with period 1 (Lemma \[ddgsgs\]).
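The construction above relies on the explicit theta basis (\[a\]). As a sanity check in the model case $m=1$ with the flat metric $h=e^{-2\pi y^2}$, the orthogonality of $\theta_0$ and $\theta_1$ for $k=2$ can be verified numerically (a sketch; the truncation level `N` and grid size `P` are ad hoc choices, the volume normalization is dropped, and equispaced quadrature is accurate here because the integrand is periodic):

```python
import cmath
import math

def theta(j, k, z, N=4):
    # Level-k theta function on C/(Z + iZ), eq. (a), truncated to |n| <= N.
    return sum(
        cmath.exp(-math.pi * (j + k * n) ** 2 / k + 2j * math.pi * (j + k * n) * z)
        for n in range(-N, N + 1)
    )

def inner(j, l, k, P=64):
    # <theta_j, theta_l> for the flat metric h^k = e^{-2 pi k y^2}, in the
    # sense of eq. (dsldd) up to an overall constant volume factor;
    # equispaced quadrature over the period square [0,1)^2.
    s = 0j
    for a in range(P):
        for b in range(P):
            z = complex(a / P, b / P)
            w = math.exp(-2.0 * math.pi * k * (b / P) ** 2)
            s += theta(j, k, z) * theta(l, k, z).conjugate() * w
    return s / P**2

k = 2
norm0, norm1 = inner(0, 0, k).real, inner(1, 1, k).real
cross = inner(0, 1, k)
print(norm0 > 0, norm1 > 0, abs(cross) / norm0)
```

The cross term vanishes because every $x$-frequency $j-l+k(n-n')$ is odd here, so each equispaced $x$-sum cancels exactly.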
Dedekind-Riemann sums
-----------------------
In section \[dddsgg\], we prove the following generalized Bernstein Polynomial Lemma using the basic properties of theta functions and Weyl quantization:
\[ddgsgs\]Let $f(x) \in C^{\infty}({\mathbb{R}}^m)$ and periodic in each variable with period $1$, let $h\in {\mathcal{H}}_0^{\Gamma}$, then we have the complete asymptotics: $$\label{bvc}\frac{1}{k^m}\sum_{ j\in({\mathbb{Z}}/k{\mathbb{Z}})^m}f(-\frac{j}{k})\frac{|\theta_j|^2_{h^k}}{\|\theta_j\|^2_{h^k}}=f(\mu)+k^{-1}b_1(\mu)+\cdots$$ where $\mu(y)=y+\nabla\psi(y)$ and $b_n(\mu)\in C^\infty({\mathbb{R}}^m)$ for all $n\in \mathbb{N}$.
The generalized Bernstein polynomial Lemma \[ddgsgs\] has an application to Dedekind-Riemann sums for the periodic functions. Results about the complete asymptotics of Dedekind-Riemann sums for the smooth functions with compact support over the polytope $P$ were obtained by Brion-Vergne, Guillemin-Sternberg and many others (cf. [@BV; @GS]). For purposes of comparison, Theorem 4.2 of [@GS] states that for $f\in C^{\infty}_{0}({\mathbb{R}}^n)$: $$\frac{1}{k^m}\sum _{\alpha\in {\mathbb{Z}}^m \cap kP}f (\frac{\alpha}{k})=\left(\sum_F\sum_{\gamma \in \Gamma_{F}^{\sharp}}\tau _{\gamma}\left(\frac{1}{k}\frac{\partial}{\partial h}\right)\int_{P_{h}}f(x)dx \right)\mid_{h=0}$$ where $\alpha$ is the lattice point in the $kth$ dilate of the polytope $kP$ and $P_h$ is a parallel dilate of $P$. We refer to [@GS] for more details.
Afterward, Zelditch related the Bernstein polynomials to the Bergman kernel for the Fubini-Study metric on ${\mathbb{CP}}^1$, and generalized this relation to any compact [Kähler ]{}toric manifold, implying many interesting results [@Z1]. To be more precise, let $(L,h)\rightarrow (M,\omega)$ be a toric Hermitian invariant line bundle over a [Kähler ]{}toric manifold with associated moment polytope $P$, he proved the following complete asymptotics: $$\sum_{\alpha\in {\mathbb{Z}}^m \cap kP}f(\frac{\alpha}{k})\frac{|s_{\alpha}|^2_{h^k}}{\|s_{\alpha}\|^2_{h^k}}=f(x)+k^{-1}\mathcal{L}_1f(x)+k^{-2}\mathcal{L}_2f(x)+\cdots$$ where $f\in C_{0}^{\infty}({\mathbb{R}}^m)$, each $\mathcal{L}_j$ is a differential operator of order $2j$, $s_{\alpha}$ is the orthogonal basis of ${H^0(M,L^k)}$ which in fact are monomials $z^{\alpha}$. Then the simple integration yields: $$\frac{1}{k^m}\sum_{\alpha \in {\mathbb{Z}}^m\cap kP }f(\frac{\alpha}{k})=\int_{P}f(x)dx+ \frac{1}{2k}\int_{\partial P}f(x)dx+\frac{1}{k^2}\int_{P}\mathcal{L}_2f(x)dx+\cdots$$ In [@F], this method is then generalized to the polyhedral set.
In section \[dddsgg\], we will first generalize the method in [@F; @Z1] to Abelian varieties to get the Lemma \[ddgsgs\]. If we take the integral over $M$ on both sides of (\[bvc\]) and note $\sum_{j\in({\mathbb{Z}}/k{\mathbb{Z}})^m}f(\frac{j}{k})=\sum_{j\in({\mathbb{Z}}/k{\mathbb{Z}})^m}f(-\frac{j}{k})$, then we have the following Dedekind-Riemann sums for periodic functions:
\[iuu\] Let $f(x) \in C^{\infty}({\mathbb{R}}^m)$ and periodic in each variable with period 1, then: $$\label{bvsssc}\frac{1}{k^m}\sum_{j\in({\mathbb{Z}}/k{\mathbb{Z}})^m}f(\frac{j}{k})=\int _{[0,1]^m}f(x)dx+ k^{-1}\int _{[0,1]^m}b_1(x)dx+\cdots$$ where each $b_n(x)\in C^\infty({\mathbb{R}}^m)$ and can be computed explicitly.
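A quick numerical illustration of the leading term of (\[bvsssc\]) for $m=1$ (a sketch with an ad hoc test function; for this particular smooth periodic $f$ the correction terms happen to be numerically negligible already at moderate $k$):

```python
import math

def dedekind_riemann_sum(f, k):
    # Equispaced sum (1/k) * sum_{j=0}^{k-1} f(j/k) over one period.
    return sum(f(j / k) for j in range(k)) / k

# Smooth, period-1 test function with a known integral:
# \int_0^1 dx / (2 + cos(2 pi x)) = 1 / sqrt(3).
f = lambda x: 1.0 / (2.0 + math.cos(2.0 * math.pi * x))
exact = 1.0 / math.sqrt(3.0)

for k in (5, 10, 20, 40):
    print(k, abs(dedekind_riemann_sum(f, k) - exact))
```

The printed errors shrink rapidly with $k$, consistent with the leading term $\int_{[0,1]}f(x)dx$ of the expansion.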
Complete asymptotics of harmonic maps
-------------------------------------
A harmonic map between two Riemannian manifolds $(N_1, g_1)$ and $(N_2, g_2)$ is a critical point of the energy functional $$E(f) = \int_{N_1} |df|^2_{g_1\otimes f^*g_2}dVol_{g_1}$$ on the space of smooth maps $f: N_1 \rightarrow N_2$. Note that this notion may also be defined when the target manifold $(N_2, g_2)$ is an infinite-dimensional weakly Riemannian manifold, e.g., $({\mathcal{H}}, g_{{\mathcal{H}}})$. By a smooth map $f$ from $N$ to ${\mathcal{H}}$ we mean a function $f\in C^\infty(N \times M)$ such that $f(q, \cdot) \in {\mathcal{H}}$ for each $q \in N$ (see Definition 1.1 in [@RZ]).
In [@RZ], Rubinstein-Zelditch proved that, in the toric case, the Dirichlet problem for a harmonic map $\varphi: N \rightarrow {\mathcal{H}}(\mathbb{T}^m)$ of any compact Riemannian manifold $N$ with smooth boundary into ${\mathcal{H}}(\mathbb{T}^m)$ of toric invariant metrics admits a smooth solution that may be approximated in $C^2(N\times M)$ by a special sequence of harmonic maps $\varphi_k : N \rightarrow {\mathcal{B}}_ k(\mathbb{T}^m) \subset {\mathcal{H}}(\mathbb{T}^m)$ into the subspaces ${\mathcal{B}}_ k(\mathbb{T}^m)$ of Bergman metrics (Theorem 1.1 in [@RZ]). This generalized the work of Song-Zelditch in the case of geodesics, i.e., where $N = [0, 1]$.
In the spirit of [@RZ], we consider harmonic maps into the space ${\mathcal{H}}_0^\Gamma$ of $\Gamma$-invariant Abelian metrics. We then prove that the approximation of harmonic maps into ${\mathcal{H}}_0^\Gamma$ by the corresponding ones into ${\mathcal{B}}_k$ is still $C^\infty$.
\[harmin\] Let $M$ be a principally polarized Abelian variety and let $L\rightarrow M$ be a principal polarization. Let $(N, g)$ be a compact oriented smooth Riemannian manifold with smooth boundary $\partial N$. Let $\psi: \partial N \rightarrow {\mathcal{H}}_0^\Gamma$ denote a fixed smooth map. There exists a harmonic map $\varphi:N \rightarrow {\mathcal{H}}_0^\Gamma$ with $\varphi|_{\partial N} = \psi$ and harmonic maps $\varphi_k : N \rightarrow {\mathcal{B}}_k$ with $\varphi_k |_{\partial N} = FS_k \circ Hilb_k(\psi)$, then we have the following $C^\infty$ complete asymptotics, $$\varphi_k=\varphi+mk^{-1}\log k+k^{-1}a_1+k^{-2}a_2+\cdots$$ where each $a_n$ is smooth and $a_1=\log K_\infty$ where $K_\infty$ is defined by (\[ddgssd\]).
The proof of Theorem \[harmin\] is similar to the one in [@RZ]. In section \[testharmonic\], we will sketch the main steps of the proof for the model case.
Final remarks {#test}
-------------
The $C^2$ convergence of Song-Zelditch for toric varieties can be improved to $C^{\infty}$ convergence for Abelian varieties mainly because of the Regularity Lemma \[jhgf\]: $R_k(j,t)$ admits complete asymptotics. In the toric case, however, the existence of complete asymptotics of $R_k(\alpha,t)$ is not known, where $\alpha$ is a lattice point in $P$, the image of the moment map $\nabla_{\rho} \varphi : M \rightarrow P$ of the toric variety. In fact, they have the following lemma: $$(\frac{\partial}{\partial t})^n R_k(\alpha,t)=(\frac{\partial}{\partial t})^nR_{\infty}(\frac{\alpha}{k},t)+O(k^{-\frac{1}{3}})\,\,,0 \leq n\leq 2.$$ They cannot prove the existence of complete asymptotics because they cannot obtain the joint asymptotics in $k$ and $\alpha$ of the norming constant $\rho_{k}(\alpha)=\|s_{\alpha}\|_{h^k}^2$, where $s_{\alpha}$ is the holomorphic section of the invariant line bundle. Recall that the boundary of $P$ is the image under the moment map $\nabla_{\rho}\varphi$ of the points with isotropy group $\mathbb{T}^n$, $1 \leq n\leq m$, and the boundary causes serious complications. To be more precise, they can rewrite $\rho_{k}(\alpha)$ as: $$\rho_{k}(\alpha)=\int_P e^{-k(u_{\varphi}(x)+\langle\frac{\alpha}{k}-x, \nabla u_{\varphi}(x)\rangle)}dx$$ where $u_{\varphi}$ is the symplectic potential defined on $P$, i.e., the Legendre transform of the [Kähler ]{}potential $\varphi$. Note that the critical point of the phase is given by $\frac{\alpha}{k}$; thus they can obtain complete asymptotics by the stationary phase method when the point $\frac{\alpha}{k}$ stays far away from the boundary of $P$. But this method does not yield joint asymptotics when the point approaches the boundary $\partial P$ as $k \rightarrow \infty$ [@SoZ].
But in our Abelian case, we do not have such a disadvantage. There is a real torus $\Gamma=(S^1)^m$ action on the Abelian varieties. This action is free, i.e., there is no point with isotropy group $(S^1)^n$, $1\leq n\leq m$. In section \[dghdSg\], we will see that the gradient of the [Kähler ]{}potential induces a map $\nabla \varphi_t=4\pi(y+\nabla \psi_t): M \rightarrow M/\Gamma$ which is in fact a Lie-group-valued moment map for any fixed $t$. The image of $\nabla \varphi_t$ is $M/\Gamma$, which has no boundary. There is another way to look at this: in section \[gnbvgi\], we rewrite $\rho_{k}(j)=\|\theta_j\|_{h^k}^2$ as an integral over the universal covering space ${\mathbb{R}}^m$ (\[dhghgndh\]): $$\rho_{k}(j)=e^{-2\pi \frac{j^2}{k}}\int_{{\mathbb{R}}^m}e^{-k\pi (-u(x)+\langle x+\frac{4\pi j}{k}, \nabla u(x) \rangle)}dx$$ where $u(x)$ is defined by the Legendre transform of $\varphi$; thus we can apply the stationary phase method to this integral everywhere.
For example, in section \[jhnmb\], we can get identity (\[fdsfs\]), which is the exact formula for $\rho_k(j)=\|\theta_j\|_{h^k}^2$. If we assume $\psi\equiv 0$, i.e., we choose the flat metric over the Abelian variety, then $\|\theta_j\|^2_{h^k}$ will be a constant independent of $j$, i.e., the joint complete asymptotics of $\rho_k(j)$ (which is in fact a constant) exist for any $j$ as $k \rightarrow \infty$. This is totally different from the toric case. For example, consider $(\mathbb{CP}^1, \omega_{FS})$ with the Fubini-Study metric, then $\|z^{\alpha}\|^2_{h^k_{FS}}={k \choose \alpha}^{-1}$, but as proved in [@SoZ1], for any $\alpha\in [k^{-\frac{3}{4}}, 1-k^{-\frac{3}{4}}]$, by the stationary phase method: $${k \choose k\alpha} \sim \frac{1}{\sqrt{2\pi k \alpha (1-\alpha)}}e^{-k(\alpha \log \alpha +(1-\alpha)\log(1-\alpha))}$$ Then it’s easy to see that the asymptotics are highly non-uniform as $\alpha \rightarrow 0$ or $\alpha \rightarrow 1$, where $0$ and $1$ are the two boundary points of the moment polytope $[0,1]$ of $\mathbb{CP}^1$.
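The non-uniformity of the interior asymptotics near the boundary points of $[0,1]$ can be seen numerically (a sketch; the approximation used is the standard Stirling-type interior formula, and $k$ and the sample values of $\alpha$ are illustrative):

```python
import math

def interior_asymptotic(k, a):
    # Standard Stirling-type approximation of C(k, k*a) for a in the interior
    # of [0, 1]: exp(-k*(a log a + (1-a) log(1-a))) / sqrt(2 pi k a (1-a)).
    ent = a * math.log(a) + (1.0 - a) * math.log(1.0 - a)
    return math.exp(-k * ent) / math.sqrt(2.0 * math.pi * k * a * (1.0 - a))

k = 200
for a in (0.5, 0.1, 1.0 / k):
    ratio = interior_asymptotic(k, a) / math.comb(k, round(k * a))
    print(a, ratio)
```

The ratio is very close to $1$ at $\alpha=1/2$ but degrades by more than an order of magnitude in relative error as $\alpha$ approaches the endpoint $\alpha=1/k$.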
[**Acknowledgements:**]{} The author would like to thank Prof. S. Zelditch for his support of this project. He would like to thank Dr. Z. Wang for many helpful discussions. Many thanks go to Dr. Y. A. Rubinstein for discussing the problem and sharing many of his fresh ideas, for reading the first version line by line, pointing out mistakes and typos, and giving many suggestions about how to write the article. The author also would like to thank the referee for many helpful comments on the original version. This paper would never have come out without their help.
Abelian varieties and Theta functions {#jhnmb}
=====================================
In this section, we review some basic properties of principally polarized Abelian varieties and theta functions; we mainly follow [@FMN] and refer to [@GH; @Mu] for more details.
Let $V$ be an $m$-dimensional complex vector space and $\Lambda \cong {\mathbb{Z}}^{2m}$ a maximal lattice in $V$ such that the quotient $M = V/\Lambda$ is an Abelian variety, i.e., a complex torus which can be holomorphically embedded in projective space. We assume that $M$ is endowed with a principal polarization; then we can always find a basis $\lambda_1,...,\lambda_{2m}$ for $\Lambda$, such that $\lambda_1,...,\lambda_{m}$ is a basis of $V$ and $$\lambda _{m+\alpha}=\sum_{\beta =1}^mZ_{\beta \alpha}\lambda_{\beta}, \,\, \alpha=1,...,m$$ where $Z=(Z_{\alpha\beta})_{\alpha,\beta=1}^{m}$ is an $m\times m$ matrix satisfying $Z^T=Z$ and $Im Z>0$. Conversely, principally polarized Abelian varieties are parametrized by such matrices.
Let $x_1,...,x_m,y_1,..., y_m$ be the coordinates on $V$ which are dual to the generators $\lambda_1,...,\lambda_{2m}$ of the lattice $\Lambda$. Then $x_{\alpha}$ and $y_{\alpha}$ can also be considered as periodic coordinates in $M$, and are related to the complex ones by: $$\label{jhgui}z_{\alpha}=x_{\alpha}+\sum _{\beta=1}^mZ_{\alpha \beta}y_{\beta}\,\,\,,\bar z_{\alpha}=x_{\alpha}+\sum _{\beta=1}^m \bar Z_{\alpha \beta}y_{\beta}.$$
Let $L \rightarrow M$ be a holomorphic line bundle. If we further assume that $L$ is a principal polarization of $M$, then the first Chern class $c_1(L)$ is given by: $$\begin{array}{lll} \label{poiubvy}\omega_0
& = & \sum_{\alpha=1}^m dx_{\alpha}\wedge dy_{\alpha}
\\ && \\
& =& \frac{\sqrt{-1}}{2}\sum_{\alpha, \beta}(Im Z)^{\alpha \beta}dz_{\alpha} \wedge d \bar z_{\beta}. \end{array}$$ The space ${H^0(M,L^k)}$ is naturally isomorphic to the space of holomorphic functions $\theta$ on $V$ satisfying: $$\theta(z+\lambda_{\alpha})=\theta(z), \,\,\, \theta(z+\lambda_{m+\alpha})=e^{-2k\pi iz_{\alpha}-k\pi i Z_{\alpha \alpha}}\theta(z).$$ In fact, these theta functions are of the form [@FMN]: $$\theta(z)=\sum_{l\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}a_l\theta_l( z,\Omega),$$ where $$\label{dsbn}\theta_l(z,\Omega )=\sum_{n\in {\mathbb{Z}}^m}e^{\pi i(l+kn) \frac{Z}{k}(l+kn)^T}e^{2\pi i(l+kn)\cdot z}, \,\,\, l\in ({\mathbb{Z}}/k{\mathbb{Z}})^m .$$ In particular, $\dim H^0(M,L^k)=k^m$.
Now consider a hermitian metric $h$ on $L$. Then $h$ is a positive $C^{\infty}$ function of $z$ satisfying: $$\label{iuytreeth}h(z)|\theta(z)|^2=h(z+\lambda)|\theta(z+\lambda)|^2$$ for any $\lambda \in \Lambda$; thus $$\label{iuyth} h(z+\lambda_{\alpha})=h(z)\,\,, h(z+\lambda_{m+\alpha})=|e^{2\pi i z_{\alpha}}|^2|e^{\pi i Z_{\alpha \alpha}}|^2h(z).$$ Conversely, any such function $h$ defines a metric on $L$.
For simplicity, we first consider the Abelian variety $M={\mathbb{C}}^m/\Lambda$, where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$. Write $z=x+iy$, where $x$ and $y \in {\mathbb{R}}^m$ can be viewed as periodic coordinates on $M$. Let $L \rightarrow M$ be a principal polarization of $M$; then by formula (\[dsbn\]), the global holomorphic section of $H^0(M,L)$ is given by the following Riemann theta function: $$\label{oiuy} \theta(z)=\sum_{n\in {\mathbb{Z}}^m}e^{-\pi n^2 +2\pi i n\cdot z},$$ where $n^2=n_1^2+\cdots+n_m^2$ and $n\cdot z=n_1z_1+\cdots+n_mz_m$. The global holomorphic sections of ${H^0(M,L^k)}$ are given by: $$\label {a}\theta_j(z)=\sum_{n \in {\mathbb{Z}}^m}e^{-\pi \frac{(j+kn)^2}{k} +2\pi i(j+kn)\cdot z} ,\,\,\, j\in(\mathbb{Z}/k\mathbb{Z})^m$$ with $\dim H^0(M,L^k)=k^m$. Furthermore, the $\theta_j(z)$ are holomorphic functions over ${\mathbb{C}}^m$ and satisfy the following quasi-periodicity relations: $$\theta_j(z_{\alpha}+1)=\theta_j(z_{\alpha}) ,\,\,\,\, \theta_j(z_{\alpha}+i) = e^{-2\pi ikz_{\alpha}+k\pi}\theta_j(z_{\alpha}).$$ Now define the hermitian metric on $L$ as $$h_t= e^{-2\pi y^2-4\pi\psi_t(y)},$$ where $\psi_t(y)$ is a smooth and periodic function of $y\in {\mathbb{R}}^m$ with period $1$. It is easy to check that $h_t$ satisfies conditions (\[iuyth\]): $$h_t(z_{\alpha}+1)=h_t(z_{\alpha})\,\,,h_t(z_{\alpha}+i)=|e^{2\pi i z_{\alpha}}|^2e^{2\pi}h_t(z_{\alpha}),$$ thus $h_t$ is a well-defined hermitian metric on $L$.
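For $m=1$ these quasi-periodicity relations can be checked numerically with a truncated theta sum (an illustrative sketch; the truncation bound `N` and the sample point are arbitrary choices):

```python
import cmath
import math

def theta(j, k, z, N=30):
    # truncation of (a): the Gaussian factor exp(-pi (j+kn)^2 / k) makes
    # the tail negligible for |n| > N
    return sum(cmath.exp(-math.pi * (j + k * n) ** 2 / k
                         + 2 * math.pi * 1j * (j + k * n) * z)
               for n in range(-N, N + 1))

k, j, z = 3, 1, 0.3 + 0.4j
t = theta(j, k, z)
# periodicity in the real direction: theta_j(z+1) = theta_j(z)
assert abs(theta(j, k, z + 1) - t) < 1e-8 * abs(t)
# quasi-periodicity: theta_j(z+i) = exp(-2 pi i k z + k pi) theta_j(z)
factor = cmath.exp(-2 * math.pi * 1j * k * z + k * math.pi)
assert abs(theta(j, k, z + 1j) - factor * t) < 1e-8 * abs(factor * t)
```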
Now in our case, the natural Hermitian inner product (\[dsldd\]) defined on the space ${H^0(M,L^k)}$ reads: $$\label {acbn}\langle \theta_l,
\theta_j\rangle_{h_t^k}= \int_{M} \theta_l(z) \overline{ \theta_j(z)}e^{-2k\pi y^2-4k\pi\psi_t(y)}\frac{\omega_{h_t}^m}{m!} ,$$ where the volume form $\frac{\omega_{h_t}^m}{m!}= (4\pi)^m\det(I+{\nabla^2}\psi_t)dxdy$.
\[dfghhg\]$\left\{\theta_j\,\,,j \in({\mathbb{Z}}/k{\mathbb{Z}})^m \right \}$ forms an orthogonal basis of ${H^0(M,L^k)}$ with respect to the Hermitian inner product defined by (\[acbn\]).
By definition, $$\begin{array}{lll} \label{poiuy}\langle\theta_l, \theta_j\rangle_{h_t^k}
& = & (4\pi)^m \int _{[0,1]^m} \int _{[0,1]^m}[\sum_{n \in{\mathbb{Z}}^m}e^{-\pi \frac{(l+kn)^2}{k} +2\pi i(l+kn)\cdot z} ] \cdot
\\ && \\
& & [\sum_{p \in {\mathbb{Z}}^m}e^{-\pi \frac{(j+kp)^2}{k} -2\pi i(j+kp)\cdot \bar z}] e^{-2k\pi y^2-4k\pi\psi_t(y)}\det(I+{\nabla^2}\psi_t)dxdy \\ && \\
& = & (4\pi)^m [\sum_{n\in{\mathbb{Z}}^m}\sum_{p\in{\mathbb{Z}}^m}\int _{[0,1]^m}e^{ 2\pi i (l+kn-j-kp)\cdot x}dx] \cdot \\ && \\
& & [\int _{[0,1]^m}e^{-\pi \frac{(l+kn)^2+(j+kp)^2}{k}-2\pi (l+kn+j+kp)\cdot y-2k\pi y^2-4k\pi\psi_t}\det(I+{\nabla^2}\psi_t) dy]\end{array}$$ For the first integral, if $l_{\alpha}+kn_{\alpha}=j_{\alpha}+kp_{\alpha}$, i.e., $l_{\alpha}-j_{\alpha}=0 \mod k$, then $$\int _{[0,1]}e^{ 2\pi i (l_{\alpha}+kn_{\alpha}-j_{\alpha}-kp_{\alpha})x_{\alpha}}dx_{\alpha}=1,$$ otherwise it is $0$. Since $1 \leq l_{\alpha}, j_{\alpha}\leq k$, we have $l_{\alpha}+kn_{\alpha}=j_{\alpha}+kp_{\alpha}$ iff $l_{\alpha}=j_{\alpha}$ and $p_{\alpha}=n_{\alpha}$; thus the first integral is nonzero iff $l=j$ and $n=p$. Then equation (\[poiuy\]) becomes: $$\langle\theta_l, \theta_j\rangle_{h_t^k} =(4\pi)^m \delta_{l,j}\sum_{n\in {\mathbb{Z}}^m}\int _{[0,1]^m}e^{-2k\pi(\frac{j}{k}+n+y)^2}e^{-4k\pi \psi_t(y)}\det(I+{\nabla^2}\psi_t) dy.$$ Hence, we can see that $\left\{ \theta_j \,\,, j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m\right\}$ forms an orthogonal basis of ${H^0(M,L^k)}$.
Furthermore, we have: $$\label{fdsfs} \begin{array}{lll} \|\theta_j\|^2_{h_t^k}&=&(4\pi)^m \sum_{n\in {\mathbb{Z}}^m}\int _{[0,1]^m}e^{-2k\pi(\frac{j}{k}+n+y)^2}e^{-4k\pi \psi_t(y)}\det(I+{\nabla^2}\psi_t) dy
\\ && \\
&= &
(4\pi)^m \int _{{\mathbb{R}}^m}e^{-2k\pi (y+\frac{j}{k})^2}e^{-4k\pi \psi_t(y)}\det(I+{\nabla^2}\psi_t) dy.\end{array}$$ In the last step, we change variable $y\rightarrow y+n $ and use the fact that $\psi_t(y)$ is a smooth and periodic function with period 1. In fact, this integral is taken over the universal covering space ${\mathbb{R}}^m$.
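In the flat case $\psi_t \equiv 0$ and $m=1$, the right-hand side of (\[fdsfs\]) is $4\pi\int_{\mathbb{R}} e^{-2k\pi(y+j/k)^2}dy = 4\pi/\sqrt{2k}$, independent of $j$, and one can confirm the formula against a direct midpoint-rule integration over the fundamental domain (an illustrative numerical sketch; grid size and truncation are ad hoc choices):

```python
import cmath
import math

def theta(j, k, z, N=20):
    # truncated theta sum (a) for m = 1
    return sum(cmath.exp(-math.pi * (j + k * n) ** 2 / k
                         + 2 * math.pi * 1j * (j + k * n) * z)
               for n in range(-N, N + 1))

def norm_sq(j, k, grid=60):
    # midpoint rule for 4*pi * int_{[0,1]^2} |theta_j|^2 e^{-2 k pi y^2} dx dy;
    # the integrand is doubly periodic, so the rule converges very fast
    h = 1.0 / grid
    s = 0.0
    for a in range(grid):
        for b in range(grid):
            x, y = (a + 0.5) * h, (b + 0.5) * h
            s += abs(theta(j, k, x + 1j * y)) ** 2 * math.exp(-2 * k * math.pi * y * y)
    return 4 * math.pi * s * h * h

k = 2
exact = 4 * math.pi / math.sqrt(2 * k)  # j-independent in the flat case
for j in range(k):
    assert abs(norm_sq(j, k) - exact) < 1e-4 * exact
```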
Regularity lemma
================
$\Gamma$-invariant metrics and geodesics {#dghdSg}
----------------------------------------
In this subsection, we recall some basic properties of the space ${\mathcal{H}}^{\Gamma}_0$ of $\Gamma$-invariant [Kähler ]{}metrics proved in [@D].
Now consider $M={\mathbb{C}}^m/\Lambda$, where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$. We write each point in $M$ as $z=x+iy$, where $x$ and $y \in {\mathbb{R}}^m$ can be considered as periodic coordinates on $M$. Let $\omega_0=\frac{\pi \sqrt{-1}}{2} \sum_{\alpha=1}^m dz_{\alpha} \wedge d\bar z_{\alpha}$ be the flat metric with associated local [Kähler ]{}potential $2\pi y^2$. The group $\Gamma=(S^1)^m$ acts on $M$ via translations in the Lagrangian subspace ${\mathbb{R}}^m \subset {\mathbb{C}}^m$, and this induces an isometric action of $\Gamma$ on the space ${\mathcal{H}}$ of [Kähler ]{}metrics on $M$; so the space ${\mathcal{H}}_0^{\Gamma}$ of $\Gamma$-invariant metrics is totally geodesic in ${\mathcal{H}}$. Furthermore, ${\mathcal{H}}_0^{\Gamma}$ can be viewed as the set of functions: $${\mathcal{H}}_0^{\Gamma}\ = \
\{\psi\in C^{\infty}_{\Gamma} (M) : \omega_\psi\ = \ \omega_0+ \frac{\sqrt{-1}}{2}\partial \bar \partial \psi>0\}.$$ In fact, a function invariant under the action of $\Gamma$ is independent of $x$; thus it descends to a smooth function on $M/\Gamma$, i.e., smooth and periodic function with period $1$ defined on $y\in {\mathbb{R}}^m$.
The crucial point about ${\mathcal{H}}_0^{\Gamma}$ is: given any two points $\varphi_0$ and $\varphi_1$ in ${\mathcal{H}}_0^{\Gamma}$, there exists a smooth geodesic $\varphi_t(y)$ in ${\mathcal{H}}_0^{\Gamma}$ joining them. To be more precise, in local coordinates, the geodesic is given by the path $\varphi_t(z)=2\pi y^2+ 4\pi \psi_t(y)$ satisfying the condition: $$\label{iupoj} \ddot{\varphi} -\frac{1}{2}|\nabla \dot{\varphi}|^2_{\omega_{\psi}}=0.$$ Moreover, ${\nabla^2}\varphi_t=4\pi(I+{\nabla^2}\psi_t)>0$ because of the positivity of the [Kähler ]{}form; thus $\varphi_t$ is a convex function on ${\mathbb{R}}^m$. Then the Legendre transform of $\varphi_t(y)$ $$\label{cx}u_t(\mu)=\mu\cdot y-\varphi_t(y)$$ is well defined, where $$\label{css}\mu=\nabla \varphi_t=4\pi(y+\nabla \psi_t(y)).$$ For any fixed $t$, the map $\mu(y,t)=\nabla \varphi_t: {\mathbb{R}}^m \rightarrow {\mathbb{R}}^m$ also induces a map $\mu : M \rightarrow M/\Gamma$, which is an example of a Lie group valued moment map. Following the same proof as in [@D; @G; @R], we have:
\[jghnbv\]$u(t,\mu)$ is linear along the geodesic (\[iupoj\]).
According to this Proposition, we can solve equation (\[iupoj\]) in ${\mathcal{H}}^{\Gamma}_0$ as follows: given any two [Kähler ]{}potentials $\varphi_0$ and $\varphi_1$, take the Legendre transforms $u_0=\mathcal{L}\varphi_0$ and $u_1=\mathcal{L}\varphi_1$; then $$\label{linar}u_t=(1-t)u_0+tu_1$$ solves the equation $\ddot{u}=0$; thus the inverse Legendre transform $$\varphi_t=\mathcal{L}^{-1}u_t$$ solves equation (\[iupoj\]) and is $C^{\infty}$.
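This linearization can be illustrated with a discrete Legendre transform in one variable (a toy sketch on a non-periodic quadratic example, not the periodic setting above): for $\varphi_i(y)=a_i y^2$ one has $\mathcal{L}\varphi_i(\mu)=\mu^2/(4a_i)$, so the inverse transform of $(1-t)u_0+tu_1$ is $a_t y^2$ with $1/a_t=(1-t)/a_0+t/a_1$.

```python
def legendre(vals, xs, slopes):
    # discrete Legendre transform: L[f](p) = max_x (p*x - f(x))
    return [max(p * x - v for x, v in zip(xs, vals)) for p in slopes]

n = 2001
ys = [-10 + 20 * i / (n - 1) for i in range(n)]
mus = [-4 + 8 * i / 400 for i in range(401)]
ys_small = [-1 + 2 * i / 40 for i in range(41)]

a0, a1, t = 1.0, 2.0, 0.5
u0 = legendre([a0 * y * y for y in ys], ys, mus)      # ~ mu^2 / (4 a0)
u1 = legendre([a1 * y * y for y in ys], ys, mus)      # ~ mu^2 / (4 a1)
ut = [(1 - t) * p + t * q for p, q in zip(u0, u1)]    # linear in t

phi_t = legendre(ut, mus, ys_small)                   # inverse transform
a_t = 1 / ((1 - t) / a0 + t / a1)
assert all(abs(v - a_t * y * y) < 1e-2 for y, v in zip(ys_small, phi_t))
```

The geodesic between the two quadratics is thus the harmonic interpolation of their coefficients, recovered purely from the linearity of the Legendre transforms.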
Regularity Lemma {#gnbvgi}
-----------------
Denote by $u(t, \mu)=\mathcal{L} \varphi_t(y)$ the Legendre transform of $\varphi_t(y)$ for any fixed $t$. By properties of the Legendre transform, we have: $$\label{sdsds}y=\nabla_\mu u,$$ $$\label{poijk} \frac{\partial y}{\partial \mu}=({\nabla^2}_y \varphi)^{-1}(y)=\frac{1}{4\pi}(I+{\nabla^2}\psi_t)^{-1}(y)>0.$$
Let $\rho _{k}(j,t)=\|\theta_j\|^2_{h_t^k}$ denote the norming constant. Define $$\label{utye} R_k(j,t)= \frac{\rho _{k}(j,t)}{(\rho _{k}(j,0))^{1-t}(\rho _{k}(j,1))^t},$$ $$\label{uyee} R_{\infty}(\mu,t)=(\frac{\det {\nabla^2}_\mu u}{(\det {\nabla^2}_\mu u_{0})^{1-t}(\det {\nabla^2}_\mu u_{1})^t})^{1/2} .$$ We have the following regularity lemma:
\[jhgf\] We have the following complete asymptotics: $$R_{k}(j,t)= R_{\infty}(\mu,t)(1+k^{-1}a_1+k^{-2}a_2+ \cdots +k^{-\nu}a_{\nu})|_{\mu=-\frac{4\pi j}{k}}+O(k^{-\nu-1})$$ where $\nu$ is any positive integer and the $O$ symbol is uniform in $t$. Moreover, $R_{\infty}(\mu,t)$ and each $a_{\nu}$ are smooth functions of $(\mu,t)$ and $4\pi$-periodic in $\mu$ for any fixed $t$.
The periodicity of $R_{\infty}(\mu,t)$ is easy to see since the map $\mu: y\rightarrow \nabla_y \phi$ induces a map from $M$ to $M/\Gamma$, thus all functions in $\mu$ variables will be periodic.
First from (\[poijk\]), we have, $$\label{uvbctye} d\mu=(4\pi)^m\det(I+{\nabla^2}\psi_t) dy.$$
Now plug (\[cx\]), (\[sdsds\]) and (\[uvbctye\]) into (\[fdsfs\]); then we can rewrite the norming constant $\rho_{k}(j,t)$ as $$\label{dhghgndh}\rho_{k}(j,t)=e^{-2\pi \frac{j^2}{k}}\int_{{\mathbb{R}}^m}e^{-k(\mu \cdot \frac{\partial u}{\partial \mu}-u+\frac{4\pi j}{k}\cdot \frac{\partial u}{\partial \mu})}d\mu.$$ Hence, by definition, we can rewrite $R_k(j,t)$ as $$R_k(j,t)=\frac{ \int _{{\mathbb{R}}^m}e^{-k (\mu\cdot \frac{\partial u}{\partial \mu}-u+\frac{4\pi j}{k}\cdot\frac{\partial u}{\partial \mu})}d\mu}{ (\int _{{\mathbb{R}}^m}{e^{-k (\mu \cdot \frac{\partial u_{0}}{\partial \mu}-u_0+\frac{4\pi j}{k}\cdot\frac{\partial u_0}{\partial \mu})}}d\mu)^{1-t}(\int _{{\mathbb{R}}^m}{e^{-k (\mu\cdot \frac{\partial u_{1}}{\partial \mu}-u_1+\frac{4\pi j}{k}\cdot\frac{\partial u_1}{\partial \mu})}}d\mu)^t}.$$ Recall the stationary phase method (Theorem 7.7.5 in [@H]): $$\label{polk}\int u(x)e^{ik \Psi(x)}dx=\frac{e^{ik\Psi(x)}}{\sqrt{\det(k{\nabla^2}\Psi (x)/2\pi i)}}
\sum_{\lambda= 0}^{\infty} k^{ - \lambda} L_\lambda u|_{x=x'}$$ where $x'$ is the critical point of $\Psi$, Im$ \Psi \geq 0$ and $L_\lambda$ is a differential operator of order $2\lambda$. Note that in [@H], $u(x)$ is assumed to have compact support, but in fact this formula remains true for any $u(x)\in C^\infty({\mathbb{R}}^m)$ in our setting. The strategy is to choose a cut-off function $\chi$ supported in a neighborhood of $x'$, write the amplitude $u$ as $\chi u+(1-\chi) u$, and split the integral into two parts accordingly. For the integral with amplitude $\chi u$, we use the stationary phase formula directly; the second part is $O(k^{-\infty})$ by Theorem 1.1.4 in [@So].
In our case, note that the hypotheses of [@H] are satisfied since we can add a constant to ensure that our phase function has non-negative imaginary part. Now the critical point $\mu'$ of the phase $\Psi=\mu\cdot\frac{\partial u}{\partial \mu}-u+\frac{4\pi j}{k}\cdot\frac{\partial u}{\partial \mu}$ satisfies $( \mu'+\frac{4\pi j}{k})\cdot {\nabla^2}u=0$. Thus the critical point of the phase is given by $\mu'=-\frac{4\pi j}{k}$, since the matrix ${\nabla^2}u>0$. The Hessian of the phase at the critical point is ${\nabla^2}\Psi|_{\mu=\mu'}={\nabla^2}u(\mu',t)>cI$. Thus by the stationary phase formula, we have $$\begin{array}{lll}\label{t}& &\int _{{\mathbb{R}}^m}e^{-k (\mu\cdot\frac{\partial u}{\partial \mu}-u+\frac{4\pi j}{k}\cdot\frac{\partial u}{\partial \mu})}d\mu\\ && \\
& = & k^{-\frac{m}{2}}(e^{-k (\mu\cdot\frac{\partial u}{\partial \mu}-u+\frac{4\pi j}{k}\cdot\frac{\partial u}{\partial \mu})}\sqrt{\det {\nabla^2}u})(1+k^{-1}L_1(t,\mu)+k^{-2}L_2(t,\mu)\cdots)|_{\mu'=-\frac{4\pi j}{k}} \\ && \\
& = & k^{-\frac{m}{2}}(e^{k u}\sqrt{\det {\nabla^2}u})(1+k^{-1}L_1(t,\mu)+k^{-2}L_2(t,\mu)\cdots)|_{\mu'=-\frac{4\pi j}{k}}. \end{array}$$ where each $L_{\lambda}$ is a smooth function of $(\mu,t)$ and $4\pi$-periodic in $\mu$ for any fixed $t$.
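The leading-order behavior used here is easy to check numerically in one variable (an illustrative sketch with an ad hoc phase $\Psi(\mu)=\mu^2/2+\mu^4/4$, which has its minimum at $\mu'=0$ with $\Psi''(0)=1$, so the leading term of $\int e^{-k\Psi}d\mu$ is $\sqrt{2\pi/k}$):

```python
import math

def psi(mu):
    # toy strictly convex phase with minimum 0 at mu = 0 and psi''(0) = 1
    return mu * mu / 2 + mu ** 4 / 4

def laplace_integral(k, half_width=3.0, grid=4001):
    # Riemann sum of exp(-k psi) over [-half_width, half_width]; the
    # integrand is negligible at the endpoints for large k
    h = 2 * half_width / (grid - 1)
    return sum(math.exp(-k * psi(-half_width + i * h)) for i in range(grid)) * h

k = 200
leading = math.sqrt(2 * math.pi / k)       # stationary phase leading term
rel = abs(laplace_integral(k) - leading) / leading
assert rel < 1e-2                          # matches up to the O(1/k) correction
assert rel > 1e-4                          # here the 1/k correction is 3/(4k)
```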
Now we can get the following expression for $R_k(j,t)$ by expanding each term in the denominator and the numerator: $$\begin{array}{lll}R_k(j,t)& = & e^{k(u-(1-t)u_0-tu_1)}(\frac{\det {\nabla^2}u}{(\det {\nabla^2}u_{0})^{1-t}(\det {\nabla^2}u_{1})^t})^{1/2}\frac{1+k^{-1}L_1(t,\mu)+\cdots}{(1+k^{-1}L_1(0,\mu)+\cdots)^{1-t}(1+k^{-1}L_1(1,\mu)+\cdots)^t}|_{\mu=-\frac{4\pi j}{k}} \\ && \\
& = & R_{\infty}(\mu,t)(1+k^{-1}a_1+k^{-2}a_2+ \ldots +k^{-\nu}a_{\nu})|_{\mu=-\frac{4\pi j}{k}}+O(k^{-\nu-1}) . \end{array}$$ In the last step, we plug in the identity (\[linar\]). Then we apply the Taylor expansion $(1+x)^\gamma=1+\gamma x+ \cdots$ to the term $(1+k^{-1}L_1(t,\mu)+\cdots)(1+k^{-1}L_1(0,\mu)+\cdots)^{t-1}(1+k^{-1}L_1(1,\mu)+\cdots)^{-t}$, choosing $\gamma$ as $t-1$ and $-t$. If we expand these three terms completely, we will get the complete asymptotics, and we can compute each term step by step. For example, the first term is $1$ and the second term is $k^{-1}(L_1(t,\mu)-(1-t)L_{1}(0,\mu)-tL_1(1,\mu))$. Moreover, $a_{\nu}$ is a polynomial in $t$ and the $L_{\lambda}(t,\mu)$ for some $\lambda$; hence each $a_{\nu}$ is smooth and uniformly bounded on $[0,1]\times M$, and periodic for any fixed $t$. Furthermore, if we combine this with the fact that $R_{\infty}(\mu,t)$ is uniformly bounded, then the error term $R_{\infty}(\mu,t)a_{\nu+1}$ is uniformly bounded, i.e., the symbol $O$ is uniform.
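The first-order coefficient $a_1 = L_1(t,\mu)-(1-t)L_1(0,\mu)-tL_1(1,\mu)$ can be confirmed with a quick numerical expansion at large $k$ (an illustrative sketch with arbitrary sample values standing in for the $L_1$'s):

```python
# ratio = (1 + L1t/k)(1 + L10/k)^(t-1)(1 + L11/k)^(-t) = 1 + a1/k + O(k^-2)
L1t, L10, L11 = 0.4, 0.7, -1.3
t, k = 0.25, 1.0e6
ratio = (1 + L1t / k) * (1 + L10 / k) ** (t - 1) * (1 + L11 / k) ** (-t)
a1 = L1t - (1 - t) * L10 - t * L11
assert abs(k * (ratio - 1) - a1) < 1e-4   # remainder is O(1/k)
```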
Generalized Bernstein Polynomial {#dddsgg}
================================
In this section, we will prove Lemma \[ddgsgs\]. We first introduce the definition and some basic properties of the Bergman kernel; refer to [@SZ; @Z1; @Z2] for more background.
Let $(L,h) \to (M,\omega)$ be a positive holomorphic line bundle over a compact [Kähler ]{}manifold of complex dimension $m$. We assume $\omega=-\frac{\sqrt{-1}}{2}\partial\bar{\partial} \log |{s(z)}|_{h}^{2}$, where $s(z)$ is a local holomorphic frame. We now define the Bergman kernel as the orthogonal projection from the $L^2$-integrable sections onto the holomorphic sections: $$\Pi_{k}: L^{2}(M,L^{k})\rightarrow H^{0}(M,L^{k}).$$ Furthermore, if $\left\{s_{j}^k\right \}_{j=0} ^{d_k}$ is an orthonormal basis of $H^{0}(M,L^{k})$ with respect to the inner product defined by (\[dsldd\]), then $$\label{fghf}\Pi_{k}(z,w)=\sum_{j=0}^{d_k}s_j^k(z)\otimes \overline{s_j^k(w)},$$ where $d_k+1=\dim H^{0}(M,L^{k})$. The following holds for any $m$-dimensional [Kähler ]{}manifold [@BBS; @BS; @SZ]:
For any $C^{\infty}$ positive hermitian line bundle $(L,h)$, we have: $$\label{sfgs} \Pi_k(z,w)= e^{k(\phi(z, \bar{w})-\frac{1}{2 }
(\phi(z)+\phi(w)))}A_{k}(z,w) +O( k^{-\infty}),$$ where $\phi$ is the smooth local [Kähler ]{}potential for $h$, $\phi(z, \bar{w })$ is the almost analytic extension of $\phi(z)$, and $A_{k}(z,w)= k^m(1+k^{-1}a_{1}(z,w)+\cdots)$ is a semi-classical symbol of order $m$.
Now we turn to the proof of Lemma \[ddgsgs\]:
Assume $M={\mathbb{C}}^m/\Lambda$ where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$ and $L\rightarrow M$ is a principal polarization of $M$. Choose [Kähler ]{}potential $\varphi(y)=2\pi y^2+4\pi\psi(y)$ as before. From Proposition \[dfghhg\], $\left\{\theta_j, j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m \right \}$ forms an orthogonal basis of ${H^0(M,L^k)}$ with respect to the Hermitian inner product defined by (\[dsldd\]); thus by formula (\[fghf\]), the Bergman kernel is given by: $$\label{kjhl}\Pi_k(z,w) = \sum_{j \in ({\mathbb{Z}}/k{\mathbb{Z}})^m}\frac{\theta_j(z) \overline{\theta_j(w)}e^{-\frac{k\varphi(Im z)}{2}-\frac{k\varphi(Im w)}{2}}}{\|\theta_j\|^2_{h^k}}.$$ For any function $f(x)\in C^\infty(\mathbb{T}^m)$, we can define the following translation operator $U: f(x) \rightarrow f(x-\frac{1}{k})$ on the universal covering space. If we consider this operator acting on the vector space ${H^0(M,L^k)}$ of holomorphic theta functions, then we have the following Weyl quantization [@K; @KR]: $$Op_k(f)=\sum_{n\in {\mathbb{Z}}^m}\widehat{f}(n)U^{n},$$ where $\widehat{f}(n)$ is the Fourier coefficients of $f$. Now apply $U$ to theta functions: $$\theta_j(z)=\sum_{n\in {\mathbb{Z}}^m}e^{-\pi \frac{(j+kn)^2}{k} +2\pi i(j+kn)\cdot z} ,$$ then for any $x \in {\mathbb{R}}^m$, it’s easy to see that: $$\label{kjhbv}U (\theta_{j}(z+x))=e^{-2\pi i \frac{j}{k}} \theta_j(z+x),$$ where $e^{-2\pi i \frac{j}{k}}=e^{-2\pi i \frac{j_1}{k}}\cdots e^{-2\pi i \frac{j_m}{k}}$. 
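Both the eigenvalue relation (\[kjhbv\]) and its consequence for $Op_k(f)$ can be tested numerically for $m=1$, e.g. with $f(x)=\cos 2\pi x$, whose only nonzero Fourier coefficients are $\hat f(\pm 1)=\frac12$ (an illustrative sketch; truncation and sample point are ad hoc choices):

```python
import cmath
import math

def theta(j, k, z, N=25):
    # truncated theta sum (a) for m = 1
    return sum(cmath.exp(-math.pi * (j + k * n) ** 2 / k
                         + 2 * math.pi * 1j * (j + k * n) * z)
               for n in range(-N, N + 1))

k, z = 5, 0.2 + 0.3j
for j in range(k):
    t = theta(j, k, z)
    # U theta_j = e^{-2 pi i j / k} theta_j  (U shifts the real part by -1/k)
    assert abs(theta(j, k, z - 1 / k)
               - cmath.exp(-2 * math.pi * 1j * j / k) * t) < 1e-9 * abs(t)
    # Op_k(cos 2 pi .) theta_j = (U + U^{-1})/2 theta_j = cos(2 pi j / k) theta_j
    opf = 0.5 * (theta(j, k, z - 1 / k) + theta(j, k, z + 1 / k))
    assert abs(opf - math.cos(2 * math.pi * j / k) * t) < 1e-9 * abs(t)
```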
Next apply $Op_k(f)$ to theta functions, we have: $$\label{kjdghl}Op_k(f)\theta_j(z+x)=\left(\sum_{n\in {\mathbb{Z}}^m}\widehat{f}(n)e^{-2\pi i \frac{j}{k}\cdot n}\right) \theta_j(z+x)=f(-\frac{j}{k})\theta_j(z+x).$$ Now apply this operator to the Bergman kernel off the diagonal (\[kjhl\]), we have: $$\begin{array}{lll}Op_k(f)\Pi_k(z+x,w)|_{x=0} & = &Op_k(f)\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}\frac{\theta_j(z+x) \overline{\theta_j(w)}e^{-\frac{k\varphi(Im z)}{2}-\frac{k\varphi(Im w)}{2}}}{\|\theta_j\|^2_{h^k}}|_{x=0}\\ && \\&= & \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}f(-\frac{j}{k})\frac{\theta_j(z) \overline{\theta_j(w)}e^{-\frac{k\varphi(Im z)}{2}-\frac{k\varphi(Im w)}{2}}}{\|\theta_j\|^2_{h^k}}.
\end{array}$$ Here we use the fact that $\varphi (Im(z+x))=\varphi(Imz)=\varphi(y)$. Now, choosing $z=w$, we have: $$\label{iuy}\frac{1}{k^m}\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}f(-\frac{j}{k})\frac{|\theta_j|^2_{h^k}}{\|\theta_j\|^2_{h^k}}=\frac{1}{k^m} Op_k(f)\Pi_k(z+x,z)|_{x=0}.$$ Now we get the complete asymptotics of $\Pi_k(z+x,z)$ as follows: by assumption, our [Kähler ]{}potential only depends on $y=$Im$z$, i.e., $\varphi(z)=\varphi(y)=\varphi(\frac{z-\bar z}{2i})$, thus the almost analytic extension of $\varphi$ is given by $$\label{dsv}\varphi(z,\bar w)=\varphi(\frac{z-\bar w}{2i}).$$ Hence, formula (\[sfgs\]) reads: $$\label{ddgdewd} \begin{array}{lll}\Pi_k(z+x,z) &= & e^{k(\varphi(z+x,\bar z)-\frac{1}{2 }
(\varphi(z+x)+\varphi(z)))}A_{k}(z+x,z) \\ && \\&= & e^{k(\varphi(z+x,\bar z)-\varphi(z))}A_{k}(z+x,z), \end{array}$$ where $A_{k}(z+x,z)= k^m(1+k^{-1}a_{1}(z+x,z)+\cdots)$. In the last step, we use the fact that $\varphi(z+x)=\varphi(z)=\varphi(Im z)$ again.
Now apply the operator $\frac{1}{k^m} Op_k(f)$ on both sides of (\[ddgdewd\]), $$\label{ddddgd} \begin{array}{lll} \frac{1}{k^m} Op_k(f)\Pi_k(z+x,z)|_{x=0} &= &\frac{1}{k^m} \sum_{n\in {\mathbb{Z}}^m} \hat f(n)U^{n}\Pi_k(z+x,z)|_{x=0} \\ && \\&= & \frac{1}{k^m} \sum_{n\in {\mathbb{Z}}^m} \hat f(n)\Pi_k(z-\frac{n}{k},z) \\ && \\&= & \frac{1}{k^m} \sum_{n\in {\mathbb{Z}}^m} \hat f(n) e^{k(\varphi(z-\frac{n}{k},\bar z)-\varphi(z))}A_{k}(z-\frac{n}{k},z) \\ && \\&= & \frac{1}{k^m} \sum_{n\in {\mathbb{Z}}^m} \hat f(n) e^{k(\varphi(y- \frac{n}{ 2ik})-\varphi(y))}A_{k}(z-\frac{n}{k},z). \end{array}$$ In the last step, by identity (\[dsv\]), the almost analytic extension $\varphi(z-\frac{n}{k},\bar z)=\varphi(\frac{z-\frac{n}{k}-\bar z}{2i})=\varphi(y- \frac{n}{ 2ki})$.
Applying the Taylor expansion to $e^{k(\varphi(y- \frac{n}{ 2ik})-\varphi(y))}$ in the last equation of (\[ddddgd\]) and using the complete asymptotics of $A_{k}(z-\frac{n}{k},z)= k^m(1+k^{-1}a_{1}(z-\frac{n}{k},z)+\cdots)$, we get the complete asymptotics of $Op_k(f)\Pi_k(z+x,z)|_{x=0}$. For example, we can compute the leading term as follows: first, $e^{k(\varphi(y- \frac{n}{ 2ik})-\varphi(y))}=e^{ -\nabla\varphi\cdot \frac{n}{ 2i}+O(k^{-1})}=e^{ -\nabla\varphi\cdot \frac{n}{ 2i}}+O(k^{-1})$; second, $\frac{1}{k^m} A_{k}(z-\frac{n}{k},z)= 1+O(k^{-1})$. Hence the leading term is given by $$\sum_{n\in {\mathbb{Z}}^m}\hat f(n)e^{ -\nabla\varphi\cdot \frac{n}{ 2i}} =f(\frac{\nabla \varphi}{4\pi})=f(\mu),$$ where, with a slight abuse of notation, $\mu=y+\nabla\psi=\frac{\nabla \varphi}{4\pi}$. Hence, we can get the complete asymptotics step by step if we further expand $e^{k(\varphi(y- \frac{n}{ 2ik})-\varphi(y))}$ and $A_k$.
As a remark, if we replace $f$ and $h$ by a path of smooth periodic functions $f_t$ and any path $h_t$ in ${\mathcal{H}}_0^{\Gamma}$, then the lemma still holds with leading term $f_t(\mu)$. Furthermore, we can differentiate the complete asymptotics with respect to $t$ on both sides.
$C^{\infty}$ convergence of Bergman geodesics {#dsss}
=============================================
In this section, we will apply the Regularity Lemma and the generalized Bernstein Polynomial Lemma to prove Lemma \[oiut\]:
We first apply Lemma \[jhgf\] and denote $A_{\nu}(\mu,t)= R_{\infty}(\mu,t)a_{\nu}(\mu)$; then $A_{\nu}(\mu,t)$ is periodic in $\mu$ since $R_{\infty}(\mu,t)$ and $a_{\nu}(\mu)$ are periodic. Then: $$\begin{array}{lll}& & \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}} \\ && \\&\sim & \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} R_{\infty}(\mu,t)(1+k^{-1}a_1+k^{-2}a_2+ \cdots )|_{\mu=-\frac{4\pi j}{k}}\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}
\\ && \\
& \sim & \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} R_{\infty}(-\frac{4\pi j}{k},t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}+\frac{1}{k} \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}A_{1}(-\frac{4\pi j}{k},t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}+\cdots .
\end{array}$$ Since $R_{\infty}(\mu,t)$ is periodic with period $4\pi$, $R_{\infty}(4\pi \mu)$ is periodic with period $1$; thus if we apply Lemma \[ddgsgs\] to the function $R_{\infty}(4\pi \mu)$, we have: $$\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m} R_{\infty}(-\frac{4\pi j}{k},t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}\sim k^m(R_{\infty}(\mu,t)+k^{-1}b_{11}(\mu,t)+\cdots),$$ where $\mu=4\pi(y+\nabla \psi_t)$. In fact, we can apply Lemma \[ddgsgs\] to each coefficient, e.g., $$\frac{1}{k}\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}A_{1}(-\frac{4\pi j}{k},t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}\sim k^m(k^{-1}A_{1}(\mu,t)+\cdots)$$ and so on; then we have the complete asymptotics: $$\begin{array}{lll}\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}&\sim & k^m(R_{\infty}(\mu,t)+k^{-1}(A_{1}+b_{11})+\cdots).
\end{array}$$ We can divide by $R_{\infty}$ since, in Lemma \[jhgf\], we proved that this term is strictly positive, uniformly bounded and smooth. Hence, $$\begin{array}{lll}&&\frac{1}{k}\log\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}} \\ && \\
& \sim & k^{-1}\log [k^mR_{\infty}(\mu,t)(1+\frac{1}{k}\frac{A_{1}+b_{11}}{R_{\infty}}+\cdots)]
\\ && \\
& \sim &mk^{-1}\log k+k^{-1}\log R_{\infty}+k^{-1}\log (1+\frac{1}{k}\frac{A_{1}+b_{11}}{R_{\infty}}+\cdots)
\\ && \\
& \sim & mk^{-1}\log k+k^{-1}\log R_{\infty}+k^{-2}\frac{A_{1}+b_{11}}{R_{\infty}}+ \cdots .
\end{array}$$ In the last step, we use the Taylor expansion $\log(1+x)\sim x-\frac{x^2}{2}+\cdots$. Moreover, $$\frac{1}{k}\log \sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}R_k(j,t)\frac{|\theta_j|^2_{h_t^k}}{\|\theta_j\|^2_{h_t^k}}\longrightarrow 0$$ in $C^{\infty}$ topology as $k \rightarrow \infty$. This implies that the Bergman geodesics converge to the geodesic in the [Kähler ]{}space in $C^{\infty}$ topology.
General Lattice {#general}
=================
In this section, we will sketch the proof of our main theorem for any principally polarized Abelian variety.
Let $M={\mathbb{C}}^m/\Lambda$ where $\Lambda=$ Span$_{{\mathbb{Z}}}\{\lambda_1,...,\lambda_{2m}\}$ is a lattice in ${\mathbb{C}}^m$ with its normalized period matrix given by $\Omega :=[I,Z]$ where $Z^t=Z$ and $Im Z>0$. Choose $\{x_1,...,x_m,y_1,...,y_m\}$ as the coordinates of the basis dual to $\{\lambda_1,...,\lambda_{2m}\}$ such that $z_{\alpha}=x_{\alpha}+\sum _{\beta=1}^mZ_{\alpha \beta}y_{\beta}$ and $\bar z_{\alpha}=x_{\alpha}+\sum _{\beta=1}^m\bar Z_{\alpha \beta}y_{\beta}$ [@GH].
Assume $L\rightarrow M$ is a principal polarization of $M$; then the holomorphic sections in ${H^0(M,L^k)}$ are given by the theta functions (\[dsbn\]). Now consider a [Kähler ]{}potential of the form: $$\varphi(t,y)=2\pi y X y^T+4\pi \psi(t,X y^T)$$ where $y=(y_1,...,y_m)$ and $X=Im Z$. We assume $\varphi$ is convex in $y$ and that $\psi$ is smooth on ${\mathbb{R}}^m$ and periodic with period $1$ in each variable $y_j$ for any fixed $t$. Then it is easy to check that such a [Kähler ]{}potential satisfies conditions (\[iuyth\]).
By choosing such a [Kähler ]{}potential, Proposition \[dfghhg\] still holds, by the following computation (see also [@FMN]): $$\langle\theta_{l'}(z,\Omega ),\theta_{l}(z,\Omega )\rangle_{h^k_t}= \int_{[0,1]^m\times [0,1]^m} \left(\sum_{n'\in {\mathbb{Z}}^m} e^{i \pi(l'+kn')\cdot\frac{Z}{k}(l'+kn')^T}e^{2\pi i(l'+kn')\cdot z}\right)\cdot$$ $$\left(\sum_{n\in {\mathbb{Z}}^m} e^{-i \pi(l+kn)\cdot\frac{\bar Z}{k}(l+kn)^T}e^{-2\pi i(l+kn)\cdot\bar z}\right)\cdot e^{-2k\pi y X y^T-4k\pi \psi(t,X y^T)}\cdot \det {\nabla^2}\varphi_t(y) dxdy$$ $$=\delta_{l,l'}\sum_{n\in {\mathbb{Z}}^m} \int_{[0,1]^m} e^{-2k\pi(y+\frac{l+kn}{k})\cdot X(y+\frac{l+kn}{k})^T} e^{-4k\pi \psi(t,X y^T)}\cdot \det {\nabla^2}\varphi_t(y) dy.$$ Thus $\left\{\theta_l(z,\Omega ), \,\, l\in ({\mathbb{Z}}/k{\mathbb{Z}})^m \right\}$ forms an orthogonal basis of ${H^0(M,L^k)}$. Furthermore, $$\label{eqati}\|\theta_l(z,\Omega )\|^2_{h^k_t}=\int _{{\mathbb{R}}^m}e^{-2k\pi(y+\frac{l}{k})X(y+\frac{l}{k})^T}e^{-4k\pi \psi(t,X y^T)}\cdot \det {\nabla^2}\varphi_t(y)dy.$$ Then all the main steps in the model case extend to the general case immediately:
- Define $u(t,y)$ as the Legendre transform of $\varphi(t,y)$ with respect to the $y$ variables for any fixed $t$; then we can still linearize $u(t,y)$ along the geodesics, since Proposition \[jghnbv\] is purely a property of convex functions (p. 106 in [@R]).
- By substituting $\varphi$ by the Legendre transform $u(t,y)$, we rewrite (\[eqati\]) as $$e^{-2k\pi (\frac{j}{k})X(\frac{j}{k})^T}\int_{{\mathbb{R}}^m} e^{-4k\pi (\nabla u \cdot X \cdot (\frac{j}{k})^T+u-\mu \cdot \nabla u)}d\mu,$$ where $\mu=\nabla \varphi$.
By applying the stationary phase method, we can get the complete asymptotics of this integral, evaluated at $\mu'=-X\cdot (\frac{4\pi j}{k})^T$, which is the critical point of the phase function. Thus $R_k(j,t)$, the ratio of the norming constants, is asymptotic to $R_\infty(\mu,t)$: $$R_k(j,t)\sim R_\infty(\mu,t)(1+k^{-1}a_1(\mu,t)+\cdots )|_{\mu=-X\cdot (\frac{4\pi j}{k})^T}.$$ If we change variables as $ \mu\cdot (4\pi X)^{-1} =\nu$, then $R_\infty(\nu,t)$ and each $a_j(\nu,t)$ are smooth functions over ${\mathbb{R}}^m$ and periodic with period $1$ in the variables $\nu$ for any fixed $t$.
- In the general case, we define the operator $U: f(x)\rightarrow f(x-\frac{1}{k})$. Then for general theta functions $\theta_l(z,\Omega )$, we still have: $$U(\theta_l(z,\Omega ))=e^{-2\pi i \frac{l}{k}} \theta_l(z,\Omega ),$$ where $e^{-2\pi i \frac{l}{k}}$ denotes $e^{-2\pi i \frac{l_1}{k}}\cdots e^{-2\pi i \frac{l_m}{k}}$. Then by applying the Weyl quantization to the Bergman kernel and using the Fourier transform and Taylor expansion, for any $f(4\pi X \cdot x^T)\in C^\infty(\mathbb R^m)$ which is also periodic with period $1$ in the $x$ variables, following the proof in section \[dddsgg\], we can prove $$\frac{1}{k^m}\sum_{j\in ({\mathbb{Z}}/k{\mathbb{Z}})^m}f(- X\cdot(\frac{4\pi j}{k})^T)\frac{ |\theta_j(z,\Omega )|^2_{h^k}}{\| \theta_j(z,\Omega )\|^2_{h^k}}\sim f(\mu)+k^{-1}b_1(\mu)+\cdots ,$$ where $\mu= \nabla \varphi$. Our main result, with the same formula as in the model case, holds if we apply this formula to $R_\infty(\mu,t)$ and each $a_j(\mu,t)$ and follow the steps in section \[dsss\].
Thus our main result holds for any principally polarized Abelian variety.
Complete asymptotics of harmonic maps {#testharmonic}
=====================================
The proof of Theorem \[harmin\] is similar to the one in [@RZ]. For brevity, we just sketch the main steps for the model case $M={\mathbb{C}}^m/\Lambda$, where $\Lambda={\mathbb{Z}}^m+i{\mathbb{Z}}^m$.
The crucial formula in the toric case is the identity (4.1) in [@RZ], while in our Abelian case, we modify it to be $$\label{comp}\begin{array}{lll}&& \varphi_k(q,z)-\varphi(q,z) \\ && \\
&= &\frac{1}{k} \log \sum_{j\in({\mathbb{Z}}/k{\mathbb{Z}})^m} \exp \left(\int_{\partial N}\partial_{\nu(p)} G(p,q)\log \|\theta_j(z)\|^2_{h_\psi^k(p)}dV_{\partial N}(p)\right) |\theta_j(z)|^2_{h_\varphi^k(q)},\end{array}$$ where $G(q,p)$ denotes the positive Dirichlet Green kernel for the Laplacian $\triangle_ {N,g}$, $dV_{\partial N}$ is the measure on $\partial N$ induced by restricting the Riemannian volume form $dV_{N}$ from $N$ to $\partial N$, and $\nu(q)$ is the outward unit normal to $\partial N$. Then proving Theorem \[harmin\] is equivalent to proving that (\[comp\]) admits complete asymptotics. Denote $$K_k(q,j)=\exp \left(-\int_{\partial N}\partial_{\nu(p)} G(p,q)\log \frac{ \|\theta_j(z)\|^2_{h_\varphi^k(q)}}{\|\theta_j(z)\|^2_{h_\psi^k(p)}}dV_{\partial N}(p)\right).$$ Then we can rewrite (\[comp\]) as $$\label{codrmp}\varphi_k(q,z)-\varphi(q,z) =\frac{1}{k} \log \sum_{j\in({\mathbb{Z}}/k{\mathbb{Z}})^m} K_k(q,j) \frac{|\theta_j(z)|^2_{h_\varphi^k(q)}}{\|\theta_j(z)\|^2_{h_\varphi^k(q)}}.$$ Put $u_q:=u_{\varphi(q)}=u(q,\cdot)$, the Legendre transform of $\varphi_q(y) \in {\mathcal{H}}_0^\Gamma$, for $q\in N$. Denote $$\label{ddgssd}K_\infty(q,x)=\exp \left(-\frac{1}{2}\int_{\partial N}\partial_{\nu(p)} G(p,q)\log \frac{\det {\nabla^2}u_q(x)}{\det {\nabla^2}u_p(x)}dV_{\partial N}(p)\right)$$ where $x=\nabla \varphi$.
From the proof of the Regularity Lemma \[jhgf\], if we plug in the complete asymptotic expansions of the norming constants $\|\theta_j(z)\|^2_{h_\varphi^k(q)}$ and $\|\theta_j(z)\|^2_{h_\psi^k(p)}$, we have the following complete asymptotic expansion: $$\label{dfddgssd}K_k(q,j)=(K_\infty(q,x)+k^{-1}b_1(q,x)+\cdots) |_{x=-\frac{4\pi j}{k}}.$$
If we plug (\[dfddgssd\]) into the right hand side of (\[codrmp\]), we obtain the following expansion, $$\begin{array}{lll}\frac{1}{k}\log \left(\sum_j K_\infty(q,\frac{-4\pi j}{k}) \frac{|\theta_j(z)|^2_{h_\varphi^k(q)}}{\|\theta_j(z)\|^2_{h_\varphi^k(q)}}+k^{-1}\sum_j b_1(q,\frac{-4\pi j}{k}) \frac{|\theta_j(z)|^2_{h_\varphi^k(q)}}{\|\theta_j(z)\|^2_{h_\varphi^k(q)}}+\cdots \right).\end{array}$$
Hence, Theorem \[harmin\] follows if we apply the generalized Bernstein Lemma \[ddgsgs\] to each summation above and follow the steps in section \[dsss\].
---
abstract: 'Laser driven plasma accelerators promise much shorter particle accelerators, but their development requires detailed simulations that challenge or exceed current capabilities. We report the first direct simulations of stages up to 1 TeV, using a Lorentz boosted calculation frame with a boost as high as $\gamma=1300$, resulting in a million-fold speedup. The effects of the hyperbolic rotation in Minkowski space, induced by the frame boost, on the laser propagation in the plasma are shown to be key in the mitigation of a numerical instability that was limiting previous attempts.'
author:
- 'J.-L. Vay'
- 'C. G. R. Geddes'
- 'E. Cormier-Michel'
- 'D. P. Grote'
bibliography:
- 'PRL\_vay\_2010.bib'
title: |
Effects of Hyperbolic Rotation in Minkowski Space\
on the Modeling of Plasma Accelerators in a Lorentz Boosted Frame
---
Laser driven plasma waves produce accelerating gradients orders of magnitude greater than standard accelerating structures (which are limited by electrical breakdown) [@TajimaPRL79; @EsareyRMP09]. High-quality electron beams of energy up to 1 GeV have been produced in just a few centimeters [@GeddesNature04; @ManglesNature04; @FaureNature04; @LeemansNature06] with 10 GeV stages being planned as modules of a high energy collider [@SchroederAAC08], and detailed simulations are required to realize the promise of much shorter particle accelerators using this technique [@BruhwilerAAC08]. Such simulations challenge or exceed current capabilities, in particular for high energy stages at GeV energies and beyond.
The linear theory predicts that for the intense lasers (a$\gtrsim$1) typically used for acceleration, the laser depletes its energy over approximately the same length $L_d=\lambda_p^3/2\lambda_0^2$ over which the particles dephase from the wake, where $\lambda_p=\sqrt{\pi c^2m/e^2n_e}$ is the plasma wavelength, $\lambda_0$ is the laser wavelength, $c$ is the speed of light, and $m$, $e$ and $n_e$ are respectively the electron mass, charge and density in the plasma [@TajimaPRL79]. As a result of beam dephasing and laser depletion, the maximum bunch energy gain scales approximately as the square of the plasma wavelength and the inverse of the plasma density, which implies that higher energy stages operate with longer plasmas, rendering computer simulations more challenging, as the ratio of longest to shortest spatial lengths of interest (plasma length/laser wavelength) rises. As a matter of fact, direct explicit multi-dimensional simulations of 10 GeV stages, which will operate in m-scale plasmas at order $10^{17}/cc$ densities, had until recently been considered beyond the current state of the art [@BruhwilerAAC08; @GeddesPAC09].
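These scalings can be verified with a quick numerical estimate. The sketch below (Python, Gaussian/CGS units) evaluates $\lambda_p$ and $L_d$ for an order $10^{17}/cc$ density; the 0.8 micron laser wavelength is an assumed, typical Ti:sapphire value, not a parameter quoted above.

```python
import math

# Physical constants in Gaussian/CGS units
c = 2.998e10      # speed of light [cm/s]
m_e = 9.109e-28   # electron mass [g]
e = 4.803e-10     # electron charge [esu]

def plasma_wavelength(n_e):
    """lambda_p = sqrt(pi c^2 m / (e^2 n_e)), in cm."""
    return math.sqrt(math.pi * c**2 * m_e / (e**2 * n_e))

def depletion_length(n_e, lambda0):
    """L_d = lambda_p^3 / (2 lambda0^2), in cm."""
    return plasma_wavelength(n_e)**3 / (2.0 * lambda0**2)

n_e = 1e17          # plasma density [cm^-3], order used for 10 GeV stages
lambda0 = 0.8e-4    # laser wavelength [cm]; 0.8 micron is an assumed value

lp_um = plasma_wavelength(n_e) * 1e4            # plasma wavelength [microns]
L_d_m = depletion_length(n_e, lambda0) / 100.0  # depletion length [m]
scale_ratio = depletion_length(n_e, lambda0) / lambda0  # plasma length / laser wavelength
```

For these parameters the plasma wavelength is on the order of a hundred microns and $L_d$ is close to a meter, so the longest-to-shortest length ratio already exceeds a million, illustrating why such stages have challenged laboratory-frame simulations.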
Recently, first-principles Particle-In-Cell modeling of laser-plasma wakefield accelerators using a Lorentz boosted frame of reference [@VayPRL07] has been shown to be sped up by up to three orders of magnitude in calculations of stages in the 100 MeV-10 GeV energy range [@BruhwilerAAC08; @VayPAC09; @MartinsPAC09; @VaySciDAC09; @HuangSciDAC09; @VayDPF09; @MartinsCPC10; @MartinsPoP10; @MartinsNP10]. Maximum obtainable speedups calculated using linear theory predict that even higher speedups are attainable, in the range of 4-6 orders of magnitude for stages in the 10 GeV-1 TeV energy range respectively [@VayAAC10; @VayARXIV10]. Practical limitations have prevented reaching these speedups, including a violent high-frequency numerical instability limiting the Lorentz boost $\gamma$ to below $100$ [@BruhwilerAAC08; @VaySciDAC09; @MartinsCPC10; @VayAAC10; @VayARXIV10].
We report for the first time direct explicit simulations of stages in the range of 0.1 GeV-1 TeV, using a Lorentz boosted calculation frame with $\gamma$ as high as 1300, verifying the performance and energy gain scaling [@CormierAAC08; @GeddesPAC09] of plasma accelerator stages with deep laser depletion into the 1 TeV range and providing tools for the detailed design of upcoming 10 GeV experiments such as BELLA [@LeemansAAC10]. As we have shown in [@VayPRL07], the speedup provided by computing using a Lorentz boosted frame comes from the space and time contraction and dilation properties of the Lorentz transformation. In this paper, the space-time rotation property of the Lorentz transformation is utilized to overcome the numerical instability that has arisen for the boost values needed to reach the maximal theoretical speedup. In conjunction with the development of novel numerical techniques that are described elsewhere [@VayAAC10; @VayARXIV10], this allows the simulations to approach the theoretically calculated speedups of 4-6 orders of magnitude for 10 GeV-1 TeV stages, which in turn allows simulations of high energy plasma accelerators.
*Effect of the hyperbolic rotation in Minkowski space.—*
[Figure \[Fig\_surf2de\]: surface renderings of the laser field (top row) and wake field (bottom row), in the laboratory frame (left column) and the Lorentz boosted frame (right column).]
The effects of the Lorentz transformation on the laser and wake propagation through a 100 MeV laser plasma acceleration stage [@CormierAAC08; @GeddesPAC09] are illustrated in space in Figure \[Fig\_surf2de\] and in space-time in Figure \[Fig\_hyprot\], taken from simulations using the Particle-In-Cell code Warp [@Warp]. The Lorentz transformation can be described as a hyperbolic rotation in Minkowski space and its rotational effect is explicitly visible in Figure \[Fig\_hyprot\].
Figure \[Fig\_surf2de\] shows surface renderings of the transverse and longitudinal electric fields respectively, as the beam enters its early stage of acceleration by the plasma wake, from calculations in the laboratory frame and a Lorentz boosted frame at $\gamma=13$ (approximately the laser group velocity in the plasma column $\gamma_g\approx13.2$). The two snapshots offer strikingly different views of the same physical processes: in the laboratory frame, the wake is fully formed before the beam undergoes any significant acceleration, the laser (Fig. \[Fig\_surf2de\] top-left) is easily recognizable (i.e. its shape is only slightly distorted by the plasma) and leaves a visible imprint on the wake (longitudinal) field (Fig. \[Fig\_surf2de\] bottom-left); in the boosted frame, the beam is accelerated as the plasma wake develops, the laser (Fig. \[Fig\_surf2de\] top-right) is not easily recognizable (i.e. its shape is highly distorted by the plasma) and no evident imprint is left on the wake field (Fig. \[Fig\_surf2de\] bottom-right).
The physics underlying the differences between $\gamma=1$ and $\gamma=13$ views of the wake is illustrated by histories of the (transverse) laser field on the longitudinal axis, reported in Figure \[Fig\_hyprot\] from simulations performed using the laboratory frame and boosted frames at $\gamma=5$ and $13$. Simulations with $\gamma=1$ and 5 used a moving window propagating at the group velocity $v_g$ of the laser in the plasma and the data are plotted in the frame of the (galilean) moving window. The simulation with $\gamma=13$ did not use a moving window as it was performed very near the group velocity of the laser in the plasma $\gamma_g\approx13.2$. The data from the boosted frame simulation at $\gamma=13$ are presented in the boosted frame as well as in the laboratory frame moving window (after Lorentz transformation), allowing direct comparison with the simulation performed in the laboratory frame. The calculation with the boosted frame at $\gamma=13$ was approximately 200 times faster than the calculation with the laboratory frame, as expected [@VayPRL07]. The agreement between the two is nonetheless excellent (comparing top-left plot to bottom-right plot in Figure \[Fig\_hyprot\]), confirming the accuracy of the calculation in the boosted frame. The group velocity of the wake in the plasma is always below the speed of light in vacuum while the phase velocity is always above it, resulting in oblique stripes in the laboratory frame plot. As the boost $\gamma$ rises, the stripes rotate according to the rules of the Lorentz transformation, eventually becoming nearly perpendicular to the time axis as the boost $\gamma$ nears $\gamma_g$ (bottom-left plot in Figure \[Fig\_hyprot\]), where the laser group velocity approaches zero and the phase velocity approaches infinity.
In effect, the laser oscillations that appear in the laboratory as spatial oscillations propagating in the plasma are transformed into time beating of the field for calculations in frames whose boost nears the laser group velocity. As discussed below, this effect has important consequences for the modeling of full scale stages at 10 GeV or above.
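The flattening of the stripes can also be read off from the relativistic velocity-addition formula: for an electromagnetic wave in plasma the group and phase velocities satisfy $v_g v_p = c^2$, so boosting to a frame approaching the group velocity drives the transformed group velocity toward zero and the transformed phase velocity toward infinity. A minimal numerical check (Python, in units of $c$, using the $\gamma$ values quoted above):

```python
import math

def boost_velocity(v, beta):
    """Relativistic velocity addition (units of c): velocity v as seen
    from a frame moving at beta in the same direction."""
    return (v - beta) / (1.0 - v * beta)

gamma_g = 13.2                            # laser group-velocity gamma in the plasma
v_g = math.sqrt(1.0 - 1.0 / gamma_g**2)   # group velocity, below 1
v_p = 1.0 / v_g                           # phase velocity, above 1 (v_g * v_p = 1)

gamma_boost = 13.0                        # boost used in the simulation
beta = math.sqrt(1.0 - 1.0 / gamma_boost**2)

v_g_boosted = boost_velocity(v_g, beta)   # nearly zero: packet almost stationary
v_p_boosted = boost_velocity(v_p, beta)   # very large: phase fronts nearly
                                          # perpendicular to the time axis
```

With the boost only slightly below $\gamma_g$, the transformed group velocity drops to the per-cent level while the transformed phase velocity grows by more than an order of magnitude, which is exactly the rotation of the stripes seen in Figure \[Fig\_hyprot\].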
*Mitigation of a numerical instability.—* Several numerical limits have restricted the boost performance in past simulations: laser initialization, statistics and a short wavelength instability. The first two are discussed elsewhere [@VayAAC10] and we concentrate here on the latter.
A violent high-frequency numerical instability developing at the front of the plasma column for boosts at $\gamma \gtrsim 100$ in 2D and $\gamma \gtrsim 50$ in 3D was reported by various authors [@VayDPF09; @MartinsCPC10; @BruhwilerPC08]. The presence and growth rate of the instability were observed to be very sensitive to the resolution (slower growth rate at higher resolution), to the amount of damping of high frequencies, to smoothing of short wavelengths, and to the boost value (stronger instability at higher boost). An extensive set of tests was performed with Warp to investigate simulations of downscaled 100 MeV and full scale 10 GeV LPA stages [@VayARXIV10]. One of the key findings was that frames with higher boosts (up to $\gamma_g$) allow for higher levels of filtering and damping than is possible in other frames for the same accuracy, allowing mitigation of the instability. This is a direct consequence, and benefit, of the hyperbolic rotation effect from the Lorentz boost seen in Figure \[Fig\_hyprot\].
The spectral content history of the laser field on axis as it propagates through the plasma column is given in Figure \[Fig\_spectrum\] for 100 MeV and 10 GeV stages, in the laboratory frame and in the laser group velocity frames at $\gamma=13$ and $130$ respectively. Laser depletion occurs at times $T_d=L_d/c$ of respectively $T_d\approx3.1$ ps and $T_d\approx3.1$ ns for 100 MeV and 10 GeV stages. In the laboratory frame, the spectral content of the laser is concentrated initially in a narrow band around the nominal laser wavelength, then spreads due to dispersion and depletion effects as the laser propagates through the plasma. In the frame of the laser group velocity, much of the spectral content is localized initially at wavelengths that are several times the nominal laser wavelength (in vacuum), then progressively fills lower portions of the spectrum. As in practice the longitudinal numerical resolution is set relative to the vacuum laser nominal wavelength, a higher level of filtering (or damping) is acceptable for simulations in a boosted frame than for those in the laboratory frame. Furthermore, the higher the boost, the longer the wavelengths with substantial spectral content in the early part of the laser propagation (comparing spectral content below the diagonal in the plots of the right column of Figure \[Fig\_spectrum\]), meaning that higher levels of filtering are allowable at higher boosts, where the growth of the numerical instability is the strongest.
*Modeling of up to 1 TeV stages.—* Mitigation of the instability allowed simulations of stages with energy gains as high as 1 TeV in 2-1/2D and 100 GeV in 3-D, using Lorentz boosted frames with $\gamma$ as high as $1,300$ (see Figure \[Fig\_ehist100GeV\]), offering for the first time direct verification of the scaling of plasma accelerators into the 1 TeV range for deeply depleted stages. Simulations with the highest boost necessitated filtering of short wavelengths over a wider band, and in agreement with the observations of the previous section, the accuracy was not compromised. The highest level of smoothing was needed for the 1 TeV case, explaining the deviation past 1 km. This deviation is of little importance in practice, where one is mostly interested in the beam evolution up to the peak energy point. The differences at $n_e=10^{19}$/cc can be attributed to the effects from having only a few laser oscillations per pulse. The theoretical speedup [@VayAAC10] of the full scale 100 GeV class run is estimated to be over 100,000. Assuming the use of a few thousand CPUs, a simulation that would have required an impractical several decades to complete using the laboratory frame was completed in only four hours using 2016 CPUs of the Cray system at NERSC. The speedup of the 2-1/2D 1 TeV stage is estimated to be over a million.
The boosted frame Particle-In-Cell technique accurately resolves the wavelength shifting and broadening that occur as the laser depletes, offering advantages over other models (for example envelope or quasistatic models) while providing the speed required for direct simulation of laser plasma accelerators at 10 GeV and beyond, accurately modeling laser and beam transverse oscillations. It is being applied to the direct simulation of 10 GeV beam loaded stages for detailed designs of experiments on new lasers such as BELLA [@LeemansAAC10] (see Figure \[Fig\_bella\]), as well as next generation controlled laser plasma accelerator stages and collider modules [@SchroederAAC08].
In summary, direct simulations of stages in the range of 0.1 GeV-1 TeV have been performed using the Lorentz boosted frame technique, verifying for the first time the performance of plasma accelerators into the 1 TeV range for deeply depleted stages. This has been made possible by exploiting the effects of the hyperbolic rotation in Minkowski space on the laser propagation in the plasma column, which, in conjunction with the development of novel numerical techniques (described elsewhere), have been key in allowing successful mitigation of a violent numerical instability that was limiting the boost performance in past simulations. As a result, the maximum theoretical speedup of over a million for a 1 TeV stage was realized, three orders of magnitude higher than was previously possible. The new developments offer unique, highly efficient tools for the detailed design of experiments on new lasers such as BELLA [@LeemansAAC10].
Work supported by US-DOE Contracts DE-AC02-05CH11231 and US-DOE SciDAC program ComPASS. Used resources of NERSC and LBNL cluster Lawrencium, supported by US-DOE Contract DE-AC02-05CH11231. The authors thank D. L. Bruhwiler, J. R. Cary, E. Esarey, A. Friedman, W. P. Leemans, S. F. Martins, W. B. Mori, R. D. Ryne and C. B. Schroeder for insightful discussions and/or advice in preparing this manuscript.
---
abstract: 'We present baryon acoustic oscillation (BAO) scale measurements determined from the clustering of 1.2 million massive galaxies with redshifts $0.2 < z < 0.75$ distributed over 9300 square degrees, as quantified by their redshift-space correlation function. In order to facilitate these measurements, we define, describe, and motivate the selection function for galaxies in the final data release (DR12) of the SDSS III Baryon Oscillation Spectroscopic Survey (BOSS). This includes the observational footprint, masks for image quality and Galactic extinction, and weights to account for density relationships intrinsic to the imaging and spectroscopic portions of the survey. We simulate the observed systematic trends in mock galaxy samples and demonstrate that they impart no bias on BAO scale measurements and have a minor impact on the recovered statistical uncertainty. We present transverse and radial BAO distance measurements in $0.2 < z < 0.5$, $0.5 < z < 0.75$, and (overlapping) $0.4 < z < 0.6$ redshift bins. In each redshift bin, we obtain a precision that is 2.7 per cent or better on the radial distance and 1.6 per cent or better on the transverse distance. The combination of the redshift bins represents 1.8 per cent precision on the radial distance and 1.1 per cent precision on the transverse distance. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS. The measurements and likelihoods presented here are combined with others in [@Acacia] to produce the final cosmological constraints from BOSS.'
author:
- |
\
$^{1}$Center for Cosmology and AstroParticle Physics, The Ohio State University, Columbus, OH 43210, USA\
$^{2}$Institute of Cosmology & Gravitation, Dennis Sciama Building, University of Portsmouth, Portsmouth, PO1 3FX, UK\
$^{3}$Lawrence Berkeley National Lab, 1 Cyclotron Rd, Berkeley CA 94720, USA\
$^{4}$ Instituto de Física Teórica, (UAM/CSIC), Universidad Autónoma de Madrid, Cantoblanco, E-28049 Madrid, Spain\
$^{5}$ Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany\
$^{6}$ Instituto de Astrofísica de Canarias (IAC), C/Vía Láctea, s/n, E-38205, La Laguna, Tenerife, Spain\
$^{7}$ Departamento Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain\
$^{8}$Department of Physics and Astronomy, Ohio University, 251B Clippinger Labs, Athens, OH 45701, USA\
$^{9}$Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Apdo. Postal 20-364, Mexico\
$^{10}$Institut de Ci[è]{}ncies del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Mart[í]{} i Franqu[è]{}s 1, E08028 Barcelona, Spain\
$^{11}$Department of Physics, Yale University, 260 Whitney Ave, New Haven, CT 06520, USA\
$^{12}$Max-Planck-Institut für extraterrestrische Physik, Postfach 1312, Giessenbachstr., 85741 Garching, Germany\
$^{13}$Universitäts-Sternwarte München, Ludwig-Maximilians-Universität München, Scheinerstraße 1, 81679 München, Germany\
$^{14}$Department of Physics and Astronomy, University of Utah, 115 S 1400 E, Salt Lake City, UT 84112, USA\
$^{15}$Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA\
$^{16}$McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213\
$^{17}$Department of Physics, University of California, Berkeley, CA 94720, USA\
$^{18}$Department of Chemistry and Physics, King’s College, 133 North River St, Wilkes Barre, PA 18711, USA\
$^{19}$Campus of International Excellence UAM+CSIC, Cantoblanco, E-28049 Madrid, Spain\
$^{20}$Instituto de Astrofísica de Andalucía (CSIC), E-18080 Granada, Spain\
$^{21}$Departamento de Física Teórica M8, Universidad Autonóma de Madrid (UAM), Cantoblanco, E-28049, Madrid, Spain\
$^{22}$Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Stra[ß]{}e 1, D-85740 Garching bei München, Germany\
$^{23}$Kavli Institute for the Physics and Mathematics of the Universe (WPI),\
The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan\
$^{24}$Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA\
$^{25}$Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA\
$^{26}$Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003, USA\
$^{27}$School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK\
$^{28}$National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012, P.R. China\
$^{29}$Department of Astronomy, University of California, Berkeley, CA 94720, USA
date: To be submitted to MNRAS
title: 'The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Observational systematics and baryon acoustic oscillations in the correlation function'
---
\[firstpage\]
cosmology: observations - (cosmology:) large-scale structure of Universe
Introduction
============
The Baryon Oscillation Spectroscopic Survey (BOSS) has built on the legacy of previous wide-field surveys such as the Two Degree Field Galaxy Redshift Survey (2dFGRS; @2df) and the Sloan Digital Sky Survey I-II (SDSS; @York00) to amass a sample (@DR12 [@Reid15]) of more than 1 million spectroscopic redshifts of the galaxies with the greatest stellar mass to $z < 0.75$. This final BOSS data set represents the premier large-scale structure catalog for use in measuring cosmological distances based on the baryon acoustic oscillation (BAO) feature and the rate of structure growth via the signature of redshift-space distortions (RSD).
Previous results have demonstrated that the current and previous BOSS data sets produce precise and robust BAO and RSD measurements (c.f., @Reid12 [@alphDR9; @Chuang13; @Kazin13; @Sanchez13; @Anderson14DR9; @alph; @Sanchez14; @Samushia14; @CuestaDR12; @Gil15BAO; @Gil15RSD]). The results of [@Ross12; @Ross14; @Alam15; @Osumi15] have demonstrated that the BOSS results are robust to observational systematic concerns and to details of sample selection related to galaxy evolution. This paper represents a final, detailed investigation of observational systematic concerns in the BOSS sample. We detail how the angular selection functions of the BOSS galaxy samples are defined and test for any systematic uncertainty that is imparted into BAO measurements by this process. The work we present shows how BOSS galaxy data can be combined into one BOSS galaxy catalog, and demonstrates that robust BAO distance and RSD growth measurements can be obtained from the data set.
This work uses the ‘combined’ BOSS galaxy catalog to determine BAO scale distance measurements, making use of density field ‘reconstruction’ (c.f., @Pad12). Following [@Xu13; @Anderson14DR9; @alph; @Ross152D; @CuestaDR12], we use the monopole and quadrupole of the correlation function to measure the expansion rate, $H(z)$, and the angular diameter distance, $D_A(z)$, at the redshift of BOSS galaxies. BAO measurements obtained using the monopole and quadrupole of the power spectrum are presented in [@BeutlerDR12BAO], while [@VargasDR12BAO] diagnoses the level of theoretical systematic uncertainty in the BOSS BAO measurements. Measurements of the rate of structure growth from the RSD signal are presented in [@BeutlerDR12RSD; @GriebDR12RSD; @SanchezDR12RSD; @SatpathyDR12RSD]. [@Acacia] combines the results of these seven (including this work) results together into a single likelihood that can be used to test cosmological models.
The paper is outlined as follows: In Section \[sec:analysis\] we describe how clustering measurements and their covariance are determined, and how these measurements are used to determine the distance to BOSS galaxies using the BAO feature; in Section \[sec:data\], we describe how BOSS galaxies are selected, masked, and simulated. In section \[sec:weights\], we describe how weights that correct for observational systematic relationships with galaxy density are determined and applied to clustering measurements. In Section \[sec:clus\], we present the configuration-space clustering of BOSS galaxies, demonstrating the effect of systematic weights, comparing the clustering of different BOSS selections and showing that the clustering in the independent NGC and SGC hemispheres is consistent and that the separate BOSS selections can be combined into one BOSS sample to be used for clustering measurements. In Section \[sec:BAOrob\], we show that the BOSS BAO measurements are robust to observational systematics (both for data and mock samples). In Section \[sec:BAOres\], we present the BAO measurements of the BOSS combined sample; these measurements are used in [@Acacia], combined with the BAO distance measurements and RSD growth measurements of [@BeutlerDR12BAO; @BeutlerDR12RSD; @GriebDR12RSD; @SanchezDR12RSD; @SatpathyDR12RSD; @VargasDR12BAO] and using the methods described in [@SanchezDR12comb] to constrain cosmological models. In Section \[sec:disc\], we compare our BAO results with those obtained from other BOSS studies and make general recommendations for how to consider any residual observation systematic uncertainty when using BOSS clustering results.
Unless otherwise noted, we use a flat $\Lambda$CDM cosmology given by $\Omega_m = 0.31$, $\Omega_bh^2 = 0.0220$, $h=0.676$. This is consistent with [@Planck2015] and is the same as used in the companion papers studying the BOSS combined sample.
Analysis Tools {#sec:analysis}
==============
Clustering statistics
---------------------
We work in configuration space. The procedure we use is the same as in [@alph], except that our fiducial bin-size is 5 $h^{-1}$Mpc (as justified in Appendix \[app:binsize\]). We repeat some of the details here. We determine the multipoles of the correlation function, $\xi_{\ell}(s)$, by finding the redshift-space separation, $s$, of pairs of galaxies and randoms, in units $h^{-1}$Mpc assuming our fiducial cosmology, and cosine of the angle of the pair to the line-of-sight, $\mu$, and employing the standard [@LS] method $$\xi(s,\mu) =\frac{DD(s,\mu)-2DR(s,\mu)+RR(s,\mu)}{RR(s,\mu)},
\label{eq:xicalc}$$ where $D$ represents the galaxy sample and $R$ represents the uniform random sample that simulates the selection function of the galaxies. $DD(s,\mu)$ thus represents the number of pairs of galaxies with separation $s$ and orientation $\mu$.
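The estimator itself is a one-line operation once binned pair counts are in hand. A toy sketch in Python (not the actual BOSS pipeline), assuming the $DD$, $DR$, $RR$ arrays have already been normalized by their respective total numbers of pairs:

```python
import numpy as np

def landy_szalay(dd, dr, rr):
    """Landy-Szalay estimator on binned pair counts.
    dd, dr, rr: arrays of pair counts in (s, mu) bins, each already
    normalized by the corresponding total number of pairs."""
    dd, dr, rr = (np.asarray(a, dtype=float) for a in (dd, dr, rr))
    return (dd - 2.0 * dr + rr) / rr

# Sanity check: an unclustered sample (DD = DR = RR) gives xi = 0 in every bin
rr = np.full((10, 100), 1e-3)      # 10 s-bins x 100 mu-bins of normalized counts
xi_null = landy_szalay(rr, rr, rr)
```

The normalization by total pair counts is essential: the estimator compares the shape of the pair distributions, not their raw sizes, which generally differ between the galaxy and random catalogs.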
When counting, each pair is summed as the multiplication of the weights associated with the pair galaxy/random points. For galaxies, the total weight corrects for systematic dependencies in the imaging and spectroscopic data (see Section \[sec:weights\]) multiplied by a weight, $w_{\rm FKP}$, that is meant to optimally weight the contribution of galaxies based on their number density at different redshifts. The random points are weighted only by $w_{\rm FKP}$. The $w_{\rm FKP}$ weight is based on [@FKP] and defined as $$w_{\rm FKP} = 1/(1+n(z)P_0).
\label{eq:wfkp}$$ In this analysis (and other companion DR12 papers), we use $P_0 = 10^4h^{-3}$Mpc$^{3}$, while previous BOSS analyses have used $P_0 = 2\times10^4h^{-3}$Mpc$^{3}$. The choice of $P_0 = 10^4h^{-3}$Mpc$^{3}$ is motivated by the fact that this is close to the value of the BOSS power spectrum at $k= 0.14h$Mpc$^{-1}$ and [@FB14] suggest this scale is the effective scale to use for BOSS BAO measurements.
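The weight is a simple function of the local number density. A short illustration in Python; the $n(z)$ values below are invented for demonstration and are not taken from the BOSS catalogs:

```python
def w_fkp(n_z, P0=1.0e4):
    """FKP weight 1/(1 + n(z) P0).
    n_z: number density in h^3 Mpc^-3; P0 in h^-3 Mpc^3,
    so the product n_z * P0 is dimensionless."""
    return 1.0 / (1.0 + n_z * P0)

# Dense redshift slices are downweighted relative to sparse ones
w_dense = w_fkp(4e-4)    # illustrative density near the peak of n(z)
w_sparse = w_fkp(1e-5)   # illustrative density in a sparse tail
```

Because $w_{\rm FKP}$ decreases monotonically with $n(z)$, shot-noise-dominated (sparse) regions receive weights near unity while sample-variance-dominated (dense) regions are downweighted, balancing the two noise contributions.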
We calculate $\xi(s,|\mu|)$ in evenly-spaced bins[^1] of width 5 $h^{-1}$Mpc in $s$ and 0.01 in $|\mu|$. We then determine the first two even moments of the redshift-space correlation function via $$\frac{\xi_{\ell}(s)}{2\ell+1} = \sum^{100}_{i=1} 0.01\xi(s,\mu_i)L_{\ell}(\mu_i),
\label{eq:xiell}$$ where $\mu_i = 0.01i-0.005$ and $L_\ell$ is a Legendre polynomial of order $\ell$.
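The projection onto multipoles can be sketched as follows in Python, using the standard relation $\xi_\ell(s) = (2\ell+1)\int_0^1 \xi(s,|\mu|)L_\ell(|\mu|)\,d|\mu|$ for even $\ell$, discretized over the same 100 midpoint bins in $|\mu|$:

```python
import numpy as np
from numpy.polynomial import legendre

def L_ell(ell, mu):
    """Legendre polynomial of order ell evaluated at mu."""
    c = np.zeros(ell + 1)
    c[ell] = 1.0
    return legendre.legval(mu, c)

def multipole(xi_grid, ell, dmu=0.01):
    """xi_ell(s) = (2 ell + 1) * sum_i dmu * xi(s, mu_i) * L_ell(mu_i),
    summing over midpoint bins in |mu| (valid for even ell)."""
    n_mu = xi_grid.shape[1]
    mu = dmu * (np.arange(n_mu) + 0.5)     # mu_i = 0.01 i - 0.005
    return (2 * ell + 1) * np.sum(dmu * xi_grid * L_ell(ell, mu), axis=1)

# Round trip: a signal with a known monopole and quadrupole is recovered
mu = 0.01 * (np.arange(100) + 0.5)
xi_grid = (2.0 + (-0.5) * L_ell(2, mu))[np.newaxis, :]  # one s-bin: xi0=2, xi2=-0.5

xi0 = multipole(xi_grid, 0)[0]
xi2 = multipole(xi_grid, 2)[0]
```

The round trip works because even Legendre polynomials remain orthogonal on $[0,1]$, so the monopole and quadrupole of a composite signal are recovered independently up to the small midpoint-rule discretization error.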
We will also use data to which the “reconstruction” process has been applied [@Eis07rec; @Pad12]. In this case, pairs are counted using both a shifted random field, denoted $S$, and the original random field, and equation (\[eq:xicalc\]) becomes $$\xi(s,\mu) =\frac{DD(s,\mu)-2DS(s,\mu)+SS(s,\mu)}{RR(s,\mu)}.
\label{eq:xicalcrec}$$
Likelihood analysis/parameter inference
---------------------------------------
We assume the likelihood distribution, ${\cal L}$, of any parameter (or vector of parameters), $p$, of interest is a multi-variate Gaussian: $${\cal L}(p) \propto e^{-\chi^2(p)/2}.$$ The $\chi^2$ is given by the standard definition $$\chi^2 = {\bf D}{\sf C}^{-1}{\bf D}^{T},$$ where ${\sf C}$ represents the covariance matrix of a data vector and ${\bf D}$ is the difference between the data and model vectors, when model parameter $p$ is used. We assume flat priors on all model parameters, unless otherwise noted.
In order to estimate covariance matrices, we use a large number of mock galaxy samples (see Section \[sec:mocks\]), unless otherwise noted. The noise from the finite number of mock realizations requires some corrections to the $\chi^2$ values, the width of the likelihood distribution, and the standard deviation of any parameter determined from the same set of mocks used to define the covariance matrix. These factors are defined in [@Hartlap07; @Dod13; @Per14] and we apply them in the same way as in, e.g., [@alph]. We use 996 mocks and thus these corrections amount to only about 3 per cent.
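As an illustration of the size of these corrections, the sketch below evaluates the [@Hartlap07] de-biasing factor for the inverse covariance; the length of the data vector is an assumed value for illustration, not one quoted in the text (the [@Dod13; @Per14] factors rescaling parameter uncertainties are of similar size):

```python
def hartlap_factor(n_mocks, n_bins):
    """Multiplicative de-biasing factor for an inverse covariance matrix
    estimated from a finite number of mock realizations (Hartlap et al. 2007)."""
    return (n_mocks - n_bins - 2.0) / (n_mocks - 1.0)

n_mocks = 996
n_bins = 30    # assumed length of the data vector, for illustration only
factor = hartlap_factor(n_mocks, n_bins)
correction_pct = 100.0 * (1.0 - factor)   # a few per cent for ~1000 mocks
```

For roughly a thousand mocks and a data vector of a few tens of bins, the factor sits a few per cent below unity, consistent with the per-cent-level corrections quoted above.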
Fitting the BAO Scale
---------------------
The fundamental aim of BAO measurements is to measure the angular diameter distance, $D_A(z)$, and the expansion rate, $H(z)$. We do so by measuring how different the BAO scale is in our clustering measurements compared to its location in a template constructed using our fiducial cosmology. There are two effects that determine the difference between the observed BAO position and that in the template. The first is the difference between the BAO position in the true intrinsic primordial power spectrum and that in the model, with the multiplicative shift depending on the ratio $r_{\rm d}/r^{\rm fid}_{\rm d}$, where $r_{\rm d}$ is the sound horizon at the drag epoch (and thus represents the expected location of the BAO feature in co-moving distance units, due to the physics of the early Universe). The second is the difference in projection. The data are measured using a fiducial distance-redshift relation, matching that of the template: if this is wrong, we will see a shift that depends on $H(z)$ in the radial direction and on $D_A(z)$ in the angular direction. The combination of these effects means that our comparison of BAO positions measures: $$\alpha_{||} = \frac{\left(H(z)r_{\rm d}\right)^{\rm fid}}{H(z)r_{\rm d}},~~\alpha_{\perp} = \frac{D_A(z)r^{\rm fid}_{\rm d}}{D^{\rm fid}_A(z)r_{\rm d}}.$$ It is often convenient for the purposes of comparison to translate these to $$\alpha = \alpha_{||}^{1/3}\alpha_{\perp}^{2/3}, ~~ 1+\epsilon = \left(\frac{\alpha_{||}}{\alpha_{\perp}}\right)^{1/3},$$ where $\alpha$ is the BAO measurement expected from spherically averaged clustering measurements and $\epsilon$ quantifies the BAO feature introduced into the quadrupole by assuming a fiducial cosmology that does not match the true cosmology.
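The equations above translate directly into code; a minimal sketch with illustrative names and scalar inputs:

```python
def bao_alphas(H, DA, rd, H_fid, DA_fid, rd_fid):
    """Dilation parameters measured by a BAO fit.

    H, DA, rd        : true H(z), D_A(z), and sound horizon r_d
    H_fid, DA_fid,
    rd_fid           : the same quantities in the fiducial cosmology
    """
    a_par = (H_fid * rd_fid) / (H * rd)          # alpha_parallel
    a_perp = (DA * rd_fid) / (DA_fid * rd)       # alpha_perpendicular
    alpha = a_par**(1.0 / 3.0) * a_perp**(2.0 / 3.0)
    epsilon = (a_par / a_perp)**(1.0 / 3.0) - 1.0
    return a_par, a_perp, alpha, epsilon
```

When the fiducial cosmology matches the truth, all dilation parameters reduce to unity and $\epsilon$ to zero.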
The methodology we use to measure $\alpha_{||}, \alpha_{\perp}$ is based on that used in [@Xu13; @alph; @Ross152D], but we employ improved modeling of the post-reconstruction quadrupole based on the results of [@Seo15], which are similar to [@White15] and [@Cohn16]. We present the relevant details here.
We generate a template $\xi(s)$ using the linear power spectrum, $P_{\rm lin}(k)$, obtained from [Camb]{}[^2] [@camb] and a ‘no-wiggle’ $P_{\rm nw}(k)$ obtained from the [@EH98] fitting formulae, both using our fiducial cosmology (except where otherwise noted). We account for redshift-space distortion (RSD) and non-linear effects via $$P(k,\mu) = C^2(k,\mu,\Sigma_s)\left((P_{\rm lin}-P_{\rm nw})e^{-k^2\sigma_v^2}+P_{\rm nw}\right),$$ where $$\sigma^2_v = (1-\mu^2)\Sigma^2_{\perp}/2+\mu^2\Sigma^2_{||}/2,$$
$$C(k,\mu,\Sigma_s) = \frac{1+\mu^2\beta(1-S(k))}{(1+k^2\mu^2\Sigma^2_s/2)},
\label{eq:Csk}$$
$S(k)$ is the smoothing applied in reconstruction; $S(k) = e^{-k^2\Sigma_r^2/2}$ and $\Sigma_r = 15 h^{-1}$Mpc for the reconstruction applied to the BOSS DR12 sample. Finally, we fix $\beta=0.4$ and $\Sigma_s = 4 h^{-1}$Mpc, and use $\Sigma_{\perp} = 2.5 h^{-1}$Mpc and $\Sigma_{||} = 4 h^{-1}$Mpc for post-reconstruction results and $\Sigma_{||}= 10 h^{-1}$Mpc and $\Sigma_{\perp}= 6 h^{-1}$Mpc pre-reconstruction. The choices of damping scales are similar to those of [@BeutlerDR12BAO; @VargasDR12BAO] and to the values found in [@Seo15]. We show in Appendix \[app:rob\] that the specific choices have little impact on our results. Note that the bias priors we define below effectively allow the amplitude of $\xi_2$ to vary. Given $P(k,\mu)$, we determine the multipole moments $$P_{\ell}(k) = \frac{2\ell+1}{2}\int_{-1}^1 P(k,\mu)L_{\ell}(\mu)d\mu,$$ where $L_{\ell}(\mu)$ are Legendre polynomials. These are transformed to $\xi_{\ell}$ via $$\xi_{\ell}(s) = \frac{i^{\ell}}{2\pi^2}\int dk k^2P_{\ell}(k)j_{\ell}(ks).$$ We then use $$\xi(s,\mu) = \sum_{\ell}\xi_{\ell}(s)L_{\ell}(\mu)$$ (summing to $\ell = 4$) and take averages over any given $\mu$ window to create any particular template: $$\xi(s,\alpha_{\perp},\alpha_{||})_{F, {\rm mod}} = \int_0^1d\mu F(\mu^{\prime})\xi(s^{\prime},\mu^{\prime}),$$ where[^3] $\mu^{\prime} = \mu\alpha_{||}/\sqrt{\mu^2\alpha_{||}^2+(1-\mu^2)\alpha_{\perp}^2}$ and $s^{\prime} = s\sqrt{\mu^2\alpha_{||}^2+(1-\mu^2)\alpha_{\perp}^2}$ and the specific $F(\mu^{\prime})$ are defined below.
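A sketch of the template construction up to the power-spectrum multipoles $P_{\ell}(k)$ (the spherical-Bessel transform to $\xi_{\ell}$ is omitted); `P_lin` and `P_nw` are assumed to be precomputed arrays on the grid `k`, and function and parameter names are illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def template_pk_multipoles(k, P_lin, P_nw, ells=(0, 2, 4),
                           beta=0.4, Sig_s=4.0, Sig_perp=2.5, Sig_par=4.0,
                           Sig_r=15.0, post_recon=True, n_mu=40):
    """Multipoles P_ell(k) of the anisotropic BAO template.

    k, P_lin, P_nw : 1-d arrays on the same k grid (h/Mpc, (Mpc/h)^3);
    damping scales in Mpc/h, defaulting to the post-recon choices above.
    """
    mu, w = leggauss(n_mu)                     # Gauss-Legendre on [-1, 1]
    k2 = k[:, None]**2
    mu2 = mu[None, :]**2
    # reconstruction smoothing kernel (absent pre-reconstruction)
    S = np.exp(-k2 * Sig_r**2 / 2.0) if post_recon else 0.0
    C = (1.0 + mu2 * beta * (1.0 - S)) / (1.0 + k2 * mu2 * Sig_s**2 / 2.0)
    sig_v2 = (1.0 - mu2) * Sig_perp**2 / 2.0 + mu2 * Sig_par**2 / 2.0
    P = C**2 * ((P_lin - P_nw)[:, None] * np.exp(-k2 * sig_v2) + P_nw[:, None])
    # P_ell(k) = (2*ell+1)/2 * integral_{-1}^{1} P(k,mu) L_ell(mu) dmu
    return {ell: (2 * ell + 1) / 2.0
                  * np.sum(w * Legendre.basis(ell)(mu) * P, axis=1)
            for ell in ells}
```

With $\beta=0$ and no Fingers-of-God damping, an isotropic input spectrum is recovered in the monopole and the higher multipoles vanish, which provides a quick sanity check of the quadrature.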
In practice, we fit for $\alpha_{\perp},\alpha_{||}$ using $\xi_0,\xi_2$. To fit $\xi_0,\xi_2$, we recognize $\xi_2 = 5\int_0^1d\mu\left(1.5\mu^2\xi(\mu)-0.5\xi(\mu)\right)$ and, denoting $3\int_0^1d\mu\mu^2\xi(\mu)$ as $\xi_{\mu2}$ (so here $F(\mu) = 3\mu^2$), we fit to the data using the model $$\xi_{0, {\rm mod}}(s) = B_0\xi_{0}(s,\alpha_{\perp},\alpha_{||}) + A_{0}(s)
\label{eq:xi0mod}$$ $$\xi_{2, {\rm mod}}(s) = \frac{5}{2}\left(B_2\xi_{\mu2}(s,\alpha_{\perp},\alpha_{||}) - B_0\xi_0(s,\alpha_{\perp},\alpha_{||})\right) + A_{2}(s),
\label{eq:xi2mod}$$ where $A_x(s) = a_{x,1}/s^2+a_{x,2}/s+a_{x,3}$. In each case, the parameter $B_x$ essentially sets the size of the BAO feature in the template. We apply a Gaussian prior of width 0.4 in ${\rm log}(B_x)$, centred on the best-fit $B_0$ obtained in the range $50 < s < 80h^{-1}$Mpc with $A_x = 0$. We have fixed $\beta = 0.4$ in the fiducial template, and the $1-S(k)$ term in Equation (\[eq:Csk\]) forces its effective value to zero at large scales (in the post-reconstruction case). However, note that the greater the difference between $B_2$ and $B_0$, the greater the amplitude of $\xi_{2,{\rm mod}}$ will be. Thus, $B_2$ plays essentially the same role in our analysis as $\beta$ has in previous analyses (e.g., @alph).
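Given dilated templates $\xi_0$ and $\xi_{\mu2}$, the fitting model of equations (\[eq:xi0mod\]) and (\[eq:xi2mod\]) assembles as follows (a sketch with illustrative names):

```python
import numpy as np

def A_poly(s, a1, a2, a3):
    # broad-band nuisance polynomial A_x(s) = a_{x,1}/s^2 + a_{x,2}/s + a_{x,3}
    return a1 / s**2 + a2 / s + a3

def xi_model(s, xi0_t, ximu2_t, B0, B2, a0, a2):
    """Model monopole and quadrupole.

    xi0_t, ximu2_t : template xi_0 and xi_{mu2} = 3*int_0^1 dmu mu^2 xi(mu),
                     already evaluated at the dilated separations.
    a0, a2         : 3-tuples of nuisance coefficients for A_0(s), A_2(s).
    """
    xi0 = B0 * xi0_t + A_poly(s, *a0)
    xi2 = 2.5 * (B2 * ximu2_t - B0 * xi0_t) + A_poly(s, *a2)
    return xi0, xi2
```

In a fit, `B0`, `B2`, and the six nuisance coefficients are varied alongside $\alpha_{\perp}, \alpha_{||}$, which enter only through the dilation of the templates.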
Modeling $\xi_{0,2}$ in the manner described above isolates the anisotropic BAO scale information, while marginalizing over broad-band shape and amplitude information. The pair of moments $\xi_{0,2}$ represent an optimal and complete pair in the case where BAO scale information is spherically distributed [@Ross152D].
Data {#sec:data}
====
The BOSS DR12 Galaxy Sample
---------------------------
The SDSS-III [@Eis11] BOSS [@Dawson12] targeted galaxies for spectroscopy using SDSS imaging data, as described in [@Reid15]. The SDSS-I, II, and III surveys obtained wide-field CCD photometry [@C; @Gunn06] in five passbands ($u,g,r,i,z$; @F), amassing a total footprint of 14,455 deg$^2$. From this data, BOSS targeted and subsequently observed spectra for 1.4 million galaxies [@DR12], using the BOSS spectrograph [@Smee13] and the SDSS telescope [@Gunn06]. Observations were performed in a series of 15-minute exposures and integrated until a fiducial minimum signal-to-noise ratio, chosen to ensure a high redshift success rate, was reached. Redshifts were determined as described in [@Bolton12].
The full details of the BOSS galaxy samples are given in [@Reid15][^4]. Here, we summarise the most relevant details in order to provide the background required to understand the analysis of observational effects presented in Section \[sec:weights\].
The CMASS sample is designed to be approximately stellar mass limited above $z = 0.45$. Such galaxies are selected from the SDSS DR8 [@DR8] imaging via $$\begin{aligned}
17.5 < i_{\rm cmod} & < &19.9\\
r_{\rm mod} - i_{\rm mod} & < & 2 \\
d_{\perp} & > & 0.55 \label{eq:hcut}\\
i_{\rm fib2} & < &21.5\\
i_{\rm cmod} & < &19.86 + 1.6(d_{\perp} - 0.8) \label{eq:slide}\end{aligned}$$ where all magnitudes are corrected for Galactic extinction (via the @SFD dust maps), $i_{\rm fib2}$ is the $i$-band magnitude within a $2^{\prime \prime}$ aperture, the subscript $_{\rm mod}$ denotes ‘model’ magnitudes [@EDR], the subscript $_{\rm cmod}$ denotes ‘cmodel’ magnitudes [@DR2], and $$d_{\perp} = r_{\rm mod} - i_{\rm mod} - (g_{\rm mod} - r_{\rm mod})/8.0.
\label{eq:dp}$$
For CMASS targets, stars are further separated from galaxies by only keeping objects with $$\begin{aligned}
i_{\rm psf} - i_{\rm mod} &>& 0.2 + 0.2(20.0-i_{\rm mod}) \label{eq:sgsep1}\\
z_{\rm psf}-z_{\rm mod} &>& 9.125 -0.46z_{\rm mod} \label{eq:sgsep2}\end{aligned}$$
unless the object also passes the LOWZ cuts.
The LOWZ sample is selected based on the following $$\begin{aligned}
r_{\rm cmod} < 13.5 + c_{\parallel}/0.3 \label{eq:lzslide}\\
|c_{\perp}| < 0.2 \\
16 < r_{\rm cmod} < 19.6 \label{eq:lzrc}\\
r_{\rm psf}-r_{\rm mod} > 0.3 \end{aligned}$$ where $$c_{\parallel} = 0.7(g_{\rm mod}-r_{\rm mod})+1.2(r_{\rm mod}-i_{\rm mod}-0.18)$$ and $$c_{\perp} = r_{\rm mod}-i_{\rm mod}-(g_{\rm mod}-r_{\rm mod})/4.0 -0.18 .$$
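For illustration, the colour-magnitude cuts above can be collected into selection functions; this sketch uses illustrative names, omits the star/galaxy separation and veto-mask steps, and assumes extinction-corrected model magnitudes $g$, $r$, $i$:

```python
def d_perp(g, r, i):
    # CMASS colour: d_perp = (r - i) - (g - r)/8 (model magnitudes)
    return (r - i) - (g - r) / 8.0

def passes_cmass(g, r, i, i_cmod, i_fib2):
    """CMASS colour-magnitude cuts (star/galaxy separation omitted)."""
    dp = d_perp(g, r, i)
    return (17.5 < i_cmod < 19.9
            and (r - i) < 2.0
            and dp > 0.55
            and i_fib2 < 21.5
            and i_cmod < 19.86 + 1.6 * (dp - 0.8))

def passes_lowz(g, r, i, r_cmod, r_psf):
    """Nominal LOWZ colour-magnitude cuts (r is the model magnitude)."""
    c_par = 0.7 * (g - r) + 1.2 * (r - i - 0.18)
    c_perp = (r - i) - (g - r) / 4.0 - 0.18
    return (r_cmod < 13.5 + c_par / 0.3
            and abs(c_perp) < 0.2
            and 16.0 < r_cmod < 19.6
            and (r_psf - r) > 0.3)
```

The LOWZE2 and LOWZE3 variants discussed below tighten these cuts (brighter flux limits and additional star/galaxy separation) rather than introducing new colour definitions.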
![The number density as a function of redshift for the three different LOWZ selections, in the North Galactic Cap (NGC). The LOWZE2 and LOWZE3 selections were applied to early BOSS observations.[]{data-label="fig:nzlowz"}](nzlowzsel.pdf){width="84mm"}
As detailed in [@Reid15], approximately 900 deg$^2$ of the LOWZ sample was targeted with more restrictive cuts than the nominal LOWZ selection. This 900 deg$^2$ area is divided into two separate selections. Covering 130 deg$^2$, the ‘LOWZE2’ selection applies the CMASS $i$-band star/galaxy separation cut (equation \[eq:sgsep1\]) and had an $r_{\rm cmod}$ limit that was 0.1 magnitudes brighter for both equation (\[eq:lzslide\]) (13.4) and equation (\[eq:lzrc\]) (19.5). These bright limits reduce the density of the sample by 16 per cent (as can be seen in Fig. \[fig:nzlowz\]). Covering 760 deg$^2$, the ‘LOWZE3’ sample is the same as the LOWZE2 selection, except that the $z$-band star/galaxy selection (equation \[eq:sgsep2\]) is also applied and the bright limit is $r_{\rm cmod} > 17$. The $z$-band star/galaxy separation cut reduces the density of the sample by an additional 39 per cent, in a manner that depends strongly on the size of the PSF, as detailed in Section \[sec:see\]. This gives the LOWZE3 sample approximately half the number density of the LOWZ sample.
Given that each sample is a subset of the nominal LOWZ sample, we are able to apply the respective cuts to reproduce LOWZE2 and LOWZE3 samples over the full BOSS footprint. Thus, unless explicitly stated otherwise, when studying each respective sample, we will do so over the full BOSS NGC footprint in order to obtain the best statistical understanding of the samples. Doing so allows us to test the properties of these samples and thereby combine them into one full BOSS galaxy sample. The number density as a function of redshift is displayed in Fig. \[fig:nzlowz\] for each of the LOWZ selections. Compared to the nominal LOWZ selection, the reduction in number density is approximately constant as a function of redshift for the LOWZE2, while for LOWZE3 the difference grows greater at lower redshifts.
In addition to the color cuts applied to targeting, we apply cuts in redshift of $0.43 < z < 0.7$ to CMASS and $0.15 < z < 0.43$ to the LOWZ, LOWZE2, and LOWZE3 samples when measuring their individual clustering signals. These samples are combined into one full BOSS sample, applying no redshift cuts on the individual samples. We do not expect the galaxies that are removed to have a statistically significant effect on the trends observed, and thus we consider the effect of this to be negligible.
Mask
----
The BOSS mask is described in detail in section 5.1 of [@Reid15]. The most basic mask to be applied to BOSS is defined by the coverage of the spectroscopic tiles, i.e., the survey footprint; this is shown in figure 1 of [@Acacia]. On top of the survey footprint, a series of veto masks are applied. These include masks for bright stars, bright objects [@Rykoff14], and non-photometric conditions.
We define additional veto masks based on the seeing at the time the imaging data was observed and the Galactic extinction. Survey area is discarded where the $i$-band seeing, given in terms of the full-width-half-maximum of the point spread function (‘PSF\_FWHM’) is greater than $2^{\prime\prime}$. This is due to the $i_{\rm fib2}$ selection, as these magnitudes are convolved with $2^{\prime\prime}$ seeing and are therefore ill-defined where the seeing is worse. We additionally remove areas where the $g$- and $r$-band PSF\_FWHM are greater than $2^{\prime\prime}.3$ and $2^{\prime\prime}.1$; these values are roughly equivalent to the $i$-band value of $2^{\prime\prime}.0$, given the optics of the SDSS telescope. These cuts on seeing remove 0.5 and 1.7 per cent of the area in the NGC and SGC footprints, respectively.
We cut areas where the Galactic extinction, as given by the [@SFD] $E(B-V)$ value, is greater than 0.15. A negligible amount of area in the NGC (0.06 per cent) has worse extinction than this. This cut removes 2.2 per cent of the area in the SGC. We find a correlation between the projected density of LOWZ galaxies and $E(B-V)$ at high extinction values (see Fig. \[fig:sea\]), and thus cut at $E(B-V) = 0.15$ to remove this trend and make the data quality more similar between the NGC and SGC.
Galactic Hemisphere {#sec:NSdata}
-------------------
![The number density as a function of redshift for CMASS (solid curves) and LOWZ (dashed curves) selections, in the North and South Galactic Caps (NGC, colored ‘forestgreen’; and SGC, colored ‘darkkhaki’). The overall offset between densities in the two regions is due to calibration offsets in the imaging data between the two regions.[]{data-label="fig:nz"}](nzlowzcmassdr12NS.pdf){width="84mm"}
As explained in [@Ross11; @Ross12], we expect different number densities for BOSS galaxies in the NGC and SGC, due to the fact that [@Schlafly11] have shown there are measurable offsets in the DR8 [@DR8] photometry between the two regions. The final BOSS DR12 results are consistent with these earlier studies: accounting for all weights, we find a 1.0 per cent larger projected density of the CMASS sample ($0.43 < z < 0.75$) in the SGC compared to the NGC. In the LOWZ sample ($0.2 < z < 0.43$), the projected density is 7.6 per cent higher in the SGC compared to the NGC. For this reason, the NGC and SGC are treated as having separate selection functions, as has been the standard practice throughout the lifetime of BOSS analyses.
Fig. \[fig:nz\] displays the number density of the CMASS and LOWZ samples in the NGC and SGC. One can see that the LOWZ sample in the SGC has a greater density than the NGC by a nearly constant factor. For the CMASS sample, the SGC distribution is somewhat skewed compared to the NGC selection. The number density is greater at the low redshift end, due to the fact that the offset in photometry effectively lowers the $d_{\perp}$ limit (equation \[eq:hcut\]) in the SGC compared to the NGC. These differences in $n(z)$ imply that the galaxy populations will be slightly different in the different hemispheres and should thus be considered when the results from each hemisphere are combined.
Mock Galaxy Samples {#sec:mocks}
-------------------
We use two independent methods to create two samples of close to 1000 mock realizations designed to match BOSS galaxy samples[^5]. The two methods are ‘QPM’ [@QPM] and MultiDark PATCHY (MD-P)[@PATCHY14; @Kitaura15] and each has been tuned to match the footprint, redshift distribution, and halo occupation distribution of BOSS samples. We therefore expect the clustering of the mock samples to match the BOSS measurements. We use both sets of these mock samples to generate covariance matrices and to test methodology. Each uses its own cosmology; the differences between these cosmologies aid in assessing the robustness of our results[^6]. The cosmology used for each mock and the BAO measurements we expect to find for them when analyzing them using our fiducial cosmology are listed in Table \[tab:baoexp\].
The tests we performed on the LOWZ and CMASS samples were completed using the QPM mocks; this work was completed (as a pre-requisite) prior to the definition of the BOSS combined sample.[^7] This same work allowed the combined sample MD-P and QPM mocks to be created. [@Kitaura15] demonstrate that the MD-P mocks are a better match to the combined sample, with some improvement over QPM due to the treatment of the lightcone (see @Kitaura15 for full details). Thus, in what follows we exclusively use the QPM mocks in tests of the LOWZ and CMASS samples, use the MD-P mocks as the primary sample for tests of the combined sample, and use the QPM mocks as a robustness check on the combined sample results.
QPM $\Omega_{m} = 0.29$ $h=0.7$ $\Omega_bh^2 = 0.02247$ $\Omega_{\nu} = 0$
---------------- ---------------------- ------------------ ------------------------- --------------------
redshift $\alpha_{||}$ $\alpha_{\perp}$ $\alpha$ $\epsilon$
0.38 0.9808 0.9755 0.9773 0.0018
0.51 0.9840 0.9770 0.9793 0.0024
0.61 0.9861 0.9782 0.9808 0.0027
MD-P $\Omega_{m} = 0.307$ $h=0.678$ $\Omega_bh^2 = 0.02214$ $\Omega_{\nu} = 0$
redshift $\alpha_{||}$ $\alpha_{\perp}$ $\alpha$ $\epsilon$
0.38 0.9999 0.9991 0.9993 0.0003
0.51 1.0003 0.9993 0.9996 0.0003
0.61 1.0006 0.9995 0.9999 0.0004
\[tab:baoexp\]
: Cosmology and expected values for BAO parameters for QPM and MultiDark-PATCHY (MD-P) mocks, given that we have analyzed them using our fiducial cosmology while each set of mocks has its own cosmology. Each uses a flat geometry and zero neutrino density ($\Omega_{\nu} = 0$). The exact values used for MD-P are $\Omega_{m} = 0.307115$ and $h=0.6777$, which have been rounded to 3 significant figures in the table.
Weighting Galaxies Based on Survey Properties {#sec:weights}
=============================================
The methods used to account for the various sources of incompleteness in observations of the BOSS spectroscopic sample are defined and justified in [@Reid15]. These include close-pair weights, $w_{\rm cp}$, that are applied to account for fiber collisions, and weights, $w_{\rm noz}$, that account for redshift failures. We include these weights as $w_z = w_{\rm cp}+w_{\rm noz}-1$ in all analyses, unless otherwise noted. In the following subsections, we test the projected BOSS galaxy density against observational parameters that affect the imaging data, and define weights to correct for systematic relationships, where identified.
Our results require determining the uncertainty in the relationships between galaxy density and observational parameters, often for samples that are divided in ways that are not possible for our mock samples. Thus, we require some manner of estimating uncertainties that balances cosmic variance and shot-noise but does not rely on the variance of mock realizations. To do so, we weight all galaxy counts by the $w_{\rm FKP}$ weights and treat the resulting counts like Poisson statistics. Such a scheme balances shot-noise and cosmic variance, at the scale used to define the FKP weights. For example, if the FKP weight is 0.5 for all galaxies in the sample, the expected variance in the number of galaxies is twice the number of galaxies (instead of the number of galaxies in the case where the FKP weights are 1). The variance on the FKP-weighted sample would be $0.5N$, while the variance in the pure Poisson case would be $0.25N$ (as the variance of $xN$ is $x^2N$ when $N$ is drawn from a Poisson distribution). In this example, the variance is twice as large as the shot-noise contribution, because there are equal contributions from cosmic variance and shot-noise. We have compared this scheme to the variance of statistics obtained from the CMASS mock samples and found good agreement. Applying this scheme allows uncertainties to be estimated for samples that do not have matching suites of mock catalogs.
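A sketch of this error estimate (illustrative function names): treating the FKP-weighted count as Poisson gives ${\rm Var} = \sum_i w_i$, whereas plain error propagation of Poisson galaxy counts would give $\sum_i w_i^2$; the former exceeds the latter whenever $w_i < 1$, with the excess standing in for the cosmic-variance contribution:

```python
import numpy as np

def fkp_count_and_variance(w_fkp):
    """Weighted galaxy count and two variance estimates for it."""
    w = np.asarray(w_fkp, dtype=float)
    count = w.sum()
    var_scheme = w.sum()        # weighted count treated as Poisson
    var_shot = (w ** 2).sum()   # shot-noise-only error propagation
    return count, var_scheme, var_shot
```

For the worked example in the text (all weights equal to 0.5), the scheme returns a variance of $0.5N$, twice the pure shot-noise value of $0.25N$.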
Stellar Density
---------------
The projected density of CMASS was found to depend on the local stellar density in [@Ross11]. This finding was confirmed in all subsequent BOSS data sets. We use SDSS DR8 stars with $17.5 < i < 19.9$ to map the stellar density at Healpix resolution Nside$=128$ (0.21 square degrees per pixel). This is the same set of stars used in [@Ross11; @Ross12]. The systematic dependence on stellar density affects only the CMASS sample; as shown in the top panel of Fig. \[fig:nstall\], none of the LOWZ selections exhibit any trend, as expected given that they are brighter selections than the CMASS sample (see @Tojeiro14 for further details). Assuming a diagonal covariance matrix, we find the $\chi^2$ of the null test of $n/\langle n\rangle =1 $ to be 9.6, 11.1, and 9.8 for the LOWZ, LOWZE2, and LOWZE3 samples (to be compared to 10 measurement bins). Comparatively, the $\chi^2$ for the CMASS sample is 211. We therefore do not include any stellar density weights for any of the LOWZ samples.
![Projected BOSS galaxy density versus stellar density, measured as the number of $17.5 < i < 19.9$ stars in Healpix pixels with Nside=128. Top panel: the relationships for CMASS and the three LOWZ selections. Middle panel: The relationships for CMASS, split into bins of $i_{\rm fib2}$ magnitude. These are the measurements used to define the stellar density weights applied to clustering measurements. Bottom panel: The relationships for CMASS, split by redshift, before (curves) and after (points with error-bars) stellar density weights are applied. The relationships before any weighting is applied are slightly dependent on redshift, due to a weak correlation between $i_{\rm fib2}$ and redshift. Weighting based on $i_{\rm fib2}$ (illustrated in the middle panel) removes this dependency. []{data-label="fig:nstall"}](nvstDR123pan.pdf){width="83mm"}
In [@Ross11; @Ross12], it was shown that the relationship with stellar density also depends on the surface brightness of the galaxy. The $i_{\rm fib2}$ magnitude of the galaxy is a convenient measure of the surface brightness, as it represents the total flux within a given aperture (convolved with the seeing). The middle panel of Fig. \[fig:nstall\] shows the relationship between the CMASS number density and the stellar density, divided into five ranges of $i_{\rm fib2}$ magnitudes ($i_{\rm fib2} < 20.3; 20.3 < i_{\rm fib2} < 20.6; 20.6 < i_{\rm fib2} < 20.9; 20.9 < i_{\rm fib2} < 21.2; 21.2 < i_{\rm fib2}$). In each bin, we find the best-fit linear relationship $n_{\rm gal} = A(i_{\rm fib2})+B(i_{\rm fib2})n_{\rm star}$. The dashed lines display the best-fit linear relationship in each $i_{\rm fib2}$ bin; the $\chi^2$ of the fits range between 4 and 8, for 8 degrees of freedom. With increasing $i_{\rm fib2}$, the best-fit $A$ and $B$ are $A(i_{\rm fib2}) =$ \[0.959, 0.994, 1.038, 1.087, 1.120\] and $B(i_{\rm fib2}) = [0.826, 0.149, -0.782,-1.83, -2.52] \times 10^{-4}$.
The linear fits to the relationship between galaxy and stellar density in each of the $i_{\rm fib2}$ bins are used to define weights to apply to CMASS galaxies to correct for the systematic dependency on stellar density. To obtain the expected relationship at any $i_{\rm fib2}$, we interpolate between the results in the neighboring $i_{\rm fib2}$ bins, i.e., to find the expected relationship at $i_{\rm fib2} = 20.8$, we interpolate between the results in the $20.3 < i_{\rm fib2} < 20.6$ and $20.6 < i_{\rm fib2} < 20.9$ bins to obtain the slope, $B(i_{\rm fib2})$, and intercept, $A(i_{\rm fib2})$, of the relationship. The weight we apply to the galaxy is then $$w_{\rm star}(n_{\rm star},i_{\rm fib2}) = \left(B(i_{\rm fib2})n_{\rm star}+A(i_{\rm fib2})\right)^{-1},
\label{eq:wstar}$$ i.e., we simply weight by the inverse of the expected systematic relationship.
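A sketch of the weight of equation (\[eq:wstar\]) using the quoted best-fit coefficients; note that the $i_{\rm fib2}$ abscissae assigned to each bin for the interpolation are illustrative midpoints, as the exact values used are not specified here:

```python
import numpy as np

# best-fit coefficients per i_fib2 bin (quoted above); the bin centres
# are assumed midpoints, with nominal values for the open-ended bins
IFIB2_CENTRES = np.array([20.15, 20.45, 20.75, 21.05, 21.35])
A_COEF = np.array([0.959, 0.994, 1.038, 1.087, 1.120])
B_COEF = np.array([0.826, 0.149, -0.782, -1.83, -2.52]) * 1e-4

def w_star(n_star, i_fib2):
    """Stellar-density weight: inverse of the interpolated linear
    systematic relation n_gal = A(i_fib2) + B(i_fib2) * n_star."""
    A = np.interp(i_fib2, IFIB2_CENTRES, A_COEF)
    B = np.interp(i_fib2, IFIB2_CENTRES, B_COEF)
    return 1.0 / (A + B * n_star)
```

Because $B$ becomes increasingly negative with fainter $i_{\rm fib2}$, faint galaxies in star-rich pixels are up-weighted while bright galaxies there are slightly down-weighted, as the middle panel of Fig. \[fig:nstall\] suggests.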
The surface-brightness dependence of the stellar density relationship must be modeled in order to capture the redshift dependence of the systematic effect. The bottom panel of Fig. \[fig:nstall\] shows the CMASS number density vs. stellar density, after applying $w_{\rm star}$. In each redshift bin, the systematic relationship is removed. After applying the systematic weights, the $\chi^2$ for the null test are 13.5, 8.4, and 11.2 (for 10 degrees of freedom), with increasing redshift; prior to applying the weights, they are 47, 117, and 65. The impact of the stellar density weights on the measured clustering is presented in Section \[sec:xiweights\].
Seeing {#sec:see}
------
There is a relationship between the observed density of BOSS CMASS galaxies and the local seeing due to the star galaxy separation cuts, as explained in [@Ross11]. Weights were previously defined and applied to the DR10 and DR11 CMASS samples to remove this trend, and we repeat such a procedure for DR12, while further investigating any relationship in the LOWZ samples.
The top panel of Fig. \[fig:nvsee\] displays the relationship between observed projected density and seeing for different BOSS selections. For the standard LOWZ selection and the LOWZE2 selection, no strong relationship is observed; the $\chi^2$ values of the null tests are 16.2 and 14.2, respectively, for 10 degrees of freedom. However, for CMASS and especially LOWZE3, clear relationships exist where the galaxy density decreases as the seeing gets worse (the $\chi^2$ values of the null tests are 225 and 877). For each sample, we will define systematic weights to correct for these relationships, and we describe this process throughout the rest of this section.
![The relationship between observed density of BOSS galaxies and $i$-band seeing. Top panel: The relationships for CMASS and the three LOWZ selections. Middle panel: The relationships for CMASS NGC and SGC. The dashed curves display the best-fit relationship used to define the weights that correct for the observed trends. The solid curve displays the measured relationship for the combined NGC+SGC sample, after the weights have been applied. Bottom panel: The relationships for the LOWZE3 sample, split into four bins by $i_{\rm mod}$ magnitude. These relationships are used to define the weights applied to the LOWZE3 sample. []{data-label="fig:nvsee"}](nvseeDR123pan.pdf){width="84mm"}
For CMASS, we define weights in a manner similar to that applied in [@alph]. We find the relationship with seeing is more severe in the SGC compared to the NGC, and we therefore determine the weights separately in each region[^8]. We find the best-fit parameters to the following model $$n_g = A_{\rm see}\left[1-{\rm erf}\left(\frac{S_i-B_{\rm see}}{\sigma_{\rm see}}\right)\right],
\label{eq:seemod}$$ where $S_i$ denotes the $i$-band seeing. The middle panel of Fig. \[fig:nvsee\] displays the observed relationships for the data in each hemisphere and the best-fit model. For the NGC (SGC), the best-fit parameters are $A_{\rm see} = 0.5205 (0.5344)$, $B_{\rm see} =2.844 (2.267)$, and $\sigma_{\rm see} = 1.236 (0.906)$. The $\chi^2$ values of these best fits are 5.4 and 6.9 for the NGC and SGC, to be compared to 7 degrees of freedom. The seeing-dependent weights are simply given by the inverses of the best-fit relationships. The combined SGC+NGC relationship, after applying the seeing-dependent weights, is displayed using a solid black curve. The error-bars are suppressed, but the $\chi^2$ of the null test is 7.7 for 10 data points.
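With the quoted best-fit parameters, the CMASS seeing weight (the inverse of equation \[eq:seemod\]) can be sketched as follows (function and dictionary names are illustrative):

```python
from math import erf

# best-fit (A_see, B_see, sigma_see) quoted above, per hemisphere
SEE_PARS = {"NGC": (0.5205, 2.844, 1.236),
            "SGC": (0.5344, 2.267, 0.906)}

def w_see_cmass(seeing_i, cap="NGC"):
    """Seeing weight for CMASS: inverse of the best-fit
    n_g = A * [1 - erf((S_i - B) / sigma)] relation."""
    A, B, sigma = SEE_PARS[cap]
    n_model = A * (1.0 - erf((seeing_i - B) / sigma))
    return 1.0 / n_model
```

The weight is close to unity at typical seeing and grows monotonically as the seeing degrades, compensating for the galaxies lost to the star/galaxy separation cuts.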
For LOWZE3, the inclusion of the $z$-band star/galaxy separation cut introduces a strong relationship between the galaxy density and the seeing. We find the effect is strongly magnitude dependent (we do not find this to be the case for the dependence of the CMASS sample with seeing). We therefore divide the sample by $i_{\rm mod}$ magnitude ($i$- and $z$-band magnitudes are strongly correlated at these redshifts and the SDSS $i$-band is less prone to zero-point fluctuations) and define weights in a manner analogous to how we defined the CMASS stellar density weights as a function of $i_{\rm fib2}$. We divide the LOWZE3 sample into four bins based on the galaxies’ $i_{\rm mod}$ magnitude, $i_{\rm mod} < 17.5$, $17.5 < i_{\rm mod} < 18$, $18 < i_{\rm mod} < 18.5$, and $i_{\rm mod} > 18.5$, and fit a linear relationship to each and then interpolate to obtain the weight as a function of the local $i$-band seeing and the galaxy’s $i_{\rm mod}$ magnitude. The measurement in these four magnitude bins is displayed by the points with error-bars in the bottom panel of Fig. \[fig:nvsee\]. The dashed curves display the best-fit linear relationship to each. We find the slope of the best-fits, $\ell$, is well-approximated by $$\ell = b+m(i_{\rm mod}-16)^{\frac{1}{2}},$$ with $b=0.875$ and $m=-2.226$. Thus, given that the mean seeing over the footprint is 1.25, the relationship between $i$ band seeing, LOWZE3 density ($n_{LE3}$), and $i_{\rm mod}$ is given by $$n_{LE3}(S_i,i_{\rm mod}) = 1+(S_i-1.25)\ell(i_{\rm mod}).
\label{eq:nl3si}$$ We set any $\ell < -2$ to $\ell_{\rm min} = -2$ and take the inverse of equation (\[eq:nl3si\]) in order to apply weights to the LOWZE3 sample, setting any weights greater than 5 to 5.
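A sketch of the resulting LOWZE3 weight, with the slope floor $\ell \geq -2$ applied and the weight cap of 5 implemented by flooring the model density of equation (\[eq:nl3si\]) at 0.2 (equivalent to capping the weight at 5, and robust to the model density crossing zero in very poor seeing):

```python
from math import sqrt

def w_see_lowze3(seeing_i, i_mod, b=0.875, m=-2.226):
    """Seeing weight for LOWZE3 from equation (eq:nl3si)."""
    slope = b + m * sqrt(i_mod - 16.0)   # ell(i_mod)
    slope = max(slope, -2.0)             # floor: ell >= ell_min = -2
    # model density relative to the mean seeing of 1.25 arcsec,
    # floored at 0.2 so the weight never exceeds 5
    n_model = max(1.0 + (seeing_i - 1.25) * slope, 0.2)
    return 1.0 / n_model
```

At the mean seeing the weight is unity by construction, and faint galaxies in poor seeing receive the largest (capped) up-weights.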
The total systematic weight (e.g., $w_{\rm star}\times w_{\rm see}$ for CMASS) is normalized such that the weights sum to the total number of galaxies in the sample they are defined for. The impact of the seeing weights we apply on the measured clustering of the CMASS and LOWZE3 samples is presented in Section \[sec:xiweights\].
Sky background, Airmass, Extinction
-----------------------------------
As for previous BOSS data releases, we test against three additional potential systematic quantities, each of which affects the depth of the imaging data: sky background, airmass, and Galactic extinction. These are shown for the CMASS and LOWZ samples in Fig. \[fig:sea\]. For sky-background and airmass, the $\chi^2$ values of the null tests range between 9 (for CMASS against sky background) and 18 (for LOWZ against airmass), to be compared to the 10 data points in each case.
For Galactic extinction, the $\chi^2$ are somewhat larger than expected: 35 for the CMASS sample and 26 for LOWZ (compared to 10 data points). However, these large $\chi^2$ are dominated by the value at the lowest extinction, which is low by 3 per cent for both LOWZ and CMASS[^9]. [@Schlafly11] suggest somewhat different extinction coefficients than those used to target BOSS galaxies. Such a change implies extinction-dependent shifts in the color of the BOSS selection and these shifts can be translated into an expected change in target density as a function of extinction. The expected trend is shown with dashed lines and agrees with the overall trend observed for both LOWZ and CMASS. In terms of $\chi^2$, the LOWZ value is 19 when using this prediction and the CMASS value remains 35 (improvement at the extrema of the range is countered by disagreement at E(B-V)$\sim$0.08). This implies any effect on the measured clustering found when correcting for this predicted relationship would be marginal, and, indeed, we find no significant changes in the measured clustering when applying extinction-dependent weights. We thus choose not to include any weights to correct for these trends with Galactic extinction.
![The relationship between observed galaxy density and sky background (in nanomaggies per square arc second), Galactic extinction (in E(B-V)), and airmass, for CMASS and LOWZ. The dashed lines display the predicted relationship with Galactic extinction, based on the difference between the extinction coefficients applied to BOSS imaging data and those found in Schlafly & Finkbeiner (2011).[]{data-label="fig:sea"}](nBOSSdr12vsys.pdf){width="84mm"}
Overall, we do not find any clear trends, given the uncertainty, between the density of BOSS galaxies and sky background, Galactic extinction, or airmass. Therefore, as in previous BOSS analyses, we do not weight BOSS galaxies according to any of these quantities. In the tests that follow, it will become clear that the systematic effects we correct for via weights (stellar density and seeing) would have minimal impact on the final BOSS BAO and RSD results even if they had not been corrected for. Attempts to correct for additional potential systematic effects of marginal significance are thus ill-advised. However, each individual analysis will be affected differently, and it would therefore be prudent for any future studies of the clustering of BOSS galaxies (e.g., primordial non-Gaussianity; @Ross13) at the largest scales to reconsider this choice.
BOSS Galaxy Clustering {#sec:clus}
======================
In this section, we present the configuration-space clustering of BOSS galaxies. We determine the relative importance of the systematic weights we apply, in terms of the impact on the measured correlation functions. We then show BOSS clustering results when the samples are divided by hemisphere (NGC and SGC) and by targeting selection (LOWZ, LOWZE2, LOWZE3, and CMASS). We conclude by showing the clustering of the combined BOSS sample, split by redshift.
Effect of weights {#sec:xiweights}
-----------------
![The change in the measured monopole and quadrupole of the BOSS CMASS (top panels) and LOWZ (bottom panels) correlation functions, when the given systematic weight is applied. ‘LOWZ comb’ refers to the combination of the LOWZ, LOWZE2, and LOWZE3 selections. The grey shaded region displays the 1$\sigma$ uncertainty obtained from mock samples.[]{data-label="fig:xi0sys"}](xi02sysDLOWZCMASSDR12.pdf){width="84mm"}
The CMASS sample has the highest signal-to-noise of any individual BOSS selection, has a significant percentage of unobserved close pairs and redshift failures (5.4 and 1.8 per cent), and uses weights for both stellar density and seeing to correct for systematic dependencies in the observed number density. We test the impact of these weights by comparing the clustering measured with the weights applied to that measured without them. For the monopole, these differences are displayed in the top panel of Fig. \[fig:xi0sys\]. In order to assess the total potential impact of the weights, we compute the total $\chi^2$ difference between the clustering measured with and without the weights. The relative importance of each weight is as one would expect visually: the $\chi^2$ are 13.1, 3.7, 2.1, and 0.1 for the stellar density, close-pair, redshift-failure, and seeing weights.
The importance of the weights is smaller for the CMASS $\xi_2$ than for $\xi_0$, as one can see in the second panel from the top in Fig. \[fig:xi0sys\]. The $\chi^2$ are 0.5, 2.5, 2.3, and 0.1 for the stellar density, close-pair, redshift-failure, and seeing weights. Unsurprisingly, the weights that affect the radial distribution are most important for $\xi_2$, and the redshift-failure weights are slightly more important for $\xi_2$ than for $\xi_0$. For both $\xi_0$ and $\xi_2$, the seeing weights have negligible impact. The $\chi^2$ difference is only 0.1 for both, implying that the [*greatest*]{} difference they could cause in the determination of a model parameter is 0.3$\sigma$ (whereas for stellar density, it is potentially a 3.6$\sigma$ effect).
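The conversion from a $\chi^2$ difference to a worst-case parameter shift used above is simply $\sqrt{\Delta\chi^2}\,\sigma$, a bound that is saturated only if the systematic signal is perfectly degenerate with the parameter being fit. A minimal sketch of this arithmetic (the function name is ours, not part of the BOSS pipeline):

```python
import math

def max_parameter_shift(delta_chi2):
    """Worst-case shift of a single model parameter, in units of its
    uncertainty, given a chi^2 difference between the weighted and
    unweighted clustering. The bound is reached only if the systematic
    exactly mimics the parameter's signature."""
    return math.sqrt(delta_chi2)

# seeing weights (chi^2 difference 0.1): at most ~0.3 sigma
# stellar density weights (chi^2 difference 13.1): potentially ~3.6 sigma
```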
For the nominal LOWZ sample, the only systematic weights applied are for close pairs and redshift failures, and these represent only 2.9 and 0.5 per cent of LOWZ targets. Similar to CMASS, the close-pair weights increase the small-scale clustering amplitudes. However, the effect is much smaller, compared to the uncertainty on the measurements, and the $\chi^2$ are only 0.8 and 1.4 for $\xi_0$ and $\xi_2$. For redshift failures, the $\chi^2$ are only 0.2 and 0.1 for $\xi_0$ and $\xi_2$.
For the LOWZE3 sample, selected over the full NGC footprint, we defined a weight based on seeing, in order to remove a strong dependence of the observed number density on seeing. The effect of this weight on the measured clustering of the LOWZE3 selection over the full NGC footprint is shown using circles in the bottom two panels of Fig. \[fig:xi0sys\] (of note, the uncertainty band for LOWZE3 should be larger than the displayed LOWZ uncertainty, due to the number density being approximately half of LOWZ and the fact that the SGC footprint is not used). It has the strongest effect of any weight we apply.
While the effect of the seeing weights is strong for the LOWZE3 sample over the full NGC footprint, our final sample only uses this selection over 755 deg$^2$. Further, when these data are used, we combine the LOWZ sample with CMASS and use data in the range $0.2 < z < 0.5$. When we consider the impact of the weights on the clustering of this combined sample (denoted ‘LOWZ comb’), we find a $\chi^2$ difference of only 0.2 between the $\xi_{0,2}$ measured with and without the weights applied; this comparison is plotted using triangles in the bottom two panels of Fig. \[fig:xi0sys\]. The reason for the sharp decrease in significance is two-fold: 1) the LOWZE3 sample accounts for approximately five per cent of the statistical power of the combined sample with $0.2 < z < 0.5$ and 2) the effect of the weights when restricting to only the 755 deg$^2$ of unique LOWZE3 data is considerably smaller than over the full NGC (presumably due to the particular pattern of seeing in this area). Thus, while the effect of the weights is dramatic on the LOWZE3 sample within the full NGC area, it is minor for the combined sample that we use for BOSS science. Notably, the inclusion of the LOWZE3 area allows us to include the CMASS data occupying the same footprint with $0.2 < z < 0.5$ in the combined sample, which increases the statistical power of the region to eight per cent of the total. Our tests suggest that even in the (catastrophic) event that residual systematic effects in the LOWZE3 sample are equal to those we have treated with weights for seeing, the most any derived parameter could be biased is 0.45$\sigma$ (and this is in the specific case that the signal being searched for is exactly mimicked by the systematic effect).
The expected variation (assuming Gaussian statistics) when increasing a sample from 92 per cent complete to 100 per cent is 0.4$\sigma$; in this sense the expected gain is approximately equal to the worst-case scenario for the inclusion of the LOWZE3 data. We thus include the 755 deg$^2$ of unique LOWZE3 data in the BOSS combined sample.
Hemisphere
----------
![The clustering of BOSS CMASS (top two panels) and LOWZ (bottom two panels) galaxies, for the two contiguous regions within the SGC and NGC hemispheres. The dotted lines denote the mean of the QPM mock samples.[]{data-label="fig:xicmasslowzNS"}](xicmasslowzNSDR12.pdf){width="84mm"}
As described in Section \[sec:NSdata\], the selection functions for the NGC and SGC BOSS galaxy data are slightly different. Here, we compare the clustering in the two regions. This comparison is shown for CMASS in the top two panels of Fig. \[fig:xicmasslowzNS\] for $\xi_0$ (top panel) and $\xi_2$ (second panel from the top). In the range $20 < s < 200h^{-1}$Mpc, the $\chi^2$ obtained when testing the NGC $\xi_0$ against the SGC $\xi_0$ (with the covariance of the difference determined by summing the two QPM covariance matrices) is 42 for the 36 data points. Restricting to the range $50 < s < 150h^{-1}$Mpc, the $\chi^2$ is 25 for 20 points. The CMASS clustering in the two regions agrees to a similar extent as it did for the DR9 data [@Ross12]. The agreement is somewhat worse for $\xi_2$, as we find a $\chi^2$ of 48 for $20 < s < 200h^{-1}$Mpc and 29 for $50 < s < 150h^{-1}$Mpc.
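The NGC-vs-SGC $\chi^2$ quoted here is formed from the difference of the two measured correlation functions, with the covariance of that difference given by the sum of the two mock-derived covariance matrices, since the hemispheres are independent volumes. A minimal sketch of the test (the function name is ours):

```python
import numpy as np

def chi2_diff(xi_a, xi_b, cov_a, cov_b):
    """chi^2 of the difference between two independent clustering
    measurements. Because the samples are independent, the covariance
    of the difference vector is the sum of the two covariance matrices."""
    d = np.asarray(xi_a) - np.asarray(xi_b)
    return float(d @ np.linalg.solve(np.asarray(cov_a) + np.asarray(cov_b), d))
```

The returned value is then compared against the number of $s$ bins used (e.g., 36 bins for $20 < s < 200h^{-1}$Mpc).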
The comparison between NGC and SGC for the LOWZ sample is shown in the bottom panels of Fig. \[fig:xicmasslowzNS\]. The agreement between the $\xi_0$ is quite good; the $\chi^2$ is 28 for the 36 data points with $20 < s < 200 h^{-1}$Mpc. For $\xi_2$, the agreement is worse; the $\chi^2 $ is 50 for the same 36 $s$ bins. The discrepancy is dominated by large-scales, as for the 22 data points with $s < 130h^{-1}$Mpc, the $\chi^2$ is 19, while for the 14 with $s > 130h^{-1}$Mpc, the $\chi^2$ is 29. The difference is such that it serendipitously cancels for the combined NGC+SGC sample. While unusual, no effect studied in this paper has a significant impact on the shape of the LOWZ quadrupole at $s > 130h^{-1}$Mpc and we can offer no explanation beyond a statistical fluctuation (which would be at $\sim2\sigma$ for $\chi^2$/dof$=50/36$). We note that scales $s > 130h^{-1}$Mpc have a negligible impact on RSD structure growth measurements and only a small impact on BAO measurements (see Appendix \[app:rob\]).
We do not find any strong discrepancies between the NGC and SGC configuration-space clustering of BOSS galaxies at scales relevant to BAO or RSD studies. We therefore combine the two hemispheres in our standard analysis, but demonstrate in subsequent sections that the results obtained from each hemisphere individually are consistent with the combined constraints and that the BAO results are thus robust to any concerns about combining the NGC and SGC results. [@Acacia] show that discrepancies between the two hemispheres are more apparent at small scales when studying the power spectrum. The differences are shown to be a consequence of the color offsets between the two regions, as discussed in Section \[sec:NSdata\]. These differences are not apparent in the correlation function analysis because they are isolated to $s < 20h^{-1}$Mpc in configuration space.
Targeting selection
-------------------
![The clustering of BOSS galaxies, using the four different targeting specifications. The CMASS and LOWZ samples occupy different redshift regimes (see Fig. \[fig:nz\]) and thus some difference in clustering amplitude is to be expected. The dotted lines denote the mean of the QPM mock samples.[]{data-label="fig:xilowzearlycmass"}](xilowzearlycmassDR12.pdf){width="84mm"}
Here, we compare the clustering of the nominal LOWZ selection to the clustering obtained using the LOWZE2 selection (over the full LOWZ footprint plus the 131 deg$^2$ area where the LOWZE2 selection was applied in targeting) and to the clustering obtained using the LOWZE3 selection (over the full LOWZ area plus the 755 deg$^2$ where the LOWZE3 selection was applied in targeting). We use the full area available within the NGC in order to obtain the best statistics on the galaxies that comprise each selection.
We show this comparison in Fig. \[fig:xilowzearlycmass\], where the CMASS clustering is also shown. The LOWZE2 selection covers the same area as the LOWZ selection, with 131 deg$^2$ more area and a lower number density. We should thus expect consistent clustering measurements. Its correlation function is displayed using a solid curve in Fig. \[fig:xilowzearlycmass\]. For both $\xi_0$ and $\xi_2$, LOWZE2 appears consistent with the LOWZ measurements, but with a slightly higher clustering amplitude. Indeed, using the LOWZ covariance matrix, we find a $\chi^2$ of 23 for the monopole and 19 for the quadrupole when testing the range $20 < s < 200h^{-1}$Mpc (36 data points). Multiplying the LOWZ $\xi_0$ by 1.04 reduces the $\chi^2$ to 20. An increase in clustering amplitude is expected, as the LOWZE2 sample applies brighter limits to the selection compared to the nominal LOWZ selection. Applying a factor to the quadrupole does not significantly reduce the $\chi^2$. These $\chi^2$/dof are much less than one, as expected for measurements that are highly correlated.
The LOWZE3 sample covers the same area as the LOWZ footprint, with an additional 755 deg$^2$, a lower number density, and large weights that account for variations in target density with seeing. As detailed in Section \[sec:data\], its mean number density is just greater than half that of the nominal LOWZ selection. The LOWZE3 correlation functions are displayed using dashed curves in Fig. \[fig:xilowzearlycmass\]. The measurements appear qualitatively similar to the LOWZ measurements, especially for the quadrupole, but with a slightly greater clustering amplitude for $\xi_0$. However, when repeating the test we applied to the LOWZE2 sample, using the LOWZ covariance matrix to evaluate a $\chi^2$ value for the difference between the LOWZ and LOWZE3 samples, we find the $\chi^2$ is 83 for the monopole, when multiplying the amplitudes by a factor of 1.10, in the range $20 < s < 200h^{-1}$Mpc (36 data points), and that this $\chi^2$ is not significantly better or worse for a particular range of scales (e.g., it is 31 for the 16 data points with $s > 120h^{-1}$Mpc). Similar to LOWZE2, we expect an increase in the clustering amplitude of the LOWZE3 sample compared to LOWZ, as the cuts applied to LOWZ to produce the LOWZE3 sample preferentially remove fainter galaxies. The quadrupole gives somewhat better agreement, as the $\chi^2$ is 50 for the range $20 < s < 200h^{-1}$Mpc (applying a constant factor does not significantly improve the $\chi^2$).
If we increase the diagonal elements of the LOWZ covariance matrix by 10 per cent and repeat the test, we find the $\chi^2$ values reduce to 36 for $\xi_0$ and 24 for $\xi_2$ (for the same 1.10 factor for $\xi_0$). Changing the covariance matrix in this manner represents the addition of a pure shot-noise contribution whose variance is 10 per cent of the LOWZ variance. This is likely conservative, as the LOWZE3 number density is approximately half of the LOWZ number density. Using the value of $P_0 = 10^4h^{-3}$Mpc$^{3}$ adopted to define the FKP weights, a number density of $3\times10^{-4}h^{3}$Mpc$^{-3}$ for the LOWZ sample, and a number density of $1.5\times10^{-4}h^{3}$Mpc$^{-3}$ for the LOWZE3 sample, we find the expected increase in the variance is 56 per cent. We therefore conclude that the clustering of the LOWZ and LOWZE3 samples is consistent, when allowing for a 10 per cent increase in clustering amplitude and the extra shot noise imparted by the lower LOWZE3 number density.
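The 56 per cent figure follows from treating the FKP-weighted per-mode variance as proportional to $(1 + 1/(nP_0))^2$; with the two number densities above, the ratio is $25/16 = 1.5625$. A sketch under that scaling assumption (the function name is ours):

```python
def variance_ratio(n_dense, n_sparse, P0=1.0e4):
    """Expected ratio of per-mode variances for two samples, assuming
    the FKP scaling var ~ (1 + 1/(n*P0))^2, with number density n in
    h^3 Mpc^-3 and P0 in h^-3 Mpc^3."""
    var = lambda n: (1.0 + 1.0 / (n * P0)) ** 2
    return var(n_sparse) / var(n_dense)

# LOWZ (n = 3e-4) vs LOWZE3 (n = 1.5e-4): factor 1.5625, i.e. +56 per cent
```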
The clustering amplitude of the CMASS sample is clearly lower than that of the LOWZ sample on scales $s < 80h^{-1}$Mpc. Again, using the covariance matrix of the LOWZ sample, we compute the $\chi^2$ between the two measurements, scaling the CMASS result by a constant factor. We find a minimum $\chi^2$ of 34 for the monopole, applying a factor of 1.12, and of 41 for the quadrupole, applying a factor of 1.27. This implies the shapes of the measured monopole and quadrupole are consistent between the CMASS and LOWZ samples.
Combined BOSS sample
--------------------
![The measured monopole and quadrupole of the BOSS galaxy correlation function, split into two redshift shells. The dotted lines display the mean of the MultiDark-Patchy samples with the same redshift selections.[]{data-label="fig:xicom"}](xilowzcmasscombzsplitDR12.pdf){width="84mm"}
Finally, we present the clustering of the BOSS galaxy sample, i.e., the combined sample of LOWZ, LOWZE2, LOWZE3, and CMASS, applying all of the weights defined in the previous section. The clustering amplitudes of the individual BOSS samples differ by less than 20 per cent for the CMASS/LOWZ samples and less than 10 per cent for the individual LOWZ samples. The scales we are interested in are less than 150$h^{-1}$ Mpc. Thus any cross-sample pairs of galaxies will be a small percentage of the total entering any particular measurement and we do not expect any significant shift in the amplitude as a function of scale within the scales of interest. Further, we have tested weighting the individual samples such that their density field has the same amplitude in the regions of overlap. We find this weighting has no significant impact on the measured clustering, and we therefore simply add the catalogs (both the galaxy and the random ones, in the correct proportion) to produce the combined sample. The clustering measurements for the combined BOSS sample with $0.2 < z < 0.75$, split into two redshift bins at $z=0.5$, are displayed in Fig. \[fig:xicom\]. One can see that the clustering is similar in the two redshift regimes, with a slightly greater clustering amplitude in the lower redshift sample.
The dotted curves in Fig. \[fig:xicom\] display the mean of the PATCHY mock samples, which are a better match to the BOSS combined sample properties than QPM (one of the biggest differences is the treatment of the lightcone in PATCHY, see @Kitaura15 for full details)[^10]. The covariance between the $s$ bins makes the statistical match between the mean of the mocks and the measured clustering better than one might guess by eye. For the monopole, the $\chi^2$ is 38 for the 32 measurement bins with $20 < s < 180h^{-1}$Mpc at $0.2 < z < 0.5$, and 31 for the same range of scales at $0.5 < z < 0.75$. For the quadrupole, it is 35 for $0.2 < z < 0.5$ and 30 for $0.5 < z < 0.75$. Allowing the mean of the mocks to be scaled by a constant value, the $\chi^2$ decreases to 36 for the $0.2 < z < 0.5$ monopole when applying a factor of 0.98. No significant improvement is found for the $0.5 < z < 0.75$ monopole. For the quadrupole, the $\chi^2$ cannot be significantly improved by applying any factor to the mean of the $0.2 < z < 0.5$ mocks and is reduced to 27 when applying a factor of 0.93 to the $0.5 < z < 0.75$ mocks.
For the monopole, the clustering at large scales shows an apparent excess; however, it is of marginal statistical significance: for the $0.2 < z < 0.5$ bin the $\chi^2$ is 20 for the 12 data points with $s > 120h^{-1}$Mpc and 17 for the 20 points with $s < 120h^{-1}$Mpc, but for $z>0.5$, the $\chi^2$/dof is slightly smaller for $s > 120h^{-1}$Mpc (10/12) than for $s < 120h^{-1}$Mpc (22/20). While all of the data points are greater than the mean of the mocks at large scales, the large degree of covariance between the measurements makes this fact unremarkable. In Fourier space, [@BeutlerDR12RSD; @GriebDR12RSD] find no apparent excess for $k > 0.01h$Mpc$^{-1}$.
Robustness of BAO Measurements to Observational Treatment {#sec:BAOrob}
=========================================================
Case $\langle\alpha_{||}\rangle$ $S_{||}$ $\langle\sigma_{||}\rangle$ $\langle\alpha_{\perp}\rangle$ $S_{\perp}$ $\langle\sigma_{\perp}\rangle$ $\langle\alpha\rangle$ $S_{\alpha}$ $\langle\epsilon\rangle$ $S_{\epsilon}$
------------------------------ ----------------------------- ---------- ----------------------------- -------------------------------- ------------- -------------------------------- ------------------------ -------------- -------------------------- ----------------
600 mocks used:
\(iii) Sub Star, weighted 1.0011 0.0534 0.0567 1.0045 0.0253 0.0266 1.0029 0.0181 -0.0013 0.0220
\(iv) Sub 1.0016 0.0532 0.0564 1.0043 0.0247 0.0266 1.0029 0.0180 -0.0010 0.0217
200 mocks used:
\(i) Fid. 1.0011 0.0510 0.0554 1.0053 0.0241 0.0259 1.0034 0.0165 -0.0015 0.0213
\(ii) Sub Star, not weighted 1.0009 0.0520 0.0550 1.0055 0.0250 0.0257 1.0035 0.0171 -0.0016 0.0217
\[tab:baoresultsmock\]
In this section, we measure the BAO scale for each of the BOSS target samples, and test the robustness of the measurements to our treatment of the selection function. We first test the effect of the stellar density weights by simulating the stellar density systematic in mock samples and then comparing the BAO results to those without any simulation of the stellar density systematic. We then test the BOSS BAO measurements by determining their dependency on the application of the various weights and examining the results we obtain for each Galactic hemisphere.
Tests on mocks {#sec:mocksys}
--------------
We test for the systematic impact the stellar density relationship has on the measured BAO position by simulating the effect in mock CMASS samples, and thus determine an observational systematic uncertainty on BOSS BAO measurements. We take the stellar density field observed by SDSS and assume the distribution of stars is the same for each of the mocks. In order to simulate the systematic effect of stellar density observed in the BOSS data, we must also assign $i_{\rm fib2}$ magnitudes to each mock galaxy. We accomplish this by taking the observed distribution of $i_{\rm fib2}$ magnitude as a function of redshift and sampling from it for each mock galaxy redshift; i.e., we estimate $P(i_{\rm fib2}|z)$ from the BOSS data and use it to assign the $i_{\rm fib2}$ values to each mock galaxy. This allows us to analyze the statistics of the distributions of BAO scale measurements obtained from the following four cases, which include different levels of systematic contamination and correction:
1. Fiducial mocks; BAO fits are presented for 200 of these, in order to match the number used in case (ii).
2. Mocks that have been randomly sub-sampled in a manner matching the observed stellar density systematic[^11]; the clustering of these has the spurious large-scale power similar to the unweighted data sample; BAO fits have been performed for 200 of these.
3. Mocks that first have the sub-sampling procedure applied in case (ii) and then have stellar density weights calculated and used for their clustering; the stellar density systematic is thus removed, but the weights are calculated on a per-mock basis; BAO fits have been performed for 600 of these.
4. Mocks that have been uniformly sub-sampled by 4 per cent to have the same number density as those sub-sampled according to the stellar density systematic; these are a fairer comparison to cases (ii) and (iii) than the fiducial mocks; BAO fits have been performed for 600 of these.
Cases (iii) and (iv) are the most realistic and will be used to determine any additional scatter from the weighting process. We therefore concentrate on performing fits for these tests, while for the other tests we simply perform a number sufficient to detect any significant issues.
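The assignment of $i_{\rm fib2}$ magnitudes by sampling from the observed $P(i_{\rm fib2}|z)$ can be sketched as below; the quantile-based redshift binning and the function name are our assumptions, and the actual BOSS pipeline implementation may differ:

```python
import numpy as np

def assign_ifib2(mock_z, data_z, data_ifib2, n_zbins=20, seed=None):
    """Assign i_fib2 magnitudes to mock galaxies by sampling from the
    observed distribution P(i_fib2 | z), estimated in narrow redshift
    bins with roughly equal numbers of data galaxies."""
    rng = np.random.default_rng(seed)
    # Bin edges chosen as quantiles of the data redshifts.
    edges = np.quantile(data_z, np.linspace(0.0, 1.0, n_zbins + 1))
    data_bin = np.clip(np.searchsorted(edges, data_z, side="right") - 1,
                       0, n_zbins - 1)
    mock_bin = np.clip(np.searchsorted(edges, mock_z, side="right") - 1,
                       0, n_zbins - 1)
    out = np.empty(len(mock_z), dtype=float)
    for b in range(n_zbins):
        pool = data_ifib2[data_bin == b]   # observed i_fib2 at this z
        sel = mock_bin == b
        out[sel] = rng.choice(pool, size=sel.sum(), replace=True)
    return out
```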
![The change in the mean measured monopole (top) and quadrupole (bottom) of the correlation function of mock samples, when comparing the fiducial case (without any simulation of observational systematics) to the case where the stellar density systematic has been simulated (‘darkorchid’ diamonds) and when comparing the fiducial case to the case where the stellar density systematic has been simulated and corrected for (azure squares). The grey shaded region displays the 1$\sigma$ uncertainty obtained from mock samples.[]{data-label="fig:xi0mocksys"}](xi02sysDCMASSDR12mock.pdf){width="84mm"}
We use the QPM CMASS NGC mocks and for all tests we use the $\xi_{0,2}$ covariance matrix determined from 1000 realizations of the fiducial case (i). For these, we have assumed the same cosmology as used to construct the QPM mocks (given in Table \[tab:baoexp\]) both when measuring $\xi_{0,2}$ and in the BAO template. These choices match those of [@CuestaDR12]. Thus, the expected $\alpha$ and $\epsilon$ values are 1 and 0.
The results of anisotropic BAO fits are shown in Table \[tab:baoresultsmock\] (‘S’ denotes a standard deviation and $\sigma$ an uncertainty recovered from a likelihood). Compared to the cases with no stellar density systematic, introducing the stellar density systematic shifts the mean recovered value of $\alpha_x$ by at most 0.0005, equivalent to 0.01$\sigma$. This suggests that any potential systematic bias due to stellar density is negligibly small; i.e., if we applied no correction for stellar density systematics, we would still recover un-biased BAO measurements. All of the mean $\sigma$ are very similar (for cases using the same set of mocks), as one might expect given that the same covariance matrix is used in all cases.
Sample Weights $\alpha$ $\chi^2$/dof $\alpha_{||}$ $\alpha_{\perp}$ $\chi^2$/dof
---------------------- --------- ------------------- -------------- ----------------- ------------------ --------------
Pre-reconstruction:
CMASS none $0.985\pm0.013$ 26/15 0.965$\pm$0.035 0.996$\pm$0.020 42/30
CMASS cp $0.986\pm0.012$ 23/15 0.966$\pm$0.034 0.996$\pm$0.020 41/30
CMASS zf $0.985\pm0.012$ 30/15 0.972$\pm$0.034 0.992$\pm$0.020 47/30
CMASS st $0.987\pm0.012$ 24/15 0.971$\pm$0.034 0.996$\pm$0.020 41/30
CMASS all $0.987\pm0.012$ 24/15 0.970$\pm$0.034 0.997$\pm$0.021 40/30
CMASS NGC all $0.985\pm0.013$ 19/15 0.965$\pm$0.037 0.994$\pm$0.026 41/30
CMASS SGC all $1.020\pm0.028$ 27/15 1.020$\pm$0.095 1.014$\pm$0.057 38/30
LOWZ none $0.992\pm0.026$ 18/15 x x x
LOWZ zf $0.993\pm0.026$ 18/15 x x x
LOWZ all $0.993\pm0.025$ 17/15 x x x
LOWZE3 NGC all $1.007\pm0.025$ 38/15 x x x
LOWZE2 NGC all $1.010\pm0.029$ 14/15 x x x
LOWZ NGC all $1.009\pm0.029$ 18/15 x x x
LOWZ SGC all $0.949\pm0.042$ 10/15 x x x
Post-reconstruction:
Sample Weights $\alpha$ $\chi^2$/dof $\alpha_{||}$ $\alpha_{\perp}$ $\chi^2$/dof
CMASS none $0.9843\pm0.0093$ 16/15 0.962$\pm$0.023 0.997$\pm$0.014 30/30
CMASS cp $0.9850\pm0.0083$ 27/15 0.961$\pm$0.022 0.996$\pm$0.013 43/30
CMASS zf $0.9856\pm0.0087$ 33/15 0.962$\pm$0.022 0.998$\pm$0.013 63/30
CMASS st $0.9859\pm0.0086$ 18/15 0.957$\pm$0.021 1.001$\pm$0.013 37/30
CMASS all $0.9832\pm0.0085$ 19/15 0.952$\pm$0.021 1.000$\pm$0.013 46/30
CMASS C16 $0.9849\pm0.0092$ 14/15 0.949$\pm$0.024 1.003$\pm$0.014 30/30
CMASS NGC all $0.975\pm0.010$ 15/15 0.942$\pm$0.022 0.999$\pm$0.016 39/30
CMASS SGC all $1.016\pm0.020$ 15/15 1.005$\pm$0.044 1.013$\pm$0.029 50/30
\[tab:baoresultspr\]
In order to assess whether the weighting process introduces any additional scatter, we compare the standard deviations recovered from Cases (iii) and (iv). For both $\alpha_{||}$ and $\alpha_{\perp}$, the standard deviations increase very slightly when the mocks go through the weighting process. We determine the systematic scatter as $S^2_{\rm sys} = S^2_{\rm iii}-S^2_{\rm iv}$ and estimate an uncertainty via a jackknife-like method: we omit blocks of 20 mocks and recalculate $S_{\rm sys}$. The uncertainty on $S_{\rm sys}$ is then $\sigma^2_{S} = \frac{29}{30}\sum_i (S_{{\rm sys},i}-S_{\rm sys,full})^2$, with $i$ denoting the sample with the $i$th block of 20 mock results removed. We find $S_{\rm sys} = 0.005\pm0.005$ for $\alpha_{||}$ and $S_{\rm sys} = 0.005\pm0.002$ for $\alpha_{\perp}$. The increase in the variance is thus significant for $\alpha_{\perp}$.
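The delete-block jackknife described above can be sketched as follows; the function name is ours, and the `max(..., 0)` guard (against sampling noise making the variance difference negative) is our addition:

```python
import numpy as np

def systematic_scatter(alpha_iii, alpha_iv, block=20):
    """S_sys = sqrt(S_iii^2 - S_iv^2) and its jackknife uncertainty,
    omitting consecutive blocks of `block` mock results in turn."""
    def s_sys(a, b):
        return np.sqrt(max(np.var(a, ddof=1) - np.var(b, ddof=1), 0.0))
    full = s_sys(alpha_iii, alpha_iv)
    n_blocks = len(alpha_iii) // block
    reps = []
    for i in range(n_blocks):
        keep = np.ones(len(alpha_iii), dtype=bool)
        keep[i * block:(i + 1) * block] = False
        reps.append(s_sys(alpha_iii[keep], alpha_iv[keep]))
    # delete-block jackknife: sigma^2 = (n-1)/n * sum_i (S_i - S_full)^2
    sigma = np.sqrt((n_blocks - 1) / n_blocks
                    * np.sum((np.asarray(reps) - full) ** 2))
    return full, sigma
```

With 600 mocks and blocks of 20, `n_blocks` is 30 and the prefactor reproduces the $\frac{29}{30}$ in the text.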
The variance of the recovered BAO positions is slightly larger when the mocks have the stellar density systematic applied and corrected for, compared to the case where a uniform sub-sampling has been applied. This is not surprising, as the correction procedure has essentially removed the clustering modes that align with stellar density (c.f. @Elsner15). The application of the weights has a larger (relative) effect on the $\alpha_{\perp}$ measurements; this is consistent with the fact that the weighting procedure should largely remove transverse modes that correlate with the distribution of stars in the Galaxy. The results of our mock tests suggest that uncertainties on $\alpha_{\perp}$ obtained using the CMASS data will be under-estimated by 2 per cent ($\sqrt{0.025^2+0.005^2}/0.025-1$) and those on $\alpha_{||}$ by half a per cent ($\sqrt{0.05^2+0.005^2}/0.05-1$). Based on the mode-removal argument, we expect these percentages to stay constant with signal-to-noise (e.g., for post-reconstruction results)[^12].
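The 2 and 0.5 per cent figures are the fractional error inflation obtained by adding the systematic scatter in quadrature to the statistical uncertainty, as the expressions in parentheses show. As a check of the arithmetic (the function name is ours):

```python
import math

def error_inflation(sigma_stat, s_sys):
    """Fractional increase of the error bar when a systematic scatter
    is added in quadrature to the statistical uncertainty."""
    return math.sqrt(sigma_stat ** 2 + s_sys ** 2) / sigma_stat - 1.0

# error_inflation(0.025, 0.005) ~ 0.020  (alpha_perp: ~2 per cent)
# error_inflation(0.05, 0.005)  ~ 0.005  (alpha_par:  ~0.5 per cent)
```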
As demonstrated in the Appendix of [@Ross12], the correction procedure we apply for observational systematics is expected to produce slightly biased clustering measurements.[^13] We test this by comparing the mean $\xi_{0,2}$ for each of the mock cases and we plot the results in Fig. \[fig:xi0mocksys\]. We find the correction procedure produces a nearly indistinguishable change in the mean $\xi_{0,2}$ when compared to the fiducial case (squares); clearly any bias is negligible in comparison to the statistical uncertainty (denoted by the grey shaded regions). In contrast, the mean effect of simulating the stellar density systematic is of clear significance to $\xi_0$ but exhibits a difference that is well within the statistical uncertainty for $\xi_2$ (see the diamonds in Fig. \[fig:xi0mocksys\]). This is similar to the difference between the clustering observed in the CMASS data with and without corrective weights for the stellar density systematic (the triangles in the upper two panels of Fig. \[fig:xi0sys\]).
The conclusion of this subsection is that, as best we can measure, observational systematics impart no bias on BOSS BAO measurements. However, we do find that the known observational systematics slightly reduce the statistical power of the measurements, implying our uncertainties on $\alpha_{\perp}$ are under-estimated by 2 per cent and those on $\alpha_{||}$ by 0.5 per cent. We apply these additional errors to our final results as systematic uncertainties.
Robustness of BOSS data
-----------------------
$z$ bin $\Delta\langle\alpha_{||}\rangle$ $S_{||}$ $\langle\sigma_{||}\rangle$ $\Delta\langle\alpha_{\perp}\rangle$ $S_{\perp}$ $\langle\sigma_{\perp}\rangle$ $\Delta\langle\alpha\rangle$ $S_{\alpha}$ $\Delta\langle\epsilon\rangle$ $S_{\epsilon}$ $\langle \chi^2\rangle$/dof
---------------------------- ----------------------------------- ---------- ----------------------------- -------------------------------------- ------------- -------------------------------- ------------------------------ -------------- -------------------------------- ---------------- -----------------------------
pre-reconstruction:
[**QPM**]{}
$0.2 < z < 0.5$ 0.003 0.048 0.049 0.005 0.025 0.027 0.004 0.018 -0.001 0.024 29.4/30
$0.4 < z < 0.6$ 0.001 0.045 0.045 0.007 0.023 0.025 0.005 0.015 -0.002 0.021 29.3/30
$0.5 < z < 0.75$ -0.002 0.042 0.043 0.007 0.023 0.025 0.004 0.015 -0.003 0.020 29.3/30
[**MD-P**]{}
$0.2 < z < 0.5$ 0.001 0.057 0.057 0.008 0.031 0.032 0.005 0.021 -0.002 0.025 29.4/30
$0.4 < z < 0.6$ 0.004 0.056 0.053 0.008 0.028 0.028 0.005 0.018 -0.001 0.025 29.3/30
$0.5 < z < 0.75$ -0.001 0.052 0.050 0.010 0.029 0.028 0.006 0.018 -0.004 0.024 29.3/30
post-reconstruction:
[**QPM**]{}
$0.2 < z < 0.5$ 0.002 0.030 0.031 0.003 0.017 0.017 0.0024 0.0113 -0.0003 0.0138 29.4/30
$0.4 < z < 0.6$ 0.003 0.027 0.029 0.001 0.015 0.016 0.0016 0.0105 0.0005 0.0125 29.7/30
$0.5 < z < 0.75$ 0.002 0.029 0.031 0.002 0.016 0.017 0.0013 0.0112 -0.0001 0.0130 29.7/30
[**MD-P**]{}
$0.2 < z < 0.5$ 0.002 0.034 0.035 -0.001 0.019 0.020 0.0002 0.0128 0.0009 0.0152 29.3/30
$0.4 < z < 0.6$ 0.004 0.031 0.032 0.001 0.017 0.017 0.0014 0.0114 0.0011 0.0140 29.3/30
$0.5 < z < 0.75$ 0.000 0.031 0.033 0.002 0.018 0.019 0.0015 0.0118 -0.0008 0.0145 29.4/30
\[tab:baoresultsmockcomb\]
The results of the previous section (\[sec:mocksys\]) imply that the stellar density systematic, the dominant systematic (in terms of greatest potential significance), has, at most, a minor effect on the resulting BAO measurements. In this section, we apply similar tests to the BOSS data, and expand them to consider all of the weights applied to BOSS galaxies that are meant to provide the correct selection function. We also compare the results from the NGC and SGC regions separately. All of the measurements in this section use the covariance matrix constructed from 1000 QPM mocks. The results are summarized in Table \[tab:baoresultspr\] and we discuss them below.
The pre-reconstruction CMASS results are shown in the top rows of Table \[tab:baoresultspr\]. We measure both isotropic and anisotropic BAO. Moving down by row, we add weights to the galaxy catalog (the $n(z)$ is re-created for each case). The results are stable; the biggest absolute difference is 0.007 in $\alpha_{||}$, between the case where no weights are applied and the case where close-pair and redshift-failure weights are applied. The biggest difference in terms of fraction of the uncertainty is 0.25$\sigma$ in $\alpha_{\perp}$, between the case where only the close-pair and redshift-failure weights have been applied and the case where all weights have been applied. Changes of this size are consistent with the scatter expected from statistical fluctuations. For example, the level of scatter we find when applying stellar density weights in the previous section is 0.2$\sigma$ between the weighted and un-weighted data; i.e., the statistical results are consistent with the level to which we expect the weights to alter the relative importance of each given survey mode and thus cause small differences in the recovered measurements. The isotropic NGC/SGC measurements differ by 1.1$\sigma$ and are therefore consistent at this level, given that they represent independent volumes. The combined result is slightly closer (by 0.004) to the NGC measurement than one would expect from Gaussian likelihoods.
For LOWZ, pre-reconstruction, we only measure the isotropic BAO scale, due to the relatively low signal-to-noise. Measurements of the isotropic BAO scale use only the monopole, $\xi_0$. The results are shown in the middle rows of Table \[tab:baoresultspr\]. As expected, the application of close-pair or redshift-failure weights has very little impact on the measurements (at most $0.04\sigma$). The difference between the NGC and SGC measurements is 1$\sigma$, but in the opposite direction to the difference found for CMASS. The combined LOWZ measurement is closer to the NGC measurement by 0.003 compared to what would be expected from Gaussian statistics. We find that the BAO measurements obtained from the LOWZE3 and LOWZE2 selections are very similar (within 0.1$\sigma$) to what we find for the nominal LOWZ sample. This agreement helps validate that the LOWZE3 and LOWZE2 samples are indeed faithful tracers of the BAO signal and that their unique areas should be added to the nominal LOWZ footprint in order to obtain the best BAO measurements using low-redshift BOSS data.
Finally, we investigate the robustness of the post-reconstruction results, shown in the bottom rows of Table \[tab:baoresultspr\]. We focus on the CMASS sample. The agreement remains quite good, but the differences are larger relative to the uncertainty than they were for the pre-reconstruction results. The biggest difference is 0.5$\sigma$ in $\alpha_{||}$, between the case where close-pair and redshift-failure weights are applied and the case where all weights are applied (with the change being shared equally between the addition of the stellar density weights and the seeing weights). A potential explanation is that there is more stochasticity in the reconstruction process; the weighted galaxy field is first used to determine the displacement field and then the weighted galaxy and random positions are displaced. This increases the chance of fluctuations in the resulting measurements. Given that the largest fluctuation we find is 0.5$\sigma$ out of 30 possible comparisons, we find no cause for concern.
There is a 1.8$\sigma$ discrepancy between the post-reconstruction CMASS isotropic BAO measurement in the NGC and SGC. Such a discrepancy has been observed at similar significance in each BOSS data release. When decomposed, the discrepancy is largest in $\alpha_{||}$, where the difference is 1.3$\sigma$ (it is only $0.4\sigma$, and thus consistent, for $\alpha_{\perp}$). Despite the slight tension, the results recovered when combining the pair-counts of NGC and SGC samples match the expectation for Gaussian likelihoods one obtains when taking the weighted mean of the NGC and SGC results.
Combined Sample BAO Measurements {#sec:BAOres}
================================
The previous subsection demonstrates that the BAO measurements are consistent between the components of BOSS, splitting by targeting algorithm, after correcting for effects due to technical issues in BOSS observations. Here, we present BAO measurements determined using the combined sample data, both for the mock and data samples. We use both the QPM and MD-P mocks, and the covariance matrix determined using them, to analyze this sample. In addition to the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ redshift bins, we present results for a $0.4 < z < 0.6$ redshift bin, which we expect to be largely covariant with the two distinct redshift bins but to provide additional information when assessing the robustness of our results.
Results from mock samples
-------------------------
Table \[tab:baoresultsmockcomb\] displays the results of our BAO fits to both sets of mock correlation functions. For the mean values, we indicate the difference from the expected value, given the cosmology used for the mocks and our fiducial cosmology. These expected values are given in Table \[tab:baoexp\].
All of the results are biased relative to the uncertainty on the ensembles of the 1000 mock realizations (one should divide the $S$ and $\sigma$ numbers by $\sqrt{1000}$ to obtain the uncertainty on the average of the results of the 1000 mocks), but by a relatively small amount when compared to the uncertainty expected for one realization. For the pre-reconstruction results, some bias is expected due to mode-coupling from non-linear structure formation (cf. @PadWhite09). The bias we find is greatest in $\alpha_{\perp}$, where it is 0.006 for QPM and 0.009 for MD-P (averaged across the three redshift bins). These correspond to 0.25$\sigma$ and 0.31$\sigma$. For $\alpha_{||}$, the bias is only 0.001 on average, making it $\ll 0.1\sigma$. The biases are of the order predicted by [@PadWhite09]. Studies (e.g., @BeutlerDR12RSD [@SanchezDR12RSD]) that use the pre-reconstruction data to measure $f\sigma_8$, $\alpha_{||}$, and $\alpha_{\perp}$ employ modeling that takes the predicted shifts into account and are expected to obtain somewhat more accurate results for the pre-reconstruction data. We use the pre-reconstruction results primarily as a basis for comparison to the post-reconstruction results.
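The significances quoted above follow from the fact that the uncertainty on the mean of $N$ independent realizations is $\sigma/\sqrt{N}$. A minimal sketch of this arithmetic (not code from the analysis pipeline; the single-realization $\sigma$ below is inferred from the quoted 0.25$\sigma$ figure and is approximate):

```python
import math

def sigma_of_mean(sigma_single, n_mocks):
    """Uncertainty on the mean of n_mocks independent realizations."""
    return sigma_single / math.sqrt(n_mocks)

def bias_significance(bias, sigma_single, n_mocks=1000):
    """Significance of a mean bias relative to the ensemble-mean uncertainty."""
    return bias / sigma_of_mean(sigma_single, n_mocks)

# A 0.006 mean bias in alpha_perp with a single-realization sigma of ~0.024
# (0.25 sigma for one mock) is highly significant for the ensemble mean,
# while remaining small compared to the uncertainty of a single realization.
print(bias_significance(0.006, 0.024))  # ~7.9
```

This is why a bias that is negligible for any one realization can still be detected cleanly from the ensemble of 1000 mocks.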
Post-reconstruction, as expected, the bias in $\alpha_{\perp}$ is decreased. Considering the mean results across the redshift bins, for QPM, it is 0.002 (i.e., $\sigma/8$) and for MD-P it is 0.001 (i.e., 0.06$\sigma$). For $\alpha_{||}$, it is 0.002 (i.e., $\sim 0.07 \sigma$) for both sets of mocks. In terms of $\alpha/\epsilon$, the biases are 0.16$\sigma$ for the QPM $\alpha$ and 0.08$\sigma$ for the MD-P $\alpha$, while for $\epsilon$ they are both $\ll 0.1\sigma$. The biases vary with redshift bin to a level that is significantly larger than the uncertainty on the ensemble average; for example, in MD-P the difference in $\alpha$ between the low and high redshift bins is 0.0014, while the expected 1$\sigma$ deviation is 0.0006; similarly the difference for $\epsilon$ is 0.0017 compared to an expected 1$\sigma$ deviation of 0.0007. For QPM, the differences are smaller. In terms of the expected deviations, they are $\sim 2\sigma$ for $\alpha$ and less than 1$\sigma$ for $\epsilon$ (though it is 1.5$\sigma$ comparing the $0.2 < z < 0.5$ and $0.4 < z < 0.6$ bins, which should be correlated). The biases thus appear specific to redshift bin, implying they are either related to the creation of the mocks and any redshift evolution they include, or to choices in the reconstruction algorithms related to the expected evolution of the density field. Overall, any bias in our measured BAO parameters is less than 0.16$\sigma$ (using the expected uncertainty for a single realization) and should not impact our conclusions. See [@VargasDR12BAO] for further study of related issues.
In general, the uncertainties recovered from the MD-P mocks are larger than those of the QPM mocks. The differences in the uncertainties are $\sim$ 10 per cent in $\alpha_{||}$ and slightly larger ($<$ 13 per cent) in $\alpha_{\perp}$. The differences in the uncertainties are thus at a similar level to the biases we find in the recovered BAO parameters. These biases are absorbed by the theoretical systematic uncertainty budget derived in [@VargasDR12BAO] and applied in @Acacia.
Results from data
-----------------
![The measured post-reconstruction $\xi_0$ and $\xi_2$ and corresponding best-fit BAO models for BOSS galaxies. These best-fit models encode the BAO distance measurements determined in this work and are displayed for the range of scales that have been fit ($50< s < 150h^{-1}$Mpc).[]{data-label="fig:xiBAOfit"}](xibin13recBAOfit02.pdf){width="84mm"}
$z$ bin $\alpha_{||}$ $\alpha_{\perp}$ $\chi^2$/dof
---------------------------- ----------------- ------------------ --------------
pre-reconstruction:
[**QPM**]{}
$0.2 < z < 0.5$ 1.068$\pm$0.035 0.982$\pm$0.020 45/30
$0.4 < z < 0.6$ 1.037$\pm$0.038 1.014$\pm$0.021 46/30
$0.5 < z < 0.75$ 0.963$\pm$0.035 0.999$\pm$0.024 30/30
[**MD-P**]{}
$0.2 < z < 0.5$ 1.051$\pm$0.036 0.983$\pm$0.022 37/30
$0.4 < z < 0.6$ 1.024$\pm$0.042 1.008$\pm$0.022 42/30
$0.5 < z < 0.75$ 0.953$\pm$0.034 1.001$\pm$0.024 28/30
post-reconstruction:
[**QPM**]{}
$0.2 < z < 0.5$ 1.024$\pm$0.024 0.986$\pm$0.013 48/30
$0.4 < z < 0.6$ 0.989$\pm$0.020 0.993$\pm$0.012 27/30
$0.5 < z < 0.75$ 0.962$\pm$0.024 0.991$\pm$0.015 33/30
[**MD-P**]{}
$0.2 < z < 0.5$ 1.025$\pm$0.027 0.988$\pm$0.015 39/30
$0.4 < z < 0.6$ 0.986$\pm$0.024 0.994$\pm$0.014 23/30
$0.5 < z < 0.75$ 0.962$\pm$0.023 0.991$\pm$0.015 32/30
\[tab:baoresultsdatacomb\]
: BAO fits on the BOSS combined sample data, using both the Multidark PATCHY (MD-P) and QPM covariance matrices.
Results for BAO fits on BOSS data, using both the QPM and the MD-P covariance matrices, are displayed in Table \[tab:baoresultsdatacomb\]. The results are similar using the two covariance matrices, but there are notable differences. In general, the uncertainties are smaller when the QPM covariance matrices are used, matching the results on the mocks. Correspondingly, the $\chi^2$ values are consistently higher when the QPM covariance matrix is used (in five of the six cases). None of the six QPM cases recovers a $\chi^2$/dof less than 1, while this is the case for two of the MD-P cases. Considering the total $\chi^2$ for the two independent redshift bins, the $\chi^2$/dof for QPM is 75/60 pre-reconstruction and 81/60 post-reconstruction, compared to 65/60 and 71/60 for MD-P. This suggests that the MD-P covariance matrix does the better job of characterizing the noise in the BOSS combined sample $\xi_{0,2}$ measurements.
Pre-reconstruction, the $\alpha_{||}$ results are consistently greater for the QPM covariance matrix compared to the MD-P covariance matrix. The difference varies between 0.017 and 0.010 and is a 0.5$\sigma$ shift in the most extreme case (the $0.2 < z < 0.5$ redshift bin); given the same data is used and only the covariance matrix is altered this is a fairly large change. The differences are much smaller for $\alpha_{\perp}$, where it is at most 0.006 (0.3$\sigma$) in the $0.4 < z < 0.6$ redshift bin.
Post-reconstruction, the BAO measurements are robust to the choice of covariance matrix. The biggest difference is 0.003 (0.15$\sigma$) in $\alpha_{||}$ for the data in the $0.4 < z < 0.6$ redshift bin; the difference in the uncertainty between the results in this bin is the same. The level of agreement is consistent with the results found from the mock realizations and suggests that the choice of covariance matrix is not a major systematic uncertainty in our analysis. Given the slightly larger uncertainties for the data using the MD-P covariance matrix, we believe they represent the more conservative choice and are what we use for our final results. We use the MD-P results in all comparisons that follow unless otherwise noted.
Fig. \[fig:xiBAOfit\] displays the measured post-reconstruction $\xi_{0,2}$ and the associated best-fit BAO model, using the MD-P covariance matrix. At each redshift, one can observe the strong BAO feature in the monopole, which has been enhanced by the reconstruction process, compared to previous plots. For the quadrupole, reconstruction removes most of the large-scale RSD effects and the overall amplitude is thus decreased. BAO features appear in the quadrupole to the right and left of the peak in the monopole. Such BAO features appear in the quadrupole when $\alpha_{||} \neq \alpha_{\perp}$ (and thus do not present themselves in the mocks as the two $\alpha$ parameters are expected to be nearly equal in our mock analysis). The feature appears to the right in the $0.5 < z < 0.75$ redshift bin, which yields a measurement of $\alpha_{||}$ that is lower than $\alpha_{\perp}$; the reverse is true for the $0.2 < z < 0.5$ bin. See [@Acacia] for further exploration and visualization of these features in the same data.
sample bin center shift $\alpha_{||}$ $\alpha_{\perp}$ $r$ $\chi^2$/dof $\alpha$ $\chi^2$/dof
--------------------------- ----------------------- --------------------------- --------------------------- ------- -------------- ----------------- --------------
[**$0.2 < z < 0.5$:**]{}
post-recon [**combined +sys**]{} 1.022$\pm$0.027$\pm$0.003 0.988$\pm$0.015$\pm$0.003 - -
combined 1.022$\pm$0.027 0.988$\pm$0.015 -0.39 42/30
0 $h^{-1}$Mpc 1.025$\pm$0.027 0.988$\pm$0.015 -0.39 39/30 0.998$\pm$0.010 25/15
1 $h^{-1}$Mpc 1.017$\pm$0.027 0.992$\pm$0.015 -0.39 35/30 1.000$\pm$0.010 20/15
2 $h^{-1}$Mpc 1.022$\pm$0.028 0.990$\pm$0.015 -0.39 40/30 0.999$\pm$0.010 19/15
3 $h^{-1}$Mpc 1.024$\pm$0.028 0.985$\pm$0.015 -0.40 51/30 1.000$\pm$0.010 28/15
4 $h^{-1}$Mpc 1.023$\pm$0.026 0.986$\pm$0.015 -0.40 44/30 1.000$\pm$0.010 26/15
pre-recon 0 $h^{-1}$Mpc 1.051$\pm$0.037 0.983$\pm$0.022 -0.37 37/30 1.004$\pm$0.015 18/15
[**$0.4 < z < 0.6$:**]{}
post-recon [**combined +sys**]{} 0.984$\pm$0.023$\pm$0.002 0.994$\pm$0.014$\pm$0.003 - -
combined 0.984$\pm$0.023 0.994$\pm$0.014 -0.39 30/30
0 $h^{-1}$Mpc 0.986$\pm$0.024 0.994$\pm$0.014 -0.39 23/30 0.991$\pm$0.009 16/15
1 $h^{-1}$Mpc 0.981$\pm$0.022 0.996$\pm$0.014 -0.39 22/30 0.992$\pm$0.009 14/15
2 $h^{-1}$Mpc 0.981$\pm$0.023 0.996$\pm$0.015 -0.39 37/30 0.993$\pm$0.009 19/15
3 $h^{-1}$Mpc 0.988$\pm$0.023 0.994$\pm$0.014 -0.39 38/30 0.993$\pm$0.009 24/15
4 $h^{-1}$Mpc 0.987$\pm$0.024 0.992$\pm$0.014 -0.40 29/30 0.992$\pm$0.009 18/15
pre-recon 0 $h^{-1}$Mpc 1.024$\pm$0.042 1.008$\pm$0.022 -0.49 42/30 1.012$\pm$0.015 22/15
[**$0.5 < z < 0.75$:**]{}
post-recon [**combined +sys**]{} 0.958$\pm$0.023$\pm$0.002 0.995$\pm$0.016$\pm$0.003 - -
combined 0.958$\pm$0.023 0.995$\pm$0.016 -0.41 32/30
0 $h^{-1}$Mpc 0.962$\pm$0.023 0.991$\pm$0.015 -0.42 32/30 0.981$\pm$0.010 14/15
1 $h^{-1}$Mpc 0.957$\pm$0.023 0.999$\pm$0.016 -0.42 26/30 0.982$\pm$0.010 13/15
2 $h^{-1}$Mpc 0.957$\pm$0.023 0.994$\pm$0.016 -0.41 34/30 0.982$\pm$0.010 18/15
3 $h^{-1}$Mpc 0.954$\pm$0.024 0.996$\pm$0.015 -0.41 40/30 0.983$\pm$0.010 18/15
4 $h^{-1}$Mpc 0.957$\pm$0.024 0.994$\pm$0.015 -0.41 29/30 0.982$\pm$0.010 14/15
pre-recon 0 $h^{-1}$Mpc 0.953$\pm$0.035 1.001$\pm$0.024 -0.49 28/30 0.984$\pm$0.015 14/15
\[tab:bincenter\]
: BAO fits on the BOSS combined sample data for each choice of bin center, together with the result of combining the likelihoods across bin centers ('combined') and the final measurement including the observational systematic uncertainty ('combined +sys').
![The uncertainty in $\alpha_{||}$ compared to the uncertainty in $\alpha_{\perp}$ for each MultiDark-PATCHY mock realization (open ‘cadetblue’ circles) and the DR12 data (large goldenrod star). We have combined the data in the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ redshift bins, assuming Gaussian likelihoods. The DR12 uncertainties are on the low side, but are within the locus of points representing the mock realizations. []{data-label="fig:errcompmocks"}](DR12BAOsigscattcomb.pdf){width="84mm"}
The uncertainties we obtain are significantly smaller than the mean uncertainties recovered from the mock realizations, by $\sim$ 25 per cent in each redshift bin. [This implies more pronounced BAO features in the data than are present in the typical mock.]{} In order to determine how unusual this is, we combine the results from the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ redshift bins, as they are independent and the expected $\alpha$ values are nearly identical. Fig. \[fig:errcompmocks\] displays the uncertainty in $\alpha_{\perp}$ ($\sigma_{\perp}$) vs. the uncertainty in $\alpha_{||}$ ($\sigma_{||}$) recovered for each mock realization when combining the results of the two redshift bins (blue circles) and the DR12 data (orange star). One can see that the DR12 result is within the locus of points, but at the lower edge. We can quantify the results further by comparing the area of the 1$\sigma$ confidence region in the data to the ensemble of mocks. We find 45 mocks ($\sim$5 per cent), when once more combining the results of the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ redshift bins, have a smaller area contained in their 1$\sigma$ confidence region than we find for the data. Thus, we determine that we have been somewhat lucky in the region of the Universe we have observed with BOSS, but not grossly so. In this sense, these results are similar to those obtained with the previous data set [@alph]. To some degree, the fact that we find better results than the majority of the mock realizations is due to the fact that the grid-scales involved in the creation of the mocks effectively increase the damping of the BAO signal. This is discussed further in [@BeutlerDR12BAO].
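The combination of the two independent redshift bins assumes Gaussian likelihoods, for which the combined measurement is the inverse-variance weighted mean. A minimal sketch of this standard procedure (the input numbers are illustrative only, not our measured values):

```python
def combine_gaussian(values, sigmas):
    """Inverse-variance weighted mean of independent Gaussian measurements.

    Returns the combined value and its (smaller) combined uncertainty.
    """
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = wsum ** -0.5
    return mean, sigma

# Two bins with equal uncertainties: the combined sigma shrinks by sqrt(2).
mean, sigma = combine_gaussian([1.02, 0.96], [0.02, 0.02])
print(mean, sigma)  # 0.99, ~0.0141
```

The same weighting applies separately to $\alpha_{||}$ and $\alpha_{\perp}$ when the bins are treated as independent.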
In order to produce our final measurements, we combine results across five choices of bin center, each separated by 1$h^{-1}$Mpc. This is similar to what was done in [@alph]. However, given our fiducial bin size is now 5$h^{-1}$Mpc (compared to 8$h^{-1}$Mpc), the variance between the results for each bin center is smaller, and to obtain the combined results we simply average the likelihood surfaces for each bin center (rather than attempting to determine the optimal combination with a slightly improved uncertainty, as was done for the isotropic results in @alph).
The results for each bin center choice are presented in Table \[tab:bincenter\]. The results from averaging each likelihood are labeled ‘combined’. The difference between the combined results and the fiducial bin center choice (0 $h^{-1}$Mpc) is at most 0.004 in $\alpha_{\perp}$ (0.25$\sigma$) for the $0.5 < z < 0.75$ redshift bin.
We add an observational systematic uncertainty to the combined result to obtain our final results, quoted as [**'combined +sys'**]{} in Table \[tab:bincenter\]. Our tests on the mock samples do not suggest any systematic bias is imparted into the measurements due to observational systematic effects. However, we do find that the procedure we apply to correct for a systematic dependence with stellar density removes a small amount of the BAO information from the survey volume. The mocks we used to determine the covariance used for our BAO results do not include this small reduction in information. Thus, to account for this we add to the results a systematic uncertainty. In Section \[sec:mocksys\], the weighting process was found to impart a 2 per cent dilation into the standard deviation on $\alpha_{\perp}$ and a 0.5 per cent dilation on $\alpha_{||}$. We decompose these dilations into individual systematic uncertainties, so that they can be combined with any other systematic uncertainties. For the given dilations, these are $0.1\sigma_{\rm stat}$ for $\alpha_{||}$ and $0.2\sigma_{\rm stat}$ for $\alpha_{\perp}$ (e.g., solving $1.02^2\sigma_{\rm stat}^2 = \sigma_{\rm stat}^2 + \sigma_{\rm sys}^2$). This systematic uncertainty is added in a similar manner to all of the BAO distance measurements that are used to obtain cosmological constraints in [@Acacia]. We emphasize that these systematic uncertainties are purely observational; [@Acacia] presents a full accounting of potential systematic uncertainties affecting BOSS BAO measurements, incorporating theoretical systematic uncertainties (e.g., those relating to the methodology used for BAO fits and to construct the covariance matrix) that are estimated in [@VargasDR12BAO].
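The decomposition above amounts to solving $(1+d)^2\,\sigma_{\rm stat}^2 = \sigma_{\rm stat}^2 + \sigma_{\rm sys}^2$ for $\sigma_{\rm sys}$, where $d$ is the fractional dilation of the standard deviation. A minimal sketch of this arithmetic (not code from the analysis pipeline):

```python
import math

def sys_from_dilation(dilation):
    """Systematic uncertainty, in units of sigma_stat, equivalent to a
    fractional dilation d of the standard deviation:
    (1 + d)^2 sigma_stat^2 = sigma_stat^2 + sigma_sys^2
    =>  sigma_sys / sigma_stat = sqrt((1 + d)^2 - 1)."""
    return math.sqrt((1.0 + dilation)**2 - 1.0)

print(sys_from_dilation(0.02))   # ~0.20 sigma_stat (alpha_perp)
print(sys_from_dilation(0.005))  # ~0.10 sigma_stat (alpha_parallel)
```

Note that a small dilation $d$ maps to a comparatively large $\sigma_{\rm sys} \approx \sqrt{2d}\,\sigma_{\rm stat}$, which is why a 2 per cent dilation yields a $0.2\sigma_{\rm stat}$ systematic term.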
Our final measurements determine the radial distance scale to 2.7 per cent precision or better, and the transverse distance scale to 1.6 per cent precision or better, in each redshift bin. If we consider the two independent redshift bins, we can add the inverse variances on each $\alpha$ parameter to determine an effective combined precision. This yields 1.8 and 1.1 per cent for the radial and transverse distance scales, respectively. These measurements are further improved in [@Acacia], where results from the middle redshift bin, power spectrum BAO, and full-shape measurements are optimally combined.
Additional robustness checks are presented in Appendix \[app:rob\], where we find no significant concerns.
Discussion {#sec:disc}
==========
![The allowed 1 and 2$\sigma$ regions (black ellipses) in the Hubble parameter, $H$, and the angular diameter distance, $D_A$, determined from our post-reconstruction anisotropic BAO scale measurements using BOSS galaxies with $0.2 < z < 0.5$ (top panel) and with $0.5 < z < 0.75$ (bottom panel). The colored points represent the 2$\sigma$ allowed region when assuming a flat $\Lambda$CDM cosmology and the Planck 2015 results, with different colors representing the value of $H$ at $z=0$ (as indicated by the color bar on the right). []{data-label="fig:DAH"}](DA-H-BAODR12_zbin1.pdf "fig:"){width="84mm"} ![The allowed 1 and 2$\sigma$ regions (black ellipses) in the Hubble parameter, $H$, and the angular diameter distance, $D_A$, determined from our post-reconstruction anisotropic BAO scale measurements using BOSS galaxies with $0.2 < z < 0.5$ (top panel) and with $0.5 < z < 0.75$ (bottom panel). The colored points represent the 2$\sigma$ allowed region when assuming a flat $\Lambda$CDM cosmology and the Planck 2015 results, with different colors representing the value of $H$ at $z=0$ (as indicated by the color bar on the right). []{data-label="fig:DAH"}](DA-H-BAODR12_zbin3.pdf "fig:"){width="84mm"}
Comparison to other DR12 BAO measurements
-----------------------------------------
The final output of this work is the BAO measurements using the post-reconstruction, anisotropic correlation function measurements of the BOSS DR12 galaxy sample in redshift bins $0.2 < z < 0.5$, $0.4 < z < 0.6$, and $0.5 < z < 0.75$. Other studies have made similar measurements using DR12 data. [@CuestaDR12] obtained BAO measurements using the post-reconstruction anisotropic correlation function of the DR12 CMASS and LOWZ samples. In our robustness checks, we made the same measurements for the CMASS sample. Accounting for the difference in the fiducial cosmologies assumed by each analysis, the differences between the measurements of [@CuestaDR12] and ours are 0.018 for $\alpha_{||}$ and -0.011 for $\alpha_{\perp}$. However, once we adjust to use the same bin size (8$h^{-1}$Mpc) as [@CuestaDR12], the differences reduce to 0.011 for $\alpha_{||}$ and -0.004 for $\alpha_{\perp}$. Each of these represents a difference of less than 0.5$\sigma$ and is likely due to small methodological differences in the BAO fitting. We find smaller uncertainties on $\alpha_{||}$ (for both the data and the mocks) due to these differences.
Both [@BeutlerDR12BAO] and [@VargasDR12BAO] obtain BAO measurements for the same post-reconstruction data set and redshift bins as we use. [@BeutlerDR12BAO] is a Fourier space analysis. Analyzing the same set of mocks, we find that our results are correlated with a factor of 0.9 and that the differences we obtain on the BOSS data are consistent with this high level of correlation. Both analyses recover nearly identical uncertainties on the anisotropic BAO parameters, for both the data and the mock samples. [@VargasDR12BAO] uses the same configuration space data as presented in this study, but applies slightly different methodology to obtain their BAO measurements; they recover results that are consistent with ours. A more detailed comparison of these results is presented in [@Acacia], where consensus sets of BOSS DR12 BAO and BOSS DR12 BAO + RSD measurements, combined as described in [@SanchezDR12comb], are presented.
Comparison with $\Lambda$CDM
----------------------------
Our measurements of $\alpha_{||}$ and $\alpha_{\perp}$ can be translated into constraints on $D_A(z)(r_{\rm d}^{\rm fid}/r_{\rm d})$ and $H(z)(r_{\rm d}/r_{\rm d}^{\rm fid})$ and thereby test cosmological models. Here, we simply compare our measurements with the allowed parameter space in $\Lambda$CDM as determined by [@Planck2015][^14]. This is shown in Fig. \[fig:DAH\] for the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ redshift bins. Our low redshift result is fully consistent with the Planck $\Lambda$CDM prediction. Our high redshift result is in slight tension, as the 1$\sigma$ contours just barely overlap; this is mostly driven by the $H(z)$ measurement. This is similar to what was found in [@alph] for the DR11 CMASS data; the agreement is slightly better in [@BeutlerDR12BAO] and significantly better (to the level that there is no tension) when these two post-reconstruction results are optimally combined with pre-reconstruction full-shape results in [@Acacia]. Our results for the $0.4 < z < 0.6$ redshift slice (not plotted) are consistent with the Planck $\Lambda$CDM prediction, as one would predict based on the mean of the $0.2 < z < 0.5$ and $0.5 < z < 0.75$ results. The full cosmological context of our measurements, when combined with other BOSS DR12 results, is explored in detail in [@Acacia].
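The translation used above follows the standard BAO dilation-parameter definitions, $\alpha_\perp = [D_A(z)/r_{\rm d}]/[D_A^{\rm fid}(z)/r_{\rm d}^{\rm fid}]$ and $\alpha_{||} = [H^{\rm fid}(z)\,r_{\rm d}^{\rm fid}]/[H(z)\,r_{\rm d}]$. A minimal sketch of the conversion (the fiducial values below are placeholders for illustration, not the values adopted in this work):

```python
def distances_from_alphas(alpha_par, alpha_perp, DA_fid, H_fid):
    """Convert measured BAO dilation parameters into the combinations
    D_A(z) * (r_d^fid / r_d) and H(z) * (r_d / r_d^fid), given the
    fiducial angular diameter distance (Mpc) and Hubble parameter
    (km/s/Mpc) at the effective redshift of the bin."""
    DA = alpha_perp * DA_fid   # D_A(z) (r_d^fid / r_d)
    H = H_fid / alpha_par      # H(z) (r_d / r_d^fid)
    return DA, H

# Illustrative placeholder fiducials at some effective redshift:
DA, H = distances_from_alphas(0.958, 0.995, DA_fid=1850.0, H_fid=95.0)
print(DA, H)  # 1840.75, ~99.2
```

Note the inverse dependence on $\alpha_{||}$: a measured $\alpha_{||} < 1$ corresponds to a Hubble parameter larger than the fiducial one, which is what drives the slight $H(z)$ tension in the high redshift bin.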
Summary
=======
In this work, we have
- Described and motivated the construction of the selection function for BOSS galaxies;
- Shown how the treatment of the selection function affects the measured clustering;
- Shown that the individual BOSS target samples can be trivially combined into one BOSS sample, allowing arbitrary splitting in redshift;
- Demonstrated that BOSS BAO measurements are robust to the treatment of the selection function and the details of how the BOSS samples are combined;
- Measured the BAO scale transverse to and along the line of sight from the BOSS galaxy correlation function in two independent redshift slices, $0.2 < z < 0.5$ and $0.5 < z < 0.75$, and one overlapping redshift slice, $0.4 < z < 0.6$.
The results of our work on the selection function are included in the BOSS galaxy catalogs described in [@Reid15]. The results of our BAO scale measurements are used in [@Acacia], where they are combined with other BOSS DR12 results and used to evaluate cosmological models.
The main, non-standard, components to the BOSS selection function are the weights that we apply to account for fluctuations in the angular selection function. The angular selection function has been demonstrated to depend on the stellar density and the seeing conditions of the BOSS imaging data that targets are selected from. The weights we have defined correct for these variations in the selection function.
We have assessed the impact of these weights by comparing the clustering of BOSS samples with and without the weights. The stellar density weights have by far the greatest impact. The impact can be quantified by determining the $\chi^2$ difference between the two measurements (using a model that assumes the difference is zero); for the stellar density weights it was 13.1, implying the possibility of parameter estimation being biased by 3.6$\sigma$ when not accounting for the effect of stellar density on the angular selection function. However, we find both for mocks and for the data that BAO measurements are robust to whether or not any weights are included to account for the fluctuations in the selection function. We conclude that our treatment of the BOSS selection function imparts no bias into the resulting BAO measurements.
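The quoted significance follows from interpreting the $\chi^2$ difference between the weighted and un-weighted measurements as a single-parameter offset, for which the equivalent Gaussian significance is $\sqrt{\Delta\chi^2}$. A one-line check of this mapping (an illustration of the arithmetic, not pipeline code):

```python
import math

def chi2_to_sigma(delta_chi2):
    """Equivalent Gaussian significance of a chi^2 difference,
    treating it as a single-parameter offset."""
    return math.sqrt(delta_chi2)

print(chi2_to_sigma(13.1))  # ~3.6
```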
We note that our conclusions on the lack of any bias are specific to BAO measurements. We recommend that any other kind of measurement be validated with an analysis similar to the one presented here, in order to assess any potential systematic bias. At the least, we suggest that any configuration space analysis include a constant term with a free amplitude to be marginalized over (as there is in the BAO model). An analysis demonstrating the robustness of structure growth measurements determined by modeling RSD under such treatment is presented in Appendix \[sec:rsdrobust\]. Our analysis does not attempt to assess the size of possible fluctuations due to calibration uncertainties, such as those discussed in [@Huterer13], which would need to be accounted for in any analysis where broad-band large-scale power is important (e.g., primordial non-Gaussianity).
While the location of the measured BAO position is robust to the treatment of the selection function, our treatment does add a small degree of statistical uncertainty that is not accounted for in our covariance matrices. The reason is that our methods essentially null clustering modes that are aligned with fluctuations in stellar density. A small fraction of these modes contain BAO information. We find that when approximating our procedure for correcting for the stellar density systematic the standard deviation of mock samples increases by 2 per cent for the transverse BAO measurement and 0.5 per cent for the radial BAO measurement. In terms of the statistical uncertainty, these are 0.14$\sigma_{\rm stat}$ and 0.07$\sigma_{\rm stat}$, respectively.
Fundamentally, the robustness of BAO measurements is due to the fact that the BAO are a localized feature in configuration space, and it is difficult for any observational feature to have such a localized effect, especially when angular and radial components are combined. Indeed, it was noted in the review of [@WeinbergDERev] that this property of BAO studies makes them an especially robust probe of the expansion history of the Universe. The work we have presented shows this to be true in detail. Our results suggest this will remain the case for the next generation of BAO experiments.
acknowledgements {#acknowledgements .unnumbered}
================
AJR is grateful for support from the Ohio State University Center for Cosmology and Particle Physics. Nearly all heavy computer processing made use of the facilities and staff of the UK Sciama High Performance Computing cluster supported by the ICG, SEPNet and the University of Portsmouth. Colors made possible by <http://matplotlib.org/examples/color/named_colors.html>; figures made colorblind-friendly (hopefully) by use of Color Oracle software.\
C. C. acknowledges support as a MultiDark Fellow and from the Spanish MICINNs Consolider-Ingenio 2010 Programme under grant MultiDark CSD2009-00064, MINECO Centro de Excelencia Severo Ochoa Programme under grant SEV-2012-0249, and grant AYA2014-60641-C2-1-P\
MPI acknowledges support from MINECO under the grant AYA2012-39702-C02-01.\
Hee-Jong Seo’s work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0014329.\
MV is partially supported by Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica (PAPITT) No IA102516 and Proyecto Conacyt Fronteras No 281\
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Cambridge University, Carnegie Mellon University, Case Western University, University of Florida, Fermilab, the French Participation Group, the German Participation Group, Harvard University, UC Irvine, Instituto de Astrofisica de Andalucia, Instituto de Astrofisica de Canarias, Institucio Catalana de Recerca y Estudis Avancat, Barcelona, Instituto de Fisica Corpuscular, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Korean Institute for Advanced Study, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Pittsburgh, University of Portsmouth, Princeton University, UC Santa Cruz, the Spanish Participation Group, Texas Christian University, Trieste Astrophysical Observatory, University of Tokyo/IPMU, University of Utah, Vanderbilt University, University of Virginia, University of Washington, University of Wisconsin and Yale University.
[99]{}
Abazajian, K., Adelman-McCarthy, J. K., Ag[ü]{}eros, M. A., et al. 2004, AJ, 128, 502
Aihara, H., Allende Prieto, C., An, D., et al. 2011, ApJS, 193, 29
Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, ApJS, 219, 12
Alam, S., Ho, S., Vargas-Maga[ñ]{}a, M., & Schneider, D. P. 2015, MNRAS, 453, 1754
Alam, S., Ata, M., Bailey, S., et al. 2016, arXiv:1607.03155 (DR12 cosmological constraints)
Alcock C., Paczynski B., 1979, Nature, 281, 358.
Anderson, L., Aubourg, E., Bailey, S., et al. 2012, MNRAS, 427, 3435
Anderson, L., Aubourg, E., Bailey, S., et al. 2014, MNRAS, 439, 83
Anderson, L., Aubourg, E., Bailey, S., et al. 2014, MNRAS, 441, 24
Beutler, F., Seo, H.-J., Ross, A. J., et al. 2016, arXiv:1607.03149 (BAO)
Beutler, F., Seo, H.-J., Saito, S., et al. 2016, arXiv:1607.03150 (RSD)
Bolton, A. S., Schlegel, D. J., Aubourg, [É]{}., et al. 2012, AJ, 144, 144
Burden, A., Percival, W. J., Manera, M., et al. 2014, MNRAS, 445, 3152
Cohn, J. D., White, M., Chang, T.-C., et al. 2016, MNRAS, 457, 2068
Chuang, C.-H., Prada, F., Cuesta, A. J., et al. 2013a, MNRAS, 433, 3559
Chuang, C.-H., Prada, F., Beutler, F., et al. 2013b, arXiv:1312.4889
Chuang, C.-H., Pellejero-Ibanez, M., Rodr[í]{}guez-Torres, S., et al. 2016, arXiv:1607.03151
Colless, M., Peterson, B. A., Jackson, C., et al. 2003, arXiv:astro-ph/0306581
Crocce, M., & Scoccimarro, R. 2006, Phys. Rev. D, 063520
Cuesta, A. J., Vargas-Maga[ñ]{}a, M., Beutler, F., et al. 2016, MNRAS, 457, 1770
Dawson K., et al., 2013, AJ, 145, 10
Dodelson, S., & Schneider, M. D. 2013, Phys. Rev. D, 88, 063537
Eisenstein, D. J., & Hu, W. 1998, ApJ, 496, 605
Eisenstein D. J., Seo H.-J., Sirko E., Spergel D. N., 2007a, ApJ, 664, 675
Eisenstein, D. J., Seo, H.-J., & White, M. J. 2007b, ApJ, 664, 660
Eisenstein, D. J., et al. 2011, AJ, 142, 72
Elsner, F., Leistedt, B., & Peiris, H. V. 2016, MNRAS, 456, 2095
Feldman, H. A., Kaiser, N., & Peacock, J. A. 1994, ApJ 426, 23
Font-Ribera, A., McDonald, P., Mostek, N., et al. 2014, JCAP, 5, 023
Fukugita, M., Ichikawa, T., Gunn, J. E., Doi, M., Shimasaku, K., Schneider, D. P., 1996, AJ, 111, 1748
Gil-Mar[í]{}n, H., Percival, W. J., Cuesta, A. J., et al. 2016, MNRAS, 460, 4210
Gil-Mar[í]{}n, H., Percival, W. J., Brownstein, J. R., et al. 2016, MNRAS, 460, 4188
Grieb, J. N., S[á]{}nchez, A. G., Salazar-Albornoz, S., et al. 2016, arXiv:1607.03143 (P(k) wedges RSD)
Gunn, J. E., et al., 1998, AJ, 116, 3040
Gunn, J. E., et al. 2006, AJ, 131, 2332
Hartlap, J., Simon, P., & Schneider, P. 2007, A&A, 464, 399
Huterer, D., Cunha, C. E., & Fang, W. 2013, MNRAS, 432, 2945
Kazin, E. A., S[á]{}nchez, A. G., & Blanton, M. R. 2012, MNRAS, 419, 3223
Kazin, E. A., S[á]{}nchez, A. G., Cuesta, A. J., et al. 2013, MNRAS, 435, 64
Kitaura F.-S., Yepes G., Prada F., 2014, MNRAS, 439, L21
Kitaura, F.-S., Rodr[í]{}guez-Torres, S., Chuang, C.-H., et al. 2016, MNRAS, 456, 4156
Landy S. D., Szalay A. S., 1993, ApJ, 412, 64
Lewis A., Bridle S., 2002, PhRvD, 66, 103511
Manera, M., Scoccimarro, R., Percival, W. J., et al. 2013, MNRAS, 428, 1036
Matsubara, T. 2008, Phys. Rev. D, 77, 063530
Osumi, K., Ho, S., Eisenstein, D. J., & Vargas-Maga[ñ]{}a, M. 2015, arXiv:1505.00782
Padmanabhan, N., & White, M. 2009, Phys. Rev. D, 80, 063508
Padmanabhan, N., Xu, X., Eisenstein, D. J., et al. 2012, MNRAS, 427, 2132
Percival, W. J., Ross, A. J., S[á]{}nchez, A. G., et al. 2014, MNRAS, 439, 2531
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2015, arXiv:1502.01589
Reid, B. A., & White, M. 2011, MNRAS, 417, 1913
Reid B.A., et al., 2012, MNRAS, 426, 2719
Reid, B., Ho, S., Padmanabhan, N., et al. 2016, MNRAS, 455, 1553
Ross A. J., et al., 2011, MNRAS, 417, 1350
Ross A. J., et al., 2012, MNRAS, 428, 1116
Ross, A. J., Percival, W. J., Carnero, A., et al. 2013, MNRAS, 428, 1116
Ross, A. J., Samushia, L., Burden, A., et al. 2014, MNRAS, 437, 1109
Ross, A. J., Percival, W., & Manera, M., 2015, MNRAS, 451, 1331
Rykoff, E.S., et al., 2014, ApJ, 785, 104
Samushia, L., Reid, B. A., White, M., et al. 2014, MNRAS, 439, 3504
S[á]{}nchez, A. G., Kazin, E. A., Beutler, F., et al. 2013, MNRAS, 433, 1202
S[á]{}nchez, A. G., Montesano, F., Kazin, E. A., et al. 2014, MNRAS, 440, 2692
S[á]{}nchez, A. G., Scoccimarro, R., Crocce, M., et al. 2016, arXiv:1607.03147 ($\xi$ wedges RSD)
S[á]{}nchez, A. G., Grieb, J. N., Salazar-Albornoz, S., et al. 2016, arXiv:1607.03146, (combining the likelihoods)
Satpathy, S., Alam, S., Ho, S., et al. 2016, arXiv:1607.03148 ($\xi$ multipoles RSD)
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
Seo, H.-J., Beutler, F., Ross, A. J., & Saito, S. 2015, arXiv:1511.00663
Smee, S. A., Gunn, J. E., Uomoto, A., et al. 2013, AJ, 146, 32
Stoughton, C., Lupton, R. H., et al. 2002, AJ, 123, 485
Thepsuriya, K., & Lewis, A. 2015, JCAP, 1, 034
Tojeiro, R., Ross, A. J., Burden, A., et al. 2014, MNRAS, 440, 2222
Vargas-Maga[ñ]{}a, M., Ho, S., Xu, X., et al. 2014, MNRAS, 445, 2
Vargas-Maga[ñ]{}a et al. 2016, arXiv:1610.03506
Weinberg, D. H., Mortonson, M. J., Eisenstein, D. J., et al. 2013, Physics Reports, 530, 87
White M., Tinker J. L., McBride C. K., 2014, MNRAS, 437, 2594
White, M. 2015, MNRAS, 450, 3822
Xu, X., Cuesta, A. J., Padmanabhan, N., Eisenstein, D. J., & McBride, C. K. 2013, MNRAS, 431, 2834
York, D.G., et al. 2000, AJ, 120, 1579
Zehavi, I., Zheng, Z., Weinberg, D. H., et al. 2011, ApJ, 736, 59
Choosing a bin size and range of scales {#app:binsize}
=======================================
![Statistics of 2D BAO fits on 1000 QPM CMASS post-reconstruction mocks, as a function of the bin size. Red diamonds show results for $\alpha_{\perp}$ and blue circles show the results for $\alpha_{||}$. The bias of the mean alpha, multiplied by 10, is shown with solid lines; one can see it is never greater than 0.1$\sigma$. The standard deviation of the mock results is shown with dotted lines (and open symbols) and the mean likelihood error with dashed lines.[]{data-label="fig:baomockbinsize"}](mockbaobinsize.pdf){width="84mm"}
In this appendix, we motivate the choices for the bin-size and range of scales used to obtain our BAO measurements. We thus present the results of BAO constraints obtained from the post-reconstruction CMASS sample as a function of the bin-size and the range of scales used in the analysis. All statistics are derived from the mean and variance of fits to $\alpha_{||},\alpha_{\perp}$ obtained from the QPM mocks. See [@VargasDR12BAO] for a more detailed study on similar tests.
We have tested the BAO constraints obtained from the post-reconstruction CMASS sample as a function of bin-size (holding the fitting range fixed to $50 < s < 150$). This repeats the tests done in [@Per14]; naively, one might expect the results only to improve as the bin-size is decreased, but smaller bins also increase the size of the data vector and thus the noise in the inverse covariance matrix. The results are summarized by Fig. \[fig:baomockbinsize\]. One can see that the trends are not strong, so any choice of bin size in the range $4-8h^{-1}$Mpc would be reasonable. The tests on the mocks suggest the correlations between results from different bin sizes are $\sim$0.95 for both $\alpha_{\perp}$ and $\alpha_{||}$. Based on these results, we choose to use a bin size of 5$h^{-1}$Mpc. This choice requires combining across fewer bin centres than was the case for BOSS DR11 analyses, which used a bin size of 8$h^{-1}$Mpc [@alph].
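The trade-off between bin size and covariance noise can be quantified with the debiasing factor of Hartlap et al. (2007), which rescales an inverse covariance estimated from a finite number of mocks. The sketch below is illustrative only: the bin counts assume a monopole-plus-quadrupole data vector over $50 < s < 150h^{-1}$Mpc, not the exact vectors used in this work.

```python
# Hartlap et al. (2007) correction: C^-1_unbiased = f * C^-1_sample,
# with f = (N_mocks - N_bins - 2) / (N_mocks - 1). Smaller bins mean a
# longer data vector, hence a smaller f (noisier inverse covariance).

def hartlap_factor(n_mocks, n_bins):
    """Multiplicative correction applied to the sample inverse covariance."""
    if n_bins >= n_mocks - 2:
        raise ValueError("too few mocks for this data-vector length")
    return (n_mocks - n_bins - 2.0) / (n_mocks - 1.0)

n_mocks = 1000  # e.g. the QPM mock suite used above
for bin_size in (4, 5, 8):  # h^-1 Mpc
    # illustrative count: monopole + quadrupole over 50 < s < 150 h^-1 Mpc
    n_bins = 2 * (150 - 50) // bin_size
    print(bin_size, n_bins, round(hartlap_factor(n_mocks, n_bins), 3))
```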
![Statistics of 2D BAO fits on 1000 QPM CMASS post-reconstruction mocks, as a function of the minimum (top) and maximum (bottom) scale used. Red diamonds show results for $\alpha_{\perp}$ and blue circles show the results for $\alpha_{||}$. The bias of the mean alpha, multiplied by 10, is shown with solid lines. The standard deviation of the mock results is shown with dotted lines (and open symbols) and the mean likelihood error with dashed lines. []{data-label="fig:scalerange"}](mockbaominscale.pdf "fig:"){width="84mm"} ![Statistics of 2D BAO fits on 1000 QPM CMASS post-reconstruction mocks, as a function of the minimum (top) and maximum (bottom) scale used. Red diamonds show results for $\alpha_{\perp}$ and blue circles show the results for $\alpha_{||}$. The bias of the mean alpha, multiplied by 10, is shown with solid lines. The standard deviation of the mock results is shown with dotted lines (and open symbols) and the mean likelihood error with dashed lines. []{data-label="fig:scalerange"}](mockbaomaxscale.pdf "fig:"){width="84mm"}
Similarly, we have tested the minimum and maximum scale used in the BAO fits. The results are summarized in Fig. \[fig:scalerange\]. These results motivate our choice of using the range $50 < s < 150h^{-1}$Mpc. At scales $s < 50h^{-1}$Mpc, we do not recover un-biased measurements of $\alpha_{||}$, due to our inability to model the post-reconstruction quadrupole at such scales. A minimum scale $s > 70h^{-1}$Mpc causes a decrease in the statistical power of the measurements. Likewise, a maximum scale $s < 150h^{-1}$Mpc increases both the statistical uncertainty and the bias of the results.
Robustness tests on combined sample {#app:rob}
===================================
test $\alpha_{||}$ $\alpha_{\perp}$ $\chi^2$/dof
--------------------------- ----------------- ------------------ --------------
[**$0.2 < z < 0.5$:**]{}
bin size:
3 $h^{-1}$Mpc 1.022$\pm$0.028 0.987$\pm$0.015 51/54
4 $h^{-1}$Mpc 1.026$\pm$0.029 0.985$\pm$0.015 55/38
5 $h^{-1}$Mpc 1.025$\pm$0.027 0.988$\pm$0.015 39/30
6 $h^{-1}$Mpc 1.020$\pm$0.028 0.988$\pm$0.015 34/24
7 $h^{-1}$Mpc 1.026$\pm$0.028 0.985$\pm$0.015 26/18
8 $h^{-1}$Mpc 1.027$\pm$0.028 0.982$\pm$0.015 28/16
10 $h^{-1}$Mpc 1.027$\pm$0.029 0.986$\pm$0.016 14/10
$s > 70h^{-1}$Mpc 1.023$\pm$0.026 0.990$\pm$0.014 32/26
$s < 170h^{-1}$Mpc 1.027$\pm$0.027 0.987$\pm$0.015 44/34
$A_0 = 0$ 1.029$\pm$0.028 0.986$\pm$0.015 46/33
$A_2 = 0$ 1.021$\pm$0.028 0.991$\pm$0.015 43/33
$A_{\ell} = 0$ 1.023$\pm$0.028 0.990$\pm$0.015 50/36
$B_0$ free 1.025$\pm$0.027 0.988$\pm$0.015 39/30
$B_2$ free 1.025$\pm$0.027 0.988$\pm$0.015 39/30
NGC 1.035$\pm$0.031 0.997$\pm$0.016 36/30
SGC 0.999$\pm$0.043 0.942$\pm$0.034 38/30
[**$0.4 < z < 0.6$:**]{}
bin size:
3 $h^{-1}$Mpc 0.991$\pm$0.024 0.995$\pm$0.014 54/54
4 $h^{-1}$Mpc 0.991$\pm$0.024 0.991$\pm$0.014 38/38
5 $h^{-1}$Mpc 0.986$\pm$0.024 0.994$\pm$0.014 23/30
6 $h^{-1}$Mpc 0.984$\pm$0.023 0.995$\pm$0.014 23/24
7 $h^{-1}$Mpc 0.985$\pm$0.023 0.992$\pm$0.013 16/18
8 $h^{-1}$Mpc 0.989$\pm$0.024 0.993$\pm$0.013 13/16
10 $h^{-1}$Mpc 0.982$\pm$0.024 0.995$\pm$0.014 8/10
$s > 70h^{-1}$Mpc 0.985$\pm$0.022 0.994$\pm$0.013 17/26
$s < 170h^{-1}$Mpc 0.984$\pm$0.025 0.995$\pm$0.014 37/34
$A_0 = 0$ 0.986$\pm$0.026 0.993$\pm$0.015 30/33
$A_2 = 0$ 0.982$\pm$0.024 0.996$\pm$0.014 25/33
$A_{\ell} = 0$ 0.982$\pm$0.025 0.995$\pm$0.015 31/36
$B_0$ free 0.986$\pm$0.024 0.994$\pm$0.014 23/30
$B_2$ free 0.986$\pm$0.024 0.994$\pm$0.014 22/30
NGC 0.972$\pm$0.028 0.995$\pm$0.016 21/30
SGC 1.025$\pm$0.057 0.990$\pm$0.036 30/30
[**$0.5 < z < 0.75$:**]{}
bin size:
3 $h^{-1}$Mpc 0.962$\pm$0.023 0.993$\pm$0.015 55/54
4 $h^{-1}$Mpc 0.957$\pm$0.023 0.995$\pm$0.015 37/38
5 $h^{-1}$Mpc 0.962$\pm$0.023 0.991$\pm$0.015 32/30
6 $h^{-1}$Mpc 0.961$\pm$0.023 0.995$\pm$0.016 25/24
7 $h^{-1}$Mpc 0.963$\pm$0.025 0.990$\pm$0.015 13/18
8 $h^{-1}$Mpc 0.955$\pm$0.023 0.995$\pm$0.015 16/16
10 $h^{-1}$Mpc 0.963$\pm$0.023 0.989$\pm$0.015 12/10
$s > 70h^{-1}$Mpc 0.964$\pm$0.022 0.990$\pm$0.014 23/26
$s < 170h^{-1}$Mpc 0.963$\pm$0.023 0.989$\pm$0.015 41/34
$A_0 = 0$ 0.963$\pm$0.027 0.992$\pm$0.017 43/33
$A_2 = 0$ 0.955$\pm$0.022 0.994$\pm$0.015 35/33
$A_{\ell} = 0$ 0.954$\pm$0.025 0.996$\pm$0.017 49/36
$B_0$ free 0.962$\pm$0.024 0.991$\pm$0.015 31/30
$B_2$ free 0.962$\pm$0.023 0.990$\pm$0.015 31/30
NGC 0.944$\pm$0.025 0.986$\pm$0.017 31/30
SGC 1.020$\pm$0.048 1.010$\pm$0.035 32/30
\[tab:binsize\]
: Post-reconstruction combined sample 2D BAO fits as a function of bin-size, choice of fitting range, choices for nuisance parameters, and Galactic hemisphere.
test $\alpha_{||}$ $\alpha_{\perp}$ $\chi^2$/dof
--------------------------------- ----------------- ------------------ --------------
[**$0.2 < z < 0.5$:**]{}
fiducial 1.025$\pm$0.027 0.988$\pm$0.015 39/30
$\Sigma_{\perp} = 0$ 1.026$\pm$0.027 0.987$\pm$0.014 39/30
$\Sigma_{\perp} = 5.0h^{-1}$Mpc 1.024$\pm$0.027 0.991$\pm$0.016 42/30
$\Sigma_{||} = 0$ 1.024$\pm$0.026 0.988$\pm$0.015 39/30
$\Sigma_{||} = 8.0h^{-1}$Mpc 1.029$\pm$0.028 0.988$\pm$0.015 42/30
$\Sigma_{s} = 0$ 1.024$\pm$0.027 0.988$\pm$0.015 39/30
$\Sigma_{s} = 8.0h^{-1}$Mpc 1.035$\pm$0.030 0.988$\pm$0.015 44/30
[**$0.4 < z < 0.6$:**]{}
fiducial 0.986$\pm$0.024 0.994$\pm$0.014 23/30
$\Sigma_{\perp} = 0$ 0.986$\pm$0.023 0.994$\pm$0.014 21/30
$\Sigma_{\perp} = 5.0h^{-1}$Mpc 0.985$\pm$0.024 0.995$\pm$0.016 27/30
$\Sigma_{||} = 0$ 0.985$\pm$0.022 0.995$\pm$0.014 21/30
$\Sigma_{||} = 8.0h^{-1}$Mpc 0.991$\pm$0.027 0.992$\pm$0.014 27/30
$\Sigma_{s} = 0$ 0.987$\pm$0.024 0.992$\pm$0.014 28/30
$\Sigma_{s} = 8.0h^{-1}$Mpc 0.994$\pm$0.029 0.994$\pm$0.014 24/30
[**$0.5 < z < 0.75$:**]{}
fiducial 0.962$\pm$0.023 0.991$\pm$0.015 32/30
$\Sigma_{\perp} = 0$ 0.962$\pm$0.023 0.990$\pm$0.015 31/30
$\Sigma_{\perp} = 5.0h^{-1}$Mpc 0.962$\pm$0.024 0.991$\pm$0.017 35/30
$\Sigma_{||} = 0$ 0.960$\pm$0.022 0.992$\pm$0.015 30/30
$\Sigma_{||} = 8.0h^{-1}$Mpc 0.968$\pm$0.027 0.988$\pm$0.015 36/30
$\Sigma_{s} = 0$ 0.963$\pm$0.024 0.990$\pm$0.015 32/30
$\Sigma_{s} = 8.0h^{-1}$Mpc 0.971$\pm$0.029 0.987$\pm$0.015 38/30
\[tab:damping\]
: Post-reconstruction combined sample 2D BAO fits, varying the choice of damping parameters that enter the template.
Here, we report the results of a number of robustness checks on the BAO fits to the BOSS combined sample data. Table \[tab:binsize\] presents measurements for different bin sizes. The variation between the results is small and consistent with that found in the mock samples in the previous section. We have also tested changing the range of scales that are fit, increasing the minimum and the maximum scale each by 20$h^{-1}$Mpc individually, as the mock tests suggest our results should be equally valid under these changes; indeed, we find no significant change.
Table \[tab:binsize\] also presents tests where we have changed the way nuisance parameters are treated. We test allowing each of the bias terms to be completely free (i.e., with no prior on $B_{\ell}$; denoted by ‘$B_{\ell}$ free’) and find no significant changes in the results. We have also tested removing the polynomial terms from the fits (denoted by ‘$A_{\ell}= 0$’); the motivation for these polynomial terms is to isolate the BAO feature and ensure broadband effects, such as incomplete modeling of the post-reconstruction quadrupole and observational systematics, do not affect the recovered results. Even without these terms, the results in the table show that we recover nearly the same results. The biggest change is in the $0.5 < z < 0.75$ redshift bin, where not including the polynomial terms shifts the results by $\sim 0.3\sigma$ for both $\alpha_{||,\perp}$ values (in opposite directions). Thus, despite their (well-motivated) inclusion, the polynomial terms have only a minor effect on the recovered results.
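The role of the $B_{\ell}$ and $A_{\ell}$ nuisance parameters can be seen from the generic form of the multipole fitting model used in BAO analyses of this kind. The sketch below is schematic (our illustration, with a toy template); the exact template and priors used in this work may differ.

```python
import numpy as np

# Schematic per-multipole BAO fitting model:
#   xi_fit(s) = B^2 * xi_template(s) + A0 + A1/s + A2/s^2
# B rescales the template amplitude (the 'B_ell' terms in the table);
# the A_ell polynomial absorbs smooth broadband shape, so 'A_ell = 0'
# corresponds to dropping the last three terms. The template here is a
# toy power law plus a Gaussian BAO bump, not the actual damped template.

def xi_model(s, B, A0, A1, A2, template):
    return B**2 * template(s) + A0 + A1 / s + A2 / s**2

def toy_template(s):
    return 0.02 * (100.0 / s) ** 1.8 + 0.005 * np.exp(-0.5 * ((s - 100.0) / 8.0) ** 2)

s = np.arange(52.5, 150.0, 5.0)  # 5 h^-1 Mpc bins over the fiducial range
xi_fid = xi_model(s, B=1.1, A0=0.0, A1=0.1, A2=-5.0, template=toy_template)
xi_noA = xi_model(s, B=1.1, A0=0.0, A1=0.0, A2=0.0, template=toy_template)
```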
Table \[tab:binsize\] also presents results where we fit the BAO scale individually in the NGC and SGC. In the $0.2 < z < 0.5$ redshift bin, the differences are greatest in terms of the measurement of $\alpha$, where the discrepancy is $\sim$1.5$\sigma$. A similar difference is found in the high redshift bin, except that it is in the opposite direction. These results are therefore consistent with those presented for the CMASS and LOWZ samples in Section 6.2.
Table \[tab:damping\] presents tests where we have significantly altered the fiducial damping scales in the template. We have either set the damping scale to 0 or doubled its size. Setting the damping scale to zero alters the results by at most $0.13\sigma$ ($\alpha_{\perp}$ in the $0.4 < z < 0.6$ bin) and the changes are otherwise $<0.1\sigma$. Doubling the damping scale for $\Sigma_{||}$ or $\Sigma_s$ has a larger effect, mainly on $\alpha_{||}$. The most extreme change is $0.33\sigma$, when doubling $\Sigma_s$ in the $0.2 < z < 0.5$ redshift bin. The size of the change in the other redshift bins is similar; increasing $\Sigma_s$ results in an increase in $\alpha_{||}$. The same is true for increasing $\Sigma_{||}$, though changes are smaller ($<0.2\sigma$). The changes are generally coupled with small decreases in $\alpha_{\perp}$, implying that in terms of $\alpha$,$\epsilon$, the changes would be observed in $\epsilon$. These results are consistent with those of [@VM14; @VargasDR12BAO], where template choices are studied in detail using mock galaxy catalogs and which set the systematic uncertainty applied to the results in [@Acacia]. Notably, none of the results that cause more than a $0.1\sigma$ shift in the best-fit BAO position are preferred in terms of the minimum $\chi^2$ of the fit.
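To make the role of $\Sigma_{\perp}$, $\Sigma_{||}$ and $\Sigma_s$ concrete, the sketch below implements one commonly used form of anisotropic BAO damping. It is our schematic illustration with toy power spectra; the exact template adopted in this work may differ in detail.

```python
import numpy as np

# Schematic damped BAO template:
#   P(k,mu) = F_fog * [ (P_lin - P_nw) * exp(-k^2 Sigma(mu)^2 / 2) + P_nw ]
# with Sigma(mu)^2 = mu^2 Sigma_par^2 + (1 - mu^2) Sigma_perp^2 and a
# Lorentzian-squared finger-of-God factor controlled by Sigma_s. Changing
# a damping scale (0 or doubled, as in Table tab:damping) only reshapes
# the BAO feature, which is why the best-fit alphas move so little.

def damped_template(k, mu, P_lin, P_nw, sig_par, sig_perp, sig_s):
    fog = 1.0 / (1.0 + (k * mu * sig_s) ** 2 / 2.0) ** 2
    sig2 = mu**2 * sig_par**2 + (1.0 - mu**2) * sig_perp**2
    return fog * ((P_lin - P_nw) * np.exp(-0.5 * k**2 * sig2) + P_nw)

# toy spectra: smooth 'no-wiggle' shape plus a 5 per cent oscillation
k = np.linspace(0.01, 0.3, 60)
P_nw = 1.0 / k
P_lin = P_nw * (1.0 + 0.05 * np.sin(k / 0.06))
P_damped = damped_template(k, 0.5, P_lin, P_nw, 10.0, 6.0, 4.0)
```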
Information distribution with respect to the line of sight
==========================================================
In the spherically symmetric case, with no RSD, information is expected to be divided equally as a function of the cosine of the angle to the line of sight, $\mu$. In [@Ross152D], it was found that the BAO information in the BOSS DR11 mock samples was nearly constant as a function of $\mu$. A speculative argument explaining this fact is that any boost in information along the line of sight due to linear RSD is canceled by non-linear RSD and finger-of-God effects. Here, we test the distribution of BAO information in the MultiDark Patchy mocks, compared to the DR12 data. We do this by dividing the data into five bins in $\mu$ (or ‘wedges’; @Kazin12), with $\Delta\mu = 0.2$.
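The wedge compression described above can be sketched as a simple average of $\xi(s,\mu)$ over $\mu$ bins of width 0.2. The array below is a toy stand-in; in practice $\xi(s,\mu)$ comes from Landy-Szalay pair counts binned finely in $\mu$.

```python
import numpy as np

# Compress xi(s, mu) into five mu 'wedges' of width Delta mu = 0.2
# (following the wedge definition of Kazin et al. 2012).
rng = np.random.default_rng(0)
n_s, n_mu = 20, 100
mu = (np.arange(n_mu) + 0.5) / n_mu            # fine mu-bin centres in (0, 1)
xi_smu = rng.normal(size=(n_s, n_mu))          # toy xi(s, mu) on a grid

edges = np.linspace(0.0, 1.0, 6)               # wedge boundaries
wedges = np.stack([
    xi_smu[:, (mu >= lo) & (mu < hi)].mean(axis=1)
    for lo, hi in zip(edges[:-1], edges[1:])
])                                             # shape (5 wedges, n_s)
```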
![The mean BAO uncertainty as a function of $\mu$ for post-reconstruction MultiDark-Patchy mocks (solid lines/open symbols) compared to the results for the BOSS galaxy data (dashed lines/filled symbols). []{data-label="fig:baomu"}](DR12BAOsigvmu.pdf){width="84mm"}
In each $\mu$ bin, we apply the same BAO model described in Section 2.3, but with the template determined via integration over the particular $\mu$ range. The results for both the data and the mean results from the mocks are presented in Fig. \[fig:baomu\]. For the mocks, the mean uncertainty is approximately constant with $\mu$, except in the $\mu > 0.8$ bin, where it is about 20 per cent greater than the $\mu$ bin with the lowest uncertainty. This is a bigger difference than was found in [@Ross152D], where differences were at most 15 per cent and the uncertainties were the same in the low and high $\mu$ bins. It is possible the differences are due to differences between the MultiDark-Patchy mocks and the PTHalos [@Manera13] mocks used in the [@Ross152D] analysis. Regardless, the fundamental result that the uncertainty is approximately constant with $\mu$ remains. We find no clear trend in the uncertainty on the data. This is not overly surprising, as it is a single realization.
![The measured BAO scale as a function of $\mu$, measured from the post-reconstruction BOSS galaxy correlation function, in $\mu$ bins of thickness 0.2. The solid lines represent the prediction based on the $\alpha_{||},\alpha_{\perp}$ measured from $\xi_0,\xi_2$. []{data-label="fig:baomud"}](DR12baovsmu.pdf){width="84mm"}
Finally, we have looked at the measured BAO position as a function of $\mu$. These measurements can be compared to a prediction based on $\alpha(\mu) = \sqrt{\mu^2\alpha_{||}^2+(1-\mu^2)\alpha_{\perp}^2}$ and our measurements of $\alpha_{||},\alpha_{\perp}$. Fig. \[fig:baomud\] shows this comparison. The curves are consistent with the measured points, as one would expect.
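The predicted curve is a direct evaluation of the formula above; the parameter values used here are illustrative, of the order measured for the $0.5 < z < 0.75$ bin.

```python
import numpy as np

# alpha(mu) = sqrt(mu^2 alpha_par^2 + (1 - mu^2) alpha_perp^2),
# the prediction plotted against the per-wedge BAO measurements.
def alpha_of_mu(mu, alpha_par, alpha_perp):
    return np.sqrt(mu**2 * alpha_par**2 + (1.0 - mu**2) * alpha_perp**2)

mu = np.linspace(0.1, 0.9, 5)                  # wedge centres for Delta mu = 0.2
pred = alpha_of_mu(mu, alpha_par=0.962, alpha_perp=0.991)
```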
\[lastpage\]
[^1]: The pair-counts are tabulated using a bin width of 1 $h^{-1}$Mpc and then summed into 5 $h^{-1}$Mpc bins, allowing different choices for bin centres.
[^2]: camb.info
[^3]: This is essentially the [@AP] effect on the BAO feature.
[^4]: Code to produce the BOSS catalogs, [MKSAMPLE]{}, is available from the main SDSS web site http://www.sdss.org/surveys/boss
[^5]: 996 MD-P mocks are used for the MD-P results and 1000 mocks for QPM
[^6]: Both include no neutrino mass, rather than the minimal allowed mass adopted for our fiducial cosmology, but as shown by [@ThepLewis], this is expected to have minimal impact on BAO analyses.
[^7]: We have found no indications that any conclusions would be altered if the tests are repeated with the final MD-P mocks.
[^8]: The difference in this dependency with seeing between the two regions must be related to another variable that differs considerably between the two regions, but a thorough investigation was unable to determine this variable.
[^9]: Masking the data at the lowest extinction values does not cause any significant change in the clustering results.
[^10]: The QPM mocks are a good match
[^11]: E.g., if the density is expected to be 0.95 that of the nominal density, each mock galaxy is tested and kept in the sample if a randomly generated number between 0 and 1 is less than 0.95.
[^12]: We have focused the mock tests on pre-reconstruction results due to the computational demands of analyzing the post-reconstruction samples
[^13]: See [@Elsner15] for analytic descriptions of similar effects in spherical harmonic space.
[^14]: Specifically, the results from the ’base\_plikHM\_TT\_lowTEB\_lensing’ chains.
---
abstract: 'Electrons in graphene with heavy adatoms (such as In or Tl) have been predicted to form a 2D topological insulator phase with a substantial spectral gap potentially suitable for future practical applications. In order to facilitate the ongoing experimental efforts to identify this phase we perform a theoretical study of its spectral properties in a model graphene system with randomly distributed adatoms. Our extensive modeling shows that random heavy adatoms produce a full spectral gap (as opposed to a mobility gap) accompanied by distinctive quasiparticle interference patterns observable by means of Fourier-transform scanning tunneling spectroscopy.'
author:
- Paul Soulé
- 'M. Franz'
title: Quasiparticle spectroscopy as a probe of the topological phase in graphene with heavy adatoms
---
=1
Despite their pivotal role in the “topological revolution” that transpired in condensed matter physics in recent years [@moore_rev; @hasan_rev; @qi_rev; @franz_book], 2D topological insulators (TIs) have thus far largely failed to deliver on their promise to become a testbed for fundamental new concepts and a platform for exciting practical applications. The reason behind this lies in the lack of widely available 2D TI materials. The existing known 2D TIs include HgTe/CdTe quantum wells [@konig] and InAs/GaSb quantum wells [@knez], which however require specialized fabrication techniques and have not, thus far, caught on as convenient and widely available platforms for broad experimentation. This is in contrast to 3D TIs [@franz_book] where dozens of confirmed materials exist and the prototype Bi$_2$Se$_3$, Bi$_2$Te$_3$ materials are straightforward to grow and widely available.
The historically first and conceptually simplest 2D TI system is based on the Kane-Mele model [@kane1] for graphene with spin-orbit coupling (SOC). Although the intrinsic SOC strength is too small to bring about this phase in pristine graphene it has been suggested that the effect can be amplified manyfold by depositing a dilute concentration of certain heavy adatoms. Specifically, graphene with a modest $\sim 6$% concentration of In and Tl adatoms is predicted to form a TI with an estimated gap of 7 and 21 meV, respectively.[@weeks1] These adatoms’ outer electrons are in $p$ shells and in essence act as local sources of strong SOC for low-energy Dirac electrons in graphene. Potentially much larger gaps can be achieved by using transition metal elements with active $d$ orbitals such as Ir and Os, although the detailed microscopic mechanism is somewhat different here.[@hu1]
Although conceptually simple and straightforward to implement, the proposal to generate a 2D TI from graphene with adatoms has not yet been experimentally realized. Transport experiments[@folk1] on graphene flakes with very small concentrations of In ($<0.02$%) have confirmed the predicted doping dependence (each In adatom donates $\sim 1$ electron) but were unable to confirm the transition into the topological state which one expects only at higher adatom densities. Preliminary scanning tunneling microscopy[@burke1] (STM) studies of Tl on graphene grown on SiC substrate indicated the ‘hollow’ adsorption site (in the middle of the hexagonal plaquette) as predicted but could not resolve the spectral gap characteristic of a 2D TI. Angle resolved photoemission[@dama1] (ARPES) on similar samples observed the effect of doping as well as increased line broadening, but again failed to discern any clear signature of an excitation gap.
In order to assist the ongoing experimental efforts aimed at identifying the 2D topological phase in graphene with adatoms we undertake here a program of theoretical modeling of its spectral properties in the experimentally relevant regime of [*randomly distributed*]{} adatoms. Aside from detailed predictions that we develop for STM and ARPES our study yields two important qualitative insights. First, we find that SOC generated by randomly distributed heavy adatoms produces a [*full spectral gap*]{} (as opposed to a mobility gap). This feature was not apparent from the original transport calculations in the disordered regime[@weeks1; @hu1; @shevtsov1] although more recent work[@niu1] indicated that this might be the case. Second, we identify unique signatures of the SOC observable by Fourier-transform scanning tunneling spectroscopy (FT-STS). These take the form of quasiparticle scattering patterns that are prohibited by symmetries in graphene with ordinary potential scatterers.[@shon] Our results thus identify ARPES in combination with FT-STS as ideal tools for observing the topological phase in graphene with heavy adatoms.
In this study we focus on the simpler and physically more transparent model appropriate for In and Tl adatoms[@weeks1] defined by the lattice Hamiltonian $H=H_t+\sum_I\delta H_{I}$ with $$\begin{aligned}
\label{h1}
H_t &=& -t \sum_{\langle {\bf r r'}\rangle}(c^\dagger_{{\bf r}}c_{{\bf r'}} + {\rm h.c.}) +\sum_{{\bf r}}w_{{\bf r}}c^\dagger_{{\bf r}}c_{{\bf r}},
\\
\delta H_I&=&-\delta \mu\sum_{{\bf r}\in I} c^\dagger_{{\bf r}}c_{{\bf r}}
+\lambda_{\rm so} \sum_{\langle\langle {\bf r r'}\rangle\rangle\in I}(i \nu_{\bf r r'}c^\dagger_{{\bf r}}s^z c_{{\bf r'}} + {\rm h.c.}).
\nonumber\end{aligned}$$ Here $H_t$ describes the usual nearest-neighbor electron hopping on the graphene honeycomb lattice with $t\simeq 2.7$eV and $w_{{\bf r}}$ denoting weak random disorder (unrelated to adatoms) coming from the substrate or other sources. ${{\bf r}}$ denotes the lattice site and the electron spin is treated implicitly (i.e. we view $c_{{\bf r}}$ as a two-component spinor, $s^z$ is the Pauli matrix). In the second line $I$ labels the random plaquettes occupied by adatoms. The first term in $\delta H_I$ describes the chemical potential that screens charge from the adatoms, while the second term captures the local intrinsic spin-orbit coupling induced by electrons hopping from graphene to an adatom and back. We neglect the Rashba coupling, which has been shown unimportant.[@weeks1] In addition, $\nu_{{{\bf r}}{{\bf r}}'}=+1$ for hops clockwise around the plaquette and $-1$ counterclockwise. Realistic parameters for Tl adatoms are $\lambda_{\rm so}=0.02t$ and $\delta\mu=0.1t$.
According to the previous work[@weeks1; @niu1] we expect the SOC induced by In and Tl adatoms to open a gap in the electron excitation spectrum at the Dirac point. It is thus useful to start our discussion by considering the effective low-energy theory obtained by projecting Hamiltonian (\[h1\]) to the vicinity of the two Dirac momenta $\pm{{\bf K}}=\pm(4\pi/3\sqrt{3}a,0)$ with $a$ the separation between nearest carbon atoms. The low-energy Hamiltonian reads $H^{\rm eff}=\int d^2r \psi^\dagger({{\bf r}})(h_0+h')\psi({{\bf r}})$ with $$\begin{aligned}
h_0 &=& -i\hbar v\left(
\tau^z\sigma^x\partial_x+\sigma^y\partial_y\right),
\label{h0}\\
h'&=&\sum_j\left(-3\delta\mu+\Lambda_{\rm so}\tau^z\sigma^z s^z\right)S_0\delta({{\bf r}}-{{\bf R}}_j).
\nonumber\end{aligned}$$ Here ${\bm\tau}$ and ${\bm\sigma}$ are Pauli matrices acting in the valley and sublattice space, respectively, $v$ represents the Fermi velocity, $\Lambda_{\rm so}=3 \sqrt{3} \lambda_{\rm so}$, $S_0$ is the area of the unit cell and ${{\bf R}}_j$ denotes the random adatom positions. The 8-component spinor $\psi({{\bf r}})$ describes the low-energy electron field in combined valley, sublattice and spin space. For simplicity we neglect the substrate disorder here but we come back to it later. Upon Fourier transforming the Hamiltonian takes the standard form of a disorder problem,[@doniach1] $$\label{heff}
H^{\rm eff}=\sum_{{\bf k}}\psi^\dagger_{{\bf k}}h_{{\bf k}}\psi_{{\bf k}}+\sum_{{{\bf k}}{{\bf q}}}\psi^\dagger_{{{\bf k}}+{{\bf q}}}\rho_{{\bf q}}U_{{\bf q}}\psi_{{\bf k}},$$ with $h_{{\bf k}}=v(\tau^z\sigma^x k_x+\sigma^y k_y)$, $\rho_{{\bf q}}=\sum_je^{-i{{\bf R}}_j\cdot{{\bf q}}}$ and $U_{{\bf q}}=(-3\delta\mu+\Lambda_{\rm so}\tau^z\sigma^z s^z)S_0/S$ and $S$ the area of the system.
We are interested in the disorder-averaged electron propagator $$\label{geff}
g({{\bf k}},\omega)=\left[g_0({{\bf k}},\omega)^{-1}-\Sigma({{\bf k}},\omega)\right]^{-1}$$ where $g_0({{\bf k}},\omega)=(\omega+i\delta-h_{{\bf k}})^{-1}$ is the propagator of the clean system with $\delta=0^+$ while $\Sigma({{\bf k}},\omega)$ represents the disorder self energy. For weak disorder we can evaluate the latter using the standard Born series, which corresponds to the expansion in powers of $U_{{\bf q}}$. To first order we obtain simply[@doniach1] $$\label{sig1}
\Sigma^{(1)}({{\bf k}},\omega)=N_IU_{{{\bf q}}=0}=n_I(-3\delta\mu+\Lambda_{\rm so}\tau^z\sigma^z s^z),$$ where $N_I$ is the total number of impurities (adatoms) and $n_I=N_I(S_0/S)$ is their number density. The key point to notice here is that while the scalar term $-3\delta\mu$ in $\Sigma^{(1)}$ merely shifts the overall chemical potential the SOC term opens up a spectral gap at the Dirac point with the amplitude $\Delta_{\rm so}=n_I\Lambda_{\rm so}$. Therefore, the first order Born correction, which is often neglected as unimportant for scalar disorder potential, leads to an important qualitative change in the spectral properties of the system. Furthermore, to this order the effective disorder averaged Hamiltonian $h_{{\bf k}}^{(1)}=h_{{\bf k}}+\Sigma^{(1)}({{\bf k}},0)$ is identical to the Kane-Mele model [@kane1] and describes a $Z_2$ topological insulator with bulk gap and protected gapless edge states.
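As a quick numerical consistency check (ours, not part of the original derivation), the $8\times 8$ first-order Hamiltonian $h_{{\bf k}}^{(1)}=h_{{\bf k}}+\Sigma^{(1)}$ can be built from Kronecker products of Pauli matrices and diagonalized; its spectrum at the Dirac point is gapped by $2\Delta_{\rm so}=2n_I\Lambda_{\rm so}$. Units are set by $t$ and the velocity is set to 1 for the check.

```python
import numpy as np

# Pauli matrices; kron3 orders factors as valley (tau) x sublattice (sigma) x spin (s)
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

t = 1.0                                   # energies in units of t
v = 1.0                                   # Fermi velocity set to 1 for the check
lam_so, dmu, n_I = 0.02 * t, 0.1 * t, 0.06  # Tl parameters quoted in the text
Lam_so = 3 * np.sqrt(3) * lam_so          # Lambda_so
Delta_so = n_I * Lam_so                   # predicted first-order Born gap

Sigma1 = n_I * (-3 * dmu * kron3(s0, s0, s0) + Lam_so * kron3(sz, sz, sz))

def h1(kx, ky):
    # h_k + Sigma^(1): disorder-averaged Hamiltonian to first Born order
    return v * (kx * kron3(sz, sx, s0) + ky * kron3(s0, sy, s0)) + Sigma1

E0 = np.linalg.eigvalsh(h1(0.0, 0.0))     # spectrum at the Dirac point
gap = E0.max() - E0.min()                 # equals 2 * Delta_so
```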
The second order Born expansion gives $$\begin{aligned}
\label{sig2}
\Sigma&^{(2)}({{\bf k}},\omega)=N_I\sum_{{\bf q}}U_{{{\bf k}}-{{\bf q}}}g_0({{\bf q}},\omega)U_{{{\bf q}}-{{\bf k}}}\\
&=n_I\left({3\delta\mu-\Lambda_{\rm so}\tau^z\sigma^z s^z\over \Lambda}\right)^2{\omega\over 4\pi}\left\{\ln{\omega^2\over\Lambda^2}-i\pi\rm{sgn}(\omega)\right\}, \nonumber\end{aligned}$$ where $\Lambda=v/\sqrt{S_0}\simeq t$ is the high-energy cutoff for Dirac fermions. For the relevant frequencies $\omega\simeq \Delta_{\rm so}$ we observe that $\Sigma^{(2)}$ represents a small correction to $\Sigma^{(1)}$ as long as $n_I(\delta\mu/\Lambda)^2,n_I(\Lambda_{\rm so}/\Lambda)^2\ll 1$, which we expect to be always true. Higher terms in the Born expansion will be down by additional powers of these small parameters and can therefore be neglected. We conclude on this basis that random distribution of heavy adatoms will indeed open a gap $\Delta_{\rm so}\simeq n_I\Lambda_{\rm so}$ in the spectrum of Dirac fermions. In addition, the disorder induces quasiparticle lifetime broadening $\Gamma={\rm Im}\Sigma$ already apparent in Eq. (\[sig2\]). We expect the disordered system to remain in the topological phase as long as $\Gamma\lesssim\Delta_{\rm so}$ and the chemical potential stays inside the gap.
The gap predicted to exist in graphene with randomly distributed heavy adatoms should be directly observable by various spectroscopies such as ARPES and STS and in transport measurements. Such an observation alone would provide strong support for the notion of the topological phase but would not constitute a definitive proof. Detection of quantized edge transport would provide definitive evidence but is complicated by the need to position the chemical potential inside the gap. As a plausible alternative to transport measurements we study here quasiparticle interference patterns, observable by FT-STS, which we show contain unique signatures of the SOC origin of the spectral gap.
An FT-STS experiment[@crommie1; @davis1] probes the local density of states, $n({{\bf r}},\omega)$, at a large number of real-space locations ${{\bf r}}$ on the sample surface. The spatial Fourier transform of this signal $n({{\bf q}},\omega)$, referred to as FT-LDOS, can be related to the full electron propagator $G({{\bf r}},{{\bf r}}';\omega)$ as $$\label{n1}
n({{\bf q}},\omega)=-{1\over \pi}\Im\int d^2r e^{-i{{\bf r}}\cdot{{\bf q}}}{{\rm Tr}}[G({{\bf r}},{{\bf r}};\omega)].$$ Here the trace is taken over spin and orbital quantum numbers and $\Im$ denotes the strength of the branch cut across the real frequency axis $\Im f(\omega)\equiv [f(\omega+i\delta)-f(\omega-i\delta)]/2i$. In the limit of weak random potential, the interesting ${{\bf q}}$-dependent part of the FT-STS signal can be expressed in a simple factorized form,[@capriotti1] $$\begin{aligned}
\delta n({{\bf q}},\omega) &=&-{1\over \pi} \rho_{{\bf q}}{\Im} [\Lambda({{\bf q}},\omega)],\label{born1} \\
\Lambda({{\bf q}},\omega) &=&\sum_{{\bf k}}{{\rm Tr}}[ G_0({{\bf k}},\omega)U_{{\bf q}}G_0({{\bf k}}-{{\bf q}},\omega)],
\label{lam1}\end{aligned}$$ where $G_0({{\bf k}},\omega)$ is the electron propagator in the absence of disorder. Since $\rho_{{\bf q}}$ is the Fourier transform of a random potential one expects it to be a featureless function of ${{\bf q}}$. $\Lambda({{\bf q}},\omega)$, on the other hand, represents the response of the underlying [*clean*]{} system and contains, in general, prominent features as a function of ${{\bf q}}$ that can be used to study its properties.
Compared to the standard theoretical treatment[@capriotti1] of FT-LDOS, where disorder can be neatly separated from the underlying ‘clean’ system, our problem presents a complication in that the adatoms provide both the disorder and the spectral gap that we would like to probe. To address this complication we follow a two-pronged strategy. First, we use an analytical approach in which we focus on the low-energy theory (\[h0\]) and take the first-order disorder-averaged Hamiltonian $h_{{\bf k}}^{(1)}$ to describe the underlying clean system. We then assume that residual disorder, not contained in the first-order Born approximation, plus any disorder not related to adatoms (e.g. from the substrate), is sufficiently weak to permit the use of Eq. (\[born1\]) to calculate the interference pattern. Second, to confirm the validity of this approximate analytical treatment, we consider the full lattice Hamiltonian (\[h1\]) with realistic parameters. We perform exact numerical diagonalizations on finite clusters for specific random adatom configurations and compute the FT-STS response with no approximations directly from Eq. (\[n1\]).
![Grayscale plots of $|\Im \Lambda^{\tau\tau'}({{\bf q}},\omega)|$ from Eqs. (\[Intra2\]) and (\[Inter2\]). Top panels a) and b) show the intravalley and intervalley $\omega$-$q_x$ maps for pristine graphene ($\Delta_{\rm so}=0$). Bottom panels c) and d) show the same maps for $\Delta_{\rm so}=1$. Panels a’), b’), c’), and d’) display transverse sections in the $q_x$-$q_y$ plane of the corresponding plots for $\omega=1.5$. We use one grayscale for all intravalley features and another for the intervalley plots.[]{data-label="fig1"}](./Fig_Analytic.pdf)
The analytical approach consists of evaluating the momentum sum in Eq. (\[lam1\]) with $G_0({{\bf k}},\omega)=[\omega+i\delta-h_{{\bf k}}^{(1)}]^{-1}$ in the low energy approximation. If $|q| \ll a^{-1}$ only scattering within the same valley contributes to the sum, whereas scattering from one valley to another appears for ${{\bf q}}$ close to the corners of the Brillouin zone. To calculate $\Lambda^{\tau\tau'}({{\bf q}},\omega)$ we use the unperturbed one-particle Green’s function $G_0({{\bf k}},\omega)=(\omega+i\delta-h_{{\bf k}}-\Delta_{\rm so}\tau \sigma^z s^z)^{-1}$ where we have subsumed the shift $-n_I3\delta\mu$ into the bulk chemical potential and $\tau=\pm 1$ is the valley index. We assume here for simplicity that the disorder potential is non-magnetic and slowly varying on the lattice spacing scale such that $U_{{\bf q}}=u_0\id$ in Eq. (\[lam1\]).
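The structure of the unperturbed propagator can be illustrated with a minimal numerical sketch of a single (valley, spin) sector of the gapped Dirac Hamiltonian; the parameter values below are arbitrary and chosen only for illustration:

```python
import numpy as np

# Pauli matrices acting on the sublattice (pseudospin) space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h1(kx, ky, v=1.0, delta_so=0.3, tau=+1, spin=+1):
    """One (valley tau, spin s) sector of the low-energy Hamiltonian:
    a massive Dirac fermion with SOC mass tau*s*Delta_so."""
    return v * (kx * sx + ky * sy) + delta_so * tau * spin * sz

def g0(kx, ky, omega, eta=1e-3, **kw):
    """Retarded propagator G0 = (omega + i*eta - h)^(-1)."""
    return np.linalg.inv((omega + 1j * eta) * np.eye(2) - h1(kx, ky, **kw))

# The poles of G0 sit at the gapped dispersion +/- sqrt(v^2 k^2 + Delta_so^2)
kx, ky = 0.4, 0.3
ev = np.linalg.eigvalsh(h1(kx, ky))
print(ev)  # -> approx [-0.583, 0.583] = +/- sqrt(0.25 + 0.09)
```

The momentum sum in Eq. (\[lam1\]) then amounts to accumulating `Tr[g0(k) @ U @ g0(k-q)]` over a grid of ${\bf k}$ points.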
For the intravalley term, switching to Matsubara frequencies $i \omega_n$, we obtain $$\begin{gathered}
\label{Intra1}
\Lambda^{++}({{\bf q}},i\omega_n)= 8 \sum \limits_k \frac{(i \omega_n)^2+\Delta_{\rm so}^2+ v^2 {{\bf k}}({{\bf k}}- {{\bf q}}) }{D}, \\
\nonumber
D = \left( \omega_n^2 + \Delta_{\rm so}^2 + v^2 {{\bf k}}^2 \right) \left( \omega_n^2 + \Delta_{\rm so}^2 + v^2 ({{\bf k}}-{{\bf q}})^2 \right). \end{gathered}$$ Integrals of this type can be computed in a similar way as for pristine graphene[@tami3] by means of Feynman parametrization[@peskin; @tami1]. We find $$\label{Intra2}
\Lambda^{++}= \frac{2S}{\pi v^2} \left[ \ln \left(\frac{\Lambda^2}{\omega_n^2+\Delta_{\rm so}^2} \right) + 2 g(z) - \frac{8 \Delta_{\rm so}^2}{v^2 q^2}f(z) \right],$$ where $z=4 [(i \omega_n)^2 - \Delta_{\rm so}^2]/{v^2 q^2}$ and we define the functions $f(z)=\frac{1}{\sqrt{z-1}}\arctan \bigl( \frac{1}{\sqrt{z-1}} \bigr) $ and $g(z)=(z-1)f(z)$. We emphasize that $f(z)$ has a singularity at $z=1$ whereas $g(z)$ does not. Therefore, in the absence of SOC, the FT-LDOS has no singularities in the intravalley response[@tami3; @bena1]. However, the term proportional to $\Delta_{\rm so}^2$ is singular when $z=1$, or equivalently when $\varepsilon(q/2)=\omega$, where $\varepsilon(k)=\pm\sqrt{v^2k^2+\Delta_{\rm so}^2}$ is the dispersion relation of $h^{(1)}$. These singularities arise from elastic backscattering terms in the sum (\[lam1\]) when ${{\bf q}}=2{{\bf k}}$ and $\omega=\varepsilon({{\bf k}})=\varepsilon({{\bf k}}-{{\bf q}})$. Pseudospin chirality conservation prohibits this intravalley backscattering in pristine graphene because incoming and outgoing quasiparticles have opposite pseudospin directions.[@shon; @mallet1] In the presence of the SOC mass term $\Delta_{\rm so}$, however, chirality conservation is broken and intravalley backscattering close to the gapped region is allowed.
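The contrasting behavior of $f(z)$ and $g(z)$ near $z=1$ is easy to verify numerically. The sketch below is restricted to $z>1$, where the expressions above apply directly (for $z<1$ the square roots continue to imaginary arguments and the arctan becomes an arctanh):

```python
import numpy as np

def f(z):
    # f(z) = arctan(1/sqrt(z-1)) / sqrt(z-1), valid for z > 1
    return np.arctan(1.0 / np.sqrt(z - 1.0)) / np.sqrt(z - 1.0)

def g(z):
    # g(z) = (z-1) f(z)
    return (z - 1.0) * f(z)

print(f(2.0))            # -> pi/4 ~ 0.785
eps = np.array([1e-2, 1e-4, 1e-6])
print(f(1.0 + eps))      # grows like (pi/2)/sqrt(z-1): singular at z = 1
print(g(1.0 + eps))      # tends to 0: g is regular at z = 1
```

This confirms that only the $\Delta_{\rm so}^2 f(z)$ term can produce the backscattering singularity in the intravalley response.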
The intervalley component for $|{{\bf q}}-{{\bf K}}| \ll a^{-1}$ is obtained from Eq. (\[lam1\]) using $\tau=+1$ and $\tau'=-1$ for the left and right $G_0$ term, respectively. We find $$\label{Inter2}
\Lambda^{+-}({{\bf q}},i\omega_n)= \frac{S}{\pi v^2} \left[2 \frac{q_x^2}{q^2}\left(1-zf(z)\right) -1 \right].$$ Here, the surface $z=1$ is singular even without SOC, but the amplitude is angle dependent. Singularities arise from scattering of quasiparticles from one valley to the other, but here the overlap of incoming and outgoing quasiparticles’ pseudospins depends on $\mathbf q$ direction.
In Fig. \[fig1\] we plot the FT-STS signal $|\Im[\Lambda^{\tau\tau'}]|$ based on Eqs. (\[Intra2\]) and (\[Inter2\]). Without SOC, the intravalley signal is non-singular and barely visible whereas a linear dispersion with slope $v/2$ appears in the intervalley signal. When the SOC is present, we see a qualitative change in the maps. Now a parabolic dispersion is clearly visible both in the intra- and the intervalley FT-LDOS with a gap $2\Delta_{\rm so}$ separating the two bands. We have also computed numerically $\Lambda({{\bf q}},i \omega)$ from Eq. (\[lam1\]) away from the low energy approximation of $h_0$, and checked that the characteristic features described above remain unchanged as long as $\Delta_{\rm so} \lesssim t$. Finally, a rapidly oscillating disorder potential might have different amplitudes on $A$ and $B$ sublattices such that $U_{{\bf q}}= u_0\id+ \alpha_{{\bf q}}\sigma^z$ in Eq. (\[lam1\]). One can check, however, that the $\sigma^z$ term does not contribute to the intravalley response and affects only the amplitude of the singularities in the intervalley term, in such a way that our above statements remain true.
![Numerical computation of $|n({{\bf q}},\omega)|$ for the lattice model of Eq. (\[h1\]) on an $80\times80$ periodic cluster with $\lambda_{\rm so}=0.04t$ and $\delta\mu=0.2t$. Panels a) and b) show closeups of the intravalley and intervalley FT-LDOS, respectively. Panel c) shows the spectral function $A({{\bf q}},\omega)$ and panel d) the total density of states $n(\omega)$.[]{data-label="fig2"}](./Fig_Numerics.pdf)
Even though our computations above were performed for an averaged adatom distribution, we believe that the characteristic signal of the topological phase can be observed in FT-STS experiments. In order to support this claim, we carried out exact numerical simulations based on the lattice model of Eq. (\[h1\]) for specific disorder configurations. These computations have the advantage of not relying on the weak disorder or low energy approximations. In addition, no average over disorder configurations is performed before computing the FT-STS signal, just like in real experiments. The FT-LDOS is evaluated from Eq. (\[n1\]) which can be manipulated into the more convenient expression $$n({{\bf r}},\omega)=-{1\over \pi}\Im \sum \limits_i \frac{|\Psi_i({{\bf r}})|^2}{\omega+i\delta-E_i},$$ where $\Psi_i({{\bf r}})$ and $E_i$ are the eigenvectors and eigenvalues of our one-body Hamiltonian (\[h1\]) computed by means of exact numerical diagonalization. In Fig. \[fig2\], we present our results computed on clusters of $80\times80$ unit cells for parameters $\lambda_{\rm so}=0.04t$ and $\delta\mu=0.2t$, close to realistic values. We consider here an adatom coverage of $n_I=0.2$ and an uncorrelated random potential $w_r\in [-0.04t, 0.04t]$. The latter has in fact little effect because it remains much smaller than the disorder induced by the adatoms, whose variance is about $3 n_I \delta\mu^2$. In order to achieve better resolution, we show the FT-LDOS and the spectral function signal as angular averages over circular regions around the ${{\bf q}}=0$ or ${{\bf q}}={{\bf K}}$ points. In addition, we average each quantity over 10 independent realizations of disorder.
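The eigen-decomposition formula above can be sketched for a toy Hamiltonian. The random symmetric matrix below merely stands in for Eq. (\[h1\]) on a finite cluster; the sanity check is that summing the Lorentzian-broadened LDOS over all sites and energies recovers the total number of states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-body Hamiltonian on N sites (a stand-in for Eq. (h1) on a cluster)
N = 64
H = rng.normal(size=(N, N))
H = (H + H.T) / 2.0
E, Psi = np.linalg.eigh(H)          # columns of Psi are the eigenvectors

def ldos(r, omega, delta=0.05):
    """n(r, w) = -(1/pi) Im sum_i |Psi_i(r)|^2 / (w + i*delta - E_i):
    a Lorentzian-broadened local density of states."""
    return (-1.0 / np.pi) * np.imag(
        np.sum(np.abs(Psi[r, :]) ** 2 / (omega + 1j * delta - E))
    )

# Sum over sites and integrate over energy: should recover ~N states
w = np.linspace(E.min() - 2.0, E.max() + 2.0, 4000)
dw = w[1] - w[0]
total = dw * sum(ldos(r, x) for x in w for r in range(N))
print(total)  # -> close to N = 64
```

In the actual computation the spatial Fourier transform of `ldos` over the cluster then yields the FT-LDOS of Eq. (\[n1\]).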
Fig. \[fig2\]a,b shows a clear energy gap in the intra- and intervalley components of FT-LDOS. This gap is somewhat smaller than $2\Delta_{\rm so}=6\sqrt{3} n_I \lambda_{\rm so} \approx 0.083 t$ obtained in our approximate analytical calculation, but remains open for each of our ten disorder configurations. Moreover, this gap also appears in the spectral function and in the total density of states. This indicates that a random distribution of adatoms on the graphene sheet not only opens a mobility gap as demonstrated by Weeks and coworkers[@weeks1], but produces a full spectral gap observable through ARPES and FT-STS experiments. One can also perceive the parabolic electron dispersion in the most intense regions of FT-LDOS plots, even if the strong disorder of the $\delta\mu$ term and finite size effects broaden the singularity to some extent. The gap does not close when we vary continuously the disorder strength $w_r$ from zero to its final value and vary the adatom concentration from $n_I=1.0$ to its value $n_I=0.2$. This demonstrates that the system is in the same topological phase as the Kane-Mele model[@kane1] and that the spectral gap has topological origin.
In conclusion, our approximate analytical and exact numerical calculations based on the graphene/adatom model Eq. (\[h1\]) provide strong evidence for substantial SOC-induced spectral gap opening at the Dirac points in the physically relevant regime of randomly distributed adatoms. Such a gap should be observable in various spectroscopies such as ARPES and STS. In addition, Fourier transform STS should be able to discern unique patterns characteristic of SOC (Fig. \[fig1\]) in the intravalley channel where the signal in pristine graphene is absent due to symmetry considerations.
The authors are indebted to C. Ast, S.A. Burke, A. Damascelli, J.A. Folk, J.E. Hoffman, A. Khademi and B.M. Ludbrook for insightful discussions. This work was supported by NSERC and CIfAR. P. S. thanks the Erasmus Mundus program TEE which made this collaboration possible.
[10]{}
J. E. Moore, [Nature (London) [**464**]{}, 194 (2010)](http://dx.doi.org/10.1038/nature08916).
M.Z. Hasan, C.L. Kane, [Rev. Mod. Phys. [**82**]{}, 3045 (2010)](http://dx.doi.org/10.1103/RevModPhys.82.3045).
X.-L. Qi, S.-C. Zhang, [Rev. Mod. Phys. [**83**]{}, 1057 (2011)](http://dx.doi.org/10.1103/RevModPhys.83.1057).
[*Topological Insulators*]{}, edited by M. Franz and L. Molenkamp ([Elsevier, Oxford, England, 2013](https://www.elsevier.com/books/topological-insulators/franz/978-0-444-63314-9)).
M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, [Science [**318**]{}, 766 (2007)](http://dx.doi.org/10.1126/science.1148047).
I. Knez, R.-R. Du, and G. Sullivan, [Phys. Rev. Lett., [**107**]{}, 136603 (2011)](http://dx.doi.org/10.1103/PhysRevLett.107.136603).
C. L. Kane and E. J. Mele, [Phys. Rev. Lett., [**95**]{}, 226801 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.226801).
C. Weeks, J. Hu, J. Alicea, M. Franz, and R. Wu, [Phys. Rev. X, [**1**]{}, 021001 (2011)](http://dx.doi.org/10.1103/PhysRevX.1.021001).
J. Hu, J. Alicea, R. Wu, M. Franz, [Phys. Rev. Lett. [**109**]{}, 266801 (2012)](http://dx.doi.org/10.1103/PhysRevLett.109.266801).
A. Khademi and J. Folk (unpublished).
A. Macdonald and S.A. Burke (unpublished).
B.M. Ludbrook and A. Damascelli (unpublished).
O. Shevtsov, P. Carmier, C. Groth, X. Waintal, D. Carpentier, [Phys. Rev. B [**85**]{}, 245441 (2012)](http://dx.doi.org/10.1103/PhysRevB.85.245441).
H. Jiang, Z. Qiao, H. Liu, J. Shi, Q. Niu, [Phys. Rev. Lett. [**109**]{}, 116803 (2012)](http://dx.doi.org/10.1103/PhysRevLett.109.116803).
N.H. Shon, and T. Ando, [J. Phys. Soc. Jpn. [**67**]{}, 2421 (1998)](http://dx.doi.org/10.1143/JPSJ.67.2421).
See e.g. [*Green’s Functions for Solid State Physicists,*]{} S. Doniach, E. H. Sondheimer ([Imperial College Press, Reading, MA, 1998](http://www.worldscientific.com/worldscibooks/10.1142/p067)).
M.F. Crommie, C.P. Lutz, D.M. Eigler, [Nature (London)[**363**]{}, 524 (1993)](http://dx.doi.org/10.1038/363524a0).
J. Lee, K. Fujita, A.R. Schmidt, C.K. Kim, H. Eisaki, S. Uchida, J.C. Davis, [Science [**325**]{}, 1099 (2009)](http://dx.doi.org/10.1126/science.1176369), and references therein.
L. Capriotti, D.J. Scalapino and R.D. Sedgewick, [Phys. Rev. B [**68**]{}, 014508 (2003)](http://dx.doi.org/10.1103/PhysRevB.68.014508).
T. Pereg-Barnea and A.H. MacDonald, [Phys. Rev. B [**78**]{}, 014201 (2008)](http://dx.doi.org/10.1103/PhysRevB.78.014201).
See e.g. M. E. Peskin and D. V. Schroeder, [*An Introduction to Quantum Field Theory*]{} (Addison-Wesley,Cambridge,MA,1995).
T. Pereg-Barnea and M. Franz, [Phys. Rev. B [**68**]{}, 180506(R) (2003)](http://dx.doi.org/10.1103/PhysRevB.68.180506); [Int. J. Mod. Phys. B [**19**]{}, 731 (2005)](http://dx.doi.org/10.1142/S0217979205026658).
C. Bena, [Phys. Rev. Lett. [**100**]{}, 076601 (2008)](http://dx.doi.org/10.1103/PhysRevLett.100.076601); [Phys. Rev. B [**79**]{}, 125427 (2009)](http://dx.doi.org/10.1103/PhysRevB.79.125427).
P. Mallet, I. Brihuega, S. Bose, M. M. Ugeda, J. M. Gómez-Rodríguez, K. Kern, J. Y. Veuillen, [Phys. Rev. B [**86**]{}, 045444 (2012)](http://dx.doi.org/10.1103/PhysRevB.86.045444).
---
address:
- 'Department of Mathematics, Florida Institute of Technology, Melbourne, FL 32901'
- 'Department of Mathematics, Florida Institute of Technology, Melbourne, FL 32901'
author:
- 'Ugur G. Abdulla'
- Jian Du
- Adam Prinkey
- Chloe Ondracek
- Suneil Parimoo
bibliography:
- 'references.bib'
title: Evolution of Interfaces for the Nonlinear Double Degenerate Parabolic Equation of Turbulent Filtration with Absorption
---
[^1]
Acknowledgement {#acknowledgement .unnumbered}
===============
This research was funded by National Science Foundation: grant \#1359074–REU Site: Partial Differential Equations and Dynamical Systems at Florida Institute of Technology (Principal Investigator Professor Ugur G. Abdulla).
[^1]: Department of Mathematics, Florida Institute of Technology, Melbourne, FL 32901
---
abstract: 'We report 1.3$\,$cm and $6\,$cm continuum observations toward the massive proto-stellar candidate G11.11$-$0.12P1 using the Karl G. Jansky Very Large Array (VLA). We detect a string of four unresolved radio continuum sources coincident with the mid-IR source in G11P1. The continuum sources have positive spectral indices consistent with a thermal (free-free) ionized jet. The most likely origin of the ionized gas is shocks due to the interaction of a stellar wind with the surrounding high-density material. We also present NIR United Kingdom Infrared Telescope (UKIRT) archival data which show an extended structure detected only at K-band ($2.2~\mu$m), which is oriented perpendicular to the jet, and that may be scattered light from a circumstellar disk around the massive protostar. Our observations plus the UKIRT archival data thus provide new evidence that a disk/jet system is present in the massive protostellar candidate located in the G11.11$-$0.12P1 core.'
author:
- 'V. Rosero$^{1}$, P. Hofner$^{1,}$, M. McCoy$^{1}$, S. Kurtz$^{2}$, K. M. Menten$^{3}$, F. Wyrowski$^{3}$, E. D. Araya$^{4}$, L. Loinard$^{2}$, C. Carrasco-González$^{2}$, L. F. Rodríguez$^{2}$, R. Cesaroni$^{5}$, S. P. Ellingsen$^{6}$'
title: 'Weak and Compact Radio Emission in Early Massive Star Formation Regions: An Ionized Jet Toward G11.11$-$0.12P1'
---
Introduction
============
The role of jets in massive star formation is not yet fully understood. Unlike their low-mass counterparts, the current sample of known massive young stellar objects (MYSOs) associated with collimated jets is very small (see @2010ApJ...725..734G for a summary). MYSOs are difficult to detect since they are located at large distances, tend to form in complicated cluster environments, and evolve on a much shorter timescale than low-mass stars. It is important, therefore, to identify more candidate massive stars in early evolutionary stages to ascertain whether jets are present, and if so, to study their role during the formation process. Infrared dark clouds (IRDCs) are potentially a good place to find molecular cores which might harbor the earliest stages of massive star formation (e.g. @2000ApJ...543L.157C). IRDCs are cold (T$<$ 25 K), high column density ($\sim$ 10$^{23}$ $-$ 10$^{25}$ cm$^{-2}$) molecular condensations, with high gas densities ($>10^{5}$ cm$^{-3}$) and a large amount of extinction (A$_{\textrm{v}}\sim $ 200 mag, @2014ApJ...782L..30B), which causes them to appear as dark silhouettes against the Galactic mid-infrared background [@1998ApJ...508..721C; @2005IAUS..227...23M; @2006ApJ...641..389R].
G11.11$-$0.12P1 (hereafter G11P1) is a compact dust continuum source located in the filamentary IRDC G11.11$-$0.12 at a kinematic distance of 3.6 kpc (@1998ApJ...508..721C [@2000ApJ...543L.157C]; @2003ApJ...588L..37J). Figure \[f1\] shows a *Spitzer* IRAC GLIMPSE three-color image of the G11.11$-$0.12 IRDC. The right panel shows the G11P1 core, along with our VLA 6 cm image (see below). Several indicators show that G11P1 is an active star forming region: [*i)*]{} compact sub-mm dust continuum (450 $\mu$m and 850 $\mu$m; @2000ApJ...543L.157C), [*ii)*]{} point-like mid-IR emission (8 $\mu$m, @2000ApJ...543L.157C; 24 $\mu$m), [*iii)*]{} H$_{2}$O and class II CH$_{3}$OH maser emission (hereafter P06), and [*iv)*]{} outflow indicators such as 4.5 $\mu$m excess emission [@2008AJ....136.2391C].
The luminosity of G11P1 estimated from a spectral energy distribution (SED) model is $\sim$1200 L$_{\odot}$ (P06). The SED peaks in the far-IR but also has a mid-IR component that P06 attribute to an accretion disk. G11P1 was subsequently observed with *Herschel* PACS at 70 $\mu$m, 100 $\mu$m and 160 $\mu$m. The resulting SED model suggests a dust temperature of 24 K and a core mass of 240 M$_{\odot}$, corresponding to a luminosity of 1346 L$_{\odot}$, the largest of the sources in the G11.11$-$0.12 IRDC. G11P1 has also been detected in the dense gas tracers NH$_{3}$ (P06) and C$^{34}$S, as well as in several thermally excited (i.e. non-maser) transitions of CH$_3$OH.
P06 detected a strong (22 Jy for the brightest peak) class II methanol maser at 6.7 GHz in G11P1 using the Australian Telescope Compact Array (ATCA). They reported a velocity structure with a linear trend, which they interpreted as a disk around a highly embedded massive protostar. In addition, a 2MASS NIR emission structure detected 2$^{\prime\prime}$ from the maser supports the circumstellar disk scenario (P06; more discussion in this regard is given in §\[near\_ir\]). P06 also detected a weak ($\sim$0.3 Jy) water maser at 22.2 GHz using the VLA in the D configuration. In this case the velocity structure of the maser spot is not spatially resolved; the water maser is slightly offset ($\sim$ 1$^{\prime\prime}$) from the methanol maser position. Both maser species are indicators of the earliest stages of massive star formation, and in particular the 6.7 GHz CH$_3$OH maser has only been found in regions where massive stars form.
In a recent paper @2014MNRAS.439.3275W presented SMA and VLA continuum and molecular line observations toward G$11.11-0.12$, which showed that the P1 core contains 6 condensations with masses in excess of the thermal Jeans masses. They also reported the discovery of an East-West outflow, which is most clearly seen in the SiO(5–4) line.
None of the previous observations were sufficiently sensitive to detect the cm continuum towards G11P1. Our new high sensitivity VLA observations presented in this paper show the presence of cm continuum sources associated with the mid-IR point source. All of the features discussed above make G11P1 a strong candidate for an embedded massive young stellar object (MYSO) in an early stage of formation, and likely hosting an outflow/disk system.
In this paper, we present sensitive sub-arcsecond resolution continuum observations of G11P1 at 6 cm and 1.3 cm using the Karl G. Jansky Very Large Array (VLA)[^1]. These observations were made as part of a larger survey to search for weak, compact radio emission in young, high-mass star forming regions. The results of the survey will be presented elsewhere (Rosero et al. in preparation); here we present the results for G11P1. We describe our VLA observations and data reduction in §\[data\_red\], in §\[results\] we present our observational results of the radio continuum data, in §\[analysis\] we present an analysis of our cm detections and of the NIR emission, in §\[discussion\] we discuss the nature of the massive protostar in G11P1, and in §\[conclusions\] we summarize our findings.

Observations and Data Reduction {#data_red}
===============================
VLA continuum observations (project code 10B-124) at 6 and 1.3 cm were obtained for the core region G11P1. The pointing center was RA(J2000)=$18^{h}10^{m}28{\rlap.}^{s}40$, Dec(J2000)$=-$$19^{\circ}22^{\prime}29{\rlap.}^{\prime\prime}0 $. The observations were made in different configurations — A-configuration for 6 cm and B-configuration for 1.3 cm — to obtain similar angular resolution at the two frequencies. Following is a detailed description of the observations.
6 cm Observations
-----------------
The observations were made in the A-configuration on 2011 July 27 covering two 1 GHz wide basebands centered at 4.9 and 7.4 GHz, respectively. Each band was divided into 8$\times$128 MHz spectral windows (SPWs). Therefore, the data were recorded in 16 unique SPWs, each of these with 64 channels (resolution $=2$ MHz), i.e. a total bandwidth of 2048 MHz. The SPWs were configured to avoid the strong methanol maser emission at 6.7 GHz. For flux calibration we observed 3C286 and the phase calibrator was J1820$-$2528. Alternating observations between G11P1 and the phase calibrator were made with on-source times of 900 s and 180 s, respectively. The total observing time was $\sim\,$1 hr, of which $\sim\,$40 minutes were on-source. All 27 antennas were available after flagging.\
The data were processed using NRAO’s Common Astronomy Software Applications (CASA[^2]). Eight channels at the edges of each baseband were flagged due to substantial roll-off (and therefore loss of sensitivity). In addition, a large amount of radio frequency interference (RFI) was flagged throughout the observing band (approximately 20$\%$ of the total data). The bandpass solution was formed using 3C286. This solution was applied when solving for the complex gains. The flux density for 3C286 was adopted from the Perley–Butler 2010 flux calibration standards, and the derived flux density for the phase calibrator at 6.086 GHz was 1.026 $\pm$ 0.002 Jy with spectral index of $-$0.29. The gain solutions were then applied to the target source G11P1. The images were made using Briggs ${\tt ROBUST}=0.5$ weighting. Owing to the low S/N of the detections ($<$ 20), no self-calibration was attempted.
As a consistency check, and to ensure the absence of line contamination or RFI, we imaged and inspected each SPW separately. Each 1 GHz baseband was imaged separately to provide a better estimate of spectral index. Finally, a combined image was made, including all data from both basebands. The synthesized beam of this combined image is $0.49^{\prime\prime} \times 0.27^{\prime\prime}$, position angle PA $=172^{\circ}$, and rms noise $\sim 5\,\mu$Jy beam$^{-1}$.\
1.3 cm Observations
-------------------
The observations were made in the B-configuration on 2011 March 20 covering two 1 GHz wide bands centered at 21 and 25.5 GHz. Each band was divided into 8$\times$128 MHz SPWs. The SPWs were configured to avoid the strong water maser emission at 22 GHz. The same number of SPWs and channels were used as in the 6 cm observations. For flux calibration we observed 3C286 and the phase calibrator was J1820$-$2528. Alternating observations between the target and the phase calibrator source were made with times of 270 and 90 s, respectively. The total on-source time was $\sim\,$42 minutes. After flagging, only 23 antennas were available. Pointing corrections were obtained separately and applied during the observations.\
The data reduction was done in the same fashion as for the 6 cm observations. The flux density for 3C286 was adopted from the Perley–Butler 2010 flux calibration standards, and the derived flux density for the phase calibrator at 23.186 GHz was 0.91 $\pm$ 0.01 Jy with spectral index of $-$0.57. The images were made using natural weighting. Opacity corrections were applied during calibration.
The absence of line contamination and RFI was confirmed by imaging each SPW separately. As at 6 cm, we imaged each baseband individually (for spectral index) and together (for morphology and improved S/N). The synthesized beam of the combined map is $0.75^{\prime\prime} \times 0.28^{\prime\prime}$, PA $=146.6^{\circ}$, and rms noise $\sim8\,\mu$Jy beam$^{-1}$.\
$
\begin{tabular}{cc}
\hspace{-1. cm}\includegraphics[scale=0.5]{f2_a.eps}&\hspace{-1.0 cm}
\includegraphics[scale=0.5]{f2_b.eps}
\end{tabular}$
Results
=======
We detected radio continuum emission at all observed frequencies. The emission is clearly associated with G11P1 (see Figure 1, right panel). In Figure \[f2\] we show contour plots of G11P1 at 6 and 1.3 cm. Four and three components are detected in the 6 cm and 1.3 cm maps, respectively. As indicated in Figure \[f2\], we refer to these components, from east to west, as A, B (bright central source), C, and D. The components lie in a linear structure with a PA of $\sim55^{\circ}$. The outermost sources (A and D) are separated by an angular distance of $\sim2.5^{\prime \prime}$ (9000 AU at the distance of 3.6 kpc). Component D is not detected at 1.3 cm.\
Table \[tab2\] lists the peak positions and flux densities of components A – D, as determined by gaussian fits using the IMFIT CASA routine. The astrometric accuracy of the VLA is better than 0.1$^{\prime\prime}$. Components A and B have consistent peak positions, but component C appears slightly offset between $1.3$ and $6\,$cm. Such an offset can occur if the continuum optical depth in the source varies strongly between the two observing frequencies; however since the offset is smaller than our resolution element, for simplicity we will treat the two emission peaks as a single source, component C. The radio components are mostly unresolved, implying upper limits on the size of the emitting regions of about $1800\,$AU. This small size of the emitting regions is also reflected in the low measured brightness temperatures within a synthesized beam, which are $\leq 17\,$K for all components.
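The quoted brightness temperatures follow from the Rayleigh–Jeans relation $T_B=S_\nu c^2/(2k\nu^2\Omega_{\rm beam})$ with $\Omega_{\rm beam}=\pi\theta_{\rm maj}\theta_{\rm min}/(4\ln 2)$. A sketch of this conversion, applying (as a simplification) the combined 6 cm image beam ($0.49^{\prime\prime}\times0.27^{\prime\prime}$) to the 4.9 GHz flux density of component A:

```python
import math

def t_b(s_mujy, nu_ghz, bmaj_as, bmin_as):
    """Rayleigh-Jeans brightness temperature within a Gaussian beam:
    T_B = S c^2 / (2 k nu^2 Omega), Omega = pi*bmaj*bmin/(4 ln 2)."""
    c = 2.99792458e8          # m/s
    k = 1.380649e-23          # J/K
    s = s_mujy * 1e-32        # microJy -> W m^-2 Hz^-1
    arcsec = math.pi / 180.0 / 3600.0
    omega = math.pi * (bmaj_as * arcsec) * (bmin_as * arcsec) / (4 * math.log(2))
    return s * c ** 2 / (2 * k * (nu_ghz * 1e9) ** 2 * omega)

# Component A at 4.9 GHz (20 muJy, Table 1) in the 0.49" x 0.27" beam
print(t_b(20.0, 4.9, 0.49, 0.27))  # -> ~7.7 K, below the ~17 K upper bound
```

Values of order 10 K or below, as found here, are far too small for optically thick photoionized gas, consistent with the compact, unresolved nature of the sources.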
Figure \[f3\] shows the fluxes and power law fits of the form $S_{\nu} \propto \nu^{\alpha}$, where $\alpha$ is the spectral index and $\nu$ is the frequency for each detected component in G11P1. Components A and C have a rising spectral index indicative of thermal emission from ionized gas from a stratified medium, and component B has a flat behavior which is consistent with emission from optically thin ionized gas. For component D the 1.3 cm detection limits together with the $6\,$cm data are consistent with a flat spectrum, but a falling spectral index, as expected for non-thermal emission, cannot be excluded.
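The power-law fits of Figure \[f3\] can be reproduced schematically with an unweighted least-squares fit in log-log space. The sketch below uses the component A flux densities from Table \[tab2\] and, as a simplification, ignores the quoted flux uncertainties (a proper fit would weight each point by its error):

```python
import numpy as np

# Component A: frequencies (GHz) and flux densities (microJy) from Table 1
nu = np.array([4.9, 7.4, 20.9, 25.5])
s = np.array([20.0, 38.0, 41.8, 78.0])

# Fit S_nu ~ nu^alpha: a straight line in log-log space,
# ln S = alpha * ln(nu) + ln(S_0)
alpha, ln_s0 = np.polyfit(np.log(nu), np.log(s), 1)
print(round(alpha, 2))  # -> ~0.61, consistent with the +0.6(0.2) in Table 1
```

The same procedure applied to components B and C reproduces the flat and rising indices quoted in the table.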
Our measured flux densities are consistent with the non-detection at $8.64\,$GHz by P06. Also, extrapolating our flux densities to 3 mm with a spectral index of 0.6 results in values far below the 12 mJy reported by P06, thus confirming their result that the 3 mm emission is likely due to dust.
[c c c c c c]{}
Component & $\nu$ (GHz) & $\alpha$(J2000) & $\delta$(J2000) & $S_{\nu}$ ($\mu$Jy) & Spectral Index\
A & 4.9 & 18 10 28.39 & $-$19 22 29.8 & 20.0(4.0) & $+$0.6(0.2)\
& 7.4 & 18 10 28.40 & $-$19 22 30.1 & 38.0(12.0) &\
& 20.9 & 18 10 28.40 & $-$19 22 29.9 & 41.8(7.1) &\
& 25.5 & 18 10 28.40 & $-$19 22 30.1 & 78.0(14.0) &\
B & 4.9 & 18 10 28.33 & $-$19 22 30.5 & 97.3(8.1) & $+$0.1(0.2)\
& 7.4 & 18 10 28.33 & $-$19 22 30.5 & 64.4(6.7) &\
& 20.9 &18 10 28.33 & $-$19 22 30.7 & 96.0(10.0) &\
& 25.5 &18 10 28.34 & $-$19 22 30.7 & 105.0(27.0) &\
C & 4.9 & 18 10 28.29 & $-$19 22 30.7 & 53.3(7.7) & $+$0.6(0.2)\
& 7.4 & 18 10 28.29 & $-$19 22 30.8 & 71.0(20.0) &\
& 20.9 &18 10 28.29 & $-$19 22 30.9 & 109.0(20.0) &\
& 25.5 &18 10 28.28& $-$19 22 30.7 & 160.0(39.0) &\
D & 4.9 &18 10 28.27 &$-$19 22 31.1 & 27.0(5.8) & $<$0.2\
& 7.4 &18 10 28.26 & $-$19 22 31.2 & 21.7(6.3) &\
& 20.9 & & & $<33\tablenotemark{a} $ &\
& 25.5 & & & $<36\tablenotemark{a} $ &\

Analysis
========
The above results clearly show that a string of radio sources is associated with the massive proto-stellar object in G11P1. In this section we will discuss the physical nature of the emission, and present archival infrared data.
Radio Continuum {#rad_cont}
---------------
Components B and C appear connected at $1.3\,$cm; however, this bridge of emission is not detected at $6\,$cm. For our analysis we will not consider this bridging structure; thus components B and C are treated as individual sources. First, we might consider that radio components A to D are manifestations of individual massive stars which ionize their surroundings, i.e. ultra or hyper compact HII regions (UCHII or HCHII). Due to the much improved continuum sensitivity of the VLA, it should now be possible to explore photo-ionized regions around stars of spectral type later than B2 throughout the Galaxy. The orientation of the 4 putative stars is approximately along the dark filament, as might be expected for star formation in this environment. A similar alignment of proto-stellar objects along the dark cloud has for instance been observed in the region G28.34+0.06 (e.g. @2011ApJ...735...64W). However, in the latter case the sources have separations of the order of $0.1\,$pc, whereas in G11P1 there are 4 sources within a distance of $9000\,$AU (0.04 pc at the distance of G11P1). Several massive objects can in fact be found at such small and even smaller separations in young clusters (e.g. Orion Trapezium; NGC$\,$2071: @2012ApJ...746...71C). We will thus first consider the formation of 4 massive stars aligned along the filament.
An argument against the hypothesis that the 4 radio sources are ionized by 4 individual stars can be made by considering the implied luminosities. Assuming optically thin free-free emission and neglecting absorption of ionizing photons within the UC/HCHII regions, we calculated the Lyman continuum luminosity for each component using the formulas given in @1994ApJS...91..659K. Using the tabulation in @2005IAUS..227..389C, the corresponding spectral types are approximately B2/B3 for each radio component. Due to the assumptions made in the calculation, these values are lower limits. Such stars have a luminosity of $> 1000\,$L$_{\odot}$, hence for four such stars we would predict a total luminosity of the region of more than $4000\,$L$_{\odot}$, which is much larger than the measured luminosity of the region of about 1200 L$_{\odot}$ (P06) or 1346 L$_{\odot}$. Therefore the hypothesis of 4 individual UC/HCHII regions can be excluded.
Next, we can ask whether external photoionization can explain the 4 radio sources. In this scenario 4 clumps are externally ionized by an unseen massive protostar. The position of the putative accretion disk traced by $6.7\,$GHz methanol masers (P06) lies somewhat offset from the line defined by the 4 radio sources (see Figure \[f4\]). We calculated the necessary flux of ionizing photons, correcting for the ratio of solid angle $\Omega/4\pi$ of the radio sources as seen from the position of the methanol maser source. For the calculation we assumed source sizes of half of the synthesized beam, likely an overestimate, resulting in a lower limit on the corrected ionizing flux. To externally ionize the four components we find that a single star of spectral type B1 or earlier would be required. Such stars have luminosities of $>5000$ L$_{\odot}$, which is also in conflict with the measured luminosity of the region. A calculation placing the star at the peak position of radio continuum source B gives similar results. We conclude that direct photoionization of the cm components by a single massive proto-stellar object is unlikely.
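The solid-angle correction used in this estimate is purely geometric: a clump of radius $R$ at distance $d$ from the star intercepts a fraction $\Omega/4\pi=R^2/4d^2$ of the isotropically emitted ionizing photons, so the required stellar rate exceeds the per-clump rate by $4d^2/R^2$. The numbers below (clump radius of half a beam and a star-clump separation of $\sim1.5^{\prime\prime}$) are hypothetical, chosen only to illustrate the size of this factor at the 3.6 kpc distance of G11P1:

```python
def external_rate_factor(r_clump_au, d_au):
    """Factor by which the stellar ionizing-photon rate must exceed the rate
    intercepted by one clump of radius R at distance d from the star:
    N_star = N_clump * (4*pi / Omega), with Omega = pi R^2 / d^2."""
    return 4.0 * d_au ** 2 / r_clump_au ** 2

# Hypothetical geometry: R ~ 0.2" ~ 720 AU, d ~ 1.5" ~ 5400 AU at 3.6 kpc
print(external_rate_factor(720.0, 5400.0))  # -> 225.0
```

Even a modest geometric dilution of this order pushes the required ionizing star to early B spectral types, which is the essence of the luminosity conflict described above.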
We suggest therefore that the radio continuum emission detected towards G11P1 is produced by shock ionization. This could be the result of either accretion shocks caused by supersonic infall onto an accretion disk, or shocks caused by the interaction of a stellar wind with surrounding molecular core matter. The expected radio continuum emission from accretion shocks has been calculated by @1996ApJ...471L..45N; at the distance of G11P1, and assuming a mass of $8\,$M$_\odot$ (P06), their model predicts a $4.8\,$GHz flux density of below $1\,\mu$Jy for an accretion rate of $10^{-4}\,$M$_\odot\,$yr$^{-1}$. Thus, unless one accepts an unusually large accretion rate, the accretion shock scenario seems to be ruled out. On the other hand, a scenario where a neutral wind driven by the embedded massive protostar shocks against surrounding high-density matter and produces free-free emission (@1987RMxAA..14..595C and references therein) appears more likely.
Before discussing this scenario in more detail below, we note that the above luminosity argument could also be made consistent with the data if we assume that only one of the four radio sources is a UC/HCHII region and the other sources are shock-ionized. The spectral behavior of component B is close to that of an optically thin HII region, hence we have considered this possibility as well. P06 and others have estimated a molecular hydrogen density of $7\times 10^5\,$cm$^{-3}$ and a temperature of $60\,$K for the G11P1 central core. Including also the turbulent pressure of the molecular gas given by the FWHM of the hot NH$_3$ component ($4\,$km$\,$s$^{-1}$), we have calculated the size of a UC/HCHII region around a B3 ZAMS star set by the condition of pressure equilibrium between molecular and ionized gas, using the formulas of @1996ApJ...473L.131X. We obtain a size of about $400\,$AU, which is consistent with the region being unresolved in our observations. Thus, our data do not exclude a UC/HCHII region interpretation for component B (only), plus cm emission from 3 shocked regions. These 3 continuum sources could then be either unresolved separate jets from individual protostars, or several shocks from a single jet, likely caused by episodic matter ejection. We also note that if one adopts the empirical correlation between radio and bolometric luminosity of @2007ApJ...667..329S, an interpretation of the radio sources as four independent lower-mass stars is not excluded by the measured luminosity.
Because of the alignment and orientation of the 4 radio sources with respect to several disk tracers (see below), we favor shock ionization from a single jet as the likely physical scenario for the cm emission. In this picture a massive star in or near component B drives a bipolar jet which causes the observed radio emission when the ejecta interact with the surrounding core matter. Assuming then that the radio emission detected at G11P1 originates from a jet, we use the standard model of @1986ApJ...304..713R for free-free emission of a collimated, ionized flow or wind with constant velocity, temperature and ionization fraction. Reynolds’ model predicts that the observed flux density and the angular size depend on frequency as $S_{\nu} \propto \nu^{1.3-0.7/\epsilon}$ and $\theta_{maj} \propto \nu^{-0.7/\epsilon}$, where $\epsilon$ depends on the geometry of the jet and is the power-law index that describes the dependence of the jet half-width on the distance from the jet origin [@1986ApJ...304..713R]. For the G11P1 component B, the observed dependence of the flux density on frequency gives a value of $\epsilon \sim0.6$, which, within the uncertainties, is in agreement with a collimated ionized jet. The angular size dependence on frequency cannot be determined for any of the components since they are all unresolved at our angular resolution. Using equation 19 from @1986ApJ...304..713R we can make a rough estimate of the mass-loss rate ($\dot{M}$) of the G11P1 B component, assuming parameter values that are typical of jets associated with luminous objects ($v_{wind}$= 700 km s$^{-1}$; $\theta_{0}=1\, $rad; T$_{e}=10^{4}$ K, $i=45^{\circ}$, $x_{0}=0.1$; e.g. @1994ApJ...430L..65R). The estimated mass-loss rate of G11P1 component B observed at 25.5 GHz is $\dot{M} \sim 3 \times 10^{-6}$ M$_{\odot}\,$yr$^{-1}$.
We can obtain an estimate of the momentum rate ($\dot{P}$) by multiplying the mass-loss rate by the typical wind velocity of massive stars, which gives $\dot{P} \sim 2 \times 10^{-3}$ M$_{\odot}\,$yr$^{-1}\,$km$\,$s$^{-1}$. On the other hand, if we assume that component B is a jet produced by shock-induced ionization with a shock efficiency ($\eta$) of $\sim$0.1 and an optical depth of the emission at 25.5 GHz of 0.02, the estimated momentum rate is $\dot{P} \sim 7 \times 10^{-3}$ M$_{\odot}\,$yr$^{-1}\,$km$\,$s$^{-1}$. The fact that $\dot{P}$ estimated from the shock ionization mechanism is $\sim4$ times larger than the one predicted by Reynolds’ model suggests that the 25.5 GHz emission is not completely due to shocks in the jet. However, the jet could ionize itself if the jet velocity is $\sim$1600 km$\,$s$^{-1}$ or the shock efficiency is $\sim$0.4.
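The arithmetic of the two momentum-rate estimates can be checked in a few lines. This is only a consistency sketch; the numerical inputs are the values quoted in the text.

```python
# Values quoted in the text (assumed here for the check):
mdot = 3e-6        # M_sun / yr, mass-loss rate from Reynolds' model at 25.5 GHz
v_wind = 700.0     # km / s, typical wind velocity for massive stars

p_dot_jet = mdot * v_wind        # M_sun yr^-1 km s^-1; ~2e-3, as quoted
p_dot_shock = 7e-3               # M_sun yr^-1 km s^-1, shock-ionization estimate
ratio = p_dot_shock / p_dot_jet  # ~3.3, i.e. the quoted factor of ~4
```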
![6 cm (black) and 1.3 cm (red) contours of the VLA observations toward G11.11$-$0.12P1. The blue filled circle indicates the CH$_{3}$OH maser at RA(J2000)=$18^{h}10^{m}28{\rlap.}^{s}25$, Dec(J2000)$=-$$19^{\circ}22^{\prime}30{\rlap.}^{\prime\prime}45 $ and the blue filled triangle indicates the H$_{2}$O maser at RA(J2000)=$18^{h}10^{m}28{\rlap.}^{s}29$, Dec(J2000)$=-$$19^{\circ}22^{\prime}30{\rlap.}^{\prime\prime}5 $, detected by P06. The black cross corresponds to the position of the IR source SSTGLMC G011.1089-00.1144 detected at 3.6 $\mu$m from the IRAC GLIMPSE images (Figure \[f1\]).[]{data-label="f4"}](f4.eps)
Near-IR Sources {#near_ir}
---------------
Based on their study of the spectral energy distribution of G11P1, P06 proposed the presence of an accretion disk around the central massive young star. From 2MASS data (resolution $\sim$2$^{\prime \prime}$), P06 detected three faint sources in the J and H bands, only one of which was detected in the K band. In their analysis they found that these sources cannot be explained by reddening, and they proposed that those NIR detections are knots of scattered light that escape from the star into an optically thin cone above a circumstellar disk.
We retrieved data from the UKIRT Infrared Deep Sky Survey (UKIDSS) GPS and compared them with the corresponding 2MASS data analyzed and discussed by P06. The UKIDSS project is defined in @2007MNRAS.379.1599L. UKIDSS uses the UKIRT Wide Field Camera (WFCAM). The photometric system is described in @2006MNRAS.367..454H, and the calibration is described in @2009MNRAS.394..675H. The pipeline processing and science archive are described in @2009Icar..203..287I and @2008MNRAS.384..637H. The UKIDSS data are three magnitudes deeper and have higher angular resolution ($\sim0.4 ^{\prime \prime}$) than the 2MASS data. The astrometric accuracy of the UKIDSS data is about 50 mas.
We found a total of six sources associated with the G11P1 core, two of which are only seen at K-band (see Figure \[f5\]). The sources are labeled as UK1 to UK6. Based on their positions in a JHK color-color diagram we found that UK1 and UK2 can be explained as main sequence stars with a visual extinction $\le 7$ mag. Thus, we suggest that these two components are foreground stars. Not much can be said about UK3 since its H-band magnitude is not available. On the other hand, the position of UK4 in the JHK color-color diagram indicates intrinsic IR excess emission and therefore we suggest that UK4 is likely a young star. Thus, the 2MASS sources from P06 associated with UK1, UK2 and UK4 do not appear to be knots of scattered light, but appear to be of stellar nature. On the other hand, components UK5 and UK6 are only detected at K-band which indicates very high extinction. UK5 is an extended source oriented in the direction toward UK6, along a PA of $\sim 130^{\circ}$. These two sources are separated by an angular distance of $\sim 2.2^{\prime \prime}$ ($\sim$7900 AU at the distance of 3.6 kpc). Figure \[f5\] shows that these 2 components are oriented roughly perpendicular to the axis defined by the radio continuum components which we believe is caused by a bipolar jet. We also note that the Mid-IR [*Spitzer*]{} IRAC data are clearly offset from the NIR sources and peak closer to the radio data. Considering the excellent astrometrical agreement between [*Spitzer*]{} IRAC and UKIDSS data, we are convinced that the offset is real, which would speak against a YSO nature of UK5 and UK6, but is consistent with their K-band emission coming from scattered light from an accretion disk. Therefore, while different in detail, the higher quality UKIDSS data are supportive of the interpretation of P06 for the presence of a disk-like structure in the G11P1 core.\
Another interpretation for UK5 and UK6 is that they are scattered light from the inner walls of the cavity produced by the molecular outflow. Figure \[f5\] shows that the outflow cavity appears brighter to the east, which is consistent with the blueshifted SiO outflow of @2014MNRAS.439.3275W. In this picture, the redshifted side at NIR wavelengths (i.e. UK6) is fainter due to the dependence of cavity brightness on inclination [e.g. @2008ApJ...679.1364T].
Discussion
==========
Several authors (e.g. P06) have interpreted the G11P1 central object as a massive protostar in a very early stage of evolution. The detection of a mid-IR point source and the measured luminosity of around $1000\,$L$_\odot$ clearly indicate the presence of a stellar object, whose energy output is comparable to that of an $8\,$M$_\odot$ ZAMS star (P06). The presence of the $6.7\,$GHz CH$_{3}$OH and $22\,$GHz H$_{2}$O masers is a strong indicator of massive star formation, and the presence of a massive core (e.g. 240 M$_{\odot}$) in principle allows the accretion of more mass. The relative youth of this system is demonstrated by the fact that most of the molecular gas in the core appears to be quite cold: P06 report a temperature of only $15.4\,$K for the overall molecular core based on NH$_{3}$ observations. Whether or not the central object in G11P1 is in fact accumulating more mass and will grow into a massive star can in principle be decided observationally by the detection of outflow activity, because flows and jets are thought to be intimately linked to mass accretion. Molecular line observations resulted in the detection of non-gaussian line wings and a possible outflow traced by the CH$_{3}$OH(2$_{k} - 1_{k}$) lines. This was recently confirmed by @2014MNRAS.439.3275W, who found an East–West outflow in the SiO(5–4) line. In the previous section we argued that the radio continuum emission from G11P1 is best explained by an ionized jet, and we hence add to the picture an outflow tracer very near the protostar.
How do the results described in this paper fit into the picture of a massive protostar with a disk/jet system as defined by previous observations? We first note that the $6.7\,$GHz CH$_{3}$OH maser is located not on the axis defined by the radio continuum sources (see Fig. \[f4\]), but is offset by about $0.8^{\prime\prime}$. According to P06 the astrometrical uncertainty of their measurement is of that order, so the maser could in fact be located nearer to the jet axis, as expected if the maser spots arise in an accretion disk. Fitting individual gaussians to different channels within the maser line, P06 find a linear structure of length $0.2^{\prime\prime}$ oriented approximately North–South. Such linear structures have often been observed for the $6.7\,$GHz CH$_{3}$OH maser line; however, the interpretation as a disk tracer is not unique, as similar structures are expected in shock fronts associated with outflows. In fact, in the scenario where the NIR knots (UK5 and UK6) are scattered light from the outflow cavities, linearly distributed maser emission might instead be tracing the walls of the outflow cavities [e.g. @2005ApJ...628L.151D; @2011MNRAS.410..627T]. We also note that the $6.7\,$GHz methanol multi-beam maser catalogue of @2010MNRAS.409..913G lists a maser whose position differs by more than $1^{\prime\prime}$ from the position given in P06. To clarify these issues, higher angular resolution observations of this maser with high astrometric precision are needed.
As mentioned above, the CH$_{3}$OH and SiO line observations (the latter from @2014MNRAS.439.3275W) suggest a molecular flow from the massive star in G11P1 along the East–West direction, and this orientation is consistent with the North–South orientation of the CH$_3$OH maser disk postulated by P06. The outflow direction defined by the cm continuum sources is closer to NE–SW, and in this case we also have a perpendicular component tracing a possible disk, namely the NIR emission discussed in the previous section. Additional evidence for this disk component comes from the recent SMA and VLA observations of @2014MNRAS.439.3275W. Their Figure 11 shows that the $880\,\mu$m dust emission is oriented nearly perpendicular to the ionized jet, and the NH$_3$(2,2) line shows a clear velocity gradient along that same direction. Both the dust emission and NH$_3$(2,2) are more likely to trace dense matter in a disk/torus system than a molecular outflow.
This apparent contradiction between the different flow orientations can be explained in at least two ways. First, it is possible that a second protostar is responsible for the East–West outflow, whereas the flow associated with the radio jet has not been detected. Second, a change of alignment of the flow axis on different length scales is a known phenomenon; e.g., in the case of the massive protostar in IRAS$\,$20126$+$4104, the jet axis changes from a NW–SE orientation on arc-second scales to a North–South direction when probed by CO on arc-minute scales [@2000ApJ...535..833S]. Another well known case where a misalignment of the outflow axis is observed is the protostar NGC$\,$7538 IRS1, for which a number of possible disk precession mechanisms have been described that could explain such changes in the flow axis. If this is the case for the G11P1 protostar, the outflow angle would have to change by $\sim50^{\circ}$ from the $1^{\prime\prime}$ scale of the ionized jet to the $10^{\prime\prime}$ scale where the molecular flows have been detected in SiO and CH$_3$OH.
Previous studies have shown that all high-mass YSOs associated with ionized jets are also associated with large-scale, high-velocity collimated molecular outflows, and that there exists a correlation between the radio luminosity and the momentum rate of the molecular outflows (e.g. @1992ApJ...395..494A). The $4.9\,$GHz radio luminosity of G11P1, S$_{\nu}$d$^{2}=2.6$ mJy$\,$kpc$^{2}$, is near the lower range of what is observed for jets from massive protostars [@2008AJ....135.2370R], but much larger than the radio luminosities of low-mass stars. In section \[rad\_cont\] we estimated mass-loss and momentum rates for the jet in G11P1. The resulting values are lower than what is found for jets from massive protostars like IRAS$\,$16547$-$4247 [@2003ApJ...587..739G] or IRAS$\,$16562$-$3959 [@2010ApJ...725..734G], although many uncertain assumptions enter the estimate of these quantities. Furthermore, these sources are much more luminous than G11P1, and we can speculate that the lower values for the mass-loss and momentum rates are due to the earlier evolutionary state of the protostar in G11P1. If we assume that the molecular flow observed by @2014MNRAS.439.3275W is related to the radio luminosity, we find that the jet/flow data of G11P1 fall close to the radio luminosity/momentum rate relation of @1992ApJ...395..494A.
Conclusion {#conclusions}
==========
Previous observations have established the stellar source in the G11P1 core as a new candidate for a massive protostar in a very early evolutionary stage. The upgraded VLA provides the high sensitivity needed to detect these types of very early massive protostars in the radio continuum, down to an rms of a few $\mu$Jy/beam. Our VLA continuum observations reveal four weak, unresolved sources, centered on the mid-IR source, which are aligned in a NE-SW direction. The spectral indices determined for each component are consistent with partially optically thick ($\alpha > -0.1$) free-free emission from ionized gas arising in a thermal jet [@1998AJ....116.2953A], where the mechanism of ionization is most likely shock ionization. We also present archival NIR data from UKIRT (resolution of $\sim 0.4^{\prime \prime}$). These data reveal an extended structure visible only in the K band, which is oriented perpendicular to the axis of the radio continuum sources. This structure can be interpreted as scattered light from an accretion disk. Our observations thus provide new evidence that a disk/jet system is present in the protostar in G11P1.
We thank T. Pillai and L. Gómez for helpful discussions. PH acknowledges partial support from NSF grant AST-0908901. We thank J. Marvil, U. Rau and E. Momjian at NRAO, Socorro for stimulating technical discussions about the VLA capabilities. Some of the data reported here were obtained as part of the UKIRT Service Program. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the UK Particle Physics and Astronomy Research Council. We thank the anonymous referee whose suggestions improved this manuscript.
[43]{} natexlab\#1[\#1]{}
, G., [Rodriguez]{}, L. F., [Canto]{}, J., [Estalella]{}, R., & [Torrelles]{}, J. M. 1992, , 395, 494
, G., [Villuendas]{}, E., [Estalella]{}, R., [et al.]{} 1998, , 116, 2953
, S. L., [Ellingsen]{}, S. P., [Contreras]{}, Y., [et al.]{} 2013, , 435, 524
, M. J., [Tan]{}, J. C., & [Kainulainen]{}, J. 2014, , 782, L30
, S. J., [Clark]{}, F. O., [Egan]{}, M. P., [et al.]{} 1998, , 508, 721
, S. J., [Feldman]{}, P. A., [Redman]{}, R. O., [et al.]{} 2000, , 543, L157
, C., [Osorio]{}, M., [Anglada]{}, G., [et al.]{} 2012, , 746, 71
, M., [Adamson]{}, A., [Alves de Oliveira]{}, C., [et al.]{} 2007, , 467, 777
, P. A. 2005, in IAU Symposium, Vol. 227, Massive Star Birth: A Crossroads of Astrophysics, ed. R. [Cesaroni]{}, M. [Felli]{}, E. [Churchwell]{}, & M. [Walmsley]{}, 389–396
, S., [Canto]{}, J., & [Rodriguez]{}, L. F. 1987, [Revista Mexicana de Astronomia y Astrofisica]{}, 14, 595
, S., [Rodriguez]{}, L. F., [Bohigas]{}, J., [et al.]{} 1989, Astrophysical Letters and Communications, 27, 299
, C. J., [Whitney]{}, B. A., [Holden]{}, E., [et al.]{} 2008, , 136, 2391
, J. M., & [Minier]{}, V. 2005, , 628, L151
, G., [Brooks]{}, K. J., [Mardones]{}, D., & [Norris]{}, R. P. 2003, , 587, 739
, L., [Wyrowski]{}, F., [Pillai]{}, T., [Leurini]{}, S., & [Menten]{}, K. M. 2011, , 529, A161
, J. A., [Caswell]{}, J. L., [Fuller]{}, G. A., [et al.]{} 2010, , 409, 913
, A. E., [Garay]{}, G., & [Brooks]{}, K. J. 2010, , 725, 734
, N. C., [Collins]{}, R. S., [Cross]{}, N. J. G., [et al.]{} 2008, , 384, 637
, T., [Linz]{}, H., [Krause]{}, O., [et al.]{} 2010, , 518, L95
, P. C., [Warren]{}, S. J., [Leggett]{}, S. K., & [Hodgkin]{}, S. T. 2006, , 367, 454
, S. T., [Irwin]{}, M. J., [Hewett]{}, P. C., & [Warren]{}, S. J. 2009, , 394, 675
, P. G. J., [Teanby]{}, N. A., & [Davis]{}, G. R. 2009, , 203, 287
, K. G., [Shepherd]{}, D. S., [Robitaille]{}, T. P., & [Wood]{}, K. 2013, , 551, A43
, D., [Fiege]{}, J. D., [Redman]{}, R. O., [Feldman]{}, P. A., & [Carey]{}, S. J. 2003, , 588, L37
, S., [Balega]{}, Y., [Elitzur]{}, M., [et al.]{} 2006, , 455, 521
, S., [Churchwell]{}, E., & [Wood]{}, D. O. S. 1994, , 91, 659
, A., [Warren]{}, S. J., [Almaini]{}, O., [et al.]{} 2007, , 379, 1599
, K. M., [Pillai]{}, T., & [Wyrowski]{}, F. 2005, in IAU Symposium, Vol. 227, Massive Star Birth: A Crossroads of Astrophysics, ed. R. [Cesaroni]{}, M. [Felli]{}, E. [Churchwell]{}, & M. [Walmsley]{}, 23–34
, V., [Booth]{}, R. S., & [Conway]{}, J. E. 2000, , 362, 1093
, V., [Ellingsen]{}, S. P., [Norris]{}, R. P., & [Booth]{}, R. S. 2003, , 403, 1095
, D. A., & [Hollenbach]{}, D. J. 1996, , 471, L45
, T., [Wyrowski]{}, F., [Menten]{}, K. M., & [Kr[ü]{}gel]{}, E. 2006, , 447, 929
, J. M., [Jackson]{}, J. M., & [Simon]{}, R. 2006, , 641, 389
, S. P. 1986, , 304, 713
, L. F., [Garay]{}, G., [Curiel]{}, S., [et al.]{} 1994, , 430, L65
, L. F., [Moran]{}, J. M., [Franco-Hern[á]{}ndez]{}, R., [et al.]{} 2008, , 135, 2370
, D. S., [Yu]{}, K. C., [Bally]{}, J., & [Testi]{}, L. 2000, , 535, 833
, Y. L., [Claussen]{}, M. J., [Bourke]{}, T. L., [Young]{}, C. H., & [Blake]{}, G. A. 2007, , 667, 329
, J. J., [Hartmann]{}, L., [Calvet]{}, N., & [D’Alessio]{}, P. 2008, , 679, 1364
, J. M., [Patel]{}, N. A., [Curiel]{}, S., [et al.]{} 2011, , 410, 627
, K., [Zhang]{}, Q., [Wu]{}, Y., & [Zhang]{}, H. 2011, , 735, 64
, K., [Zhang]{}, Q., [Testi]{}, L., [et al.]{} 2014, , 439, 3275
, T., [Mundy]{}, L. G., [Vogel]{}, S. N., & [Hofner]{}, P. 1996, , 473, L131
[^1]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
[^2]: http://casa.nrao.edu
---
abstract: 'We report the measurement of inelastic three-body and two-body collisional decay rates for a two-component Fermi gas of $^6$Li, which are highly suppressed by the Pauli exclusion principle. Our measurements are made in the BEC-BCS crossover regime, near the two-body collisional (Feshbach) resonance. At high temperature (energy) the data shows a dominant three-body decay process, which is studied as a function of bias magnetic field. At low energy, the data shows a coexistence of two-body and three-body decay processes near and below the Feshbach resonance. Below resonance, the observed two-body inelastic decay can arise from molecule-atom and molecule-molecule collisions. We suggest that at and above resonance, an effective two-body decay rate arises from collisions between atoms and correlated (Cooper) pairs that can exist at sufficiently low temperature.'
author:
- 'X. Du, Y. Zhang, and J. E. Thomas'
title: 'Inelastic Collisions of a Fermi Gas in the BEC-BCS Crossover'
---
Quantum statistics dramatically affects the inelastic collision rates that determine the lifetime of cold atomic gases. In an inelastic three-body collision, two of the colliding atoms decay to a bound molecular state, releasing energy. Interactions between atoms can be strongly enhanced by tuning a bias magnetic field near a collisional (Feshbach) resonance [@Feshbachpeople1; @Feshbachpeople2]. In a Bose gas, this enhancement is accompanied by an inelastic collision rate that increases by two or three orders of magnitude compared to that obtained away from resonance [@Roberts2000], and a correspondingly short lifetime of just a few ms at typical atomic densities. In contrast, for a Fermi gas in a mixture of one or two different spin states, the probability of three atoms colliding is highly suppressed by the Pauli exclusion principle. The lifetime of the cloud is on the order of 0.1 s for fermionic $^{40}$K [@Regal2003; @Regal2004] and 50 s for $^6$Li [@Dieckmann2002; @Bourdel2004]. The long lifetime of Fermi gases is essential to the study of strongly interacting Fermi gases [@O'Hara2002; @RMP2008], which offers unprecedented opportunities to test nonperturbative theoretical techniques that apply to exotic systems ranging from high temperature superconductors to nuclear matter. Determination of the inelastic collision rate coefficients in the strongly interacting regime of a Fermi gas provides new tests of few-body theories [@Braaten2006; @Bedaque2000; @Esry1999; @Esry2005; @Nielsen1999; @Petrov2003; @Petrov2004; @Stoof2008; @Helfrich2009].
In this Letter we report on the precision measurement of three-body inelastic collision rate constants $K_3$ for an ultracold two-component Fermi gas in the BEC-BCS crossover regime near a Feshbach resonance. We also observe two-body inelastic decay below the Feshbach resonance, which arises from molecules [@Bourdel2004; @Petrov2004]. From the data, we estimate the corresponding rate constants $K_2$. Finally, we observe two-body decay at and just above the Feshbach resonance. We suggest that this process arises from correlated pairs, which is a many-body effect. We load a Fermi gas from a single beam CO$_2$ laser trap into a CO$_2$ laser standing wave that is formed by the incoming and retro-reflected beam. The standing wave produces a potential with a period of 5.3 $\mu$m that is four times deeper than that of the single beam trap and tightly confining in the axial direction (along the standing wave). The corresponding atomic density is up to $10^{14}$/cm$^3$, $\sim 20$ times higher than that obtained in the single optical trap. This dramatically increases the inelastic collision rates, making precise measurement of the rate constants feasible.
For two-component Fermi gases, three-body inelastic collisions arise in the BEC-BCS crossover for processes of the form $F+F+F'\rightarrow F+(FF')$, where $F$ and $F'$ are fermions in different states and $(FF')$ is a bound molecular state. On the BEC side of the Feshbach resonance where the scattering length $a>0$, the three-body decay rate is predicted to scale as $a^6$ [@Petrov2003; @Esry2005], while on the BCS side ($a<0$), it should scale as $|a|^{2.455}$ [@Esry2005]. By contrast, two-body inelastic collisions can arise from the decay of real molecules, which exist on the BEC side. These processes take the form either $(FF')+F\rightarrow(FF')_- + F$ or $(FF')+(FF')\rightarrow(FF')_-+(FF')$, where $(FF')_-$ is a deeply bound molecular state. The theory predicts that the decay rate scales as $a^{-3.33}$ for atom-molecule collisions or $a^{-2.55}$ for molecule-molecule collisions [@Petrov2004].
In the experiments, a sample of $^6$Li atoms in a 50-50 mixture of the two lowest hyperfine states is loaded into a CO$_2$ laser trap with a bias magnetic field of 840 G, where the two states are strongly interacting. Evaporative cooling is performed to lower the temperature of the sample [@O'Hara2002]. The magnetic field is then changed in 0.8 seconds to a final magnetic field where we perform the measurement. Subsequently, the gas is adiabatically loaded into a CO$_2$ laser standing wave by slowly turning on the retro-reflected CO$_2$ laser beam. A quasi-two-dimensional Fermi gas is then formed and absorption images are taken at various times after the formation of the 2-D system to determine the inelastic decay rate.
At the final optical trap depth, the measured trap oscillation frequencies in the standing wave are $\omega_{\perp}=2\pi\times3250$ Hz in the transverse directions and $\omega_{z}=2\pi\times83.5$ kHz in the axial direction. The corresponding frequencies in the single beam trap are $\omega_{\perp}=2\pi\times1650$ Hz and $\omega_{z}=2\pi\times56$ Hz, respectively. Our measurements indicate very good standing wave alignment, as the transverse frequency is nearly twice that of the single beam trap, as expected.
The total energy of the gas obeys the virial theorem [@ThomasVirial09] when the bias magnetic field is tuned to a broad Feshbach resonance, where the Fermi gas is unitary. Since the trap depth is large compared to the energy of the cloud, the confining potential $U$ is approximately harmonic. Then the total energy is $E=2\langle U\rangle=E_z+E_\perp$, where $E_z$ is the axial energy and $E_\perp$ is the transverse energy, referred to the trap minimum. We determine only the transverse energy $E_\perp=2m\omega_{\perp}^2{\langle x^2\rangle}$, by measuring the mean square transverse cloud size $\langle x^2\rangle$. For reference, the transverse energy for the ground state of an ideal two dimensional Fermi gas is $E_{I\perp}= \frac{2}{3}E_{F\perp}$, where $E_{F\perp}$ is the transverse Fermi energy, $E_{F\perp}=\hbar\omega_{\perp}N_s^{1/2}$. Here $m$ is atomic mass of $^6$Li and $N_s$ is the total atom number in one site. For our experiments in the unitary gas, we measure $E_\perp/E_{F\perp}\sim 1.8$ with $N_s=2,600$ and $E_\perp/E_{F\perp}\sim 0.7$ with $N_s=1,600$. If the 2D unitary gas has the same effective mass as the 3D case, the 2D ground state transverse energy would be $2 E_{F\perp}\sqrt{1+\beta}/3\simeq
0.42\,E_{F\perp}$, using $\beta = -0.60$ [@LuoJLTP]. In this case, our lowest energy would be significantly above the ground state value.
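The quoted 2D ground-state transverse energy follows directly from $2E_{F\perp}\sqrt{1+\beta}/3$ with $\beta=-0.60$; a one-line arithmetic check:

```python
import math

beta = -0.60  # universal interaction parameter from [@LuoJLTP]
# Ground-state transverse energy in units of E_Fperp, assuming the
# 2D unitary gas has the same effective mass as the 3D case:
e_ratio = 2.0 * math.sqrt(1.0 + beta) / 3.0  # -> approximately 0.42
```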
In general, for magnetic fields away from resonance where the scattering length is finite, the total energy is dependent on the scattering length [@Werner]. In this case, we measure the number-independent mean square transverse cloud size $\langle
x^2\rangle/x_{F\perp }^2$, where $x_{F\perp}^{2}$ is defined by $2m\omega_{\perp}^2x_{F\perp}^2\equiv E_{F\perp}$. For an ideal gas in the ground state, we note that $\langle
x_0^2\rangle=\frac{2}{3}x_{F\perp}^{2}$.
![Atom number versus time. Data were taken at 834 G and $E_{\perp}/E_{F\perp}=1.8$. $N$ is total atom number and $N_0$ is initial atom number in the observed region of the cloud. Blue dots: Experimental data; Red solid curve: Three-body decay fit; Green dashed line: Two-body decay fit.[]{data-label="fig:k3at834G"}](k3_834G){width="3.5in"}
We measure inelastic collision rates by measuring the time dependence of the atom number and the radial cloud size. The atom number $N$ as a function of time is [@Roberts2000] $$\label{decay}
\frac{dN}{dt}=-\Gamma N-\int K_2\, n^2\, d^3x - \int K_3\, n^3\, d^3x,$$ where $n$ is the atomic density. On the right side, the first term arises from background collisions with a density-independent rate $\Gamma$ ($1/\Gamma=64$ s for our trap). The second term arises from loss due to two-body inelastic collisions with a rate coefficient $K_2$, while the third term arises from loss due to three-body collisions with a rate coefficient $K_3$.
For the conditions of our experiments, where $E_{F\perp}/\hbar\omega_z\simeq 1.5$, the ground axial state contains 90% of the atoms for an ideal Fermi gas at zero temperature. For simplicity, we assume that the 2-D Fermi gas is primarily in the ground axial state of a single site. Then, the atomic density is $$\label{density}
n(\rho,z)=\frac{2}{\pi^{3/2}}\frac{N(z)}
{\sigma_\perp^2\sigma_z}\left(1-\frac{\rho^2}{\sigma_\perp^2}\right)
\, \exp\left(-\frac{z^2}{\sigma_z^2}\right),$$ for $0\leq \rho\leq \sigma_\perp$. Here, $N(z)$ is atom number in the site at position $z$. $\sigma_\perp$ is transverse width for a fit of a Thomas-Fermi distribution to the atomic density profile in the transverse directions, $\sigma_z=(\frac{\hbar}{m\omega_z})^{1/2}$ is axial width for the ground state (along the standing wave), and $\omega_z$ is the corresponding axial trap frequency.
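The profile of Eq. \[density\] is normalized so that it integrates to $N(z)$ over one site. A numerical sketch of that normalization check, with unit widths and $N(z)=1$ assumed for illustration:

```python
import math

def density_norm(sigma_perp=1.0, sigma_z=1.0, nr=400, nz=4000):
    """Midpoint-rule integration of the single-site density profile
    n(rho, z) over its support; should recover N(z) = 1."""
    pref = 2.0 / math.pi**1.5 / (sigma_perp**2 * sigma_z)
    # Radial part: int_0^{sigma_perp} (1 - rho^2/s^2) 2 pi rho drho = pi s^2 / 2
    drho = sigma_perp / nr
    radial = sum((1.0 - ((i + 0.5) * drho)**2 / sigma_perp**2)
                 * 2.0 * math.pi * (i + 0.5) * drho * drho for i in range(nr))
    # Axial part: int exp(-z^2/s_z^2) dz = sqrt(pi) s_z, over +-10 s_z
    dz = 20.0 * sigma_z / nz
    axial = sum(math.exp(-((-10.0 * sigma_z + (j + 0.5) * dz) / sigma_z)**2) * dz
                for j in range(nz))
    return pref * radial * axial  # -> 1.0
```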
![Three-body inelastic collision rate coefficient $K_3$ versus atomic density for $E_{\perp}/E_{F\perp}=1.8$. Blue dots: Experimental data. Error bars indicate statistical errors; Red dashed line: Fit to the data with $K_3=(8.44\pm1.04)\times10^{-28}$cm$^6$/s.[]{data-label="fig:k3vsdensity"}](k3_vs_atomic_density){width="3.5in"}
In our experiments, $N(z)$ varies as a Gaussian function of $z$ with width $L_z$ over the whole cloud in the axial direction. Strictly speaking, $\sigma_\perp$, $\sigma_z$ and $\omega_z$ also vary with $z$, since the depth $U(z)$ of the potential for a site at $z$ is a Lorentzian function of $z$. However, we measure a restricted part of the cloud from $z=-0.83\,L_z$ to $z=0.83\,L_z$, over which $U(z)$ varies by less than 10%. Hence, to good approximation, $\sigma_\perp$, $\sigma_z$ and $\omega_z$ are spatially constant.
Integrating the atomic density over each well and then over the restricted region of the cloud, we obtain from Eq. \[decay\] $$\label{decay2}
\frac{dN_c}{dt}=-\Gamma N_c-\alpha_2 K_2 \frac{N_c^2}{\sigma_\perp^2(t)\sigma_z}
- \alpha_3 K_3 \frac{N_c^3}{\sigma_\perp^4(t)\sigma^2_z},$$ where $N_c$ is total number of atoms in the restricted region. Here $\alpha_2=\frac{2\sqrt{2}}{3} \pi^{-3/2}$ and $\alpha_3=\frac{2}{\sqrt{3}}\pi^{-3}$. Note that $\sigma_\perp(t)$ is a function of time since heating leads to an increase in temperature and hence the width of the cloud during the atom loss process. Typically $\sigma^2_\perp(t)$ and $\sigma^4_\perp(t)$ can be fit well to exponential curves, $\propto\exp( \gamma t)$. Note that at the highest energies used in our experiments, a significant fraction of atoms can occupy the first axial excited state. If we assume a 50% fraction, the coefficient $\alpha_3$ is decreased by a factor $0.78$, while $\alpha_2$ is decreased by a factor $0.88$. These systematic corrections are smaller than the statistical uncertainty in our data, so we neglect them in our initial analysis. We then can assume that the axial width is time independent.
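The loss model of Eq. \[decay2\] can be integrated numerically; below is a minimal forward-Euler sketch in which the (time-dependent) density factors are absorbed into effective constants $c_2$ and $c_3$, in hypothetical units. A fit of such curves to the measured $N_c(t)$ distinguishes two-body from three-body decay.

```python
def evolve_number(n0, gamma, c2, c3, t_max, dt=1e-3):
    """Forward-Euler integration of dN/dt = -gamma*N - c2*N**2 - c3*N**3,
    i.e. Eq. (decay2) with the density factors folded into c2, c3.
    Units of gamma, c2, c3 and t are hypothetical (set by the data)."""
    n = n0
    for _ in range(int(round(t_max / dt))):
        n += (-gamma * n - c2 * n**2 - c3 * n**3) * dt
    return n
```

For pure three-body loss ($\gamma=c_2=0$) this reproduces the analytic solution $N(t)=N_0/\sqrt{1+2c_3N_0^2t}$, which is the shape fit to the data in Fig. \[fig:k3at834G\].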
In the first set of experiments, we have measured atom number as a function of time in the unitary regime at the Feshbach resonance (834 G), as shown in Fig. \[fig:k3at834G\]. The trap depth is set at 20% of the maximum attainable by reducing the laser intensity. The measured transverse energy of the cloud is $E_\perp/E_{F\perp}=1.8$. We observe a significant ($>60\%$) loss of the atoms in $\sim$ 20 sec. The data are fit with Eq. \[decay2\]. We find that a three-body decay curve fits the data very well, while a two-body decay curve does not. This indicates that three-body inelastic collisions play a dominant role in the atom loss.
Fig. \[fig:k3vsdensity\] shows the inelastic decay rate coefficient $K_3$ as a function of atomic density, at the Feshbach resonance, for $E_\perp/E_{F\perp}=1.8$. The atomic density is varied by varying the final trap depth. Data are fit to three-body decay curves, from which we determine $K_3$. A constant value of $K_3$ over a factor of 10 in atomic density indicates that the atom loss is indeed a three-body decay process. By fitting all of the data with the same $K_3$, we obtain $K_3=(8.44\pm1.04)\times10^{-28}$cm$^6$/s.
We have also measured $K_3$ as a function of magnetic field for $\langle x^2\rangle /x_{F\perp}^2 =
1.8$, which corresponds to the transverse energy $E_\perp/E_{F\perp}=1.8$ at resonance. The fitted $K_3$ is plotted as a function of the interaction strength $1/k_{F\perp}a$ in Fig. \[fig:k3B\]. Here $k_{F\perp}=(2mE_{F\perp})^{1/2}/\hbar$ is the two-dimensional Fermi wave vector for an ideal gas at the trap center and $a$ is the s-wave scattering length. By tuning the magnetic field from 790 G to 1200 G, we vary $1/k_{F\perp}a$ from 0.20 to -0.56, using the known values of $a(B)$ [@GrimmScattLength]. A factor of $\sim 40$ decrease in $K_3$ is observed as the bias magnetic field is tuned from the BEC regime to the BCS regime. We fit our data on the BCS side of the Feshbach resonance with the function $K_3=C|a|^n$ and find $n=0.79\pm0.14$. The result is in significant disagreement with the theoretical prediction $n=2.455$ [@Esry2005]. On the BEC side, $K_3$ increases as the magnetic field is tuned away from the Feshbach resonance, instead of peaking on the resonance. This is consistent with the experiments of other groups [@Dieckmann2002; @Bourdel2004; @Regal2004].
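The exponent $n$ in a fit of the form $K_3=C|a|^n$ can be extracted by linear regression in log-log space, since $\ln K_3 = \ln C + n\ln|a|$. The sketch below illustrates the procedure on synthetic, noiseless data (it is not our analysis code, and the data values are made up):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = C * x**n via linear regression on
    (log x, log y). Returns (C, n)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mean_x, mean_y = sum(lx) / len(lx), sum(ly) / len(ly)
    n = sum((u - mean_x) * (v - mean_y) for u, v in zip(lx, ly)) \
        / sum((u - mean_x) ** 2 for u in lx)
    C = math.exp(mean_y - n * mean_x)
    return C, n

# Synthetic data generated with n = 0.79 (the exponent quoted in the text)
# and an arbitrary prefactor C = 2.5; the fit recovers both.
a_vals = [1.0, 2.0, 4.0, 8.0]
k_vals = [2.5 * a ** 0.79 for a in a_vals]
C, n = fit_power_law(a_vals, k_vals)
```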
![$K_3$ versus interaction strength $1/k_{F\perp}a$ at $\langle x^2\rangle /x_{F\perp}^2 =
1.8$. Bars denote statistical error. Varying the magnetic field from 790 G to 1200 G changes $1/k_{F\perp}a$ from 0.20 to -0.56. We observe a factor of 40 change in $K_3$, from $(17.3\pm3.2)\times10^{-28}$cm$^6$/s at 790 G to $(0.44\pm0.22)\times10^{-28}$cm$^6$/s at 1200 G. []{data-label="fig:k3B"}](k3_vs_B.eps){width="3.5in"}
We have repeated the measurement of atom number versus time, at resonance in the unitary regime, but at a lower energy $E_\perp/E_{F\perp}=0.7$ (Fig. \[fig:k2k3\]). Neither two-body decay alone nor three-body decay alone fits the data. Instead, a combination of two-body and three-body decay fits the data well, indicating that both processes contribute to the atom loss.
We suggest that the two-body process is related to correlated pairs that can exist at low energy (temperature). At higher energy, only single atoms exist while pairs are broken. In that case, the Fermi gas can only decay through three-body inelastic collisions of free atoms. By contrast, at low energy, pair-atom or pair-pair inelastic collisions are possible. Therefore, both two-body decay and three-body decay processes can play a role in the atom loss.
By measuring atom loss as a function of time at $E_\perp/E_{F\perp}=0.7$, we find $K_3=(3.30\pm1.81)\times10^{-28}$cm$^6$/s and $K_2=(0.42\pm0.16)\times10^{-14}$cm$^3$/s. $K_3$ is approximately 2.6 times smaller than that at $E_\perp/E_{F\perp}=1.8$. This suppression cannot arise from Pauli blocking, as the energetic final states are unoccupied.
The observed scaling of $K_3$ with transverse energy is consistent with the prediction of Ref. [@Esry2005], where $K_3\propto E$ for the lowest order process. We observe $K_3(E_\perp/E_{F\perp}=1.8)/K_3(E_\perp/E_{F\perp}=0.7)=8.44/3.30=2.56$, in very good agreement with the predicted ratio, $1.8/0.7=2.57$.
Although the data indicates a linear scaling of $K_3$ with energy, a decrease in $K_3$ can also arise from a reduction in the number of available single atoms, due to pair formation. Defining $f$ as the fraction of atoms which are paired, the three-body decay rate is proportional to $(1-f)^3N^3$. For pair-atom collisions, a two-body rate would scale as $f(1-f)N^2$, while for pair-pair collisions, the corresponding rate would be proportional to $f^2N^2$.
![Atom number versus time. Data were taken at $E_{\perp}/E_{F\perp}=0.7$ in the unitary regime. Blue dots: Experimental data; Red solid curve: Combination fit including two-body and three-body decay; Violet dotted line: Three-body decay fit; Green dashed line: Two-body decay fit.[]{data-label="fig:k2k3"}](k2_and_k3_834G.eps){width="3.5in"}
Using these assumptions, we can rewrite the rate constants that appear in Eq. \[decay2\] as $$\begin{aligned}
\label{eq:decay3}
\nonumber K_3&\equiv& (1-f)^3 K_{3}^0\\
K_2&\equiv& f\,K_2^0\equiv f^2K_{2,pp}^0+f(1-f)K_{2,pa}^0.\end{aligned}$$ Here $K_{2,pa}^{0}$ is the pair-atom inelastic collision rate coefficient and $K_{2,pp}^{0}$ is the pair-pair inelastic collision rate coefficient. At $E_\perp/E_{F\perp}=1.8$, we observe pure three-body decay so that $f=0$. Hence we have $K_3^0=K_3= (8.44\pm1.04)\times 10^{-28}$ cm$^6$/s.
If we make the extreme assumption that $K_3^0$ is independent of energy, then we can reinterpret the fitted values of $K_3$ and $K_2$ for $E_\perp/E_{F\perp}=0.7$ using Eq. \[eq:decay3\] for the rate constants. $K_3=(1-f)^3K_3^0$ yields $f=(30\pm15)\%$ and $K_2=fK_2^0$ then requires $K_2^0=(1.72\pm1.04)\times10^{-14}$cm$^3$/s. As the fraction of pairs appears large, it is more likely that the reduction in $K_3$ arises at least in part from energy scaling, which agrees with predictions [@Esry2005], and that the true fraction of pairs is smaller.
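The arithmetic behind this reinterpretation is elementary: inverting $K_3=(1-f)^3K_3^0$ gives $f=1-(K_3/K_3^0)^{1/3}$, and then $K_2=fK_2^0$ gives $K_2^0=K_2/f$. A quick check with the central values quoted above (error propagation, which the quoted uncertainties include, is ignored here, so the central values come out slightly different):

```python
# Central values from the text (K3 in 1e-28 cm^6/s, K2 in 1e-14 cm^3/s).
K3_0 = 8.44   # measured at E_perp/E_Fperp = 1.8, where f = 0
K3 = 3.30     # measured at E_perp/E_Fperp = 0.7
K2 = 0.42

# Invert K3 = (1-f)^3 * K3_0 for the pair fraction f.
f = 1.0 - (K3 / K3_0) ** (1.0 / 3.0)
# f ~ 0.27, consistent with the quoted (30 +/- 15)%.

# Then K2 = f * K2_0 gives the intrinsic two-body coefficient.
K2_0 = K2 / f
# K2_0 ~ 1.6, consistent with the quoted (1.72 +/- 1.04).
```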
At a magnetic field of 790 G, we first analyze the data to determine $K_3$ and $K_2$ of Eq. \[decay2\]. For $\langle x^2\rangle /x_{F\perp}^2 = 1.8$, we find $K_3=(17.3\pm3.2)\times10^{-28}$cm$^6$/s. At $\langle x^2\rangle /x_{F\perp}^2 =
0.9$, we obtain, $K_3=(9.35\pm3.06)\times10^{-28}$cm$^6$/s, which is consistent with the predicted linear scaling with energy [@Esry2005]. The corresponding two-body decay rate constants are $K_2=0$ at $\langle x^2\rangle /x_{F\perp}^2 =
1.8$ and $K_2=(0.57\pm0.22)\times10^{-14}$cm$^3$/s at $\langle x^2\rangle /x_{F\perp}^2 =
0.9$.
If we again assume instead that $K_3^0$ of Eq. \[eq:decay3\] is independent of energy, we have $K_3^0=(17.3\pm3.2)\times10^{-28}$cm$^6$/s. Using $K_3=(9.35\pm3.06)\times10^{-28}$cm$^6$/s for $\langle x^2\rangle /x_{F\perp}^2 =0.9$, we require the molecular fraction to be $f=(19\pm7)\%$. Then, we obtain $K_2^0=(3.22\pm0.60)\times10^{-14}$cm$^3$/s. Note that, on the BEC side, two-body inelastic collisions are expected to be molecule-atom or molecule-molecule, as predicted [@Petrov2004]. The increased two-body rate arising from molecules on the BEC side supports our assumption that the two-body rate at and just above resonance arises from correlated pairs. In this case, a many-body theory of inelastic collisions will be needed to replace the few-body theory that is valid far from resonance.
Above the Feshbach resonance, we do not observe a two-body decay process for $1/(k_{F\perp}a)\leq -0.09$, i.e., $B>860$ G. This suggests that no pairs are formed for $B>860$ G at the lowest energy $E_\perp/E_{F\perp}=0.7$ we achieve.
By comparing the data at high energy and low energy over a wide range of density, we are able to distinguish between two-body and three-body processes. This method may provide a probe to determine the fraction of pairs or molecules in the Fermi gas, once the energy scaling of $K_3$ is fully established. In the unitary regime, investigation of the energy (or temperature [@LuoJLTP]) dependence of $K_3$, as well as the pair fraction, will be an important topic of future work.
This research is supported by the Physics Divisions of the Army Research Office and the National Science Foundation, and the Chemical Sciences, Geosciences and Biosciences Division of the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy. We are indebted to Le Luo and Bason Clancy for help in the initial stages of this work.
[22]{}
, , , , ****, ().
, , , ****, ().
, , , , ****, ().
, , , , ****, ().
, , , ****, ().
, , , , , , ****, ().
, , , , , , , , , ****, ().
, , , , , ****, ().
, , , ****, ().
, ****, ().
, , , ****, ().
, , , ****, ().
, ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
(), .
, ****, ().
, ****, ().
, ****, ().
, , , , , , , , ****, ().
---
author:
- 'Guoliang Yu[^1]'
date:
title: 'Hyperbolic groups admit proper affine isometric actions on $l^p$-spaces'
---
Introduction
============
Let $X$ be a Banach space and $\Gamma$ be a countable discrete group. An affine isometric action $\alpha$ of $\Gamma$ on $X$ is said to be proper if $\lim_{g\rightarrow \infty} \| \alpha(g)
\xi \| =\infty$ for every $\xi\in X$. If $\Gamma$ admits a proper affine isometric action on Hilbert space, then $\Gamma$ is said to have the Haagerup property \[9\] or to be a-T-menable \[12\].
Bekka, Cherix and Valette proved that an amenable group admits a proper affine isometric action on Hilbert space \[3\]. This result has important applications to K-theory of group $C^*$-algebras \[13\] \[14\].
It is well known that an infinite group with Property (T) does not admit a proper affine isometric action on Hilbert space. The purpose of this paper is to prove the following result.
If $\Gamma$ is a hyperbolic group, then there exists $2\leq
p<\infty$ such that $\Gamma$ admits a proper affine isometric action on an $l^p$-space.
We remark that the constant $p$ depends on the hyperbolic group $\Gamma$ (in the special case that $\Gamma$ is the fundamental group of a negatively curved compact manifold, $p$ depends on the dimension of the manifold), and $p$ is strictly greater than $2$ if the hyperbolic group $\Gamma$ is infinite and has Property (T). Recall that a theorem of A. Zuk states that hyperbolic groups generically have Property (T) \[22\].
In \[1\], Bader and Gelander studied Property (T) for $L^p$-spaces. Their work has extremely interesting applications in Fisher and Margulis’ theory of local rigidity \[6\]. Bader and Gelander raised the question of whether every affine isometric action of a Property (T) group on an $L^p$-space has a fixed point (Question 12 in \[1\]). Theorem 1.1 implies that the answer to this question is negative for infinite hyperbolic groups with Property (T).
The proof of Theorem 1.1 is based on a construction of Igor Mineyev \[18\] and is reminiscent of Alain Connes’ construction of Chern character of finitely summable Fredholm modules for rank one groups \[5\].
The author wishes to thank Igor Mineyev for very helpful comments on the exposition of this note, Erik Guentner for bringing \[1\] to the author’s attention, and Nigel Higson for pointing out that an unpublished result of Y. Shalom implies that $Sp(n,1)$ admits a proper affine isometric action on some uniformly convex Banach space.
Hyperbolic groups and bicombings.
=================================
In this section, we recall the concepts of hyperbolic groups and bicombings.
Hyperbolic groups.
------------------
Let $\Gamma$ be a finitely generated group. Let $S$ be a finite generating set for $\Gamma$. Recall that the Cayley graph of $\Gamma$ with respect to $S$ is the graph $G$ satisfying the following conditions:
- the set of vertices in $G$, denoted by $G^{(0)}$, is $\Gamma$;
- the set of edges is $\Gamma \times S$, where each edge $(g,s)\in \Gamma\times S$ spans the vertices $g$ and $g s.$
We endow $G$ with the path metric $d$ induced by assigning length 1 to each edge. Notice that $\Gamma$ acts freely, isometrically and cocompactly on $G$. A geodesic path in $G$ is a shortest edge path. The restriction of the path metric $d$ to $\Gamma$ is called the word metric.
A finitely generated group $\Gamma$ is called hyperbolic, if there exists a constant $\delta\geq 0$ such that all the geodesic triangles in $G$ are $\delta$-fine in the following sense: if $a$, $b$, and $c$ are vertices in $G$, $[a,b]$, $[b,c]$, and $[c,a]$ are geodesics from $a$ to $b$, from $b$ to $c$, and from $c$ to $a$, respectively, and points $\bar{a}\in [b,c]$, $v,\bar{c}\in
[a,b]$, $w,\bar{b}\in [a,c]$ satisfy $$d(b,\bar{c})=d(b,\bar{a}),\quad d(c,\bar{a})=d(c,\bar{b}),\quad
d(a,v)=d(a,w)\leq d(a,\bar{c})=d(a,\bar{b}),$$ then $d(v,w)\leq
\delta$.
The above definition of hyperbolicity does not depend on the choice of the finite generating set $S$. See \[8\] for other equivalent definitions.
For vertices $a$, $b$, and $c$ in $G$, the Gromov product is defined by $$(b|c)_a := d(a,\bar{b})= d(a,\bar{c})=
\frac{1}{2}\Big[d(a,b)+d(a,c)-d(b,c)\Big].$$ The Gromov product can be used to measure the degree of cancellation in the multiplication of group elements in $G$.
Bicombings.
-----------
Let $\Gamma$ be a finitely generated group. Let $G$ be its Cayley graph with respect to a finite generating set. A bicombing $q$ in $G$ is a function assigning to each ordered pair $(a,b)$ of vertices in $G$ an oriented edge-path $q[a,b]$ from $a$ to $b$. A bicombing $q$ is called geodesic, if each path $q[a,b]$ is geodesic, i.e. a shortest edge path. A bicombing $q$ is $\Gamma$-equivariant if $q[g\cdot a, g\cdot b]= g\cdot q[a,b]$ for each $a,b\in G^{(0)}$ and each $g\in \Gamma$.
A construction of Mineyev.
==========================
The purpose of this section is to recall Mineyev’s construction for hyperbolic groups and its properties \[18\].
Let $\Gamma$ be a hyperbolic group and $G$ be a Cayley graph of $\Gamma$ with respect to a finite generating set. We endow $G$ with the path metric $d$, and identify $\Gamma$ with $G^{(0)}$, the set of vertices of $G$. Let $\delta\ge 1$ be a positive integer such that all the geodesic triangles in $G$ are $\delta$-fine.
The ball $B(x,R)$ is the set of all vertices at distance at most $R$ from the vertex $x$. The sphere $S(x,R)$ is the set of all vertices at distance $R$ from the vertex $x$. Pick an equivariant geodesic bicombing $q$ in $G$. By $q[a,b](t)$ we denote the point on the geodesic path $q[a,b]$ at distance $t$ from $a$. Recall that $C_0 (\Gamma,\mathbb{Q})$ is the space of all finitely supported $0$-chains (in $\Gamma=G^{(0)}$) with coefficients in $\mathbb{Q}$, i.e. $C_0 (\Gamma,\mathbb{Q})= \{\sum_{\gamma\in
\Gamma} c_{\gamma}\gamma : c_{\gamma}\in \mathbb{Q},\ c_{\gamma}=0 \textrm{ for all but finitely many } \gamma\}$.
For each $p\geq 1$, endow $C_0(\Gamma,\mathbb{Q})$ with the $l^p$-norm $\|\cdot\|_p$. We identify $\Gamma$ with the standard basis of $C_0(\Gamma,\mathbb{Q})$. Therefore the left action of $\Gamma$ on itself induces a left action on $C_0(\Gamma,\mathbb{Q})$.
For $v, w\in \Gamma$, the flower at $w$ with respect to $v$ is defined to be $$Fl(v, w) := S(v, d(v,w))\cap B(w,\delta)\subseteq \Gamma.$$
For each $a\in \Gamma$, we define $pr_a: \Gamma\rightarrow \Gamma$ by:
- $pr_a (a) := a;$
- if $b\neq a$, $pr_a(b) := q[a,b](t)$, where $t$ is the largest integral multiple of $10\delta$ which is strictly less than $d(a,b)$.
Now for each pair $a,b\in \Gamma$, we define a $0$-chain $f(a,b)$ in $\Gamma$ inductively on the distance $d(a,b)$ as follows:
- if $d(a,b)\leq 10\delta$, $f(a,b) :=b$;
- if $d(a,b) >10\delta$ and $d(a, b)$ is not an integral multiple of $10\delta$, let $f(a,b) := f(a, pr_a (b));$
- if $d(a,b)> 10\delta$ and $d(a,b)$ is an integral multiple of $10\delta$, let $$f(a,b) := \frac{1}{\# Fl(a,b)}\sum_{x\in Fl(a,b)} f(a, pr_a (x)).$$
The following result is due to Mineyev \[18\].
The function $f:\Gamma \times \Gamma \to C_0(\Gamma,\mathbb{Q})$ defined above satisfies the following conditions.
- For each $a,b\in \Gamma$, $f(b,a)$ is a convex combination, i.e. its coefficients are non-negative and sum up to 1.
- If $d(a,b)\geq 10\delta$, then $supp\,f(b,a)\subseteq B(q[b,a](10\delta),\delta)\cap
S(b,10\delta)$.
- If $d(a,b)\leq 10\delta$, then $f(b,a)=a$.
- $f$ is $\Gamma$-equivariant, i.e. $f(g\cdot b, g\cdot a)= g\cdot f(b,a)$ for any $g,a,b\in \Gamma$.
- There exist constants $L\geq 0$ and $0\leq \lambda<1$ such that, for all $a,a',b\in \Gamma$, $$\|f(b,a)-f(b,a')\|_1\leq L\,\lambda^{(a|a')_b}.$$
Let $p\geq 2$. For each pair $b,a \in \Gamma$, define $$h(b,a) =\frac{1}{ \| f(b,a)\|_{p}} f(b,a),$$ where $f$ is as in Proposition 3.1.
The function $h:\Gamma \times \Gamma \to C_0(\Gamma,\mathbb{Q})$ defined above satisfies the following conditions.
- For each $a,b\in \Gamma$, $\|h(b,a)\|_{p}=1$.
- If $d(a,b)\geq 10\delta$, then $supp\,h(b,a)\subseteq B(q[b,a](10\delta),\delta)\cap
S(b,10\delta)$.
- If $d(a,b)\leq 10\delta$, then $h(b,a)=a$.
- $h$ is $\Gamma$-equivariant, i.e. $h(g\cdot b, g\cdot a)= g\cdot h(b,a)$ for any $g,a,b\in \Gamma$.
- There exist constants $C\geq 0$ and $0\leq \rho<1$ such that, for all $a,a',b\in \Gamma$, $$\|h(b,a)-h(b,a')\|_{p}\leq C\, \rho^{(a|a')_b}.$$
[[*Proof:*]{}]{} (1), (2), (3) and (4) of Corollary 3.2 follow from Proposition 3.1.
By (2) of Proposition 3.1, we have $$\# supp \,h(b,a) \leq \# S(b,10 \delta), \,\, \# supp \,h(b,a') \leq \# S(b,10
\delta).$$ Since $\|f(b,a)\|_1=1$ by (1) of Proposition 3.1, this gives $\|f(b,a)\|_p\geq (\# S(b,10\delta))^{\frac{1}{p}-1}$, and it follows that $$\|h(b,a)-h(b,a')\|_{p}\leq \,\,2(\#
S(b,10\delta))^{1-\frac{1}{p}}\,\,\, \|f(b,a)-f(b,a')\|_{1}.$$ Now (5) of Corollary 3.2 follows from (5) of Proposition 3.1, since $\# S(b,10\delta)=\# S(e,10\delta)$ is independent of $b$.
Proof of the main result.
=========================
In this section, we prove Theorem 1.1.
[[*Proof of Theorem 1.1:*]{}]{}
Let $\upsilon>0$ be such that $\# B(x, r)\leq \upsilon^r$ for all $x\in \Gamma$ and $r>0$. Let $\rho$ be as in Corollary 3.2. Choose $p\geq 2$ such that $\rho^p \upsilon < \frac{1}{2}.$
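The choice of $p$ here is purely quantitative: since $\rho<1$, the condition $\rho^p\upsilon<\tfrac12$ holds for every $p>\ln(2\upsilon)/\ln(1/\rho)$. The sketch below computes the smallest such integer exponent for hypothetical values of $\rho$ and $\upsilon$ (these constants depend on the group, which is why $p$ does too, as remarked in the introduction):

```python
import math

def smallest_p(rho, upsilon):
    """Smallest integer p >= 2 with rho**p * upsilon < 1/2,
    for 0 < rho < 1 and upsilon > 0."""
    p = max(2, math.floor(math.log(2 * upsilon) / math.log(1 / rho)) + 1)
    while rho ** p * upsilon >= 0.5:  # guard against floating-point edge cases
        p += 1
    return p

# Hypothetical constants: faster volume growth (larger upsilon) or weaker
# contraction (rho closer to 1) force a larger exponent p.
p = smallest_p(rho=0.9, upsilon=3.0)  # p = 18 here
assert 0.9 ** p * 3.0 < 0.5
```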
Let $l^p(\Gamma)$ be the completion of $C_0(\Gamma, \mathbb{Q})$ with respect to the norm $\|\cdot \|_p$. Notice that the $\Gamma$ action on $C_0(\Gamma, \mathbb{Q})$ can be extended to an isometric action on $l^p(\Gamma)$.
Let $$X=\{\xi:\Gamma\rightarrow l^p(\Gamma): \|\xi\|_p=(\sum_{\gamma \in \Gamma}
\| \xi (\gamma)\|_p^p )^{\frac{1}{p}}<\infty\}.$$ Observe that $X$ is isometric to $l^p(\Gamma \times \Gamma).$
Let $\pi$ be the isometric action of $\Gamma$ on $X$ defined by:
$$(\pi(g)\xi)(\gamma)=g(\xi (g^{-1}\gamma))$$ for all $\xi\in X$ and $g, \gamma\in \Gamma.$
Define $\eta\in X$ by: $$\eta(\gamma)= h(\gamma,e)$$ for all $\gamma\in \Gamma$, where $e$ is the identity element in $\Gamma$.
For each $g\in \Gamma$, by Corollary 3.2 and the choice of $p$, we have: $$\begin{aligned}
\|\pi(g)\eta-\eta\|_{p}^p&=& \sum_{\gamma\in\Gamma} \| g
(h(g^{-1}\gamma,e))- h(\gamma,e)\|_{p}^p\\
&=& \sum_{\gamma\in\Gamma} \| h(\gamma,g)- h(\gamma, e)\|_{p}^p\\
&\leq& \sum_{\gamma\in\Gamma} C^p\rho^{p (d(\gamma,e)-d(g,e))}\\
&\leq& \sum_{n=0}^{\infty} C^p \rho^{p (n-d(g,e))}\upsilon^n\\
&\leq& 2\,C^p \rho^{-p\, d(g,e)},\end{aligned}$$ where the second line uses the $\Gamma$-equivariance of $h$, the third uses (5) of Corollary 3.2 together with $(g|e)_\gamma\geq d(\gamma,e)-d(g,e)$, the fourth uses $\# S(e,n)\leq \upsilon^n$, and the last uses $\sum_{n=0}^{\infty}(\rho^p\upsilon)^n=(1-\rho^p\upsilon)^{-1}\leq 2$, which holds by the choice of $p$. It follows that $\pi(g)\eta-\eta$ is an element of $X$ for each $g\in
\Gamma$.
We now define an affine isometric action $\alpha$ of $\Gamma$ on $X$ by:
$$\alpha(g) \xi=\pi (g) \xi + \pi(g)\eta -\eta$$ for all $\xi \in X$ and $g\in\Gamma$.
If $\gamma$ is a vertex on the oriented geodesic $q[g,e]$ satisfying $d(\gamma,e)\geq 10\delta$ and $d(\gamma, g)\geq
10\delta$, we have $$B(q[\gamma,e](10\delta), \delta) \cap B(q[\gamma,g](10\delta),
\delta)=\varnothing.$$
Indeed, if there were a point $z \in B(q[\gamma,e](10\delta),
\delta) \cap B(q[\gamma,g](10\delta), \delta)$, then $$\begin{aligned}
d(g, e)&\leq &d(g, z)+ d(z,e)\\
& \leq &\big(d(g, q[\gamma,g](10\delta))+\delta\big)+
\big(\delta+d(q[\gamma,e](10\delta),e)\big)\\
&=&\big((d(g,\gamma)-10\delta)+\delta\big)+\big(\delta + (
d(\gamma,e)-10\delta)\big)\\
& =& d(g,e)-18\delta.\end{aligned}$$
This is a contradiction since $\delta>0$.
By (2) of Corollary 3.2, we have $$supp \,\,h(\gamma, g) \cap supp \,\,h(\gamma,e)=\varnothing$$ if $\gamma$ is a vertex on the oriented geodesic $q[g,e]$ satisfying $d(\gamma,e)\geq 10\delta$ and $d(\gamma, g)\geq
10\delta$.
It follows that there are at least $d(g,e)-100\delta$ vertices $\gamma$ on the oriented path $q[g, e]$ such that $$\| g (h(g^{-1}\gamma,e))-
h(\gamma,e)\|_{p}=\|h(\gamma,g)-h(\gamma,e)\|_p\geq 1,$$ since for such $\gamma$ the chains $h(\gamma,g)$ and $h(\gamma,e)$ have disjoint supports and $\|h(\gamma,g)\|_p=\|h(\gamma,e)\|_p=1$.
Hence $$\|\pi(g)\eta-\eta\|_p^p\geq d(g,e)-100\delta$$ for all $g\in \Gamma.$
As a consequence, for every $\xi \in X$, we have $$\|\alpha (g) \xi-\pi (g) \xi \|_p \rightarrow \infty$$ as $g\rightarrow \infty$.
This, together with the fact that $\pi (g)$ is an isometry, implies that $\alpha$ is proper.
We should mention that it remains an open question whether $SL(n,\mathbb{Z})$ admits a proper affine isometric action on some uniformly convex Banach space for $n\geq 3$. A positive answer to this question would have interesting applications to the K-theory of group $C^*$-algebras \[16\].
[99]{}
U. Bader and T. Gelander, Property (T) and unitary representations on $L_p$. Preprint, 2004.
P. Baum and A. Connes, K-theory for discrete groups, Operator Algebras and Applications, (D. Evans and M. Takesaki, editors), Cambridge University Press (1989), 1–20.
M. E. B. Bekka, P.-A. Cherix, and A. Valette, Proper affine isometric actions of amenable groups. Novikov conjectures, index theorems and rigidity, Vol. 2 (Oberwolfach, 1993), 1–4, London Math. Soc. Lecture Note Ser., 227, Cambridge Univ. Press, Cambridge, 1995.
N. Brown and E. Guentner, Uniform embedding of bounded geometry spaces into reflexive Banach spaces. Preprint, 2003.
A. Connes, Noncommutative Geometry, Academic Press, 1994.
A. Connes and H. Moscovici, Cyclic cohomology, the Novikov conjecture and hyperbolic groups, Topology 29 (1990), 345–388.
D. Fisher and G. A. Margulis, Almost isometric actions, Property (T), and local rigidity. Preprint, 2004.
M. Gromov, Hyperbolic groups, MSRI Publ. 8, 75-263, Springer, 1987.
M. Gromov, Asymptotic invariants for infinite groups, Geometric Group Theory, (G. A. Niblo and M. A. Roller, editors), Cambridge University Press, (1993), 1–295.
M. Gromov, Problems (4) and (5), Novikov Conjectures, Index Theorems and Rigidity, Vol. 1, (S. Ferry, A. Ranicki and J. Rosenberg, editors), Cambridge University Press, (1995), 67.
M. Gromov, Spaces and questions. GAFA 2000 (Tel Aviv, 1999). Geom. Funct. Anal. 2000, Special Volume, Part I, 118–161.
U. Haagerup, An example of a nonnuclear $C\sp{*} $-algebra, which has the metric approximation property. Invent. Math. 50 (1978/79), no. 3, 279–293.
N. Higson and G. G. Kasparov, Operator K-theory for groups which act properly and isometrically on Hilbert space, Electronic Research Announcements, AMS 3 (1997), 131–141.
N. Higson and G. G. Kasparov, $E$-theory and $KK$-theory for groups which act properly and isometrically on Hilbert space. Invent. Math. 144 (2001), no. 1, 23–74.
G. Kasparov and G. Yu, Uniform convexity and the coarse geometric Novikov conjecture. Preprint, 2004.
G. Kasparov and G. Yu, Uniform convexity and K-theory of group $C^*$-algebras. In preparation.
V. Lafforgue, $K$-théorie bivariante pour les algèbres de Banach et conjecture de Baum-Connes. Invent. Math. 149 (2002), no. 1, 1–95.
I. Mineyev, Straightening and bounded cohomology of hyperbolic groups. Geom. Funct. Anal. 11 (2001), no. 4, 807–839.
I. Mineyev and G. Yu, The Baum-Connes conjecture for hyperbolic groups. Invent. Math. 149 (2002), no. 1, 97–122.
M. Puschnigg, The Kadison-Kaplansky conjecture for word-hyperbolic groups. Invent. Math. 149 (2002), no. 1, 153–194.
G. Yu, The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space. Invent. Math. 139 (2000), no. 1, 201–240.
A. Zuk, Property (T) and Kazhdan constants for discrete groups. Geom. Funct. Anal. 13 (2003), no. 3, 643–670.
Department of Mathematics
1326 Stevenson Center
Vanderbilt University
Nashville, TN 37240, USA
e-mail: [email protected]
[^1]: Partially supported by NSF and NSFC.
---
abstract: |
We study totally decomposable symplectic and unitary involutions on central simple algebras of index $2$ and on split central simple algebras respectively. We show that for every field extension, these involutions are either anisotropic or hyperbolic after extending scalars, and that the converse holds if the algebras are of $2$-power degree. These results are new in characteristic $2$; in characteristic different from $2$ they were shown in [@Becher:qfconj] and [@Black:inv] respectively.
*Keywords:* Central simple algebras, involutions, characteristic two, hermitian forms, quadratic forms.
*Mathematics Subject Classification (MSC 2010):* 16W10, 16K20, 11E39, 11E81, 12F05.
address: 'Universiteit Antwerpen, Departement Wiskunde-Informatica, Middelheimlaan 1, 2020 Antwerpen, Belgium.'
author:
- Andrew Dolphin
title: Totally decomposable symplectic and unitary involutions
---
[^1]
Introduction
============
An algebra with involution is totally decomposable if it is isomorphic to a tensor product of quaternion algebras with involution. Over fields of characteristic different from two, the adjoint involution of a Pfister form is a totally decomposable involution. Pfister forms are a central part of the algebraic theory of quadratic forms, and in [@parimala:pfisterinv] it was asked whether totally decomposable involutions share certain characterising properties of Pfister forms. In particular, whether totally decomposable involutions are exactly those involutions on algebras of two-power degree that are either anisotropic or hyperbolic after extending scalars. That totally decomposable orthogonal involutions over a field of characteristic different from two are always either anisotropic or hyperbolic can be shown (see [@Black:inv (3.2)]) using the non-hyperbolic splitting results of Karpenko, [@karpenko:hyporth], and that any totally decomposable orthogonal involution on a split algebra is adjoint to a Pfister form. The latter result is known as the Pfister Factor Conjecture, and was shown in [@Becher:qfconj]. The converse, that any orthogonal involution on an algebra of two-power degree that is anisotropic or hyperbolic after extending scalars to any field extension is necessarily totally decomposable, is clear for split algebras, but otherwise remains largely open. In [@dolphin:orthpfist], this question is considered for orthogonal involutions in characteristic two, whose behaviour is somewhat unusual.
Here we consider analogues of the property that a totally decomposable orthogonal involution on a split algebra is either anisotropic or hyperbolic, and its converse, for symplectic and unitary involutions. As every split symplectic involution is hyperbolic, and hence the question is trivial in this case, we consider symplectic involutions on an algebra Brauer equivalent to a quaternion algebra. We show that for every field extension, totally decomposable symplectic involutions on index two algebras and unitary involutions on split algebras are either anisotropic or hyperbolic after extending scalars, and that the converse holds if the algebras are of two-power degree. These results were first proven in [@Becher:qfconj] and [@Black:inv (3.1)] respectively under the assumption that the base field was of characteristic different from two. Here we make no assumption on the characteristic of the base field.
Our approach follows that used in [@Becher:qfconj]. The main tool is an index reduction argument using the so-called trace forms of Jacobson, first introduced for characteristic different from two in [@jacobson:hermformstr]. These trace forms give a correspondence between hermitian forms over a quaternion algebra or a quadratic separable extension with the respective canonical involutions and certain quadratic forms over the base field. In §\[section:jacobson\] and §\[jacanal\] we consider versions of these results that hold in any characteristic and derive analogous statements for involutions.
Algebras with involution {#section:basicsalgs}
========================
Throughout, let $F$ be a field, $\operatorname{char}(F)$ denote its characteristic and $F^\times$ denote its multiplicative group. We refer to [@pierce:1982] as a general reference on finite-dimensional algebras over fields, and for central simple algebras in particular, and to [@Knus:1998] for involutions. Let $A$ be an (associative) $F$-algebra. We denote the centre of $A$ by $Z(A)$. For a field extension $K/F$, the $K$-algebra $A\otimes_F K$ is denoted by $A_K$. An $F$-*involution* on $A$ is an $F$-linear map $\sigma:A\rightarrow A$ such that $\sigma(xy)=\sigma(y)\sigma(x)$ for all $x,y\in A$ and $\sigma^2=\textrm{id}_A$.
Assume now that $A$ is finite-dimensional and simple (i.e. it has no nontrivial two sided ideals). Then $Z(A)$ is a field, and by Wedderburn’s Theorem (see [@Knus:1998 (1.1)]) we have $A\simeq{{\mathrm{End}}}_D(V)$ for an $F$-division algebra $D$ and a right $D$-vector space $V$, and furthermore ${{\mathrm{dim}}}_{Z(A)}(A)$ is a square number, whose positive square root is called the *degree of $A$* and is denoted $\mathrm{deg}(A)$. The degree of $D$ is called the *index of $A$* and is denoted $\mathrm{ind}(A)$. We call $A$ *split* if $\mathrm{ind}(A)=1$. We call a field extension $K/F$ a *splitting field of $A$* if $A_K$ is split. If $Z(A)=F$, then we call the $F$-algebra $A$ *central simple*. Two central simple $F$-algebras $A$ and $B$ are called *Brauer equivalent* if $A$ and $B$ are isomorphic to endomorphism algebras of two right vector spaces over the same $F$-division algebra. If $A$ is a central simple $F$-algebra let ${{\mathrm{Trd}}}_A:A{{\longrightarrow}}F$ denote the reduced trace map and ${{\mathrm{Nrd}}}_A:A{{\longrightarrow}}F$ the reduced norm map (see [@Knus:1998 (1.6)] for the definitions).
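In the split case $A=\mathrm{M}_n(F)$, the reduced trace and reduced norm are simply the usual matrix trace and determinant. The sketch below checks two of their basic properties for $n=2$ over the integers (an illustrative computation only; the general definitions pass through a splitting field):

```python
def trd(m):
    """Reduced trace on M_2(F): the ordinary matrix trace."""
    return m[0][0] + m[1][1]

def nrd(m):
    """Reduced norm on M_2(F): the ordinary determinant."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul(m, n):
    """Matrix product of two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = [[1, 2], [3, 4]]
y = [[0, 1], [5, 7]]

assert trd(mul(x, y)) == trd(mul(y, x))   # Trd(xy) = Trd(yx)
assert nrd(mul(x, y)) == nrd(x) * nrd(y)  # Nrd is multiplicative
```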
An *$F$-algebra with involution* is a pair $(A,\sigma)$ of a finite-dimensional $F$-algebra $A$ and an $F$-involution $\sigma$ on $A$ such that one has $F=\{x \in Z(A) \mid \sigma(x)=x\}$, and such that either $A$ is simple or $A$ is a product of two simple $F$-algebras that are mapped to one another by $\sigma$. In this situation there are two possibilities: either $Z(A)=F$, so that $A$ is a central simple $F$-algebra, or $Z(A)/F$ is a quadratic étale extension with $\sigma$ restricting to the nontrivial $F$-automorphism of $Z(A)$. To distinguish these two situations, we speak of algebras with involution of the *first* and *second kind*: we say that the $F$-algebra with involution $(A,\sigma)$ is of the *first kind* if $Z(A)=F$ and of the *second kind* otherwise. For more information on involutions of the second kind, also called *unitary involutions*, we refer to [@Knus:1998 §2.B]. Involutions of the first kind are divided into two *types*, *orthogonal* and *symplectic* (see §\[section:her\]).
Let $(A,{{\sigma}})$ be an $F$-algebra with involution. If $Z(A)$ is a field, then $A$ is a central simple $Z(A)$-algebra, and we say $(A,{{\sigma}})$ is *split* if $A$ is split as $Z(A)$-algebra. If $Z(A)\simeq F\times F$, then $(A,{{\sigma}})\simeq (B\times B^{\mathsf{op}},\epsilon)$ where $B$ is a central simple $F$-algebra, $B^{\mathsf{op}}$ is its opposite algebra and $\epsilon$ is the map exchanging the components of elements of $B\times B^{\mathsf{op}}$; in this case we say $(A,{{\sigma}})$ is *split* if $B$ is split as an $F$-algebra. Given a field extension $K/F$, we abbreviate $\sigma_K=\sigma\otimes \textrm{id}_K$ and $(A,\sigma)_K=(A_K,\sigma_K)$ is a $K$-algebra with involution. We call $K$ a *splitting field* of $(A,\sigma)$ if $(A,{{\sigma}})_K$ is split.
Let $(A,{{\sigma}})$ and $(B,\tau)$ be $F$-algebras with involution. Letting $(\sigma\otimes\tau)(a\otimes b)={{\sigma}}(a)\otimes \tau(b)$ for $a\in A$ and $b\in B$ determines an $F$-involution ${{\sigma}}\otimes \tau$ on the $F$-algebra $A\otimes_F B$. We denote the pair $(A\otimes_F B,\sigma\otimes \tau)$ by $(A,\sigma)\otimes(B,\tau)$. By an *isomorphism of $F$-algebras with involution* we mean an $F$-algebra isomorphism $\Phi: A\rightarrow B$ satisfying $\Phi\circ\sigma=\tau\circ\Phi$. Let $(A,{{\sigma}})$ be an $F$-algebra with involution with centre $K$. We call $(A,{{\sigma}})$ *totally decomposable* if there exists an $n\in\mathbb{N}$ and $F$-quaternion algebras with involution $(Q_1,{{\sigma}}_1),\ldots, (Q_n,{{\sigma}}_n)$ with common centre $K$ such that $(A,\sigma)\simeq \bigotimes_{i=1}^n (Q_i,\sigma_i)$.
We call $(A,\sigma)$ *isotropic* if there exists an element $a\in A\setminus\{0\}$ such that $\sigma(a)a=0$, and *anisotropic* otherwise. We call an idempotent $e$ of $A$ *hyperbolic with respect to $\sigma$* if $\sigma(e)=1-e$. We call an $F$-algebra with involution $(A,\sigma)$ *hyperbolic* if $A$ contains a hyperbolic idempotent with respect to $\sigma$. By [@Knus:1998 (12.35)] we have:
\[cor:allhypsame\] Let $(A,{{\sigma}}_1)$ and $(A,{{\sigma}}_2)$ be hyperbolic $F$-algebras with involution such that ${{\sigma}}_1|_{Z(A)}={{\sigma}}_2|_{Z(A)}$. Then $(A,{{\sigma}}_1)\simeq(A,{{\sigma}}_2)$.
Let $(A,\sigma)$ be an $F$-algebra with involution. For $\lambda \in Z(A)$ such that $\lambda\sigma(\lambda)=1$, let $\mathrm{Sym}_{\lambda}(A,\sigma)= \{ a\in A\mid \lambda\sigma(a)=a\}$ and ${{\mathrm{Symd}}}_\lambda(A,\sigma)= \{ a+\lambda\sigma( a)\,|\, a\in A\}$. These are $F$-linear subspaces of $A$ and we write $\mathrm{Sym}(A,\sigma)= \mathrm{Sym}_1(A,\sigma)$ and ${{\mathrm{Symd}}}(A,\sigma)= {{\mathrm{Symd}}}_{1}(A,\sigma)$.
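As an elementary illustration (our example, not taken from the references above), consider $A=\mathrm{M}_2(F)$ with $\sigma$ the transpose involution and $\lambda=1$. For $x=\begin{pmatrix} a& b\\ c& d\end{pmatrix}$ we have $$\mathrm{Sym}(A,\sigma)=\left\{\begin{pmatrix} a& b\\ b& d\end{pmatrix} \,\middle|\, a,b,d\in F\right\}\,,\qquad x+\sigma(x)=\begin{pmatrix} 2a& b+c\\ b+c& 2d\end{pmatrix}\,.$$ Hence ${{\mathrm{Symd}}}(A,\sigma)=\mathrm{Sym}(A,\sigma)$ when $\operatorname{char}(F)\neq2$, while for $\operatorname{char}(F)=2$ the space ${{\mathrm{Symd}}}(A,\sigma)$ consists only of the symmetric matrices with zero diagonal; this is why the two subspaces must be distinguished in characteristic $2$.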
We also need to consider quadratic pairs for our main results. Let $(A,{{\sigma}})$ be an $F$-algebra with involution of the first kind. We call an $F$-linear map $f:{{\mathrm{Sym}}}(A,\sigma)\rightarrow F$ a *semi-trace on $(A,\sigma)$* if it satisfies $f(x+\sigma(x)) = \mathrm{Trd}_A(x)$ for all $x\in A$. If $\operatorname{char}(F)=2$, then the existence of a semi-trace on $(A,\sigma)$ implies that ${{\mathrm{Trd}}}_A({{\mathrm{Sym}}}(A,\sigma))=\{0\}$ and hence by [@Knus:1998 (2.6)] that $(A,\sigma)$ is symplectic. An *$F$-algebra with quadratic pair* is a triple $(A,\sigma,f)$ where $(A,\sigma)$ is an $F$-algebra with involution of the first kind, assumed to be orthogonal if $\operatorname{char}(F)\neq 2$ and symplectic if $\operatorname{char}(F)=2$, and where $f$ is a semi-trace on $(A,\sigma)$. If $\operatorname{char}(F)\neq2$, then the semi-trace $f$ is uniquely determined by $(A,\sigma)$ (see [@Knus:1998 p.56]). Hence in characteristic different from $2$ the concept of an algebra with quadratic pair is equivalent to the concept of an algebra with orthogonal involution. An isomorphism of quadratic pairs $(A,\sigma,f)$ and $(B,\tau,g)$ is an isomorphism $\Phi$ of the underlying $F$-algebras with involution satisfying $ f=g\circ\Phi$.
Let $(A,{{\sigma}})$ and $(B,\tau)$ be $F$-algebras with involution of the first kind and let $f$ be a semi-trace on $(A,{{\sigma}})$. There is a unique semi-trace $g$ on $(B,\tau)\otimes (A,{{\sigma}})$ such that $g(b\otimes a) = {{\mathrm{Trd}}}_B(b)\cdot f(a)$ for all $a\in{{\mathrm{Sym}}}(A,{{\sigma}})$ and $b\in {{\mathrm{Sym}}}(B,\tau)$ (see [@Knus:1998 (5.18)] for the case of $\operatorname{char}(F)=2$, otherwise this is trivial). Assume that $(A,{{\sigma}})$ and $(B,\tau)$ are orthogonal if $\operatorname{char}(F)\neq2$. Then by [@Knus:1998 (2.23)], $(B,\tau)\otimes (A,{{\sigma}})$ is orthogonal if $\operatorname{char}(F)\neq 2$ and symplectic if $\operatorname{char}(F)=2$. Hence we obtain an $F$-algebra with quadratic pair $( B\otimes_F A, \tau\otimes {{\sigma}},g)$, which we denote by $(B,\tau)\otimes(A,\sigma, f)$. By [@dolphin:PFC (5.3)], the tensor product of algebras with involution and the tensor product of an algebra with involution and an algebra with quadratic pair are mutually associative operations. In particular, for an $F$-algebra with quadratic pair $(A,{{\sigma}},f)$ and $F$-algebras with involution of the first kind $(B,\tau)$ and $(C,\gamma)$, we may write $(C,\gamma)\otimes(B,\tau)\otimes(A,{{\sigma}},f)$ without any ambiguity.
Assume now that $(A,{{\sigma}})$ and $(B,\tau)$ are symplectic. We may define a semi-trace $h$ on $(B,\tau)\otimes (A,{{\sigma}})$ in the following manner. If $\operatorname{char}(F)=2$, then by [@Knus:1998 (5.20)] there exists a unique semi-trace $h$ on $(B,\tau)\otimes(A,{{\sigma}})$ such that $h(s_1\otimes s_2)=0$ for all $s_1\in {{\mathrm{Sym}}}(A,{{\sigma}})$ and $s_2\in {{\mathrm{Sym}}}(B,\tau)$. If $\operatorname{char}(F)\neq2$, then $(B,\tau)\otimes(A,{{\sigma}})$ is orthogonal by [@Knus:1998 (2.23)] and we let $h=\frac{1}{2}{{\mathrm{Trd}}}_{A\otimes_F B}$. In either case, we denote the $F$-algebra with quadratic pair $(B\otimes_FA,\tau\otimes{{\sigma}},h)$ by $(B,\tau)\boxtimes (A,{{\sigma}})$. Note that by [@dolphin:PFC (5.4)], $(B,\tau)\boxtimes (A,{{\sigma}})\simeq (B,\tau)\otimes (A,{{\sigma}},f)$ for any choice of semi-trace $f$ on $(A,{{\sigma}})$. Moreover, for any algebra with involution of the first kind $(C,\gamma)$ we have $$\bigl((C,\gamma)\otimes (B,\tau)\bigr)\boxtimes (A,\sigma)\simeq (C,\gamma)\otimes(B,\tau)\otimes(A,\sigma,f)\simeq (C,\gamma)\otimes\bigl((B,\tau)\boxtimes(A,\sigma)\bigr)\,.$$ We therefore use the notation $(C,\gamma)\otimes(B,\tau)\boxtimes (A,\sigma)$ for this tensor product.
Hermitian and quadratic forms {#section:her}
=============================
We recall certain results we use from hermitian and quadratic form theory. For standard notation and terminology we refer to [@Knus:1991] and [@Elman:2008 Chapters 1 and 2], as general references on hermitian and quadratic forms respectively.
Throughout this section, let $(D,\theta)$ be an $F$-division algebra with involution. Fix $\lambda\in Z(D)$ such that $\lambda\theta(\lambda)=1$. Note that if $(D,\theta)$ is of the first kind one must have that $\lambda=\pm1$. A *$\lambda$-hermitian form over $(D,\theta)$* is a pair $(V,h)$ where $V$ is a finite-dimensional right $D$-vector space and $h$ is a bi-additive map $h:V\times V\rightarrow D$ such that $h(xd_1,yd_2)=\theta(d_1)h(x,y)d_2$ and $h(y,x)=\lambda\theta({h(x,y)})$ hold for all $x,y\in V$ and $d_1,d_2\in D$. If $\lambda=1$ then we call a $\lambda$-hermitian form simply an *hermitian form*. If $h(x,y)=0$ for all $y\in V$ implies $x=0$, then we say that $(V,h)$ is *nondegenerate*. We say $(V,h)$ *represents an element $a\in D$* if $h(x,x)=a$ for some $x\in V\setminus\{0\}$. We call $(V,h)$ *isotropic* if it represents $0$, and *anisotropic* otherwise.
We say that a $\lambda$-hermitian form $(V,h)$ over $(D,\theta)$ is *even* if $h(x,x)\in {{\mathrm{Symd}}}_{\lambda}(D,\theta)$ for all $x\in V$. Note that if $\operatorname{char}(F)\neq 2$ or $(D,\theta)$ is unitary then all $\lambda$-hermitian forms over $(D,\theta)$ are even as in these cases ${{\mathrm{Symd}}}_{\lambda}(D,\theta)= \mathrm{Sym}_\lambda(D,\theta)$ (see [@Knus:1998 (2.A) and (2.17)]). Assume $(V,h)$ is even. We say that $(V,h)$ is *hyperbolic* if there exists a totally isotropic subspace $W\subseteq V$ with ${{\mathrm{dim}}}_D(W)=\frac{1}{2}{{\mathrm{dim}}}(V,h)$. The following proposition follows easily from [@Knus:1991 Chapter 1, (3.7.3)].
\[prop:hypuniq\] Let $\varphi$ and $\psi$ be hyperbolic even $\lambda$-hermitian forms over $(D,\theta)$. If ${{\mathrm{dim}}}(\varphi)={{\mathrm{dim}}}(\psi)$ then $\varphi\simeq \psi$.
\[cor:hypiso\] Let $\varphi$ and $\psi$ be even $\lambda$-hermitian forms over $(D,\theta)$ of the same dimension. Then $\varphi\simeq \psi$ if and only if $\varphi\perp(-\psi)$ is hyperbolic.
That $\varphi\perp(-\psi)$ is hyperbolic if $\varphi\simeq \psi$ is clear. For the converse, we have that $\varphi\perp(-\psi)$ and $\psi\perp (-\psi)$ are hyperbolic and of the same dimension. Therefore by \[prop:hypuniq\] we have $\varphi\perp(-\psi)\simeq \psi\perp (-\psi)$. As $-\psi$ is even, Witt cancellation yields $\varphi\simeq \psi$.
For every nondegenerate $\lambda$-hermitian form $\varphi=(V,h)$ over $(D,\theta)$, there is a unique $F$-involution on ${{\mathrm{End}}}_D(V)$, denoted ${{\mathrm{ad}}}_h$, with ${{\mathrm{ad}}}_h(a)=\theta(a)$ for all $a\in Z(D)$, viewed inside ${{\mathrm{End}}}_D(V)$, such that $h(f(x),y)=h(x,{{\mathrm{ad}}}_h(f)(y)) \textrm{ for all } x,y\in V \textrm{ and } f\in {{\mathrm{End}}}_D(V)$ (see [@Knus:1998 (4.1)]). We denote the $F$-algebra with involution $({{\mathrm{End}}}_D(V),{{\mathrm{ad}}}_{h})$ by ${{\mathrm{Ad}}}(\varphi)$ and we refer to it as the *$F$-algebra with involution adjoint to $\varphi$*. We denote the $1$-dimensional hermitian form $(D,h)$ over $(D,\theta)$ given by $h(x,y)=\theta(x)y$ for $x,y\in D$ by ${\mbox{$\langle 1\rangle $}}_{(D,\theta)}$. It is easy to see that ${{\mathrm{Ad}}}({\mbox{$\langle 1\rangle $}}_{(D,\theta)})\simeq (D,\theta)$.
By a *bilinear form over $F$* we mean a $\lambda$-hermitian form over $(F,{{\mathrm{id}}})$ (in this case we must have $\lambda=\pm1$). Let $\varphi=(V,b)$ be a bilinear form over $F$. We call ${\varphi}$ *symmetric* if $b(x,y)=b(y,x)$ for all $x,y\in V$, i.e. $\varphi$ is a hermitian form over $(F,{{\mathrm{id}}})$. We call $\varphi$ *alternating* if $b(x,x)=0$ for all $x\in V$. This is equivalent to $\varphi$ being an even $(-1)$-hermitian form over $(F,{{\mathrm{id}}})$, as ${{\mathrm{Symd}}}_{-1}(F,{{\mathrm{id}}})=\{0\}$. Any split $F$-algebra with involution of the first kind is isomorphic to ${{\mathrm{Ad}}}(\varphi)$ for some nondegenerate symmetric or alternating bilinear form $\varphi$ over $F$ (see [@Knus:1998 (2.1)]). An $F$-algebra with involution of the first kind is said to be *symplectic* if it is adjoint to an alternating bilinear form over some splitting field, and *orthogonal* otherwise.
Let $\varphi=(V,b)$ be a symmetric bilinear form over $F$ and $\psi=(W,h)$ be a $\lambda$-hermitian form over $(D,\theta)$. Then $V\otimes_FW$ is a finite dimensional right $D$-vector space. Further, there is a unique $F$-bilinear map $b\otimes h:(V\otimes_F W)\times (V\otimes_F W)\rightarrow F$ such that $(b\otimes h)\left( (v_1\otimes w_1), (v_2\otimes w_2)\right) =b(v_1,v_2)\cdot h(w_1,w_2) $ for all $w_1,w_2\in W, v_1,v_2\in V$. We call the $\lambda$-hermitian form $(V\otimes_F W, b\otimes h)$ over $(D,\theta)$ the *tensor product of $\varphi$ and $\psi$*, and denote it by $\varphi\otimes \psi$ (cf. [@Knus:1991 Chapter 1, §8]).
For $b_1,\ldots,b_n\in F^\times$ the symmetric $F$-bilinear map $b:F^n\times F^n\rightarrow F$ given by $(x,y)\mapsto \sum_{i=1}^n b_ix_iy_i$ yields a symmetric bilinear form $(F^n,b)$, which we denote by ${\mbox{$\langle b_1,\ldots,b_n\rangle $}}$. For a nonnegative integer $m$, by an *$m$-fold bilinear Pfister form* we mean a nondegenerate symmetric bilinear form that is isometric to ${\mbox{$\langle 1,a_1\rangle $}}\otimes\ldots\otimes{\mbox{$\langle 1,a_m\rangle $}}$ for some $a_1,\ldots,a_m\in F^\times$, and we denote this form by ${\mbox{$\langle\!\langle a_1,\ldots, a_m\rangle\!\rangle $}}$. We call ${\mbox{$\langle 1\rangle $}}$ the *$0$-fold bilinear Pfister form*.
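For instance, expanding the tensor product diagonally gives the standard identity $${\mbox{$\langle\!\langle a,b\rangle\!\rangle $}} = {\mbox{$\langle 1,a\rangle $}}\otimes{\mbox{$\langle 1,b\rangle $}}\simeq {\mbox{$\langle 1,a,b,ab\rangle $}}\,,$$ so an $m$-fold bilinear Pfister form is $2^m$-dimensional and represents $1$.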
By a *quadratic form over $F$* we mean a pair $(V,q)$ of a finite-dimensional $F$-vector space $V$ and a map $q:V\rightarrow F$ such that, firstly, $q(\lambda x)=\lambda^2q(x)$ holds for all $x\in V$ and $\lambda\in F$, and secondly, the map $b_q:V\times V\rightarrow F\,,\,(x,y)\longmapsto q(x+y)-q(x)-q(y)$ is $F$-bilinear. We say that $(V,q)$ is *nonsingular* if $b_q$ is nondegenerate. If $\operatorname{char}(F)\neq2$, then quadratic forms and symmetric bilinear forms over $F$ determine one another uniquely, but this correspondence fails in characteristic $2$. We say two quadratic forms $\rho_1$ and $\rho_2$ over $F$ are *similar* if there exists an element $c\in F^\times$ such that $c\rho_1\simeq\rho_2$.
Recall that there is a natural notion of the tensor product of a symmetric bilinear form $\varphi$ and a quadratic form $\rho$ over $F$, denoted $\varphi\otimes\rho$ (see [@Elman:2008 p.51]). For a nonnegative integer $m$, by an *$m$-fold quadratic Pfister form over $F$* (or simply an *$m$-fold Pfister form*) we mean a quadratic form that is isometric to the tensor product of a $2$-dimensional nonsingular quadratic form representing $1$ and an $(m-1)$-fold bilinear Pfister form over $F$. For the following result see [@Elman:2008 (9.10) and (23.4)].
\[Pfister\] Let $m$ be a nonnegative integer. A nonsingular quadratic form $\rho$ with ${{\mathrm{dim}}}(\rho)=2^m$ is similar to a Pfister form if and only if for every field extension $K/F$, $\rho_K$ is either anisotropic or hyperbolic.
Quadratic pairs correspond to quadratic forms in a way similar to the correspondence between involutions of the first kind and bilinear forms. To any nonsingular quadratic form $\rho=(V,q)$ over $F$ one may associate a uniquely determined quadratic pair on ${{\mathrm{End}}}_F(V)$, giving the *$F$-algebra with quadratic pair adjoint to $\rho$*, denoted ${{\mathrm{Ad}}}({\rho})$. See [@Knus:1998 (5.11)] for a description. By [@Knus:1998 (5.11)], any quadratic pair on a split algebra ${{\mathrm{End}}}_F(V)$ is the adjoint of some nonsingular quadratic form over $F$. The notions of isotropy and hyperbolicity of quadratic forms extend to quadratic pairs; see [@Knus:1998 (6.5) and (6.12)] for the definitions.
The following proposition summarises the various functorial results for hermitian forms and involutions (resp. quadratic forms and quadratic pairs) we use.
\[prop:hypiff\] Let $\psi$ and $\psi'$ be either nondegenerate even $\lambda$-hermitian forms over $(D,\theta)$ or quadratic forms over $F$ and let $\varphi$ be a nondegenerate symmetric bilinear form over $F$. Then
1. ${{\mathrm{Ad}}}(\psi)\simeq {{\mathrm{Ad}}}(\psi')$ if and only if $\psi\simeq c\psi'$ for some $c\in F^\times$.
2. ${{\mathrm{Ad}}}(\psi)$ is isotropic (resp. hyperbolic) if and only if $\psi$ is isotropic (resp. hyperbolic).
3. ${{\mathrm{Ad}}}(\varphi\otimes \psi)\simeq {{\mathrm{Ad}}}(\varphi)\otimes {{\mathrm{Ad}}}(\psi)$.
$(1)$ See [@Knus:1998 (4.2) and (5.11)].
$(2)$ See [@Knus:1998 (6.3), (6.6) and (6.13)] for the result for quadratic forms. For $\psi$ a $\lambda$-hermitian form, see [@Knus:1998 (6.7)] for the statement on hyperbolicity. See [@dolphin:quadpairs (3.2)] for the isotropy result for $\psi$ a nondegenerate symmetric bilinear form over $F$. The proof is easily generalised to the case of a $\lambda$-hermitian form over $(D,\theta)$.
$(3)$ For $\psi$ a quadratic form, see [@Knus:1998 (5.19)]. Otherwise, let $\varphi=(V,b)$ and $\psi=(W,h)$. For all $f\in {{\mathrm{End}}}_F(V)$, $g\in {{\mathrm{End}}}_D(W)$, $v,v'\in V$ and $w,w'\in W$ we have $$\begin{aligned}
(b\otimes h) \bigl((f\otimes g)(v\otimes w), v'\otimes w'\bigr)& = &b(f(v), v')\cdot h(g(w), w')
\\ &=& b(v, {{\mathrm{ad}}}_b(f)(v'))\cdot h(w, {{\mathrm{ad}}}_h(g)(w'))
\\ &=& (b\otimes h) \bigl(v\otimes w, {{\mathrm{ad}}}_b(f)(v')\otimes {{\mathrm{ad}}}_h(g)(w')\bigr)\,.\end{aligned}$$ Therefore by the bilinearity of $b\otimes h$ we have that ${{\mathrm{ad}}}_{b\otimes h}(f\otimes g) = {{\mathrm{ad}}}_b(f)\otimes {{\mathrm{ad}}}_h(g)$. Using this, it follows from the linearity of ${{\mathrm{ad}}}_{b\otimes h}$ that the natural isomorphism of $F$-algebras $\Phi:{{\mathrm{End}}}_F(V)\otimes_F{{\mathrm{End}}}_D(W)\rightarrow {{\mathrm{End}}}_{D}(V\otimes_F W)$ is an isomorphism of the $F$-algebras with involution in the statement.
It follows from \[prop:hypiff\]$(2)$ and [@Elman:2008 (1.8)] that any split algebra with symplectic involution is hyperbolic and from \[prop:hypiff\]$(3)$ that an $F$-algebra with involution adjoint to a bilinear Pfister form over $F$ is totally decomposable.
Jacobson’s trace forms {#section:jacobson}
======================
We now consider hermitian forms over a quaternion division algebra or a separable quadratic extension together with their respective canonical involution. It was first shown in [@jacobson:hermformstr] that, over fields of characteristic different from $2$, the theory of hermitian forms over these division algebras with involution can be reduced to the study of quadratic forms over the base field that are multiples of the respective norm forms. A version of this result with no assumption on the characteristic of the base field was given in [@sah:evenherm]. In this case we do not get a correspondence between all hermitian forms and quadratic forms, but rather only between even hermitian forms and quadratic forms. This is no restriction for fields of characteristic different from $2$ or for the separable quadratic extension case, as then all hermitian forms are even. For the cases of a separable quadratic extension and quaternion algebras in characteristic different from two, the main result of this section is shown in [@Scharlau:1985 (10.1.1) and (10.1.7)]. Here we give a uniform presentation for both cases and with no restriction on the characteristic of the base field.
First we recall some facts on quaternion algebras. An $F$-*quaternion algebra* is a central simple $F$-algebra of degree $2$. Any $F$-quaternion algebra has a basis $(1,u,v,w)$ such that $$u^2 =u+a, v^2=b\textrm{ and }w=uv=v-vu\,$$ for some $a\in F$ with $4a\neq -1$, $b\in F^\times$ (see [@Albert:1968 Chapter IX, Thm. 26]). Let $Q$ be an $F$-quaternion algebra. By [@Knus:1998 (2.21)], the map $Q\rightarrow Q$ given by $x\mapsto {{\mathrm{Trd}}}_Q(x)-x$ is the unique symplectic involution on $Q$; it is called the *canonical involution of $Q$*. With an $F$-basis $(1,u,v,w)$ of $Q$ as above, the canonical involution $\gamma$ on $Q$ is determined by the conditions $$\gamma(u)=1-u\quad\textrm{ and }\quad\gamma(v)=-v\,.$$ By considering $Q$ as an $F$-vector space, we can view $(Q,\mathrm{Nrd}_Q)$ as a $4$-dimensional quadratic form over $F$. Further, $(Q,\mathrm{Nrd}_Q)$ is a $2$-fold Pfister form and $Q$ is split if and only if $(Q,{{\mathrm{Nrd}}}_Q)$ is hyperbolic (see [@Elman:2008 (12.5)]).
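To make the split case concrete (a standard example, included for illustration): for $Q=\mathrm{M}_2(F)$ and $x=\begin{pmatrix} a& b\\ c& d\end{pmatrix}$ we have ${{\mathrm{Trd}}}_Q(x)=a+d$, so the canonical involution is the adjugate map $$\gamma(x)=\begin{pmatrix} d& -b\\ -c& a\end{pmatrix}\,,\qquad \gamma(x)x=\det(x)\cdot 1={{\mathrm{Nrd}}}_Q(x)\cdot 1\,.$$ As a quadratic form in the four matrix entries, ${{\mathrm{Nrd}}}_Q(x)=ad-bc$ is a sum of two hyperbolic planes, in accordance with the fact that $Q$ is split if and only if $(Q,{{\mathrm{Nrd}}}_Q)$ is hyperbolic.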
Now let $K/F$ be a quadratic étale extension. That is, $K/F$ is either a separable quadratic extension or $K\simeq F\times F$. We call the non-trivial $F$-automorphism $\tau$ on $K$ the *canonical involution of $K$*. For every quadratic étale extension $K/F$ there exists an element $u\in K$ satisfying $$\tau(u)=1-u\quad \textrm{ and } \quad u^2-u=a$$ for some $a\in F$ with $4a\neq -1$ such that $K$ is $F$-isomorphic to $F(u)$ (see [@Albert:1968 Chapter IX, Lemma 8] if $K$ is a field, otherwise take $u=(0,1)$). Viewing $K$ as a $2$-dimensional vector space over $F$, we may consider $(K,\mathrm{Nrd}_K)$ as a $2$-dimensional quadratic form over $F$. One can then directly check that ${{\mathrm{Nrd}}}_{F(u)}$ is given by $(x,y)\mapsto x^2+xy-ay^2$. In particular, $(K,{{\mathrm{Nrd}}}_K)$ is a $1$-fold Pfister form. Further, $K\simeq F\times F$ if and only if $(K,{{\mathrm{Nrd}}}_K)$ is hyperbolic.
Throughout the rest of the section, let $(D,\theta)$ be either an $F$-quaternion division algebra or a separable quadratic extension of $F$ together with the respective canonical involution. Let $(V,h)$ be a nondegenerate even hermitian form over $(D,\theta)$. As $(V,h)$ is even we have that for all $x\in V$, $h(x,x)\in {{\mathrm{Symd}}}(D,\theta)$. Since ${{\mathrm{Symd}}}(D,\theta)=F$, we may define a map $V\rightarrow F$ by $x\mapsto h(x,x)$. We denote this map by $q_h$. The first statement in the following result can be found in [@sah:evenherm Thm. 1], but we provide a full proof for completeness.
\[prop:assocquadform\] Let $(V,h)$ be a nondegenerate even hermitian form over $(D,\theta)$. Considering $V$ as a vector space over $F$, the pair $(V,q_h)$ is a nonsingular quadratic form over $F$. Further there exists a nondegenerate symmetric bilinear form $\varphi$ such that $(V,h)\simeq \varphi\otimes {\mbox{$\langle 1\rangle $}}_{(D,\theta)}$ and $(V,q_h)\simeq \varphi\otimes (D,{{\mathrm{Nrd}}}_D).$
We clearly have that $q_h:V\rightarrow F$ is such that $q_h(\lambda x)=\lambda^2q_h(x)$ for all $\lambda\in F$ and $x\in V$. Further, let $b:V\times V\rightarrow F$ be given by $$b(x,y)=q_h(x+y) - q_h(x)-q_h(y)=h(x,y)+ \theta(h(x,y)), \quad\textrm{for } x,y\in V.$$ It is easily checked that $(V,b)$ is a symmetric bilinear form over $F$. Hence $(V,q_h)$ is a quadratic form over $F$.
By [@Knus:1991 Chapter I, (6.2.4)] there exists an orthogonal basis $(v_1,\ldots, v_n)$ of $(V,h)$ with $h(v_i,v_i)\in {{\mathrm{Symd}}}(D,\theta)\setminus\{0\} =F^\times$ for all $i\in\{1,\ldots, n\}$. Let $h(v_i,v_i)=a_i$ for $i=1,\ldots, n$. Consider the $F$-vector space $U=Fv_1\oplus\ldots\oplus Fv_n$. Then $\varphi=(U,h|_{U\times U})$ is a nondegenerate symmetric bilinear form over $F$, and the natural isomorphism of $D$-spaces $U\otimes_FD\rightarrow V$ gives an isometry $(V,h)\simeq{\mbox{$\langle a_1,\ldots, a_n\rangle $}}\otimes {\mbox{$\langle 1\rangle $}}_{(D,\theta)}$. Hence for $x\in V$ with coordinates $(x_1,\ldots,x_n)\in D^n$ with respect to this basis we have $$h(x,x)= a_1\theta(x_1)x_1+\ldots +a_n\theta(x_n)x_n\,.$$ As $\theta(x)x={{\mathrm{Nrd}}}_D(x)$ for all $x\in D$ we therefore have that $(V,q_h)\simeq \varphi\otimes (D,{{\mathrm{Nrd}}}_D)$ for $\varphi={\mbox{$\langle a_1,\ldots, a_n\rangle $}}$, and in particular $(V,q_h)$ is nonsingular.
\[lemma:trace\] Let $(V,h)$ be a nondegenerate even hermitian form over $(D,\theta)$. Then $(V,h)$ is isotropic (resp. hyperbolic) if and only if $(V,q_h)$ is isotropic (resp. hyperbolic).
Clearly $(V,q_h)$ is isotropic if and only if $(V,h)$ is isotropic. By \[prop:assocquadform\] there exists a nondegenerate symmetric bilinear form $\varphi$ such that $(V,h)\simeq \varphi \otimes {\mbox{$\langle 1\rangle $}}_{(D,\theta)}$. Now suppose $(V,h)$ is hyperbolic. Then $n={{\mathrm{dim}}}_D(V)$ must be even and by \[prop:hypuniq\] we may assume that $\varphi\simeq {\mbox{$\langle 1,-1,\ldots, 1,-1\rangle $}}$. Then as $(V,q_h)\simeq \varphi\otimes (D,{{\mathrm{Nrd}}}_D)$ by \[prop:assocquadform\] we have that $(V,q_h)$ is hyperbolic.
Conversely, suppose $(V,q_h)$ is hyperbolic. Then there exists a vector $x\in V\setminus\{0\}$ such that $h(x,x)=q_h(x)=0$. As $(V,h)$ is nondegenerate and even, by [@Knus:1991 Chapter 1, (3.7.4)] there exists a vector $y\in V\setminus \{0\}$ such that $h(y,y)=0$ and $h(x,y)= 1$. Let $U$ be the subspace of $V$ generated by $x$ and $y$. Then $(U,h|_U)$ is the hyperbolic $2$-dimensional hermitian form over $(D,\theta)$ and hence there exists a hermitian form $(W,h'')$ such that $(V,h)\simeq(W,h'')\perp(U,h|_U)$ by [@Knus:1991 Chapter 1, (3.7.1)]. It follows that $(V,q_h)\simeq (W,q_{h''})\perp (U,q_{h}|_U)$. As $(U,q_{h}|_U)$ is hyperbolic by the first part of the proof, it follows by Witt cancellation, [@Elman:2008 (8.4)], that $(W,q_{h''})$ is hyperbolic, and the result follows by induction on the dimension of $V$.
\[cor:isom\] Let $(V,h)$ and $(W,h')$ be nondegenerate even hermitian forms over $(D,\theta)$. Then $(V,h)\simeq(W,h')$ if and only if $(V,q_h)\simeq (W,q_{h'})$.
By \[cor:hypiso\] we have $(V,h)\simeq (W,h')$ if and only if $(V,h)\perp(-(W,h'))$ is hyperbolic. Similarly, using [@Elman:2008 (8.4)] we see that $(V,q_h)\simeq (W,q_{h'})$ if and only if $(V,q_h)\perp(-(W,q_{h'}))$ is hyperbolic. The result follows from \[lemma:trace\].
Trace forms for involutions {#jacanal}
===========================
By [@Knus:1998 (4.2)] even hermitian forms over a division algebra with symplectic (resp. unitary) involution correspond to algebras with symplectic (resp. unitary) involutions. In this section we use this correspondence to translate the hermitian form results from §\[section:jacobson\] to statements on symplectic and unitary involutions.
Throughout this section let $(B,\tau)$ be either an $F$-quaternion algebra or a quadratic étale extension of $F$ with respective canonical involution, and let $(A,\sigma)$ be either an $F$-algebra with symplectic involution such that $A$ is Brauer equivalent to $B$, or a split $F$-algebra with unitary involution with $Z(A)=Z(B)$ and ${{\sigma}}|_{Z(A)}=\tau|_{Z(B)}$, respectively. Further, let $\pi=(B,{{\mathrm{Nrd}}}_B)$.
\[prop:quadextnfullinv\] There exists a nondegenerate symmetric bilinear form $\varphi$ over $F$ such that $(A,\sigma)\simeq {{\mathrm{Ad}}}(\varphi)\otimes (B,\tau)$.
If $B$ is a split $F$-quaternion algebra or $B\simeq F\times F$ then $(A,{{\sigma}})$ and $(B,\tau)$ are hyperbolic and the result follows from \[cor:allhypsame\], taking $\varphi$ to be any symmetric bilinear form of the appropriate dimension. Otherwise by [@Knus:1998 (4.2)] there exists a nondegenerate even hermitian form $\psi$ over $(B,\tau)$ such that ${{\mathrm{Ad}}}(\psi)\simeq (A,{{\sigma}})$. By \[prop:assocquadform\] there exists a nondegenerate symmetric bilinear form $\varphi$ over $F$ such that $\psi\simeq \varphi\otimes {\mbox{$\langle 1\rangle $}}_{(B,\tau)}$. The result then follows from \[prop:hypiff\]$(3)$.
\[lemma:canon\] Let $\varphi$ be a nondegenerate symmetric bilinear form over $F$. Then ${{\mathrm{Ad}}}(\varphi)\otimes (B,\tau)$ is isotropic (resp. hyperbolic) if and only if ${{\mathrm{Ad}}}(\varphi)\otimes {{\mathrm{Ad}}}{(\pi)}$ is isotropic (resp. hyperbolic).
If $B$ is a split $F$-quaternion algebra or $B\simeq F\times F$, then ${{\mathrm{Ad}}}(\varphi)\otimes (B,\tau)$ and $\pi$ are hyperbolic. Hence $\varphi\otimes \pi$ is hyperbolic, and therefore so is ${{\mathrm{Ad}}}(\varphi)\otimes {{\mathrm{Ad}}}{(\pi)}$ by \[prop:hypiff\]$(2)$ and $(3)$. Therefore we may assume that $B$ is a division $F$-algebra. By \[prop:hypiff\]$(3)$ we have ${{\mathrm{Ad}}}(\varphi\otimes {\mbox{$\langle 1\rangle $}}_{(B,\tau)})\simeq{{\mathrm{Ad}}}(\varphi)\otimes(B,\tau)$. Since $\varphi\otimes {\mbox{$\langle 1\rangle $}}_{(B,\tau)}$ is isotropic (resp. hyperbolic) if and only if $\varphi \otimes \pi$ is isotropic (resp. hyperbolic) by \[lemma:trace\], by \[prop:hypiff\]$(2)$ we have that ${{\mathrm{Ad}}}(\varphi)\otimes (B,\tau)$ is isotropic (resp. hyperbolic) if and only if $\varphi \otimes \pi$ is isotropic (resp. hyperbolic). This is equivalent to the isotropy (resp. hyperbolicity) of ${{\mathrm{Ad}}}(\varphi\otimes\pi)$ by \[prop:hypiff\]$(2)$. Finally, ${{\mathrm{Ad}}}(\varphi\otimes\pi)\simeq {{\mathrm{Ad}}}(\varphi)\otimes {{\mathrm{Ad}}}(\pi)$ by \[prop:hypiff\]$(3)$, giving the result.
\[cor:isomsepqat\] Let $\varphi_1$ and $\varphi_2$ be nondegenerate symmetric bilinear forms over $F$. Then ${{\mathrm{Ad}}}(\varphi_1)\otimes (B,\tau)\simeq{{\mathrm{Ad}}}(\varphi_2)\otimes (B,\tau)$ if and only if ${{\mathrm{Ad}}}(\varphi_1)\otimes{{\mathrm{Ad}}}(\pi) \simeq{{\mathrm{Ad}}}(\varphi_2)\otimes {{\mathrm{Ad}}}(\pi)$.
By Proposition \[prop:hypiff\]$(1)$ and $(3)$ we have ${{\mathrm{Ad}}}(\varphi_1)\otimes{{\mathrm{Ad}}}(\pi) \simeq{{\mathrm{Ad}}}(\varphi_2)\otimes {{\mathrm{Ad}}}(\pi) $ if and only if $\varphi_1\otimes \pi\simeq c\varphi_2\otimes \pi$ for some $c\in F^\times$. This is equivalent to $\varphi_1\otimes {\mbox{$\langle 1\rangle $}}_{(B,\tau)}\simeq c \varphi_2\otimes {\mbox{$\langle 1\rangle $}}_{(B,\tau)}$ by \[cor:isom\], which is further equivalent to ${{\mathrm{Ad}}}(\varphi_1)\otimes (B,\tau)\simeq{{\mathrm{Ad}}}(\varphi_2)\otimes (B,\tau)$ by \[prop:hypiff\]$(1)$ and $(3)$.
In the case where $(B,\tau)$ is an $F$-algebra with symplectic involution, we get the following reformulation of Propositions \[lemma:canon\] and \[cor:isomsepqat\].
\[lemma:canonnicer\] Assume $(B,\tau)$ is an $F$-algebra with symplectic involution. Then $(A,\sigma)$ is isotropic (resp. hyperbolic) if and only if ${(A,\sigma)}\boxtimes {(B,\tau)}$ is isotropic (resp. hyperbolic). Further, let $(A',{{\sigma}}')$ be an $F$-algebra with symplectic involution such that $A'$ is Brauer equivalent to $B$. Then $(A,{{\sigma}})\simeq (A',{{\sigma}}')$ if and only if ${(A,\sigma)}\boxtimes {(B,\tau)}\simeq {(A',\sigma')}\boxtimes {(B,\tau)}$.
By [@dolphin:conic (2.9)] we have ${{\mathrm{Ad}}}(\pi)\simeq (B,\tau)\boxtimes {(B,\tau)}$. As $(A,{{\sigma}})\simeq {{\mathrm{Ad}}}(\varphi)\otimes (B,\tau)$ for some nondegenerate symmetric bilinear form $\varphi$ over $F$ by \[prop:quadextnfullinv\], the result follows immediately from Propositions \[lemma:canon\] and \[cor:isomsepqat\].
Totally Decomposable Involutions {#decompinv}
================================
We now give our results on totally decomposable symplectic and unitary involutions. We first recall the notion of a function field of a quadratic form. Let $\rho$ be a nonsingular quadratic form over $F$. If ${{\mathrm{dim}}}(\rho)\geqslant 3$ or if $\rho$ is anisotropic and ${{\mathrm{dim}}}(\rho)=2$, then we call the function field of the projective quadric over $F$ given by $\rho$ the *function field of $\rho$* and denote it by $F(\rho)$. In the remaining cases we set $F(\rho)=F$. This agrees with the definition in [@Elman:2008 §22].
\[prop:symp\] Let $(Q,\gamma)$ be an $F$-quaternion algebra with canonical involution and $(A,\sigma)$ be an $F$-algebra with symplectic involution such that $A$ is Brauer equivalent to $Q$ and $\deg(A)=2^n$ with $n\geqslant 1$. The following are equivalent:
- $(A,\sigma)$ is totally decomposable.
- ${(A,\sigma)}\boxtimes {(Q,\gamma)}$ is adjoint to a Pfister form.
- $(A,\sigma)\simeq {{\mathrm{Ad}}}(\psi)\otimes (Q,\gamma)$ for some bilinear Pfister form $\psi$ over $F$.
- For any field extension $K/F$, $(A,\sigma)_K$ is either anisotropic or hyperbolic.
$(i) \Rightarrow (ii)$: If $\operatorname{char}(F)\neq2$ then $(A,\sigma)\boxtimes (Q,\gamma)$ is equivalent to a totally decomposable orthogonal involution, and the result is a special case of [@Becher:qfconj Thm. 1]. If $\operatorname{char}(F)=2$ then the result is a special case of [@dolphin:PFC (7.3)].
$(ii)\Rightarrow (iii)$: Assume that ${(A,\sigma)}\boxtimes{(Q,\gamma)}\simeq {{\mathrm{Ad}}}(\rho)$ for a Pfister form $\rho$ over $F$ and let $\pi=(Q,{{\mathrm{Nrd}}}_Q)$. As $Q_{F(\pi)}$ is split, we have that $(A,{{\sigma}})_{F(\pi)}$ is hyperbolic and hence $((A,{{\sigma}})\boxtimes(Q,\gamma))_{F(\pi)}$ is hyperbolic by \[lemma:canonnicer\]. Hence $\rho_{F(\pi)}$ is hyperbolic by \[prop:hypiff\]$(2)$ and by [@Elman:2008 (23.6)] we have $\rho\simeq \psi\otimes\pi$ for some bilinear Pfister form $\psi$ over $F$. As ${{\mathrm{Ad}}}(\pi)\simeq (Q,\gamma)\boxtimes {(Q,\gamma)}$ by [@dolphin:conic (2.9)] we have ${(A,\sigma)}\boxtimes{(Q,\gamma)}\simeq{{\mathrm{Ad}}}(\psi)\otimes (Q,\gamma)\boxtimes (Q,\gamma)$ by \[prop:hypiff\]$(3)$. Therefore $(A,{{\sigma}})\simeq {{\mathrm{Ad}}}(\psi)\otimes (Q,\gamma)$ by \[lemma:canonnicer\].
$(iii)\Rightarrow (i)$: This follows easily from \[prop:hypiff\]$(3)$.
$(ii)\Rightarrow (iv)$: Let $K/F$ be a field extension. As $((A,{{\sigma}})\boxtimes (Q,\gamma))_K$ is adjoint to a Pfister form, it is either anisotropic or hyperbolic by Propositions \[Pfister\] and \[prop:hypiff\]$(2)$. Hence $(A,{{\sigma}})_K$ is either anisotropic or hyperbolic by \[lemma:canonnicer\].
$(iv)\Rightarrow (ii)$: For a given field extension $K/F$, $(A,\sigma)_K$ is anisotropic or hyperbolic if and only if the same holds for $({(A,\sigma)}\boxtimes {(Q,\gamma)})_K$ by \[lemma:canonnicer\]. Let $\rho$ be a quadratic form over $F$ such that ${(A,\sigma)}\boxtimes{(Q,\gamma)}\simeq {{\mathrm{Ad}}}(\rho)$. Then ${{\mathrm{dim}}}(\rho)=\deg(A)=2^n$ and we have that $\rho_K$ is either anisotropic or hyperbolic by \[prop:hypiff\]$(2)$. As this holds for all field extensions $K/F$, we have that $\rho$ is similar to a Pfister form by \[Pfister\] and the result follows from \[prop:hypiff\]$(1)$.
Let $(A,\sigma)$ be a split $F$-algebra with unitary involution with centre $K$ and $\deg(A)=2^n$ with $n\geqslant 1$ and let $\tau$ be the non-trivial $F$-automorphism on $K$. The following are equivalent:
- $(A,\sigma)$ is totally decomposable.
- $(A,\sigma)\simeq{{\mathrm{Ad}}}(\psi)\otimes (K,\tau)$ for some bilinear Pfister form $\psi$ over $F$.
- For any field extension $L/F$, $(A,\sigma)_L$ is either anisotropic or hyperbolic.
If $K$ is not a field then $(A,{{\sigma}})$ and $(K,\tau)$ are both hyperbolic and by \[cor:allhypsame\] we have $(A,{{\sigma}})\simeq {{\mathrm{Ad}}}(\psi)\otimes (K,\tau)$ for any symmetric bilinear form $\psi$ with ${{\mathrm{dim}}}(\psi)=\deg(A)$. Hence the equivalences all hold trivially. We therefore assume that $K$ is a field and let $\pi=(K,{{\mathrm{Nrd}}}_K)$.
$(i)\Rightarrow (ii)$: By [@Knus:1998 (2.22)] there exist $F$-quaternion algebras with involution of the first kind $(Q_i,\sigma_i)$ for $i=1,\ldots, n$ such that $$(A,\sigma)\simeq (Q_1,\sigma_1)\otimes\ldots\otimes(Q_n,\sigma_n)\otimes(K,\tau)\,.$$ Again by [@Knus:1998 (2.22)], we may assume that $(Q_i,\sigma_i)$ is symplectic for $i=1,\ldots,n$, and hence we may assume that $(B,\sigma')=(Q_1,\sigma_1)\otimes\ldots\otimes(Q_n,\sigma_n)$ is symplectic if $\operatorname{char}(F)=2$ by [@Knus:1998 (2.23)]. Since $B_K$ is split by the hypothesis, it follows from [@Elman:2008 (98.5)] that $B$ is Brauer equivalent to an $F$-quaternion algebra $Q$. It follows by [@Becher:qfconj Thm. 2] if $(B,\sigma')$ is orthogonal and by \[prop:symp\] if $(B,\sigma')$ is symplectic that $(B,\sigma')\simeq {{\mathrm{Ad}}}(\varphi)\otimes (Q,\delta)$ for some bilinear Pfister form $\varphi$ over $F$, and some $F$-quaternion algebra with involution $(Q,\delta)$. Hence $(A,\sigma)\simeq {{\mathrm{Ad}}}(\varphi)\otimes (Q,\delta)\otimes (K,\tau)$. We have that $ (Q,\delta)\otimes (K,\tau)\simeq {{\mathrm{Ad}}}(\psi)\otimes (K,\tau)$ for some $2$-dimensional symmetric bilinear form $\psi$ over $F$ by [@dolphin:quadpairs (6.2)]. Hence by \[prop:hypiff\]$(3)$, $(A,\sigma)\simeq {{\mathrm{Ad}}}(\varphi\otimes\psi)\otimes (K,\tau)$ for the bilinear Pfister form $\varphi\otimes\psi$ over $F$.
$(ii)\Rightarrow (i)$: This follows from \[prop:hypiff\]$(3)$.
$(ii)\Rightarrow (iii)$: Let $L/F$ be a field extension. If $Z(A_L)$ is not a field then $(A,{{\sigma}})_L$ is hyperbolic. Otherwise $(A,{{\sigma}})_L$ is anisotropic or hyperbolic if and only if $(\psi\otimes \pi)_L$ is hyperbolic by Propositions \[lemma:canon\] and \[prop:hypiff\]$(2)$. The result follows from \[Pfister\].
$(iii)\Rightarrow (ii)$: By \[prop:quadextnfullinv\] we have $(A,\sigma)\simeq {{\mathrm{Ad}}}(\varphi)\otimes (K,\tau)$ for some symmetric bilinear form $\varphi$ over $F$ with ${{\mathrm{dim}}}(\varphi)=\deg(A)$. Let $\rho=\varphi\otimes \pi$. Let $L/K$ be a field extension. If $K\otimes_FL=Z(A_L)$ is not a field then $\pi_L$ is hyperbolic. Otherwise $\rho_L$ is anisotropic or hyperbolic if and only if $(A,{{\sigma}})_L$ is anisotropic or hyperbolic by \[lemma:canon\]. In both cases, $\rho_L$ is either anisotropic or hyperbolic. As this holds for any field extension $L/F$, $\rho$ is similar to a Pfister form by \[Pfister\]. Similarly, $\rho_{F(\pi)}$ is isotropic, and hence hyperbolic. It then follows from [@Elman:2008 (23.6) and (24.1)] that $\rho\simeq c\psi\otimes\pi$ for some bilinear Pfister form $\psi$ over $F$ and some $c\in F^\times$. Hence by \[prop:hypiff\]$(1)$, ${{\mathrm{Ad}}}(\rho)\simeq {{\mathrm{Ad}}}(\varphi)\otimes {{\mathrm{Ad}}}(\pi)\simeq {{\mathrm{Ad}}}(\psi)\otimes {{\mathrm{Ad}}}(\pi)$. It follows that ${{\mathrm{Ad}}}(\varphi)\otimes (K,\tau)\simeq {{\mathrm{Ad}}}(\psi)\otimes (K,\tau)$ by \[cor:isomsepqat\], as required.
[^1]: This work was supported by the [Deutsche Forschungsgemeinschaft]{} (project *The Pfister Factor Conjecture in Characteristic Two*, BE 2614/4) and the FWO Odysseus programme (project *Explicit Methods in Quadratic Form Theory*).
---
abstract: 'In this paper we study the singularities of the invariant metric of the Poincaré bundle over a family of abelian varieties and their duals over a base of arbitrary dimension. As an application of this study we prove the effectiveness of the height jump divisors for families of pointed abelian varieties. The effectiveness of the height jump divisor was conjectured by Hain in the more general case of variations of polarized Hodge structures of weight $-1$.'
address:
- 'Instituto de Ciencias Matemáticas (CSIC-UAM-UCM-UCM3). Calle Nicolás Cabrera 15, Campus UAM, Cantoblanco, 28049 Madrid, Spain.'
- 'Mathematical Institute, Leiden University, PO Box 9512, 2300 RA Leiden, The Netherlands'
- 'Mathematical Institute, Leiden University, PO Box 9512, 2300 RA Leiden, The Netherlands'
author:
- José Ignacio Burgos Gil
- David Holmes
- Robin de Jong
title: Singularities of the biextension metric for families of abelian varieties
---
Introduction
============
Families of curves {#sec:families_curves}
------------------
By way of motivation of the general results in this paper, consider the following situation. Let $X$ be a smooth complex algebraic variety of dimension $n$, and let $\pi \colon Y\to X$ be a family of smooth projective curves parametrized by $X$. Let $A$, $B$ be two relative degree zero divisors on $Y \to X$, with disjoint support. To these divisors we can associate a function $h\colon X\to {{{\mathbb{R}}}}$, given by the archimedean component of the height pairing $$h(x)=\langle A_x,B_x \rangle_{\infty} \, ,$$ where $x \in X$. Let $X\hookrightarrow {\overline{X}}$ be a smooth compactification of $X$ with $D={\overline{X}}\setminus X$ a normal crossings divisor. We are interested in the behavior of the function $h$ close to the boundary divisor $D$. As is customary to do, we assume that the monodromy operators on the homology of the fibers of $Y \to X$ about all irreducible components of $D$ are unipotent. Let $x_{0}$ be a point of ${\overline{X}}$, and $U{\xrightarrow{\sim}}\Delta^n$ a small enough coordinate neighborhood of $x_{0}$ such that $ D\cap U$ is given by $q_1\cdots q_k=0$. Thanks to a result of D. Lear [@lear], there exist a continuous function $h_{0}\colon U\setminus D{^{\textrm{sing}}}{\rightarrow}{{{\mathbb{R}}}}$ and rational numbers $f_{1},\dots, f_k$ such that on $U\setminus D$ the equality $$\label{eq:2}
h(q_1,\dots,q_n)=h_{0}(q_1,\dots,q_n)-\sum_{i=1}^{k}f_{i}\log|q_{i}|$$ holds. Since $h_{0}$ is continuous on $U\setminus D{^{\textrm{sing}}}$, this determines the behavior of $h$ close to the smooth points of $D$. The question remains what happens when we approach a point of $D{^{\textrm{sing}}}$. In other words, what kind of singularities may $h_{0}$ have on $D{^{\textrm{sing}}}$?
From work by G. Pearlstein [@pearldiff] a strengthening of Lear’s result emerges. Let $x_0 \in {\overline{X}}$ be as above. Then there exists a homogeneous weight one function $f \in \qq(x_1,\ldots,x_k)$ such that the following holds. Consider a holomorphic test curve ${\overline{\phi}} \colon {\overline{C}}\to {\overline{X}}$ that has image not contained in $D$, a point $0\in {\overline{C}}$ such that ${\overline{\phi}}(0)=x_0$, and a local analytic coordinate $t$ for ${\overline{C}}$ close to $0$. Assume that ${\overline{\phi}}$ is given locally by $$t\mapsto\big(t^{m_{1}}u_{1}(t),\dots,t^{m_{k}}u_{k}(t),q_{k+1}(t),
\dots,q_{n}(t)\big),$$ where $m_{1},\dots,m_{k}$ are non-negative integers, $u_{1},\dots,u_{k}$ are invertible functions and $q_{k+1},\dots,q_{n}$ are arbitrary holomorphic functions. Then the asymptotic estimate $$\label{onevariableasympt}
h({\overline{\phi}}(t)) = b'(t) - f(m_1,\ldots,m_k) \log|t|$$ holds in a neighborhood of $0 \in {\overline{C}}$. Here $b'$ is a continuous function that extends continuously over $0$.
Naively one might expect that the function $f$ is linear and $f(m_1,\ldots,m_k)$ is just a linear combination of the numbers $f_{i}$ with coefficients given by the multiplicities $m_{i}$ of the curve ${\overline{C}}$. In general, however, this turns out not to be the case. Examples of non-linear $f$ can be found in [@bhdj] and [@bkk]. In [@abbf], [@bhdj] and [@hdj] one finds a combinatorial interpretation of the function $f$ in terms of potential theory on the dual graphs of stable curves.
As a special case of one of the main results of this paper we obtain a stronger asymptotic estimate. Namely $$h(q_1,\ldots,q_n) = b(q_1,\ldots,q_n) + f(-\log|q_1|,\ldots,-\log|q_k|)$$ on $U \setminus D$, where $b \colon U \setminus D \to {{{\mathbb{R}}}}$ is a bounded continuous function that extends in a continuous manner over $U \setminus D^{\mathrm{sing}}$. The boundedness of $b$ can be seen as a uniformity property on the asymptotic estimates for different test curves. In general, $b$ cannot be extended continuously to $D^{\mathrm{sing}}$, so the boundedness of $b$ is the strongest estimate that can be hoped for.
One may ask for further properties of $h$. For example, a result of T. Hayama and G. Pearlstein [@hp Theorem 1.18] implies that $h$ is locally integrable. Another question is whether the same can be said about the forms $\partial h$ and $\deldelbar h$ and their powers. As seen in [@bkk], in the two-dimensional case this may lead to interesting intersection numbers between infinite towers of divisors. We plan to address this question in full generality in a subsequent work. In this paper we will focus on the one-dimensional case because it is the only case needed to treat Conjecture \[con:2\] below. Thus assume that the dimension of $X$ is one. Let $h_{0}$ be the function appearing in equation \[eq:2\]. Then the 1-form $\partial h_{0}$ is locally integrable on $U$ with zero residue. Moreover the 2-form $\partial\bar \partial h_{0}$ is locally integrable on $U$.
Admissible variations of Hodge structures {#sec:vari-hodge-struct}
-----------------------------------------
The correct general setting for approaching these issues is to consider an admissible variation of polarized pure Hodge structures $\hh$ of weight $-1$ over $X$, see for instance [@hainbiext] and [@hain_normal]. Let $\hh^\lor$ be the dual variation. Let $J(\hh) \to X$ and $J(\hh^\lor) \to X$ be the corresponding families of intermediate jacobians. Then on $J(\hh) \underset{X}{\times } J(\hh^\vee)$ one has a Poincaré (biextension) bundle ${{\mathcal{P}}}={{\mathcal{P}}}(\hh)$ with its canonical (biextension) metric. The polarization induces an isogeny of complex tori $\lambda \colon J(\hh) \to J(\hh^\lor)$. Let $\nu, \mu \colon X \to J(\hh)$ be two sections (with good behavior near $D$; more precisely, *normal function sections*). Then we define $$L = {{\mathcal{P}}}_{\nu,\mu} \defeq (\nu,\lambda \mu)^* {{\mathcal{P}}} \, ,$$ a metrized line bundle on $X$. We put ${{\mathcal{P}}}_\nu={{\mathcal{P}}}_{\nu,\nu}$. This “diagonal” case will be of special interest to us. One important example, discussed at length in [@hain_normal], is given by the normal function in $J(\bigwedge^3
H_1(Y_x))=H_{3}(J(Y_{x}))$ associated to the Ceresa cycle $[Y_{x}]-[-Y_{x}]$ in $J(Y_x)$, for a family of curves $Y \to X$.
A second example is provided by the sections determined by two relative degree zero divisors on a family of smooth projective curves, as above. Let $\hh$ be the local system given by the homology of the fibers of the family of curves $Y \to X$. Then $J(\hh)$ is the usual jacobian fibration associated to $Y \to X$. It is (principally) polarized in a canonical way. The divisors $A, B$ give rise to sections $\nu, \mu$ of $J(\hh) \to X$. Let ${\langle A,B\rangle}$ be the Deligne pairing [@de] on $X$ associated with the line bundles ${{\mathcal{O}}}(A)$ and ${{\mathcal{O}}}(B)$. The metric on ${\langle A,B\rangle}$ is determined by the archimedean height pairing ${\langle A,B\rangle}_\infty$. Moreover we have a canonical isometry $${\langle A,B\rangle}^{\otimes -1} {\xrightarrow{\sim}}{{\mathcal{P}}}_{\nu,\mu} \, .$$ Thus the singularity of a local generating section of ${{\mathcal{P}}}_{\nu,\mu}$ near $x_0$ in the biextension metric precisely gives the singularity of the function $h$ near $x_0$ as discussed above.
Returning to the general set-up, the result of Lear [@lear] (see also [@hain_normal]) is that some power $L^{\otimes N}$ extends as a continuously metrized line bundle over ${\overline{X}} \setminus
D^{\mathrm{sing}}$. Here we need to impose the condition that the monodromy operators on the fibers of $\hh$ about all irreducible components of $D$ are unipotent. We denote the resulting ${{\mathbb{Q}}}$-line bundle (the Lear extension, see below) by ${\left[L,{\lvert\lvert-\rvert\rvert} \vphantom{{L,{\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. In general we are interested in the behavior of the biextension metric on $L^{\otimes N}$ when we approach a point $x_0$ in the singular locus $D^{\mathrm{sing}}$. Let $s$ be a local generating section of $L={{\mathcal{P}}}_{\nu,\mu}$ on $U \cap X$. By [@pearldiff Theorem 5.19] there exists a homogeneous weight one function $f \in \qq(x_1,\ldots,x_k)$ such that for each holomorphic test curve ${\overline{\phi}} \colon {\overline{C}} \to {\overline{X}}$ the asymptotic estimate $$-\log\|s({\overline{\phi}}(t))\| = b'(t) -f(m_1,\ldots,m_k)\log|t|$$ holds in a neighborhood $V$ of $0 \in {\overline{C}}$, with $b'(t)$ continuous on $V$.
In the case where the variation $\hh$ is pure of type $(-1,0),
(0,-1)$, that is, the family $J(\hh) \to X$ is a family of polarized abelian varieties, we are able to strengthen this result of Pearlstein’s.
Statement of the main results {#sec:statement_of_main}
-----------------------------
Let $(q_1,\ldots,q_n) \colon U {\xrightarrow{\sim}}\Delta^n$ be a coordinate chart on ${\overline{X}}$ such that $D \cap U = \{ q_1\cdots q_k =
0 \}$. Denote by $D_{i}$ the local component of $D$ with equation given by $q_{i}=0$.
For any $ 0 < \epsilon < 1$ write $$U_\epsilon = \{ (q_1,\ldots,q_n) \in U :
{\lvert q_i \rvert} < \epsilon \quad \textrm{for all} \quad i=1,\ldots,n \} \,
.$$ Note that $U_\epsilon \cap X$ is identified via the coordinate chart with $(\Delta^*_\epsilon)^k \times \Delta_\epsilon^{n-k}$.
\[singbiext\] Let $\hh$ be an admissible variation of polarized pure Hodge structures of type $(-1,0), (0,-1)$ on $X$. Assume that the monodromy operators on the fibers of $\hh$ about the irreducible components of $D$ are unipotent. Let $\nu, \mu \colon X \to J(\hh)$ be two algebraic sections. There exist an integer $d$, a homogeneous polynomial $Q\in
{{\mathbb{Z}}}[x_1,\ldots,x_k]$ of degree $d$ with no zeroes on ${{\mathbb{R}}}_{>0}^k$ and, for each local generating section $s$ of ${{\mathcal{P}}}_{\nu,\mu}$ over $U \cap X$, a homogeneous polynomial $P_s\in
{{\mathbb{Z}}}[x_1,\ldots,x_k]$ of degree $d+1$ such that the homogeneous weight one rational function $f_s=P_s/Q$ satisfies the following properties.
1. For all $\epsilon \in {{\mathbb{R}}}_{>0}$ small enough, the function $$b(q_1,\ldots,q_n)=-\log{\lvert\lvert s\rvert\rvert} - f_s(-\log|q_1|,\ldots,-\log|q_k|)$$ is bounded on $U_\epsilon \cap X$ and extends continuously over $U_\epsilon \setminus D^{\mathrm{sing}}$.
2. The function $f_{s}$ is uniquely determined by the previous property. Moreover, if $s'$ is another section such that $${\operatorname{div}}(s'/s)=\sum_{i=1}^{k}a_{i}D_{i},$$ then the difference $$f_{s'}-f_{s}=\sum_{i=1}^{k}a_{i}(-\log|q_{i}|)$$ is linear in the variables $-\log|q_{i}|$.
3. The function $f_s \colon {{\mathbb{R}}}_{>0}^k \to {{\mathbb{R}}}$ extends to a continuous function ${\overline{f}}_s \colon {{\mathbb{R}}}^k_{\ge 0} \to
{{\mathbb{R}}}$.
4. In the case that $\mu=\nu$, the function $f_s$ is convex as a function on ${{\mathbb{R}}}_{>0}^k$ and the function ${\overline{f}}_s$ is convex as a function on ${{\mathbb{R}}}^k_{\ge 0}$.
Example \[exm:1\] below will show that, in general, the locus of indeterminacy $D^{\mathrm{sing}}$ of $b$ cannot be reduced to a smaller set.
As to local integrability, R. Hain has made the following conjecture (see [@hain_normal Conjecture 6.4]). Assume we work with an arbitrary polarized variation of Hodge structures $(\hh,\lambda)$ of weight $-1$, with Poincaré bundle ${{\mathcal{P}}}$.
\[con:2\] Write $\hat{{{\mathcal{P}}}} =
(\mathrm{id},\lambda)^*{{\mathcal{P}}}$ and let $\omega=c_1(\hat{{{\mathcal{P}}}})$ be the first Chern form of the pullback of the Poincaré bundle with its canonical metric. Assume that $X$ is a curve. Let $L={{\mathcal{P}}}_\nu=\nu^* \hat{{{\mathcal{P}}}}$ with induced metric ${\lvert\lvert-\rvert\rvert}$ and let $N \in \zz_{>0}$ be such that $L^{\otimes
N}$ extends as a continuous metrized line bundle over ${\overline{X}}$. Let $c_1\left({\left[L^{\otimes N}, {\lvert\lvert-\rvert\rvert} \vphantom{{L^{\otimes N}, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}\right)$ be the first Chern class of the extended line bundle ${\left[L^{\otimes N},
{\lvert\lvert-\rvert\rvert} \vphantom{{L^{\otimes N},
{\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. Then the $2$-form $\nu^*\omega$ is integrable on ${\overline{X}}$, and the equality $$\int_X \nu^*\omega = \frac{1}{N} \int_{{\overline{X}}} c_1
\left({\left[L^{\otimes N}, {\lvert\lvert-\rvert\rvert} \vphantom{{L^{\otimes N}, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}} \right)$$ holds.
Note that $\nu^*\omega = c_1({{\mathcal{P}}}_\nu)$, and that the integral on the right hand side equals $\frac{1}{N} \deg_{{\overline{X}}} {\left[L^{\otimes
N}, {\lvert\lvert-\rvert\rvert} \vphantom{{L^{\otimes
N}, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. We prove the following result, which implies Hain’s conjecture in the case of an admissible variation of polarized Hodge structure of type $(-1,0), (0,-1)$.
\[localint\] \[theorem:local\_integrability\_over\_curves\] Assume that the admissible variation $\hh$ over $X$ is pure of type $(-1,0),
(0,-1)$, and that the monodromy operators on the fibers of $\hh$ about all irreducible components of $D$ are unipotent. Let $s$ be a local generating section of ${{\mathcal{P}}}_{\nu,\mu}$ on $U \cap X$ and assume that $\dim X = 1$. Write $$-\log\|s\| = b(t) - r \log|t|$$ on $U \cap X$, where $t$ is a local coordinate on $U$ centered at the boundary point, with $r \in {{\mathbb{Q}}}$ and with $b$ bounded continuous on $U$, as can be done by the existence of the Lear extension of ${{\mathcal{P}}}_{\nu,\mu}$ over ${\overline{X}}$. Then the 1-form $\partial b$ is locally integrable on $U$ with zero residue. Moreover the 2-form $\partial\bar \partial
b$ is locally integrable on $U$.
As also $\partial \bar{\partial} \log|t|$ is locally integrable, we find that $\partial \bar{\partial} \log {\lvert\lvert s\rvert\rvert}$ is locally integrable. Moreover, the $1$-form $\partial b$ has no residue on $U$, so that $d [\bar{\partial} b]=[\partial \bar{\partial} b]$; upon globalizing using bump functions and applying Stokes’ theorem we find $$\int_X c_1({{\mathcal{P}}}_{\nu,\mu}) = \frac{1}{N} \int_{{\overline{X}}} c_1 \left({\left[{{\mathcal{P}}}_{\nu,\mu}^{\otimes N}, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_{\nu,\mu}^{\otimes N}, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}} \right) \, .$$ In the diagonal case, we mention that by [@hain_normal Theorem 13.1] or [@pearlpeters Theorem 8.2] the metric on ${{\mathcal{P}}}_\nu$ is non-negative. Thus the conjecture implies that actually the inequality $$\label{positivedegree}
{\operatorname{deg}} {\left[{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}} \geq 0$$ holds. We also mention that in a letter to P. Griffiths, G. Pearlstein sketches a proof of Conjecture \[con:2\], and hence of the inequality (\[positivedegree\]), without the assumption that the type be $(-1,0), (0,-1)$.
We return again to the setting where the parameter space $X$ is of any dimension. However, we specialize to the “diagonal” case where $\mu=\nu$. Consider a test curve ${\overline{\phi}} \colon {\overline{C}}\to {\overline{X}}$ that has image not contained in $D$, and a point $0\in {\overline{C}}$ such that ${\overline{\phi}}(0)=x_0$. Let $\phi$ denote the restriction of ${\overline{\phi}}$ to ${\overline{C}} \setminus {\overline{\phi}}^{-1}D$. The line bundle $${\left[\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}) \vphantom{{\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert})}^{\sum}}\right]}_{{\overline{C}}}^{\otimes -1} \otimes {\overline{\phi}}^*{\left[{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$$ has a canonical non-zero rational section, as it is canonically trivial over ${\overline{C}} \setminus {\overline{\phi}}^{-1}D$. We call its divisor the height jump divisor $J=J_{\phi,\nu}$ on ${\overline{C}}$. R. Hain has the following conjecture (see [@hain_normal end of §14]).
\[con:1\] For all holomorphic test curves ${\overline{\phi}} \colon {\overline{C}}\to {\overline{X}}$ with image not contained in $D$, the height jump divisor $J=J_{\phi,\nu}$ on ${\overline{C}}$ is effective.
Let $0 \in {\overline{C}}$ be a point mapping to a point $x_0\in {\overline{X}}\setminus X$. Choose coordinates in a neighbourhood of $x_{0}$ as in Theorem \[singbiext\] so that $x_{0}$ has coordinates $(0,\dots,0)$ and let ${\overline{f}}_{s}$ be as in that theorem. Locally around $0$ the map ${\overline{\phi}} $ can be written as $${\overline{\phi}} (t)=(t^{m_{1}}u_{1}(t),\dots,
t^{m_{k}}u_{k}(t),q_{k+1}(t),\dots, q_{n}(t)),$$ where, for $i\in [1,k]$, $m_{i}> 0$ and $u_{i}(0)\not = 0$. Write ${\overline{f}}_{s,i}\defeq{\overline{f}}_s(0,\ldots,0,1,0,\ldots,0)$ (with the $1$ placed in the $i$-th spot); then $${\operatorname{ord}}_0 J =-{\overline{f}}_s(m_1,\ldots,m_k)+\sum_{i=1}^{k}m_{i}{\overline{f}}_{s,i} \, .$$ The rational number ${\operatorname{ord}}_0 J$ is called the “height jump” associated to the curve ${\overline{C}}$ and the point $0\in {\overline{C}}$. The fact that the height jump may be non-zero was first observed by R. Hain [@hain_normal] and has been explained by P. Brosnan and G. Pearlstein [@brospearl]. We find that the height jumps precisely when $f_s$ is not linear. We mention that the conjecture about the height jump was only stated in [@hain_normal] for the normal function associated to the Ceresa cycle, but it seems reasonable to make this broader conjecture.
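To illustrate the formula for ${\operatorname{ord}}_0 J$, the following sketch (our own illustration; the function $f_s$ is hypothetical, though of the shape $P_s/Q$ produced by Theorem \[singbiext\] with $k=2$) computes the height jump for $f_s(x_1,x_2)=x_1^2/(x_1+x_2)$, which is homogeneous of weight one and convex, but not linear:

```python
from fractions import Fraction

def f(x1, x2):
    # hypothetical homogeneous weight-one function f_s = P_s/Q with
    # P_s = x1^2 and Q = x1 + x2 (an illustration, not taken from the paper)
    return Fraction(x1 * x1, x1 + x2)

# values of the continuous boundary extension at the standard basis vectors:
# f̄_s(1, 0) = 1 and f̄_s(0, 1) = 0
f_bar = [f(1, 0), Fraction(0)]

def height_jump(m1, m2):
    # ord_0 J = -f̄_s(m1, m2) + m1 * f̄_{s,1} + m2 * f̄_{s,2}
    return -f(m1, m2) + m1 * f_bar[0] + m2 * f_bar[1]

print(height_jump(1, 1))   # 1/2
print(height_jump(2, 3))   # 6/5
```

Here the jump equals $m_1m_2/(m_1+m_2)$, which is strictly positive as soon as both multiplicities are positive, and vanishes along the coordinate axes, where $f_s$ restricts to a linear function; this is consistent with the effectivity statement proved below and shows how a non-linear $f_s$ forces a non-trivial jump.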
In this paper we prove Conjecture \[con:1\] in the case of sections of families of polarized abelian varieties.
\[theorem:effectivity\] Assume that the admissible variation $\hh$ over the smooth complex variety $X$ is pure of type $(-1,0), (0,-1)$, and that the monodromy operators on the fibers of $\hh$ about all irreducible components of $D$ are unipotent. Then for all holomorphic test curves ${\overline{\phi}} \colon {\overline{C}}\to {\overline{X}}$ with image not contained in $D$, the associated height jump divisor $J$ is effective.
Combining with inequality (\[positivedegree\]) we obtain
Assume that ${\overline{C}}$ is smooth and projective. Then under the assumptions of Theorem \[theorem:effectivity\], the line bundle ${\overline{\phi}}^*{\left[{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$ has non-negative degree on ${\overline{C}}$.
The key to our proof of Theorem \[theorem:effectivity\] is the fact that $f_s$ is convex, cf. Theorem \[singbiext\].4. Turning again to the case of the Ceresa cycle, note that since the intermediate Jacobian of the primitive part of $H_{3}(J(Y_{x}))$ is a compact complex torus but not an abelian variety, we cannot directly apply our results for families of abelian varieties to this case. In a future work we hope to extend our results to cover this case.
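The mechanism behind this convexity argument can be sketched in a single display. Since ${\overline{f}}_s$ is homogeneous of weight one and convex on ${{\mathbb{R}}}^k_{\ge 0}$ (Theorem \[singbiext\].3 and \[singbiext\].4), one has, for $m=(m_1,\ldots,m_k)$ with all $m_i>0$, $${\overline{f}}_s(m)
= k\,{\overline{f}}_s\Big(\frac{1}{k}\sum_{i=1}^{k}m_ie_i\Big)
\le k\sum_{i=1}^{k}\frac{1}{k}\,{\overline{f}}_s(m_ie_i)
=\sum_{i=1}^{k}m_i\,{\overline{f}}_{s,i}\,,$$ where the first equality uses homogeneity, the inequality is Jensen’s inequality for the convex function ${\overline{f}}_s$, and the last step uses homogeneity once more together with ${\overline{f}}_s(e_i)={\overline{f}}_{s,i}$. Hence ${\operatorname{ord}}_0 J=-{\overline{f}}_s(m)+\sum_{i}m_i{\overline{f}}_{s,i}\ge 0$. This is only a sketch of the mechanism; the proof given below has to justify, in addition, the boundary behavior of ${\overline{f}}_s$ used here.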
In the special case of families of jacobians Conjecture \[con:1\] is proved in [@bhdj]. The proof in this special case makes heavy use of the combinatorics of dual graphs of nodal curves, and so cannot readily be extended to families of abelian varieties, nor does it seem practical to reduce the general case to that of jacobians.
Overview of the paper
---------------------
We review the content of the different sections of this paper. In the preliminary section \[sec:three-faces-coin\] we start by recalling the notion of Lear extension, and the Poincaré bundle on the product of a complex torus and its dual, together with its associated metric. We also recall the explicit description of the Poincaré bundle and its metric on a family of polarized abelian varieties. Furthermore, we study the period map associated to a family of pointed polarized abelian varieties. Moreover we give a local expansion for the metric of the pullback of the Poincaré bundle under this period map. The functions that appear as the logarithm of the norm of a section of the pullback of the Poincaré bundle will be called norm-like functions.
In section \[technical\] we study norm-like functions and give several estimates on their growth and that of their derivatives. Finally in section \[sec:proof-main-results\] we prove the main results on local integrability and positivity of the height jump. Along the way we give, for convenience of the reader, a proof of Lear’s extension theorem in our situation.
We fix some notation that we will use throughout. Let $r$ be a positive integer. For any commutative ring $R$ we will denote by ${\operatorname{Col}}_{r}(R)$ (respectively ${\operatorname{Row}}_{r}(R)$, $M_{r}(R)$ and $S_{r}(R)$) the set of column vectors of size $r$ with entries in $R$ (respectively row vectors, matrices and symmetric matrices of size $r$-by-$r$).
We denote by $S_{r}^{++}({{\mathbb{R}}})\subset S_{r}({{\mathbb{R}}})$ (respectively $S_{r}^{+}({{\mathbb{R}}})\subset S_{r}({{\mathbb{R}}})$) the cone of positive definite (respectively positive semidefinite) symmetric real matrices. We denote by $\mathbb{H}_r$ Siegel’s upper half space of rank $r$, and by $\mathbb{P}^r$ its compact dual.
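In coordinates, membership of a complex matrix $\Omega$ in $\mathbb{H}_r$ simply means that $\Omega$ is symmetric with ${\operatorname{Im}}(\Omega)\in S_r^{++}({{\mathbb{R}}})$. The following minimal pure-Python check (our own illustrative sketch; all names are of our choosing) spells this out:

```python
def is_pos_def(S, tol=1e-12):
    # test positive definiteness of a real symmetric matrix (list of rows)
    # via Gaussian elimination: S is positive definite iff all pivots are > 0
    n = len(S)
    a = [row[:] for row in S]
    for k in range(n):
        if a[k][k] <= tol:
            return False
        for i in range(k + 1, n):
            r = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= r * a[k][j]
    return True

def in_siegel_upper_half_space(omega, tol=1e-12):
    # omega lies in H_r iff it is symmetric and Im(omega) is positive definite
    n = len(omega)
    symmetric = all(abs(omega[i][j] - omega[j][i]) < tol
                    for i in range(n) for j in range(n))
    imag = [[omega[i][j].imag for j in range(n)] for i in range(n)]
    return symmetric and is_pos_def(imag, tol)

print(in_siegel_upper_half_space([[1j, 0], [0, 2j]]))    # True
print(in_siegel_upper_half_space([[1j, 0], [0, -1j]]))   # False
```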
By a variety we mean an integral separated scheme of finite type over ${{\mathbb{C}}}$.
[**Acknowledgments.**]{} We would like to thank R. Hain and G. Pearlstein for several discussions and useful hints. We would also like to thank the Mathematical Institute of Leiden University and the Instituto de Ciencias Matemáticas for their hospitality, which allowed the authors to meet and work on this paper.
Preliminary results {#sec:three-faces-coin}
===================
Lear extensions {#sec:lear-extensions}
---------------
We start by recalling the formalism of ${{\mathbb{Q}}}$-line bundles. Let $X$ be a variety. A ${{\mathbb{Q}}}$-line bundle over $X$ is a pair $(L,r)$ where $L$ is a line bundle on $X$ and $r$ is a positive integer. A metrized ${{\mathbb{Q}}}$-line bundle is a triple $(L,{\lvert\lvert-\rvert\rvert},r)$, where $(L,r)$ is a ${{\mathbb{Q}}}$-line bundle and ${\lvert\lvert-\rvert\rvert}$ is a continuous metric on $L$. A morphism of ${{\mathbb{Q}}}$-line bundles $(L_{1},r_{1})\to (L_{2},r_{2})$ is a morphism of line bundles $L_{1}^{\otimes r_2}\to L_{2}^{\otimes
r_1}$. A morphism of metrized line bundles is an isometry if the corresponding morphism of line bundles is an isometry. Every line bundle $L$ gives rise to a ${{\mathbb{Q}}}$-line bundle $(L,1)$. Note that, if $L$ is a line bundle and $r>1$ is an integer, then there is a canonical isomorphism $(L^{\otimes r},r)\simeq (L,1)$. Moreover, if $L$ is a torsion line bundle so that $L^{\otimes r}\simeq {{\mathcal{O}}}_{X}$, then there is an isomorphism of ${{\mathbb{Q}}}$-line bundles $(L,1)\to
({{\mathcal{O}}}_{X},r)$. If we do not need to specify the multiplicity $r$, a ${{\mathbb{Q}}}$-line bundle will be denoted by a single letter.
Let $X {\subseteq}{\overline{X}}$ be a smooth compactification of a smooth variety, such that the boundary divisor $D \defeq {\overline{X}} \setminus X$ has normal crossings, and $L$ a line bundle on $X$ with continuous metric ${\lvert\lvert-\rvert\rvert}$. A *Lear extension* of $L$ is a ${{\mathbb{Q}}}$-line bundle $({{\mathcal{L}}},r)$ on ${\overline{X}}$ together with an isomorphism $\alpha \colon
(L,1)\to ({{\mathcal{L}}},r)|_{X}$ and a continuous metric on ${{\mathcal{L}}}|_{{\overline{X}} \setminus D^\mathrm{sing}}$ such that the isomorphism $\alpha $ is an isometry. Since $D^\mathrm{sing}$ has codimension at least $2$ in ${\overline{X}}$, if a Lear extension exists then it is unique up to a unique isomorphism. If a Lear extension of $L$ exists we denote it by ${\left[L, {\lvert\lvert-\rvert\rvert} \vphantom{{L, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. Note that the isomorphism class of the Lear extension of $L$ depends not only on $L$ but also on the metric on $L$.
If $s$ is a rational section of $L$, it can also be seen as a rational section of ${\left[L, {\lvert\lvert-\rvert\rvert} \vphantom{{L, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. We will denote by ${\operatorname{div}}_{X}(s)$ the divisor of $s$ as a rational section of $L$ and by ${\operatorname{div}}_{{\overline{X}}}(s)$ the divisor of $s$ as a rational section of ${\left[L, {\lvert\lvert-\rvert\rvert} \vphantom{{L, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$.
Poincaré bundle and its metric {#sec:poincare-bundle}
------------------------------
In this section we recall the definition of the Poincaré bundle and its biextension metric. Moreover we make the biextension metric explicit in the case of families of polarized abelian varieties.
In the literature one can find small discrepancies in the description of the Poincaré bundle, see Remark \[rem:1\]. These discrepancies can be traced back to two different choices of the identification of a complex torus with its bidual. Moreover, there are also different conventions regarding the sign of the polarization of the abelian variety. Since one of our main results is a positivity result it is worthwhile to fix all the signs to avoid these ambiguities.
*Complex tori and their duals.* Let $g\ge 0$ be an integer, $V$ a $g$-dimensional complex vector space and $\Lambda \subset V$ a lattice of rank $2g$. The quotient $T=V/\Lambda $ is a compact complex torus. It is a Kähler complex manifold, but in general it is not an algebraic variety.
We recall the construction of the dual torus of $T$. We denote by $V^{\vee}={\operatorname{Hom}}_{{\overline{{{\mathbb{C}}}}}}(V,{{\mathbb{C}}})$ the space of antilinear forms $w\colon V\to {{\mathbb{C}}}$. The bilinear form $$\langle\cdot,\cdot\rangle\colon V^{\vee}\times V\to {{\mathbb{R}}},\ \langle
w,z \rangle \defeq {\operatorname{Im}}(w(z))$$ is non-degenerate. Thus $$\Lambda ^{\vee}\defeq \{\lambda \in V^{\vee}\mid \langle
\lambda ,\Lambda \rangle\subset {{\mathbb{Z}}}\}$$ is a lattice of $V^{\vee}$. The quotient $T^{\vee}=V^{\vee}/\Lambda ^{\vee}$ is again a compact complex torus, called the *dual torus* of $T$.
We can identify $V$ with ${\operatorname{Hom}}_{{\overline{{{\mathbb{C}}}}}}(V^{\vee},{{\mathbb{C}}})$ by the rule $$\label{eq:3}
z(w)={\overline{w(z)}}$$ so that the bilinear pairing $$(V^{\vee}\oplus V)\otimes (V^{\vee}\oplus V)\to {{{\mathbb{R}}}},\quad
(w,z)\otimes(w',z')\mapsto {\operatorname{Im}}(w(z'))+{\operatorname{Im}}(z(w'))$$ is antisymmetric. With this identification the double dual $(T^{\vee})^{\vee}$ gets identified with $T$.
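These formulas are easy to check in coordinates. In the simplest case $g=1$, with $V={{\mathbb{C}}}$, every antilinear form is $w_c(z)=c\,\bar z$ for some $c\in{{\mathbb{C}}}$. The sketch below (our own illustration, not part of the paper) verifies numerically that the above bilinear form is antisymmetric, and that for $\Lambda={{\mathbb{Z}}}+i{{\mathbb{Z}}}$ the forms $w_c$ with $c\in{{\mathbb{Z}}}+i{{\mathbb{Z}}}$ pair integrally with $\Lambda$, so that they represent elements of $\Lambda^{\vee}$:

```python
import random

def w(c, z):
    # the antilinear form on V = C with coefficient c: w_c(z) = c * conj(z)
    return c * z.conjugate()

def pairing(c, z):
    # <w_c, z> = Im(w_c(z))
    return w(c, z).imag

def form(p, q):
    # the bilinear form on V^vee + V: ((w,z),(w',z')) -> Im(w(z')) + Im(z(w')),
    # where z acts on V^vee through the identification z(w') = conj(w'(z))
    (c, z), (c2, z2) = p, q
    return w(c, z2).imag + w(c2, z).conjugate().imag

random.seed(0)
rand = lambda: complex(random.uniform(-2, 2), random.uniform(-2, 2))
for _ in range(100):
    p, q = (rand(), rand()), (rand(), rand())
    assert abs(form(p, q) + form(q, p)) < 1e-12   # antisymmetry

# integrality of the pairing on (Z + iZ) x (Z + iZ)
for c in (1, 1j, 2 + 3j):
    for mu in (1, 1j, 3 - 2j):
        assert pairing(c, mu) == round(pairing(c, mu))
```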
The points of $T^{\vee}$ define homologically trivial line bundles on $T$ giving an isomorphism of $T^{\vee}$ with ${\operatorname{Pic^{0}}}(T)$. We recall this construction. Let $w\in
V^{\vee}$. Denote by $[w]$ its class in $T^{\vee}$ and by $\chi_{[w]}\in {\operatorname{Hom}}(\Lambda ,{{\mathbb{C}}}_{1})$ the character $$\label{eq:4}
\chi_{[w]}(\mu)=\exp(2\pi i\langle w,\mu \rangle).$$ The line bundle associated to $[w]$ is the line bundle $L_{[w]}$ with automorphy factor $\chi_{[w]}$. In other words, consider the action of $\Lambda $ on $V\times {{\mathbb{C}}}$ given by $$\mu (z,t)=(z+\mu ,t\exp(2\pi i \langle w,\mu \rangle)).$$ Write $L_{[w]}=V\times {{\mathbb{C}}}/\Lambda $. The projection $V\times
{{\mathbb{C}}}\to V$ induces a map $L_{[w]}\to T$. It is easy to check that $L_{[w]}$ is a holomorphic line bundle on $T$ that only depends on the class $[w]$. Note that the identification between $T^{\vee}$ and ${\operatorname{Pic^{0}}}(T)$ is not completely canonical because it depends on a choice of sign. We could equally well have used the character $\chi_{[w]}^{-1}$.
*The Poincaré bundle.* Note that, although the automorphy factor $\chi_{[w]}$ is not holomorphic in $w$, the line bundle $L_{[w]}$ varies holomorphically with $w$, defining a holomorphic line bundle on $ T\times T^{\vee}$ called the Poincaré bundle.
\[def:2\] A *Poincaré (line) bundle* ${{\mathcal{P}}}$ is a holomorphic line bundle on $T\times T^{\vee}$ that satisfies
1. the restriction ${{\mathcal{P}}}|_{T\times \{[w]\}}$ is isomorphic to $L_{[w]}$;
2. the restriction ${{\mathcal{P}}}|_{ \{0\}\times T^{\vee}}$ is trivial.
A *rigidified Poincaré bundle* is a Poincaré bundle together with an isomorphism ${{\mathcal{P}}}|_{\{0\}\times T^{\vee}}
{\xrightarrow{\sim}}{{\mathcal{O}}}_{ \{0\}\times T^{\vee}}$.
To prove the existence of a Poincaré bundle, consider the map $$a_{{{\mathcal{P}}}}\colon (\Lambda \times \Lambda^{\vee})\times (V\times
V^{\vee}) \to {{\mathbb{C}}}^{\times}$$ given by $$\label{eq:16}
a_{{{\mathcal{P}}}}((\mu,\lambda ),(z,w))
=\exp\Big(\pi \big((w+\lambda )(\mu )+\overline{\lambda(z)}\big)\Big).$$ This map is holomorphic in $z$ and $w$. Moreover, since for $(\mu
,\lambda )\in \Lambda \times \Lambda ^{\vee}$, $$\langle \lambda ,\mu \rangle=\frac{1}{2i}(\lambda (\mu
)-\overline{\lambda (\mu )})\in {{\mathbb{Z}}},$$ the map $a_{{{\mathcal{P}}}}$ is a cocycle for the additive action of $\Lambda
\times \Lambda ^{\vee}$ on $V\times V ^{\vee}$. Hence, it is an automorphy factor that defines a holomorphic line bundle ${{\mathcal{P}}}$ on $T\times T ^{\vee}=V\times
V ^{\vee}/\Lambda \times \Lambda ^{\vee}$.
For a fixed $w\in V^{\vee}$, $$a_{{{\mathcal{P}}}}((\mu,0),(z,w))
=\exp(\pi w(\mu)).$$ This last cocycle is equivalent to the cocycle \[eq:4\]. Indeed, $$\exp(\pi w(\mu ))\exp(\pi \overline{w(z+\mu)})^{-1}\exp(\pi \overline{w(z)})=
\exp(2\pi i \langle w,\mu \rangle ),$$ and the function $z\mapsto \exp(\pi \overline{w(z)})$ is holomorphic in $z$. Thus the restriction ${{\mathcal{P}}}|_{T\times \{[w]\}}$ is isomorphic to $L_{[w]}$. Moreover $$a_{{{\mathcal{P}}}}((0,\lambda),(0,w))=1,$$ which implies that the restriction ${{\mathcal{P}}}|_{\{0\}\times T^{\vee}}$ is trivial. The uniqueness of the Poincaré bundle follows from the seesaw principle.
We conclude
\[prop:6\] A Poincaré bundle exists and is unique up to isomorphism. A rigidified Poincaré bundle exists and is unique up to a unique isomorphism.
\[rem:1\] Using the above identification of $T$ with the dual torus of $T^{\vee}$ we have that, for a fixed $z\in V$, the restriction ${{\mathcal{P}}}|_{T^{\vee}\times \{[z]\}}$ agrees with $L_{[z]}$. In fact $$a_{{{\mathcal{P}}}}((0,\lambda ),(z,w))
=\exp(\pi \overline{\lambda (z)}),$$ and, arguing as in the proof of Proposition \[prop:6\], this cocycle is equivalent to the cocycle $$\exp(2\pi i {\operatorname{Im}}({\overline{\lambda(z) }}))=\exp(2\pi i \langle z,\lambda
\rangle ).$$ Note that the definition of the Poincaré bundle in [@hainbiext § 3.2] states that ${{\mathcal{P}}}|_{
T^{\vee}\times \{[z]\}}=L_{[-z]}$. The discrepancy between [@hainbiext] and the current paper is due to a different choice of identification between $T$ and $(T^{\vee})^{\vee}$.
\[rem:2\] As we will see later the cocycle is not optimal because it does not vary holomorphically in holomorphic families of tori.
*Group theoretical interpretation of the Poincaré bundle.* We next give a group theoretic description of the Poincaré bundle. We start with the additive real Lie group $W$ given by $$W= V\times V^{\vee}.$$ Denote by ${\widetilde{W}}$ the semidirect product ${\widetilde{W}}=W\ltimes
{{\mathbb{C}}}^{\times}$, where the product in ${\widetilde{W}}$ is given by $$\label{eq:7}
\big((z,w),t\big)\cdot\big((z',w'),t'\big)=
\big((z+z',w+ w'),tt'\exp(2\pi i \langle w,z'\rangle )\big).$$ Clearly the group $$\label{eq:15}
W_{{{\mathbb{Z}}}}=\Lambda \times \Lambda ^{\vee}$$ is a subgroup of ${\widetilde{W}}$.
Consider the space $$\label{eq:14}
P\defeq V\times V^{\vee}\times
{{\mathbb{C}}}^{\times}$$ and the action of ${\widetilde{W}}$ on $P$ by biholomorphisms given by $$\label{eq:6}
\big((\mu,\lambda ),t\big)\cdot ((z,w),s)
=\big(z+\mu,w+\lambda,
ts \exp(\pi (w+\lambda )(\mu )+\pi \overline{\lambda (z)})\big).$$ The projection $P\to V\times V^{\vee}$ induces a map $W_{{{\mathbb{Z}}}}\backslash P\to T\times T^{\vee}$. The action of ${{\mathbb{C}}}^{\times }$ on $P$ by multiplication on the third factor provides $W_{{{\mathbb{Z}}}}\backslash P$ with a structure of ${{\mathbb{C}}}^{\times}$-bundle over $T\times T^{\vee}$. Denote by ${{\mathcal{P}}}_{T}=(W_{{{\mathbb{Z}}}}\backslash
P)\underset{{{\mathbb{C}}}^{\times}}{\times } {{\mathbb{C}}}$ the associated holomorphic line bundle. The structure of $P$ as a product space induces a canonical rigidification ${{\mathcal{P}}}_{T}|_{ \{0\}\times T^{\vee}}={{\mathcal{O}}}_{\{0\}\times T^{\vee}}$.
\[prop:2\] The line bundle ${{\mathcal{P}}}_{T}$ is a rigidified Poincaré line bundle.
From the explicit description of the cocycle and of the action we deduce that ${{\mathcal{P}}}_{T}$ is a Poincaré bundle.
*The metric of the Poincaré bundle.* The Poincaré bundle has a metric that is determined up to a constant by the condition that its curvature form is invariant under translation. On a rigidified Poincaré bundle, with given rigidification ${{\mathcal{P}}}_{T}|_{\{(0,0)\}} {\xrightarrow{\sim}}{{\mathcal{O}}}_{\{(0,0)\}}$, the constant is fixed by imposing the condition $\| 1\|=1$. We now describe this metric explicitly.
Let ${{\mathbb{C}}}_{1}$ be the subgroup of ${{\mathbb{C}}}$ of elements of norm one and write ${\widetilde{W}}_{1}=W\ltimes
{{\mathbb{C}}}_{1}$ with the same product as before. Denote by ${{\mathcal{P}}}^{\times}_{T}$ the Poincaré bundle with the zero section deleted. Since ${{\mathcal{P}}}_{T}^{\times}= W_{{{\mathbb{Z}}}}\backslash P$, the invariant metric of ${{\mathcal{P}}}_{T}$ is described by the unique function $\|\cdot\|\colon P\to {{\mathbb{R}}}_{>0}$ satisfying the conditions
1. (Norm condition) For $(z,w,s )\in P$, we have $$\|(z,w,s )\|=|s |\|(z,w,1)\|.$$
2. (Invariance under ${\widetilde{W}}_{1}$) For $g\in {\widetilde{W}}_{1}$ and $x\in
P$, we have $$\|g\cdot x\|=\|x\|.$$
3. (Normalization) $\|(0,0,1)\|=1$.
Using the explicit description of the action given in , we have that $$((z,w),s)=((z,w),1)\cdot ((0,0),s\exp(-\pi w(z))),$$ from which one easily derives that the previous conditions imply $$\label{eq:8}
\|(z,w,s)\|^{2}= |s|^{2}\exp\Big(-\pi \big(w(z)+\overline{w(z)}\big)\Big).$$
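As an illustration (our addition, not part of the argument), the invariance of this norm under the action of ${\widetilde{W}}_{1}$ can be verified numerically in the $g=1$ case, again with the pairing $w(z)=-w\bar z/{\operatorname{Im}}\tau$:

```python
import cmath

# g = 1 illustration: the norm of a point of P is preserved by the action
# of an element ((mu, lam), t) with |t| = 1.
tau = 0.3 + 1.7j
pair = lambda w, z: -w * z.conjugate() / tau.imag     # the pairing w(z)

def norm_sq(z, w, s):                                 # |s|^2 exp(-pi(w(z) + conj(w(z))))
    return abs(s) ** 2 * cmath.exp(-cmath.pi * (pair(w, z) + pair(w, z).conjugate())).real

def act(mu, lam, t, z, w, s):                         # the action of ((mu, lam), t) on ((z, w), s)
    s_new = t * s * cmath.exp(cmath.pi * pair(w + lam, mu) + cmath.pi * pair(lam, z).conjugate())
    return z + mu, w + lam, s_new

z, w, s = 0.2 + 0.5j, -0.7 + 0.4j, 1.5 - 2.0j         # arbitrary test data
mu, lam = 1 + 2 * tau, 0.8 - 0.3j
t = cmath.exp(0.9j)                                   # an element of C_1, |t| = 1
assert abs(norm_sq(*act(mu, lam, t, z, w, s)) - norm_sq(z, w, s)) < 1e-9 * norm_sq(z, w, s)
```

The invariance here only needs additivity of the pairing, so it holds for arbitrary (not necessarily integral) translations, matching the fact that the derivation above is purely formal.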
*Holomorphic families of complex tori.* As mentioned in Remark \[rem:2\], the cocycle does not vary holomorphically in families. We now want to consider holomorphic families of complex tori. Let $X$ be a complex manifold and ${{\mathcal{T}}} \to X$ a holomorphic family of dimension $g$ complex tori. This means that ${{\mathcal{T}}}$ is defined by a holomorphic vector bundle $\VV$ of rank $g$ on $X$ and an integral local system $\Lambda \subset \VV$ of rank $2g$ such that, for each $s\in X$, the fibre $\Lambda _{s}$ is a lattice in $\VV_{s}$ and the flat sections of $\Lambda $ are holomorphic sections of $\VV$. Indeed $\Lambda $ is the local system $s \mapsto H_{1}({{\mathcal{T}}}_{s},{{\mathbb{Z}}})$ and $\VV$ the holomorphic vector bundle $s\mapsto
H_{1}({{\mathcal{T}}}_{s},{{\mathbb{C}}})/ F^{0}H_{1}({{\mathcal{T}}}_{s},{{\mathbb{C}}})$.
Write $\HH_{\cc}=\Lambda \otimes {{\mathcal{O}}}_{X}$. It is a holomorphic vector bundle, with a holomorphic surjection $\HH_{\cc}\to \VV$ and an integral structure. The kernel ${{\mathcal{F}}}^{0}= {\operatorname{Ker}}(\HH_{\cc}\to \VV)$ is a holomorphic vector bundle which is fibrewise isomorphic to the complex conjugate ${\overline{\VV}}$. On the dual vector bundle $\HH^{\vee}= \Lambda ^{\vee}\otimes {{\mathcal{O}}}_{X}$ consider the orthogonal complement $({{\mathcal{F}}}^{0})^{\perp}$ to ${{\mathcal{F}}}^{0}$. The quotient $ \HH^{\vee}/(\ca{F}^{0})^{\perp}$ is a holomorphic vector bundle which is fibrewise isomorphic to $\VV^{\vee}$. Thus we will denote it by $\VV^{\vee}$. The dual family of tori is defined as $${{\mathcal{T}}}^{\vee}=\VV^{\vee}/\Lambda ^{\vee}.$$
Let $U\subset X$ be a small enough open subset such that the restriction of ${{\mathcal{T}}}$ to $U$ is topologically trivial. Choose $s_0\in U$ and an integral basis $$(a,b)=(a_{1},\dots,a_{g},b_{1},\dots,b_{g})$$ of $\Lambda _{s_{0}}$ such that $(a_{1},\dots,a_{g})$ is a complex basis of $\VV_{s_0}$. By abuse of notation, we denote by $a_{i},b_{i}$, $i=1,\dots,g$ the corresponding flat sections of $\Lambda $. We can see them as holomorphic sections of ${{\mathcal{H}}}_{{{\mathbb{C}}}}$ and we will also denote by $a_{i}, b_{i}$ their images in $\VV$. After shrinking $U$ if necessary, we can assume that the sections $a_{i}$ form a frame of $\VV$, thus we can write $$\label{eq:10}
(b_{1},\dots,b_{g})=(a_{1},\dots, a_{g})\Omega$$ for a holomorphic map $\Omega \colon U \to M_{g}({{\mathbb{C}}})$. We call $\Omega$ the period matrix of the variation on the basis $(a,b)$. Note that condition is equivalent to saying that ${{\mathcal{F}}}^{0}\subset {{\mathcal{H}}}_{{{\mathbb{C}}}}$ is generated by the columns of the matrix $$\begin{pmatrix}
-\Omega \\
{\operatorname{Id}}
\end{pmatrix}.$$ Writing $\HH_{{{\mathbb{R}}}}$ for the real vector subbundle of $\HH_{{{\mathbb{C}}}}$ formed by sections that are invariant under complex conjugation, we have that ${{\mathcal{F}}}^{0}\cap \HH_{{{\mathbb{R}}}}=0$. This implies that ${\operatorname{Im}}\Omega$ is non-degenerate. The complex basis $(a_{1},\dots,a_{g})$ gives us an identification of $\VV|_U$ with the trivial vector bundle ${\operatorname{Col}}_{g}({{\mathbb{C}}})$ and the basis $(a,b)$ identifies $\Lambda $ with the trivial local system ${\operatorname{Col}}_{g}({{\mathbb{Z}}})\oplus {\operatorname{Col}}_{g}({{\mathbb{Z}}})$. With these identifications, the inclusion $\Lambda \to \VV$ is given by $$(\mu _{1},\mu _{2})\mapsto \mu=\mu _{1}+\Omega \mu _{2}.$$
Let now $(a^{\ast},b^{\ast})=(a_{1}^{\ast},\dots,a_{g}^{\ast},b_{1}^{\ast},\dots
,b_{g}^{\ast})$ be the basis of $\Lambda ^{\vee}_{s_{0}}$ dual to $(a,b)$. As before we extend the elements $a_{i}^{\ast},b_{i}^{\ast}$, $i=1,\dots,g$ to flat sections of $\Lambda ^{\vee}$ over $U$. Then $b_{1}^{\ast},\dots ,b_{g}^{\ast}$ is a frame of $\VV^{\vee}$. One can check that, on ${{\mathcal{V}}}^{\vee}$, the equality $$(a_{1}^{\ast},\dots,a_{g}^{\ast})=-(b_{1}^{\ast},\dots,b_{g}^{\ast})\Omega ^{t}$$ holds. Thus if we identify $\VV^{\vee}$ with the trivial vector bundle ${\operatorname{Row}}_{g}({{\mathbb{C}}})$ using the basis $(b^{\ast})$ and $\Lambda ^{\vee}$ with the trivial local system ${\operatorname{Row}}_{g}({{\mathbb{Z}}})\oplus {\operatorname{Row}}_{g}({{\mathbb{Z}}})$ using the basis $(a^{\ast},b^{\ast})$ we obtain that the inclusion $\Lambda^{\vee}\to \VV^{\vee}$ is given by $$\label{eq:9}
(\lambda _{1},\lambda _{2})\mapsto \lambda =-\lambda _{1}\Omega +\lambda
_{2}.$$ In the fixed bases, the pairing between the lattice $\Lambda $ and its dual $\Lambda ^{\vee}$ is given by $$\langle (\lambda _{1},\lambda _{2}),(\mu _{1},\mu _{2})\rangle =
\lambda _{1}\mu _{1}+ \lambda _{2}\mu _{2},$$ where $\lambda_1, \lambda_2 \in {\operatorname{Row}}_{g}({{\mathbb{Z}}})$ and $\mu_1,
\mu_2 \in {\operatorname{Col}}_{g}({{\mathbb{Z}}})$. One can check that the pairing between ${{\mathcal{V}}}^{\vee}$ and ${{\mathcal{V}}}$ is given by $$\label{eq:11}
w(z)=-w ({\operatorname{Im}}\Omega)^{-1} \bar {z},$$ where $w \in {\operatorname{Row}}_{g}({{\mathbb{C}}})$ and $z \in {\operatorname{Col}}_{g}({{\mathbb{C}}})$.
The cocycle $a_{{{\mathcal{P}}}}$ from equation can now be written down explicitly as $$\begin{gathered}
a_{{{\mathcal{P}}}}((\mu _{1},\mu _{2}),(\lambda _{1},\lambda _{2}),(z,w))\\=
\exp(-\pi ((w-\lambda _{1}\Omega +\lambda _{2})({\operatorname{Im}}\Omega)^{-1}(\mu
_{1}+\bar \Omega
\mu _{2})+ (-\lambda _{1}\bar \Omega +\lambda _{2})({\operatorname{Im}}\Omega)^{-1}z)),\end{gathered}$$ which is not holomorphic with respect to $\Omega $. Thus it does not give us on the nose a holomorphic Poincaré bundle in families. Nevertheless the construction of the Poincaré bundle can be extended to families of complex tori.
\[prop:1\] Let $X$ be a complex manifold and ${{\mathcal{T}}}\to X $ a holomorphic family of dimension $g$ complex tori. Let $\nu_0\colon X\to
{{\mathcal{T}}}\underset{X}{\times }{{\mathcal{T}}}^{\vee}$ be the zero section. Then
1. the fibrewise dual tori form a holomorphic family of complex tori ${{\mathcal{T}}}^{\vee}\to X$;
2. \[item:2\] on ${{\mathcal{T}}}\underset{X}{\times }{{\mathcal{T}}}^{\vee}$ there is a holomorphic line bundle ${{\mathcal{P}}}$, together with an isomorphism $\nu_0^{\ast}{{\mathcal{P}}}{\xrightarrow{\sim}}{{\mathcal{O}}}_{X} $, called the rigidified Poincaré bundle, which is unique up to a unique isomorphism, and is characterized by the property that for every point $p\in X$, the restriction ${{\mathcal{P}}}|_{{{\mathcal{T}}}_{p}\times {{\mathcal{T}}}^{\vee}_{p}}$ is the rigidified Poincaré bundle of ${{\mathcal{T}}}_{p}$;
3. there is a unique metric on ${{\mathcal{P}}}$ that induces the trivial metric on $\nu_0^{\ast}{{\mathcal{P}}}={{\mathcal{O}}}_{X} $ and whose curvature is fibrewise translation invariant.
Fix an open subset $U\subset X$ as before. The dual family of tori ${{\mathcal{T}}}^{\vee}$ is holomorphic by definition.
In order to prove that the Poincaré bundle defines a holomorphic line bundle on the family we need to exhibit a new cocycle that is holomorphic in $z$, $w$ and $\Omega $ and that, for fixed $\Omega $, is equivalent to $a_{{{\mathcal{P}}}}$ holomorphically in $z$ and $w$. Write $\lambda =-\lambda
_{1}\Omega +\lambda _{2}$ and $\mu =\mu _{1}+\Omega
\mu _{2}$ as before with $\lambda_1, \lambda_2 \in {\operatorname{Row}}_{g}({{\mathbb{Z}}})$ and $\mu_1, \mu_2 \in {\operatorname{Col}}_{g}({{\mathbb{Z}}})$. Consider the cocycle $$\label{eq:5}
b_{{{\mathcal{P}}}}((\mu,\lambda),(z,w))
=\exp(2 \pi i((w-\lambda _{1}\Omega +\lambda _{2})\mu _{2}-\lambda_{1} z))$$ for $w \in {\operatorname{Row}}_{g}({{\mathbb{C}}})$ and $z \in {\operatorname{Col}}_{g}({{\mathbb{C}}})$. Then $b_{{{\mathcal{P}}}}$ is holomorphic in $z$, $w$, and $\Omega $. Consider also the function $$\label{eq:19}
\psi (z,w)=\exp(-\pi w({\operatorname{Im}}\Omega)^{-1}z),$$ which is holomorphic in $z$ and $w$. Since $$b_{{{\mathcal{P}}}}((\mu,\lambda),(z,w))=a_{{{\mathcal{P}}}}((\mu,\lambda
),(z,w))\psi (z,w)\psi (z+\mu ,w+\lambda )^{-1}$$ we deduce that the cocycle $b_{{{\mathcal{P}}}}$ determines a line bundle that satisfies the properties stated in item (\[item:2\]) from the proposition over the open subset $U$. The uniqueness follows again from the seesaw principle. By the uniqueness, we can glue together the rigidified Poincaré bundles obtained in different open subsets $U$ to obtain a rigidified Poincaré bundle over $X$.
The fact that the invariant metric has invariant curvature fixes it up to a function on $X$ that is determined by the normalization condition. Thus if it exists, it is unique. Since the expression for the metric in is smooth in $\Omega $ and the change of cocycle function in is also smooth in $\Omega $ we obtain an invariant metric locally. Again the uniqueness implies that we can patch together the different local expressions.
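The change of cocycle used in the proof can be tested numerically (our addition, not part of the argument) in the case $g=1$, $\Omega =(\tau )$: with integral data $(\mu _{1},\mu _{2},\lambda _{1},\lambda _{2})$ one checks that $b_{{{\mathcal{P}}}}=a_{{{\mathcal{P}}}}\,\psi (z,w)\,\psi (z+\mu ,w+\lambda )^{-1}$ holds exactly.

```python
import cmath

# g = 1 check (our illustration) that b_P = a_P * psi(z, w) / psi(z + mu, w + lam).
tau = 0.3 + 1.7j                       # period matrix Omega = (tau)
y = tau.imag
z, w = 0.2 + 0.5j, -0.7 + 0.4j
mu1, mu2, lam1, lam2 = 2, -1, 3, 1     # integral coordinates of mu and lam
mu, lam = mu1 + tau * mu2, -lam1 * tau + lam2

# the explicit form of a_P in the chosen frames
a_P = cmath.exp(-cmath.pi / y * ((w + lam) * (mu1 + tau.conjugate() * mu2)
                                 + (-lam1 * tau.conjugate() + lam2) * z))
psi = lambda z, w: cmath.exp(-cmath.pi * w * z / y)   # the comparison function psi
b_P = cmath.exp(2j * cmath.pi * ((w + lam) * mu2 - lam1 * z))
assert abs(b_P - a_P * psi(z, w) / psi(z + mu, w + lam)) < 1e-9 * abs(b_P)
```

Expanding the exponents, the non-holomorphic terms involving $({\operatorname{Im}}\tau )^{-1}$ cancel and only $2\pi i((w+\lambda )\mu _{2}-\lambda _{1}z)$ survives, which is why $b_{{{\mathcal{P}}}}$ is holomorphic in $\tau $ while $a_{{{\mathcal{P}}}}$ is not.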
\[formulanorm\] Since the cocycle $a_{{{\mathcal{P}}}}$ does not vary holomorphically in families, the frame for the Poincaré bundle used in equation is not holomorphic in families. The cocycle $b_{{{\mathcal{P}}}}$ and the rigidification do determine a holomorphic frame of the Poincaré bundle over $X\times V\times
V^{\vee}$. In this holomorphic frame the metric is given by $$\begin{aligned}
\|(z,w,s)\|^{2}&=|s|^{2}\exp(-\pi (w(z)+\overline{w(z)}))|\psi
(z,w)|^{2}\notag \\
&=|s|^{2}\exp(4\pi {\operatorname{Im}}(w)({\operatorname{Im}}\Omega)^{-1} {\operatorname{Im}}(z))\label{eq:20},
\end{aligned}$$ where $\psi $ is the function given in .
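For $g=1$ the equality of the two expressions for the norm can be confirmed numerically (our addition, not part of the argument):

```python
import cmath

# g = 1 check (our illustration) of the two expressions for the norm in the
# holomorphic frame: |s|^2 exp(-pi(w(z)+conj(w(z)))) |psi(z,w)|^2
#                  = |s|^2 exp(4 pi Im(w) Im(z) / Im(tau)).
tau = 0.3 + 1.7j
y = tau.imag
pair = lambda w, z: -w * z.conjugate() / y            # the pairing w(z)
psi = lambda z, w: cmath.exp(-cmath.pi * w * z / y)   # comparison function
z, w, s = 0.2 + 0.5j, -0.7 + 0.4j, 1.5 - 2.0j         # arbitrary test data

lhs = (abs(s) ** 2
       * cmath.exp(-cmath.pi * (pair(w, z) + pair(w, z).conjugate())).real
       * abs(psi(z, w)) ** 2)
rhs = abs(s) ** 2 * cmath.exp(4 * cmath.pi * w.imag * z.imag / y).real
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```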
*Abelian varieties.* We now specialize to the case of polarized abelian varieties. A polarization on the torus $T=V/\Lambda $ is the datum of an antisymmetric non-degenerate bilinear form $E\colon \Lambda \times \Lambda \to {{\mathbb{Z}}}$ such that for all $v, w \in V$, $$E(iv,iw)=E(v,w),\qquad -E(iv,v)>0,\text{ for }v\not=0.$$ Here we have extended $E$ ${{\mathbb{R}}}$-bilinearly to $V=\Lambda \otimes {{\mathbb{R}}}$. Note that the standard convention in the literature on abelian varieties is to require $E(iv,v)$ to be positive. But this convention is not compatible with the usual convention in the literature on Hodge theory. We have changed the sign here to have compatible conventions for abelian varieties and for Hodge structures.
Since $E$ is antisymmetric and non-degenerate we can choose an integral basis $(a,b)$ such that the matrix of $E$ on $(a,b)$ is given by $$\label{type}
\begin{pmatrix}
0 & \Delta \\
-\Delta & 0
\end{pmatrix} \, ,$$ where $\Delta $ is an integral diagonal matrix. We will call such a basis a $\qq$-symplectic integral basis. From a $\qq$-symplectic integral basis $(a,b)$ we can construct a symplectic rational basis $(a\Delta^{-1} ,b)$.
With the choice of a $\qq$-symplectic integral basis, the condition $E(iv,iw)=E(v,w)$ is equivalent to the product matrix $\Delta \Omega $ being symmetric. Thus $\Omega^{t}\Delta =\Delta \Omega $. The condition $-E(iv,v)>0$ is equivalent to $\Delta {\operatorname{Im}}\Omega$ being positive definite. This last condition is in turn equivalent to any of the symmetric matrices $({\operatorname{Im}}\Omega)^{t}\Delta $, $(({\operatorname{Im}}\Omega)^{-1})^{t}\Delta
$ or $\Delta ({\operatorname{Im}}\Omega)^{-1}$ being positive definite.
Recall from (\[eq:10\]) that $\Omega \in
M_g({{\mathbb{C}}})$ is determined by the relation $b=a\Omega$. The polarization $E$ defines a positive definite hermitian form $H$ on $V$ given by $$H(v,w)=-E(iv,w)-iE(v,w),$$ so that we recover the polarization $E$ as the restriction of $-{\operatorname{Im}}(H)$ to $\Lambda \times \Lambda $. In the basis $(a_{1},\dots,a_{g})$ of $V$, the hermitian form $H$ is given by $\Delta ({\operatorname{Im}}\Omega)^{-1}=(({\operatorname{Im}}\Omega)^{-1})^{t}\Delta $. That is, under the identification $V={\operatorname{Col}}_{g}({{\mathbb{C}}})$, we have $$\label{eq:21}
H(v,w)=v^{t}\Delta ({\operatorname{Im}}\Omega)^{-1}{\overline{w}}.$$
The polarization defines an isogeny $\lambda_{E}\colon T\to T^{\lor}$ that is given by the map $V\to V^{\lor}$, $v\mapsto H(v,-)$. Under the identification $V^{\lor}={\operatorname{Row}}_{g}({{\mathbb{C}}})$ given by the basis $(b^{\ast})$, by equations and , we deduce that $\lambda_E$ is given by $$\label{eq:22}
\lambda _{E}(v)=-v^{t}\Delta .$$ The fact that $\Delta \Omega $ is symmetric and $\Delta $ is integral implies that this map sends $\Lambda $ to $\Lambda ^{\lor}$ defining an isogeny. The dual polarization $E^{\lor}$ on $V^{\vee}$ is given by the hermitian form $H^{\vee}(e,f)=e({\operatorname{Im}}\Omega)^{-1}\Delta ^{-1}{\overline{f}}^{t}$ so that the map $V\to V^{\vee}$ is an isometry.
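For $g=1$, $\Omega =(\tau )$ and $\Delta =(d)$, the formulas above can be checked numerically (our addition, not part of the argument): $E=-{\operatorname{Im}}H$ reproduces the type matrix on the integral basis $(1,\tau )$, and $\lambda _{E}(v)=-dv$ represents $H(v,-)$ through the pairing.

```python
# g = 1 illustration (our choice of data): Delta = (d), Omega = (tau).
tau = 0.3 + 1.7j
d = 3                                               # polarization type
H = lambda v, w: d * v * w.conjugate() / tau.imag   # hermitian form H
E = lambda v, w: -H(v, w).imag                      # polarization E = -Im(H)

# E is the type matrix (0 d; -d 0) on the integral basis (a, b) = (1, tau):
assert abs(E(1, tau) - d) < 1e-12 and abs(E(tau, 1) + d) < 1e-12
# E(iv, iw) = E(v, w) and H(v, v) = -E(iv, v) > 0:
v, u = 0.4 - 1.1j, 0.25 + 0.6j
assert abs(E(1j * v, 1j * u) - E(v, u)) < 1e-12
assert H(v, v).real > 0 and abs(H(v, v).imag) < 1e-12
# lambda_E(v) = -d*v represents H(v, -) through the pairing w(z) = -w conj(z)/Im(tau):
pair = lambda w, z: -w * z.conjugate() / tau.imag
assert abs(pair(-d * v, u) - H(v, u)) < 1e-12
```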
Consider now the composition of the diagonal map with the polarization map on the second factor $(\mathrm{id},\lambda_E) \colon T\to T\times
T^{\vee}$ and let ${{\mathcal{P}}}$ be the Poincaré bundle on $T\times T^{\vee}$. Then $(\mathrm{id},\lambda_E)^{\ast}{{\mathcal{P}}}$ is an ample line bundle on $T$ whose first Chern class agrees with the given polarization of $T$.
\[explicitmetricPoinc\] The metric induced on the bundle $(\mathrm{id},\lambda_E)^{\ast}{{\mathcal{P}}}$ is given by the function $\|\cdot\|\colon V\times {{\mathbb{C}}}^{\times}\to {{{\mathbb{R}}}}_{>0}$, $$\label{eq:23}
\|(z,s)\|^{2}=|s|^{2}\exp(-4\pi {\operatorname{Im}}(z)^{t}\Delta ({\operatorname{Im}}\Omega)^{-1}{\operatorname{Im}}(z)).$$
This follows from equations and .
*Hodge structures of type $(-1,0),(0,-1)$.* Recall that a pure Hodge structure of type $(-1,0), (0,-1)$ is given by
1. A torsion free finite rank ${{\mathbb{Z}}}$-module, $H_{{{\mathbb{Z}}}}$.
2. A decreasing filtration $F^{\bullet}$ on $H_{{{\mathbb{C}}}}\defeq
H_{{{\mathbb{Z}}}}\otimes {{\mathbb{C}}}$ such that $$F^{-1}H_{{{\mathbb{C}}}}=H_{{{\mathbb{C}}}},\quad F^{1}H_{{{\mathbb{C}}}}=\{0\},\quad
H_{{{\mathbb{C}}}}=F^{0}H_{{{\mathbb{C}}}}\oplus \overline{F^{0}H_{{{\mathbb{C}}}}}.$$
A polarization of a Hodge structure of type $(-1,0),(0,-1)$ is a non-degenerate antisymmetric bilinear form $Q\colon
H_{{{\mathbb{Z}}}}\otimes H_{{{\mathbb{Z}}}}\to {{\mathbb{Z}}}$ which, when extended to $H_{{{\mathbb{C}}}}$ by linearity, satisfies the “Riemann bilinear relations”
1. The subspace $F^{0}H_{{{\mathbb{C}}}}$ is isotropic.
2. If $x\in F^{0}H_{{{\mathbb{C}}}}$, then $iQ(x,{\overline{x}})>0$.
We recall that the category of Hodge structures of type $(-1,0),(0,-1)$ and the category of complex tori are equivalent. If $(H_{{{\mathbb{Z}}}},F^\bullet)$ is such a Hodge structure, we write $V=H_{{{\mathbb{C}}}}/F^{0}$ and $\pi
\colon H_{{{\mathbb{C}}}}\to H_{{{\mathbb{C}}}}/F^{0}$ for the projection. Then $\Lambda \defeq \pi (H_{{{\mathbb{Z}}}})$ is a lattice in $V$, that defines a torus $T=V/\Lambda $. Conversely, if $T$ is a complex torus, then $H_{1}(T,{{\mathbb{Z}}})$ has a Hodge structure of type $(-1,0),(0,-1)$.
If $(H_{{{\mathbb{Z}}}},F^\bullet)$ has a polarization $Q$ then, identifying $\Lambda $ with $H_{{{\mathbb{Z}}}}$ and writing $E=Q$, we obtain a polarization of $T$. We finish by verifying that $E$ is indeed a polarization in the sense of complex tori. That $E$ is non-degenerate follows from the non-degeneracy of $Q$. Let $v,w\in V$ and choose $\bar
x,\bar y\in
\overline{F^{0}H_{{{\mathbb{C}}}}}$ such that $\pi (\bar x)=v$ and $\pi (\bar y)=w$. Write $x$, $y$ for the complex conjugates of $\bar x$ and $\bar y$ respectively. Then $x+\bar x\in
H_{{{\mathbb{Z}}}}\otimes {{\mathbb{R}}}$ and $\pi (x+\bar x)=v$, while $ix-i\bar x\in H_{{{\mathbb{Z}}}}\otimes {{\mathbb{R}}}$ and $\pi (ix-i\bar
x)=-iv$. Thus by the first Riemann bilinear relation $$\begin{aligned}
E(iv,iw)&=Q(-ix+i\bar x,-iy+i\bar y)=Q(x,\bar y)+Q(\bar x,y)\\
E(v,w)&=Q(x+\bar x,y+\bar y)=Q(x,\bar y)+Q(\bar x,y),\end{aligned}$$ so that $E(iv,iw)=E(v,w)$. Moreover, by the second bilinear relation $$H(v,v)=-E(iv,v)=-Q(-ix+i\bar x,x+\bar x)=2iQ(x,\bar x)>0.$$
Nilpotent orbit theorem {#sec:norm-section}
-----------------------
The aim of this section is to formulate a version of the Nilpotent orbit theorem that allows us to deal with variations of mixed Hodge structures, in a setting with several variables. Such a Nilpotent orbit theorem is stated and proved in [@pearlhiggs]. In order to formulate this theorem, we need quite a bit of background material; in particular we define the notion of “admissibility” for variations of mixed Hodge structures. We also need to take a detailed look at the behaviour of monodromy on the fibers of the underlying local systems. Most of the introductory material below is taken from [@ps Section 14.4] and [@pearlhiggs].
*Variations of polarized mixed Hodge structures.* Let $X$ be a complex manifold. A graded-polarized variation of mixed Hodge structures on $X$ is a local system $\hh \to X$ of finitely generated torsion free abelian groups equipped with:
1. A finite increasing filtration $$\ww_\bullet \colon \quad 0 \subseteq \ldots \subseteq \ww_k
\subseteq \ww_{k+1} \subseteq \ldots \subseteq \hh_\qq$$ of $\hh_\qq = \hh \otimes \qq$ by local subsystems, called the weight filtration,
2. A finite decreasing filtration $$\ff^\bullet \colon \quad \hh_\cc \otimes \oo_X \supseteq \ldots \supseteq \ff^{p-1} \supseteq \ff^p \supseteq \ldots \supseteq 0$$ of the vector bundle $\HH=\hh_\cc \otimes \oo_X$ by holomorphic subbundles, called the Hodge filtration,
3. For each $k \in \zz$ a non-degenerate bilinear form $$\boldsymbol{Q}_k \colon \Gr_k^\ww(\hh_\qq) \otimes \Gr_k^\ww(\hh_\qq) \to \qq_X$$ of parity $(-1)^k$,
such that:
1. For each $p \in \zz$ the Gauss-Manin connection $\nabla$ on $\HH$ satisfies the “Griffiths transversality condition” $\nabla \ff^p \subseteq \Omega^1_X \otimes \ff^{p-1}$,
2. For each $k \in \zz$ the triple $(\Gr_k^\ww(\hh_\qq),\ff^\bullet\Gr_k^\ww(\HH),\boldsymbol{Q}_k)$ is a variation of pure polarized rational Hodge structures of weight $k$. Here for each $p \in \zz$ we write $\ff^p \Gr_k^\ww(\HH)$ for the image of $\ff^p \HH \cap \ww_k \HH$ in $\Gr_k^\ww(\hh_\cc)$ under the projection map $\ww_k \HH \to \Gr_k^\ww(\hh_\cc)$.
*Period domains.* We recall that if $(H,W_\bullet,F^\bullet)$ is a mixed Hodge structure, then $H_\cc$ has a unique bigrading $I^{\bullet,\bullet}$ such that $$F^pH_\cc = \oplus_{r \geq p,s} I^{r,s} \, , \quad
W_k H_\cc = \oplus_{r+s \leq k} I^{r,s} \, , \quad
I^{r,s} = \overline{I}^{s,r} \bmod \oplus_{p<r,q<s} I^{p,q} \, .$$ The integers $h^{r,s}=\dim I^{r,s}$ are called the Hodge numbers of $(H,W_\bullet,F^\bullet)$.
Given a triple $(H,W_\bullet,Q_k)$ with $H$ a rational vector space, $W_\bullet$ an increasing filtration of $H$, and $Q_k$ a collection of non-degenerate bilinear forms of parity $(-1)^k$ on $\Gr_k^W(H)$, together with a partition of $\dim H$ into a sum of non-negative integers $h=\{ h^{r,s} \}$, there exists a natural classifying space (also known as a period domain) $\mm=\mm(h)$ of mixed Hodge structures $(W_\bullet, F^\bullet)$ on $H$ which are graded-polarized by $Q_k$.
Let $G=G(H,W_\bullet,Q_k)$ be the $\qq$-algebraic group $$G=\{ g \in
GL(H) | \forall k\in \zz : g(W_{k})\subset W_{k},\ \Gr^W_k(g) \in
\mathrm{Aut}(Q_k) \}.$$ Then the group $G({{{\mathbb{R}}}})$ of real points acts transitively on $\mm$, and provides $\mm$ with an embedding in a so-called “compact dual” $\check{\mm} \supset
\mm$, which is the orbit, inside a flag manifold parametrizing filtrations of $H_\cc$ compatible with $W_\bullet$ and the given Hodge numbers, of any point in $\mm$ under the action of $G(\cc)$. The inclusion $\mm \subset \check{\mm}$ is open and hence gives $\mm$ a natural structure of complex manifold. We remark that although called compact dual by analogy with the pure case, $\check{\mm}$ is not in general compact.
*Relative filtrations.* Let $H$ be a rational vector space, equipped with a finite increasing filtration $W_\bullet$. We let $N$ denote a nilpotent endomorphism of $H$, compatible with $W_\bullet$. We call an increasing filtration $M_\bullet$ of $H$ a weight filtration for $N$ relative to $W_\bullet$ if the two following conditions are satisfied:
1. for each $i \in \zz$ we have $NM_i \subseteq M_{i-2}$,
2. for each $k \in \zz$ and each $i \in \mathbb{N}$ we have that $N^i$ induces an isomorphism $$N^i \colon \Gr_{k+i}^M \Gr_k^W H {\xrightarrow{\sim}}\Gr_{k-i}^M \Gr_k^W H$$ of vector spaces.
It can be verified that if $H$ has a weight filtration for $N$ relative to $W_\bullet$, then it is unique. We call $N$ strict if $N(H)\cap W_k = N(W_k)$ for all $k \in \zz$. By [@sz Proposition 2.16], if the filtration $W_\bullet$ has length two (in the sense that $H=W_k$ and $W_{k-2}=0$ for some $k$), and if $H$ has a weight filtration for $N$ relative to $W_\bullet$, then $N$ is strict.
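A minimal example (our addition) in the pure case: take $H={{\mathbb{Q}}}^{2}$ of pure weight $0$ (so $W_{-1}=0$ and $W_{0}=H$) and the nilpotent endomorphism $N$ with $N(e_{1})=e_{2}$, $N(e_{2})=0$. The weight filtration for $N$ relative to $W_\bullet$ is then the usual monodromy weight filtration centered at $0$:

```latex
M_{-2}=0 \subset M_{-1}=M_{0}=\langle e_{2}\rangle \subset M_{1}=H,
\qquad
N \colon \Gr_{1}^{M}\Gr_{0}^{W}H \,{\xrightarrow{\sim}}\, \Gr_{-1}^{M}\Gr_{0}^{W}H,
\quad e_{1}\mapsto e_{2}.
```

Indeed $NM_{1}=\langle e_{2}\rangle \subseteq M_{-1}$, the only non-trivial condition on the powers of $N$ is the displayed isomorphism, and in this pure situation $N$ is automatically strict.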
*Admissible variations of mixed Hodge structures.* Now let $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ be a graded-polarized variation of mixed Hodge structures over the punctured unit disc $\Delta^*$. Since $\ww_\bullet$ is a filtration by local subsystems, monodromy preserves $\ww_\bullet$. We note that the monodromy on $\hh$ is unipotent if and only if the monodromy on each $\Gr_k^\ww(\hh_\qq)$ is unipotent. We choose a reference fiber $(H,W_\bullet,F^\bullet,Q_k)$ of $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$. We denote by $N=\log
T$ the logarithm of monodromy acting on $H$. Clearly, if $T$ is unipotent, then $N$ is nilpotent.
Assume that the monodromy of $\hh$ is unipotent. Then there exists a canonical (Deligne) extension $\tilde{\HH}$ of $\HH$. Both the weight filtration and the graded-polarization extend naturally to $\tilde{\HH}$, and in particular each $\Gr_k^\ww \tilde{\HH}$ is the canonical extension of $\Gr_k^\ww \HH$. Moreover, for each $k \in \zz$ the Hodge filtration $\ff^\bullet\Gr_k^\ww(\HH)$ extends to a Hodge filtration ${}^k \tilde{\ff}^\bullet$ of $\Gr_k^\ww \tilde{\HH}$.
We come now to the main definition of this section: we call $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ *admissible* if
1. monodromy is unipotent,
2. the logarithm of monodromy $N$ has a weight filtration $M_\bullet(H,W_\bullet,N)$ relative to $W_\bullet$ on $H$,
3. the Hodge filtration $\ff^\bullet$ extends into a filtration $\tilde{\ff^\bullet}$ of the canonical extension $\tilde{\HH}$ which for each $k \in \zz$ induces ${}^k \tilde{\ff}^\bullet$ on $\Gr_k^\ww \tilde{\HH}$.
Assume now that $X$ is an open submanifold of a manifold $\Xbar$, where $D = \Xbar \setminus X$ is a normal crossings divisor. Let $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ be a graded-polarized variation of mixed Hodge structures over the complex manifold $X$. We then call the variation $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ admissible if the local monodromies around all branches of $D$ are unipotent, and if for all holomorphic test curves $\bar{f} \colon \Delta \to \Xbar$ the variation $f^*\hh$ on $\Delta^*$ is admissible. Here we denote by $f$ the restriction of $\bar{f}$ to $\Delta^*$.
In algebraic geometry, admissible variations come about as follows. Let $\pi \colon Y \to X$ be a morphism of complex algebraic varieties. Then there is an open subset $\iota \colon U\to X$ and a finite étale map $f\colon \widetilde U\to U$ such that the local system $\hh=(\iota\circ f)^{\ast}\mathrm{R}^i \pi_* \zz_Y$ has a canonical structure of admissible graded-polarized variation of mixed Hodge structures $(\hh,\ww_\bullet,\ff^\bullet,\QQ_k)$.
In general, the usual cohomological operations like direct images or relative cohomology will produce a *mixed Hodge module* [@Saito1; @Saito2] which is a generalization of the notion of admissible variations of mixed Hodge structures. There is a criterion for when a mixed Hodge module is indeed an admissible variation of mixed Hodge structures: Given a polarizable mixed Hodge module $\hh$, if the underlying perverse sheaf is a local system with unipotent monodromy, then $\hh$ is a polarizable admissible variation of mixed Hodge structures. See for instance [@Asakura] for a survey on mixed Hodge modules.
For admissible variations of mixed Hodge structures we have the following compatibility between the graded polarization and the monodromy. Let $(H,W_\bullet,F^\bullet,Q_k)$ be a reference fiber of the variation near the boundary divisor $D=\Xbar \setminus X$ of the smooth algebraic variety $X$. We denote the local monodromy operators around the branches of $D$ by $T_1,\ldots,T_m$, and the corresponding monodromy logarithms by $N_1,\ldots,N_m$. We denote by $\mathfrak{g}$ the Lie algebra ${\operatorname{Lie}}G({{{\mathbb{R}}}})$ of the real points $G({{{\mathbb{R}}}})$ of the algebraic group $G=G(H,W_\bullet,Q_k)$ defined above. Then the $T_i$ belong to $G({{{\mathbb{R}}}})$, and the $N_i$ belong to $\mathfrak{g}$, for each $i=1,\ldots,m$. The ${{{\mathbb{R}}}}_{>0}$-span $\mathcal{C}$ of the local monodromy logarithms $N_i$ inside $\mathfrak{g}$ is called the *open monodromy cone* of the reference fiber $(H,W_\bullet,F^\bullet,Q_k)$. Each element of $\mathcal{C}$ is nilpotent, and it can be proved that the relative weight filtration of $(H,W_\bullet)$ is constant on $\mathcal{C}$.
*The period map.* To an admissible graded-polarized variation of mixed Hodge structures $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ over $X=(\Delta^*)^k
\times \Delta^{n-k}$ we associate a period map, as follows. Let $\mm=\mm(h)$ be the period domain associated to $(H,W_\bullet,Q_k)$ and set $G=G(H,W_\bullet,Q_k)$ as above. Let $\Gamma \subset G(\qq)$ be the image of the monodromy representation $\rho \colon \pi_1(X,x_0) \to G(\qq)$. The period map $\phi \colon X
\to \Gamma \setminus \mm$ is then the map that associates to $x \in X$ the Hodge filtration of $\hh_x$. The period map is locally liftable and holomorphic.
Let $\mathbb{H} \subset \cc$ be the upper half plane. Let $e
\colon \mathbb{H}^k \to (\Delta^*)^k$ be the uniformization map given by $(z_1,\ldots,z_k) \mapsto (\exp(2\pi i z_1),\ldots,\exp(2\pi i
z_k))$. Then along $e$ the period map $\phi$ lifts to a map $\tilde{\phi} \colon \mathbb{H}^k \times \Delta^{n-k} \to \mm$. In other words, we have the following commutative diagram $$\xymatrix{ \mathbb{H}^k \times \Delta^{n-k} \ar[r]^-{\tilde{\phi}}
\ar[d]^{(e,\mathrm{id})} & \mm \ar[d] \\ (\Delta^*)^k \times \Delta^{n-k} \ar[r]^-\phi
& \Gamma \setminus \mm }$$ where the right hand arrow is the canonical projection. Write $T_i = \exp(N_i)$ for $i=1,\ldots,k$. As $N_i \in {\operatorname{Lie}}G({{{\mathbb{R}}}})$ we find $\exp(\sum_{i=1}^k z_iN_i) \in G(\cc)$ for all $z_1,\ldots,z_k \in \mathbb{H}$. Let $\tilde{\psi} \colon \mathbb{H}^k \times \Delta^{n-k} \to \check{\mm}$ be the map given by $$\tilde{\psi}(z_1,\ldots,z_k,q_{k+1},\ldots,q_n) = \exp(-\sum_{i=1}^k z_iN_i) . \tilde{\phi}(z_1,\ldots,z_k,q_{k+1},\ldots,q_n) \, .$$ Then $\tilde{\psi}$ descends to an “untwisted” period map $\psi
\colon (\Delta^*)^k \times \Delta^{n-k} \to
\check{\mm}$, fitting in a commutative diagram $$\xymatrix{ \mathbb{H}^k \times \Delta^{n-k} \ar[r]^{\tilde{\psi}}
\ar[d] & \check{\mm} \\ (\Delta^*)^k \times \Delta^{n-k} \ar[ur]^\psi
& }$$ Note that, importantly, the map $\psi$ takes values in the compact dual $\check{\mm}$, and not in a quotient of it.
*Nilpotent orbit theorem.* The following result is the starting point of G. Pearlstein’s Nilpotent orbit theorem (cf. [@pearlhiggs Section 6]) for admissible graded-polarized variations of mixed Hodge structures and suffices to establish the estimates we need.
\[nilpotentorbit\] (G. Pearlstein) Let $(\hh,\ww_\bullet,\ff^\bullet,\boldsymbol{Q}_k)$ be an admissible graded-polarized variation of mixed Hodge structures over $X=(\Delta^*)^k \times \Delta^{n-k}$. Then the untwisted period map $\psi$ extends to a holomorphic map $\psi \colon \Delta^n \to \check{\mm}$.
Families of pointed polarized abelian varieties {#sec:families}
-----------------------------------------------
Let $(\pi \colon Y \to X,\lambda)$ be a family of polarized abelian varieties over a smooth algebraic variety $X$. Assume that $\Xbar
\supset X$ is a smooth complex algebraic variety, with $D = \Xbar
\setminus X$ a normal crossings divisor. Denote by $\pp$ the Poincaré bundle on $Y \times_X Y^\lor$ with its canonical $C^\infty$ hermitian metric as described above. Given two algebraic sections $\nu
,\mu \colon X \to Y$ we will denote $$\pp_{\nu ,\mu }=(\nu ,\lambda \mu )^{\ast}\pp,\qquad \pp_{\nu
}=\pp_{\nu ,\nu },$$ where $\lambda \colon Y\to Y^{\lor}$ is the isogeny provided by the polarization. We are interested in studying the singularities of the metric of $\pp_{\nu ,\mu }$ as we approach the boundary of $X$.
Consider the maps $$m,p_{1,3},p_{1,4},p_{2,3},p_{2,4}\colon Y \times_X Y \times_X
Y^\lor \times_X Y^\lor \longrightarrow Y \times_X
Y^\lor \, ,$$ where $m(x,y,z,t)=(x+y,z+t)$ and $p_{i,j}$ is the projection over the factors $i,j$. Then we have a canonical isomorphism $$\label{biextproperty}
m^{\ast}\pp {\xrightarrow{\sim}}p_{1,3}^{\ast}\pp \otimes p_{1,4}^{\ast}\pp
\otimes p_{2,3}^{\ast}\pp \otimes p_{2,4}^{\ast}\pp \, ,$$ of holomorphic line bundles over $Y \times_X Y \times_X
Y^\lor \times_X Y^\lor$; in other words, the Poincaré bundle is a biextension on $Y \times_X Y^\lor$. The explicit description of the cocycle $b_{\pp}$ in equation and of the metric of the Poincaré bundle in Remark \[formulanorm\] shows that the canonical isomorphism (\[biextproperty\]) is in fact an isometry for the canonical induced metrics on the left and right hand sides. We obtain in particular
\[lemm:1\] Let $\nu_1, \nu_2, \mu_1, \mu_2$ be holomorphic sections of the family $Y \to X$. Then we have a canonical isometry $$(\nu_1+\nu_2,\lambda(\mu_1+\mu_2))^*\pp {\xrightarrow{\sim}}(\nu_1,\lambda \mu_1)^*\pp \otimes
(\nu_1,\lambda \mu_2)^*\pp \otimes
(\nu_2,\lambda \mu_1)^*\pp \otimes
(\nu_2,\lambda \mu_2)^*\pp$$ of hermitian line bundles on $X$.
As a consequence of this lemma, in order to study the singularities of the metric on $\pp_{\nu,\mu}$ it suffices to study the singularities of the metric on $\pp_\nu$, $\pp_\mu$ and $\pp_{\nu+\mu}$. In particular, for the purpose of proving our main results, it suffices to focus on the diagonal cases $\pp_\nu$. Thus let $\nu$ be an algebraic section of the family $Y \to X$. Let $x_0$ be a point of $D$. The purpose of the present section is to give an asymptotic expansion of the logarithm of the norm of a section of $\pp_\nu$ near $x_0$. From equation it follows that it suffices to give asymptotic expansions of the period matrix of the family $Y \to X$ and of the period vector (see below) associated to $\nu$. To this end we make Pearlstein’s result concrete for the case of the period map on a family of polarized abelian varieties together with a section $(Y \to X,\nu)$.
*Period vectors.* Assume that $(H,F^\bullet,Q)$ is a polarized pure Hodge structure of weight $-1$, type $(-1,0),(0,-1)$ and rank $2g$. Recall that given a $\qq$-symplectic integral basis $(a_1,\ldots,a_g,b_1,\ldots,b_g)$ of $(H,Q)$, there exists a unique basis $(w_1,\ldots,w_g)$ of $F^0H_\cc$ determined by demanding that $w_i = -\sum_{j=1}^g \Omega_{ij}a_j +
b_i$ for some (period) matrix $\Omega \in M_g({{\mathbb{C}}})$ (cf. equation (\[eq:10\])). We call this new basis the associated *normalized* basis. As we have seen, the Riemann bilinear relations imply that $\Omega^t\Delta =\Delta \Omega$ and $\Delta {\operatorname{Im}}\Omega >0$.
Assume an extension $$\label{anyextension} 0 \to H \to H' \to \zz(0) \to 0$$ in the category of mixed Hodge structures is given. Then $H'$ has weight filtration $$W_\bullet \colon \quad 0 \subset W_{-1} = H_\qq \subset W_0 = H_\qq' \, .$$ Taking $F^0(-)_\cc$ in (\[anyextension\]) yields the extension $$0 \to F^0H_\cc \to F^0H'_\cc \to \cc \to 0$$ of $\cc$-vector spaces. As can be readily checked, for each $a_0 \in H'$ that lifts the canonical generator of $\zz(0)$ in (\[anyextension\]) there exists a unique $w_0 \in F^0H'_\cc$ such that $w_0 \in a_0 + \cc$-${\operatorname{span}}(a_1,\ldots,a_g)$. Given such a lift $a_0$, we let $\delta_{H'} = (\delta_1,\ldots,\delta_g)^{t} \in
{\operatorname{Col}}_{g}({{\mathbb{C}}})$ be the coordinate vector determined by the identity $w_0=a_0+\sum_{j=1}^g \delta_j a_j$. We call $\delta_{H'}$ the *period vector* of the mixed Hodge structure $(H',F^\bullet,W_\bullet)$ on the basis $(a_0,a_1,\ldots,a_g,b_1,\ldots,b_g)$ of the $\zz$-module $H'$. It can be verified that replacing $a_0$ by some element from $a_0+H$ changes $\delta$ by an element of $\zz^g+\Omega \zz^g$. The resulting map $\mathrm{Ext}^1_{\mathrm{MHS}}(\zz(0),H) \to \cc^g/(\zz^g+\Omega
\zz^g)$ is bijective, and gives $\mathrm{Ext}^1_{\mathrm{MHS}}(\zz(0),H)$ a canonical structure of complex torus.
Let $A$ be a polarized complex abelian variety of dimension $g$. Let $H={H}_1(A)$; then $H$ carries a canonical pure polarized Hodge structure of type $(-1,0), (0,-1)$. Let $\nu \in A$, and write $H(\nu)$ for the relative homology group ${H}_1(A,\{0,\nu\})$. There is an extension of mixed Hodge structures $$0 \to H \to H(\nu) \to \zz(0) \to 0$$ canonically associated to $(A,\nu)$. Here $\zz(0)$ is to be identified with the reduced homology group $\tilde{{H}}_0(\{0,\nu \})$. The map $A \to \mathrm{Ext}^1_{\mathrm{MHS}}(\zz(0),H)$ given by sending $\nu$ to the extension $H(\nu)$ is a bijection, compatible with the structure of complex torus on left and right hand side.
*The period map of a family of pointed polarized abelian varieties.* Let $\pi \colon Y \to X$ be a family of polarized abelian varieties, and assume that $\Xbar \supset X$ is a complex algebraic variety, with $D = \Xbar \setminus X$ a normal crossings divisor. As we work locally complex analytically, we will suppose that $\Xbar$ is the polydisk $\Delta^n$, and $D$ is the divisor given by the equation $q_1\cdots q_k=0$, so that $X = (\Delta^*)^k \times \Delta^{n-k}$.
We assume that all local monodromy operators $T_1,\ldots,T_k$ about the various branches determined by $q_1,\ldots,q_k$ are unipotent (for instance, assume that the family extends as a semiabelian scheme $\Ybar \to \Xbar$). Let $\hh={R}^1 \pi_* \zz_Y(1)$. Let $g$ be the relative dimension of $Y \to X$. Then $\hh$ underlies a canonical admissible variation of pure polarized Hodge structure $(\hh,\ff^\bullet,\boldsymbol{Q})$ of type $(-1,0), (0,-1)$ and rank $2g$ over $X$. We will henceforth usually suppress the polarization from our notation.
Let $(H,F^\bullet)$ be a reference fiber of $\hh$ near the origin. Let $N$ be any element of the open monodromy cone of $H$. Then we have $N^2=0$ and the filtration associated to $N$ simply reads $$0 \subset M_{-2} \subset M_{-1} \subset M_0 = H_\qq$$ with $M_{-2}={\operatorname{Im}}N $ and $M_{-1}={\operatorname{Ker}}N $. Since $N$ belongs to the Lie algebra of $G(H)({{{\mathbb{R}}}})$, there exist a $\qq$-symplectic integral basis $(a_1,\ldots,a_g,b_1,\ldots,b_g)$ of $(H,Q)$ and a non-negative integer $r \leq g$ such that:
1. $M_{-2}={\operatorname{span}}{(a_1,\ldots,a_r)}$,
2. $M_{-1}={\operatorname{span}}{(a_1,\ldots,a_g,b_{r+1},\ldots,b_g)}$.
In particular, $(\bar{a}_{r+1},\ldots,\bar{a}_g,\bar{b}_{r+1},\ldots,\bar{b}_g)$ is a $\qq$-symplectic integral basis of the pure polarized Hodge structure $\Gr_{-1}^M H$ of type $(-1,0), (0,-1)$. Clearly, with respect to this basis, each local monodromy operator $N_j$ has the form $$N_j = \begin{pmatrix}[c|c] 0 & A'_j \\
\hline
0 & 0 \\
\end{pmatrix} \, .$$ Each $A'_j$ is integral and the $g$-by-$g$ matrices $A_{j}\defeq \Delta A'_{j}$ are symmetric and positive semidefinite. Moreover, the upper left $r$-by-$r$ block of $A_j$ is positive definite.
To simplify the notation by avoiding the appearance of the polarization matrix $\Delta $ we will sometimes replace the $\qq$-symplectic integral basis $(a,b)$ by the symplectic $\qq$-basis $(a\Delta ^{-1},b)$. In this new basis each local monodromy operator $N_j$ has the form $$N_j = \begin{pmatrix}[c|c] 0 & A_j \\
\hline
0 & 0 \\
\end{pmatrix} \, .$$
In this new basis we can realize the period domain associated to $H$ as the usual Siegel upper half space $\mathbb{H}_g$ of rank $g$. We have $G(H)={\operatorname{Sp}}(2g)_\qq$, and the action of $G(H)({{{\mathbb{R}}}})$ on $\mathbb{H}_g$ is given by the usual prescription $$\begin{pmatrix}[c|c] A & B \\
\hline
C & D \\
\end{pmatrix} \cdot M = (AM+B)(CM+D)^{-1} \, , \,
\begin{pmatrix}[c|c] A & B \\
\hline
C & D \\
\end{pmatrix} \in {\operatorname{Sp}}(2g,{{{\mathbb{R}}}}) \, , \, M \in \mathbb{H}_g \, .$$
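As an aside, this prescription is easy to test numerically in the simplest case $g=1$, where $\mathbb{H}_1$ is the ordinary upper half plane and the action is the usual Möbius action. The following Python sketch (our own illustration with sample data, not part of the argument) checks that a symplectic matrix maps $\mathbb{H}_1$ into itself and that the action is compatible with inverses.

```python
# For g = 1 the action of Sp(2, R) on H_1 is M -> (a*M + b) / (c*M + d).
# We check on sample data that Im M > 0 is preserved.

def act(S, M):
    """Action of a 2x2 real symplectic matrix S = [[a, b], [c, d]] on M in H_1."""
    (a, b), (c, d) = S
    return (a * M + b) / (c * M + d)

S = [[2.0, 1.0], [1.0, 1.0]]          # det = 1, hence S lies in Sp(2, R)
assert abs(S[0][0] * S[1][1] - S[0][1] * S[1][0] - 1.0) < 1e-12

M = 0.5 + 2.0j                         # a point of H_1 (positive imaginary part)
M2 = act(S, M)
assert M2.imag > 0                     # the action preserves H_1

# Acting by the inverse matrix brings us back: the prescription is a group action.
Sinv = [[1.0, -1.0], [-1.0, 2.0]]
M3 = act(Sinv, M2)
assert abs(M3 - M) < 1e-12
```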
In this representation the period map $\Omega \colon X \to \Gamma
\setminus \mathbb{H}_g$ is made explicit by associating to each $x \in X$ the matrix $\Omega
(x)=\Delta\Omega _{Y_{x}} $, where $\Omega _{Y_{x}}$ is the period matrix of the fibre $Y_x$ on the chosen $\qq$-symplectic integral basis of $H$. Here $\Gamma$ is the image of the monodromy representation into $G(H)(\qq)={\operatorname{Sp}}(2g,\qq)$. In the new basis, the monodromy representation sends the local monodromy operator $T_{j}$ to the matrix $$\begin{pmatrix}[c|c] 1 & A_j \\
\hline
0 & 1 \\
\end{pmatrix} \in {\operatorname{Sp}}(2g,\qq)\, .$$
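To illustrate (our own check, with sample data not taken from the text): for $g=1$ and $A_j = A$ a positive integer, the displayed matrix is symplectic and acts on $\mathbb{H}_1$ by the real translation $M \mapsto M + A$; in particular the imaginary part of the multivalued period matrix is single valued, a fact used again below.

```python
# The local monodromy matrix T = [[1, A], [0, 1]] is symplectic and acts
# on H_1 by translation by the real number A.

def is_symplectic_2x2(S, tol=1e-12):
    # For 2x2 real matrices, S^t J S = J is equivalent to det S = 1.
    (a, b), (c, d) = S
    return abs(a * d - b * c - 1.0) < tol

A = 3.0
T = [[1.0, A], [0.0, 1.0]]
assert is_symplectic_2x2(T)

M = -0.25 + 1.5j                       # a point of H_1
(a, b), (c, d) = T
M2 = (a * M + b) / (c * M + d)         # fractional linear action of T
assert abs(M2 - (M + A)) < 1e-12       # translation by A
assert abs(M2.imag - M.imag) < 1e-12   # Im M is unchanged
```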
We will now extend this picture to include the section $\nu$. Varying $x \in X$ we obtain a canonical extension $$0 \to \hh \to \hh(\nu) \to \zz(0) \to 0$$ of variations of mixed Hodge structure. The weight filtration of this variation looks like $$W_\bullet \colon \quad 0 \subset \boldsymbol{W}_{-1}=\boldsymbol{H}_\qq \subset \boldsymbol{W}_0 = \boldsymbol{H}(\nu)_\qq \, ,$$ so that $\Gr_{-1}^{\boldsymbol{W}} \boldsymbol{H}(\nu)_\qq=\boldsymbol{H}_\qq$ and $\Gr_{0}^{\boldsymbol{W}} \boldsymbol{H}(\nu)=\qq(0)$. We denote the Hodge filtration of $\hh(\nu)_\qq$ by $\ff^\bullet$. We start by taking a reference fiber $H(\nu)$ of $\boldsymbol{H}(\nu)$ and augmenting our chosen $\qq$-symplectic integral basis of $H$ by an $a_0 \in H(\nu)$ lifting the canonical generator of $\zz(0)$ as before.
Note that $\hh(\nu)$ is an admissible variation of graded-polarized mixed Hodge structures. Hence the relative weight filtration $M'_\bullet$ on our reference fiber $H(\nu)$ exists. Let $N'$ be an element of the open monodromy cone of $H(\nu)$ such that $N=N'|_{H}$. We will now proceed to determine the matrix shape of $N'$ on the basis $(a_0,a\Delta ^{-1},b)$ of $H(\nu)$. As $N'^2=0$, the filtration associated to $N'$ on $H(\nu)$ is $$L_\bullet \colon \quad 0 \subset L_{-1} \subset L_0 \subset
L_1=H(\nu)_\qq \, ,$$ with $L_{-1}={\operatorname{Im}}(N')$, $L_0={\operatorname{Ker}}(N')$. As the monodromy action on $\Gr_0^W =\qq(0)$ is trivial, we have that ${\operatorname{Im}}(N') \subset H_\qq$, so that $N'^{-1}H_\qq=H(\nu)_\qq$. Since $W_\bullet$ has length two and, by admissibility, the weight filtration of $N'$ relative to $W_\bullet$ exists, it follows (as we noted above) that $N'$ is strict. Explicitly, we have that $H(\nu)_\qq=N'^{-1}H_\qq=H_\qq+{\operatorname{Ker}}(N')$. The equality $H(\nu)_\qq=H_\qq+{\operatorname{Ker}}(N')$ implies that ${\operatorname{Ker}}(N') \supsetneqq
{\operatorname{Ker}}(N)$ and hence that ${\operatorname{Im}}(N')={\operatorname{Im}}(N)$.
The period domain associated to $(H(\nu),W_\bullet)$ can be realized as $\cc^g \times \mathbb{H}_g$. The associated algebraic group has ${{{\mathbb{R}}}}$-points $$G(H(\nu),W_\bullet)({{{\mathbb{R}}}}) = \left\{ \begin{pmatrix}[c|c|c]
1 & 0 & 0 \\ \hline
m & A & B \\ \hline
n & C & D
\end{pmatrix} \, : \, m, n \in {{{\mathbb{R}}}}^g \, , \, \begin{pmatrix}[c|c]
A & B \\ \hline C & D \end{pmatrix} \in {\operatorname{Sp}}(2g,{{{\mathbb{R}}}})
\right\} \, .$$ The action of $G(H(\nu),W_\bullet)({{{\mathbb{R}}}})$ on $\cc^g \times \mathbb{H}_g$ is given by $$\begin{pmatrix}[c|c|c]
1 & 0 & 0 \\ \hline
m & A & B \\ \hline
n & C & D
\end{pmatrix} (v,M) = (v+m+Mn,(AM+B)(CM+D)^{-1}) \, , \, v \in \cc^g \, , \, M \in \mathbb{H}_g \, .$$
Varying $x \in X$ and then taking $F^0$ we obtain a period map associated to the variation $\hh(\nu)$ $$(\delta,\Omega) \colon X \to \Gamma \setminus (\cc^g \times
\mathbb{H}_g)$$ that is given by $$(\delta (x),\Omega (x))=(\Delta \delta _{H(\nu (x))}, \Delta \Omega _{Y_{x}}).$$ We denote by $$(\tilde \delta,\tilde \Omega) \colon {{\mathbb{H}}}^
k\times \Delta ^{n-k} \to \cc^g \times \mathbb{H}_g$$ a lift of the period map. As in the previous section we denote by $e\colon {{\mathbb{H}}}^{k}\to (\Delta
^{\ast})^{k} $ the map $$e(z_{1},\dots,z_{k})=(\exp(2\pi i z_{1}),\dots,\exp(2\pi i z_{k})).$$
\[asympt\] There exist a holomorphic map $\psi \colon \Delta^n \to S_g(\cc)$, a holomorphic map $\alpha \colon \Delta^n \to \cc^g$, and vectors $c_1,\ldots, c_k \in \qq^g$ with $\Delta ^{-1}A_jc_j \in
\zz^g$ for $j=1,\ldots,k$ such that for $(z,t) \in \mathbb{H}^k \times
\Delta^{n-k}$ with $e(z)$ sufficiently close to zero the equalities $$\tilde{\Omega}(z,t) = \sum_{j=1}^k z_j A_j + \psi(e(z),t) \, ,
\quad \tilde{\delta}(z,t) = \sum_{j=1}^k z_j A_jc_j + \alpha(e(z),t)$$ hold in $S_g(\cc)$ resp. $\cc^g$.
Let $N_j$ denote the local monodromy operator of $H$ around the branch of $D$ determined by $q_j=0$. We have $$\exp(z_jN_j)=T_j^{z_j} = \begin{pmatrix}[c|c] 1 & z_jA_j \\
\hline
0 & 1 \\
\end{pmatrix}$$ and hence $\exp(z_jN_j).M=z_jA_j+M$ for each $M \in \mathbb{H}_g$, $z_j
\in U$, and $j=1,\ldots,k$ (here $U$ is an open subset of ${{\mathbb{H}}}$ consisting of points with sufficiently large imaginary part). Denote by ${{\mathbb{P}}}_{g}$ the compact dual of ${{\mathbb{H}}}_{g}$. The untwisted period map $\psi \colon
\Delta^n \to \mathbb{P}_g$ obtained from Theorem \[nilpotentorbit\] by extending $\exp(-\sum_{j=1}^k
z_jN_j).\tilde{\Omega}(z,t)$ factors through $S_g(\cc)\subset
\mathbb{P}_g$. We obtain the equalities $$\tilde{\Omega}(z,t) = \exp(\sum_{j=1}^k z_jN_j).\psi(e(z),t) =
\sum_{j=1}^k z_j A_j + \psi(e(z),t)$$ in $S_g(\cc)$.
Let $N'_j$ denote the local monodromy operator of $H(\nu)$ around the branch of $D$ determined by $q_j=0$. The equality ${\operatorname{Im}}(N'_j)={\operatorname{Im}}(N'_j|_{H_\qq})$ on $H(\nu)_\qq$ that follows from our above considerations shows that $N'_j$ has a matrix $$\begin{pmatrix}[c|c|c]
0 & 0 & 0 \\ \hline
\Delta ^{-1}A_jc_j & 0 & \Delta ^{-1}A_j \\ \hline
0 & 0 & 0
\end{pmatrix}$$ on the integral basis $(a_0,a_1,\ldots,a_g,b_1,\ldots,b_g)$, for some $c_j \in \qq^g$. Since the monodromy is integral in such a basis, we deduce that $\Delta ^{-1}A_jc_j$ must be integral. In the $\qq$-basis $(a_{0},a\Delta ^{-1},b)$, the matrix of $N'_j$ is $$\begin{pmatrix}[c|c|c]
0 & 0 & 0 \\ \hline
A_jc_j & 0 & A_j \\ \hline
0 & 0 & 0
\end{pmatrix}$$
Then for $(v,M) \in \cc^g \times \mathbb{H}_g$ and $z_j \in U$ we have $\exp(z_jN'_j).(v,M)=(v+z_jA_jc_j,M+z_jA_j)$. Let $(\alpha,\psi) \colon \Delta^n \to \cc^g \times \mathbb{P}_g$ denote the untwisted period map. We find the equalities $$\tilde{\delta}(z,t) = \exp(\sum_{j=1}^k z_jN_j').\alpha(e(z),t) =
\sum_{j=1}^k z_j A_jc_j + \alpha(e(z),t)$$ in $\cc^g$.
*The norm of a section.* We now use Theorem \[asympt\] to obtain an expression for the norm of a section of $\pp_{\nu }$. Let ${\lvert\lvert-\rvert\rvert}$ denote the canonical metric on ${{\mathcal{P}}}_\nu=\nu^*{{\mathcal{P}}}$. Continuing with the notation from the previous theorem, let $a = 2\pi {\operatorname{Im}}\alpha$ and $B=2\pi {\operatorname{Im}}\psi$. For $j=1,\ldots,k$ let $x_j = -\log|t_j|$.
\[explicitnorm\] For all trivializing sections $s$ of ${{\mathcal{P}}}_\nu$ on $ (\Delta^*)^ {k} \times \Delta^{n-k}$ there exists a meromorphic function $h$ on $\Delta^n$ which is holomorphic on $ (\Delta^*)^ {k} \times \Delta^{n-k}$, such that on $ (\Delta^*)^{k} \times \Delta^{n-k}$ the identity $$\label{eq:def_of_f_eta}
-\log{\lvert\lvert s\rvert\rvert} = -\log{\lvert h\rvert} + \left(\sum_{j=1}^kx_jA_jc_j + a
\right)^t \left(\sum_{j=1}^kx_jA_j + B \right)^{-1}
\left(\sum_{j=1}^kx_jA_jc_j + a \right)$$ holds.
The vector $z$ and the matrix $\Omega $ in Theorem \[explicitmetricPoinc\] are expressed in the integral basis $(a,b)$, while $\delta (x)$ and $\Omega (x)$ are expressed in the $\qq$-basis $(a\Delta ^{-1},b)$. Writing $z=\Delta ^{-1}\delta (x)$ and $\Omega =\Delta ^{-1}\Omega (x)$, we obtain that $$-\log{\lvert\lvert s(x)\rvert\rvert} = -\log{\lvert h(x)\rvert} + 2\pi ({\operatorname{Im}}\delta(x))^t ({\operatorname{Im}}\Omega(x))^{-1} ({\operatorname{Im}}\delta(x))$$ for a suitable meromorphic function $h$ on $\Delta^n$ which is holomorphic on $(\Delta^*)^ {k} \times \Delta^{n-k}$. Note that, even though $\Omega(x), \delta(x)$ are multivalued, their imaginary parts are single valued. From Theorem \[asympt\] we obtain, noting that ${\operatorname{Im}}z_j = -\frac{1}{2\pi}\log|t_j|$, $${\operatorname{Im}}\Omega (x)= -\frac{1}{2\pi} \sum_{j=1}^k A_j \log|t_j| + {\operatorname{Im}}\psi \, , \quad
{\operatorname{Im}}\delta (x)= -\frac{1}{2\pi} \sum_{j=1}^k A_jc_j \log|t_j| + {\operatorname{Im}}\alpha
\, .$$ Combining these expressions we obtain equation (\[eq:def\_of\_f\_eta\]).
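The growth encoded in this formula can be made concrete (our own numerical sketch, with sample values of $A$, $c$ and of the bounded contributions $\operatorname{Im}\psi$, $\operatorname{Im}\alpha$; these data are assumptions, not taken from the paper): for $g=1$ the quadratic term grows linearly in $x=-\log|t|$ with slope $Ac^2$, which is exactly the recession behaviour of the normlike functions studied in the next section.

```python
import math

# g = 1 sample: Im Omega = A*x/(2*pi) + Im(psi), Im delta = A*c*x/(2*pi) + Im(alpha),
# where x = -log|t|.  The quadratic term 2*pi*(Im delta)^2/(Im Omega) then grows
# like A*c^2 * x as x -> infinity.

A, c = 2.0, 3.0                 # sample monodromy data (assumption)
im_psi, im_alpha = 0.7, -0.4    # sample bounded contributions (assumption)

def term(x):
    im_omega = A * x / (2 * math.pi) + im_psi
    im_delta = A * c * x / (2 * math.pi) + im_alpha
    return 2 * math.pi * im_delta ** 2 / im_omega

for x in (1e3, 1e6):
    # term(x)/x tends to A*c^2 = 18 as x -> infinity, with error O(1/x)
    assert abs(term(x) / x - A * c ** 2) < 1e3 / x
```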
Normlike functions
==================
The purpose of this section is to carry out a systematic study of the functions $$\varphi = \left(\sum_{j=1}^kx_jA_jc_j + a \right)^t \left(\sum_{j=1}^kx_jA_j + B \right)^{-1} \left(\sum_{j=1}^kx_jA_jc_j + a \right)$$ that appear on the right hand side of the equality in Corollary \[explicitnorm\]. We call such functions *normlike* functions. We show that such functions have a well-defined recession function ${\operatorname{rec}} \varphi$ with respect to the variables $x_j$, and we are able to calculate ${\operatorname{rec}} \varphi$ explicitly. As it turns out, the function ${\operatorname{rec}} \varphi$ is homogeneous of weight one in the variables $x_j$. In our main technical lemma Theorem \[thm:main\_technical\] we give bounds for the difference $\varphi -
{\operatorname{rec}} \varphi$ and, in the case where $k=1$, for the first and second order derivatives of $\varphi - {\operatorname{rec}} \varphi$. The bound on the difference will be key to the proof of our first main result Theorem \[singbiext\]; the bounds on the derivatives will be used in our proof of Theorem \[localint\]. In section \[propertiesnormlike\] we prove, among other things, that the recession functions ${\operatorname{rec}} \varphi$ are convex. This will lead to the effectivity statement in Theorem \[theorem:effectivity\].
Some definitions {#technical}
----------------
Recall that we have denoted by $M_{r}({{\mathbb{R}}})$ the space of $r$-by-$r$ matrices with real coefficients, by $S_r^+({{\mathbb{R}}})\subset M_{r}({{\mathbb{R}}})$ the cone of symmetric positive semidefinite real matrices inside $M_r({{\mathbb{R}}})$, and by $S_r^{++}({{\mathbb{R}}})\subset S^+_r({{\mathbb{R}}})$ the cone of symmetric positive definite real matrices. We endow these spaces with their canonical real manifold structure.
\[lem:simultaneous\] Let $N_{1},\dots,N_{k}$ be a finite set of positive semidefinite symmetric real $g$-by-$g$ matrices such that $N_{1}+\dots+N_{k}$ has rank $r$. Then there exists an orthogonal matrix $u \in O_g({{\mathbb{R}}})$ such that, upon writing $M_{i}=u^tN_iu$ for $i=1,\dots,k$, we have $$M_i = \begin{pmatrix}[c|c] M_i' & 0\\
\hline
0 & 0 \\
\end{pmatrix},$$ with all $M_i'\in S_{r}^+({{\mathbb{R}}})$ and $\sum M_i'\in S_{r}^{++}({{\mathbb{R}}})$.
It will be convenient to use the language of bilinear forms. If $Q$ is a symmetric positive semidefinite bilinear form on ${{\mathbb{R}}}^g$ and $f_1,\ldots, f_g$ is a basis of ${{\mathbb{R}}}^g$ such that $Q(f_\alpha ,f_\alpha )=0$ for $\alpha =r+1,\ldots,g$, then $Q(f_\alpha ,f_\beta )=0$ for $\beta =1,\ldots,g$ and $\alpha =r+1,\ldots,g$. Indeed, for all $\lambda
\in {{\mathbb{R}}}$ we have $Q(\lambda f_\alpha -f_\beta ,\lambda f_\alpha - f_\beta ) \geq 0$, that is $$-2\lambda Q(f_\alpha ,f_\beta ) + Q(f_\beta ,f_\beta ) \geq 0 \, .$$ Since this inequality is satisfied for all $\lambda$ we deduce that $Q(f_\alpha ,f_\beta )=0$.
Let $N=N_1+\cdots+N_k$, and denote by $Q$ the symmetric positive semidefinite bilinear form that $N$ defines on the standard basis $(e_1,\ldots,e_g)$ of ${{\mathbb{R}}}^g$. Note that $Q$ has rank $r$. By the spectral theorem, upon replacing the basis $(e_1,\ldots,e_g)$ of ${{\mathbb{R}}}^g$ by $(f_1,\ldots,f_g)=(e_1,\ldots,e_g)u$ for some orthogonal matrix $u$ we can assume that the expression of $Q$ in the basis $(f_1,\ldots,f_g)$ is $$M= \begin{pmatrix}[c|c] A & 0\\
\hline
0 & 0 \\
\end{pmatrix},$$ with $A\in S^{+}_{r}({{\mathbb{R}}})$ invertible and diagonal. In particular, $Q(f_\alpha ,f_\alpha )=0$ for $\alpha =r+1,\dots, g$. For $i=1,\ldots,k$ let $Q_i$ denote the symmetric positive semidefinite bilinear form that $N_i$ defines on the standard basis $(e_1,\ldots,e_g)$ of ${{\mathbb{R}}}^g$. Note that $Q=Q_1+\cdots+Q_k$. Since all the $Q_{i}$ are positive semidefinite, we deduce that $Q_{i}(f_\alpha ,f_\alpha )=0$ for $i=1,\dots ,k$. Note that $M_i=u^tN_iu$ is the expression of $Q_{i}$ in the basis $(f_1,\ldots,f_g)$. By the previous discussion we have $$M_i= \begin{pmatrix}[c|c] M_i' & 0 \\
\hline
0 & 0 \\
\end{pmatrix} \, ,$$ with $M'_{i}\in S_{r}^+({{\mathbb{R}}})$ and $\sum M_i'=A\in
S_{r}^{++}({{\mathbb{R}}})$, proving the lemma.
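A small numerical instance of the lemma may be helpful (our own example, not from the text): for $g=2$, take $N_1$ and $N_2$ proportional to $vv^t$ with $v=(1,1)$, so that $N_1+N_2$ has rank $r=1$; a single rotation $u$ then brings both matrices simultaneously into the block form of the lemma.

```python
import math

# N1 = v v^t and N2 = 2 v v^t with v = (1, 1): both PSD, common kernel (1, -1),
# and N1 + N2 of rank 1.  The rotation u by 45 degrees diagonalizes both.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

N1 = [[1.0, 1.0], [1.0, 1.0]]
N2 = [[2.0, 2.0], [2.0, 2.0]]

s = 1 / math.sqrt(2)
u = [[s, -s], [s, s]]           # orthogonal: columns are v/|v| and (1,-1)/sqrt(2)

for N, top in ((N1, 2.0), (N2, 4.0)):
    M = matmul(matmul(transpose(u), N), u)
    # M equals [[top, 0], [0, 0]]: a positive definite 1x1 block M_i'
    assert abs(M[0][0] - top) < 1e-12
    for i, j in ((0, 1), (1, 0), (1, 1)):
        assert abs(M[i][j]) < 1e-12
```

Here $M_1'=(2)$, $M_2'=(4)$ and $M_1'+M_2'=(6)$ is positive definite, as the lemma asserts.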
Suppose we are given the following data:
- three integers $k \ge 0$, $m \ge 0$, $g \ge 0$;
- a real number $\kappa \ge 0$;
- a compact subset $K {\subseteq}{{\mathbb{R}}}^m$;
- matrices \[item:1\] ${A}_1, \ldots, {A}_k \in S_g^+({{\mathbb{R}}})$ all of rank $\ge 1$;
- vectors ${c}_1, \ldots, {c}_k \in {{\mathbb{R}}}^g$;
- functions ${a}\colon K {\rightarrow}{{\mathbb{R}}}^g$ and ${B}\colon K {\rightarrow}S_g({{\mathbb{R}}})$ which are restrictions of smooth functions on some open neighbourhood of $K$;
such that for all $(x_1,\ldots,x_k, \lambda) \in {{\mathbb{R}}}_{>\kappa}^k
\times K$, we have that $$\label{equation:positivity}
P(x_1,\ldots,x_k,\lambda) \defeq \sum_{i=1}^kx_i{A}_i + {B}(\lambda) >0 \, .$$ Note that if $g=0$, then necessarily $k=0$.
To these data we associate a smooth function $\varphi\colon
{{\mathbb{R}}}_{>\kappa}^k\times K
{\rightarrow}{{\mathbb{R}}}$ by $$\begin{gathered}
\varphi(x_1, \ldots, x_k, \lambda) =\\
\left( \sum_{i=1}^kx_i{A}_i{c}_i + {a}(\lambda) \right)^t
\left( \sum_{i=1}^kx_i{A}_i + {B}(\lambda) \right)^{-1}
\left( \sum_{i=1}^kx_i{A}_i{c}_i + {a}(\lambda) \right) .\end{gathered}$$ By condition (\[equation:positivity\]), the function $\varphi$ is well-defined and its values are non-negative. We call $\varphi$ the *normlike* function associated to the $4$-tuple $(({A}_i), ({c}_i), {a}, {B})$. We call the natural number $k$ the *dimension* of $\varphi$. Write $r = {\operatorname{rk}} \sum_{i=1}^k x_i {A}_i$ for some (hence all) $(x_1, \ldots, x_k) \in {{\mathbb{R}}}^k_{>\kappa}$. Note that $r \geq 1$ if $k >0$.
Let $u \in O_g({{\mathbb{R}}})$. Replacing the vector ${c}_i$ by $u^{-1}{c}_i$, ${a}$ by $u^{-1}{a}$, the matrix ${B}$ by $u^{t}{B}u$ and ${A}_i$ by $u^{t}{A}_i u$ one checks that the function $\varphi$ remains unchanged. By Lemma \[lem:simultaneous\] we can thus restrict to considering normlike functions where the $A_i$ have the shape $${A}_i = \begin{pmatrix}[c|c] {A}_i'& 0_{r,g-r}\\
\hline
0_{g-r,r} & 0_{g-r,g-r}\\
\end{pmatrix},$$ with each ${A}'_i \in S_r^+({{\mathbb{R}}})$ and such that $\sum x_i
{A}'_i\in S^{++}_r({{\mathbb{R}}})$ for all $(x_1,\ldots,x_k) \in
{{\mathbb{R}}}_{>\kappa}^k$ (hence for all $(x_1,\ldots,x_k)\in{{\mathbb{R}}}_{>0}^k$).
From now on we assume that the matrices $A_i$ indeed have this shape. We write $${c}_i = {\left(\begin{matrix}{c}_i'\\
\hline
\star_{g-r}\end{matrix}\right)}, \;\;\;
{a}= {\left(\begin{matrix}{a}_1\\
\hline
{a}_2\end{matrix}\right)}, \;\;\;
\text{and} \;\;\; {B}= \begin{pmatrix}[c|c] {B}_{11}& {B}_{12}\\
\hline
{B}_{21} & {B}_{22}\\
\end{pmatrix}$$ where $c_i'$ and $a_1$ have size $r$, and $B_{11}$ is an $r$-by-$r$ matrix. The second block of the vector ${c}_{i}$ is marked with an asterisk because the function $\varphi$ is independent of its value. Condition \[equation:positivity\] implies that ${B}_{22}(\lambda)$ is positive definite for all $\lambda \in K$, and the symmetry of ${B}$ implies that ${B}_{21}={B}_{12}^t$.
We define another smooth function $ f \colon
{{\mathbb{R}}}_{>\kappa}^k\times K {\rightarrow}{{\mathbb{R}}}$ by $$\label{recessionexplicit}
f(x_1, \ldots, x_k, \lambda) =
\left(\sum_{i=1}^kx_i{A}'_i{c}'_i\right)^t
\left(\sum_{i=1}^kx_i{A}'_i\right)^{-1}
\left(\sum_{i=1}^kx_i{A}'_i{c}'_i\right).$$ This function $f$ is well defined as $\sum_{i=1}^kx_i{A}'_i$ is positive definite on ${{\mathbb{R}}}_{> 0}^k$. The function $f$ depends trivially on $\lambda$ and is clearly homogeneous of degree 1 in the $x_i$, and so defines a smooth function ${{\mathbb{R}}}_{> 0}^k{\rightarrow}{{\mathbb{R}}}$, which we also call $f$. Again, the values of $f$ are non-negative. By convention, if $k=0$, the function $f$ is zero.
Finally, the “recession” of $\varphi$ is defined as the pointwise limit $$\begin{matrix}
{\operatorname{rec}}\varphi\colon & {{\mathbb{R}}}_{>\kappa}^k\times K & {\rightarrow}&{{\mathbb{R}}}\\
&(x_1, \ldots, x_k, \lambda) & \mapsto &{\operatorname{lim}}_{\mu {\rightarrow}\infty}
\frac{1}{\mu} \varphi(\mu x_1, \ldots, \mu x_k, \lambda),
\end{matrix}$$ if it exists. Again, if $k=0$, then ${\operatorname{rec}}\varphi=0$.
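Before turning to the technical lemma, here is a numerical sketch of the recession limit in a toy case (our own sample data with $g=1$, $k=2$; the choices of $A_i$, $c_i$, $a$, $B$ below are assumptions for illustration only): the scaled values $\frac{1}{\mu}\varphi(\mu x,\lambda)$ approach the homogeneous function $f$ of equation (\[recessionexplicit\]), independently of $\lambda$.

```python
# Toy normlike function with g = 1, k = 2: A1 = A2 = 1, c1 = 1, c2 = 2,
# a(lambda) = lambda, B(lambda) = lambda^2 + 1 > 0.

def phi(x1, x2, lam):
    num = x1 * 1.0 * 1.0 + x2 * 1.0 * 2.0 + lam       # sum x_i A_i c_i + a
    den = x1 * 1.0 + x2 * 1.0 + lam ** 2 + 1.0        # sum x_i A_i + B > 0
    return num ** 2 / den

def f(x1, x2):
    # the candidate recession: homogeneous of weight one in (x1, x2)
    return (x1 + 2.0 * x2) ** 2 / (x1 + x2)

x1, x2 = 0.3, 0.7
for lam in (-1.0, 0.0, 2.0):
    vals = [phi(mu * x1, mu * x2, lam) / mu for mu in (1e2, 1e4, 1e6)]
    # the scaled values approach f(x1, x2) = 1.7**2 = 2.89 for every lambda
    assert abs(vals[-1] - f(x1, x2)) < 1e-4
```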
Statement of the technical lemma
--------------------------------
We can now state the “main technical lemma”:
\[thm:main\_technical\] In the notation of the previous section, write $\varphi_{0}=\varphi-f$. Note that $\varphi_0$ is a smooth function on ${{\mathbb{R}}}_{>\kappa}^k \times K$. Then
1. \[main\_technical\_bounddiff\] the function ${\lvert\varphi_{0}\rvert}$ is bounded on ${{\mathbb{R}}}_{>\kappa'}^k \times K$ for some $\kappa' \ge
\kappa$. The recession of $\varphi $ exists and is equal to $f$. In particular, ${\operatorname{rec}} \varphi$ is independent of the parameter $\lambda$;
2. \[main\_technical\_fbound\] the function $f$ is bounded on the open simplex $\Delta^0 = \{(x_1, \ldots, x_k) \in {{\mathbb{R}}}_{>0}^k :
\sum_{i=1}^k x_i = 1\}$;
3. \[main\_technical\_k=1\] when $k=1$,
1. the function $\varphi_0 \colon {{\mathbb{R}}}_{>\kappa} \times K {\rightarrow}{{\mathbb{R}}}$ extends continuously to a function from ${\overline{{{\mathbb{R}}}_{>\kappa}}}
\times K$ to ${{\mathbb{R}}}$, where by ${\overline{{{\mathbb{R}}}_{>\kappa}}}$ we denote ${{\mathbb{R}}}_{>\kappa} \sqcup \{\infty\}$ with the natural topology;
2. the derivatives of $\varphi_{0}$ satisfy the estimates $$\frac{\partial \varphi_0}{\partial x_1} = O(x_1^{-2}) \quad \textrm{and} \quad
\frac{\partial^2 \varphi_0}{\partial x_1^2} = O(x_1^{-3}),$$ as $x_1 \to \infty$, where the implicit constant is uniform in $K$.
\[exm:1\] When $k>1$, in general we cannot extend $\varphi_{0}$ to a continuous function on ${\overline{{{\mathbb{R}}}_{>\kappa}}}^{k}
\times K$ as the following example shows. Put $g=1$, $k=2$, $m=0$, ${A}_{1}={A}_{2}=1$, ${c}_{1}=1$, ${c}_{2}=2$, ${B}=0$, $\kappa=1$ and ${a}=1$. Then $$\varphi_{0}=\varphi-f=\frac{2(x_{1}+2x_{2})+1}{x_1+x_2}.$$ The sequences $\{(n,n)\}_{n\ge 1}$ and $\{(n,2n)\}_{n\ge 1}$ converge, when $n\to \infty$, to the point $(\infty,\infty)\in
{\overline{{{\mathbb{R}}}_{>1}}}^{2}$. Nevertheless $$\lim_{n\to \infty} \varphi_{0}(n,n)=3,\qquad
\lim_{n\to \infty} \varphi_{0}(n,2n)=\frac{10}{3},$$ showing that $\varphi_{0}$ cannot be continuously extended to ${\overline{{{\mathbb{R}}}_{>1}}}^{2}$.
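The two limits are readily confirmed numerically; the following Python check uses exactly the data of the example ($g=1$, $k=2$, $A_1=A_2=1$, $c_1=1$, $c_2=2$, $B=0$, $a=1$).

```python
# phi and f for the data of Example exm:1; phi0 = phi - f has the closed
# form (2*(x1 + 2*x2) + 1)/(x1 + x2) computed in the text.

def phi(x1, x2):
    return (x1 + 2 * x2 + 1) ** 2 / (x1 + x2)

def f(x1, x2):
    return (x1 + 2 * x2) ** 2 / (x1 + x2)

def phi0(x1, x2):
    return phi(x1, x2) - f(x1, x2)

n = 10 ** 8
assert abs(phi0(n, n) - 3) < 1e-6            # limit along (n, n) is 3
assert abs(phi0(n, 2 * n) - 10 / 3) < 1e-6   # limit along (n, 2n) is 10/3

# sanity check of the closed form from the text
assert abs(phi0(5.0, 7.0) - (2 * (5.0 + 14.0) + 1) / 12.0) < 1e-12
```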
Before starting the proof of Theorem \[thm:main\_technical\] we recall a few easy statements related to Schur complements and inverting a symmetric block matrix. For a symmetric block matrix $$M = \begin{pmatrix}[c|c] A & B \\
\hline
B^t & C \\
\end{pmatrix}$$ with $C$ invertible we call $A - BC^{-1}B^t$ the *Schur complement* of the block $C$ in $M$. We have a product decomposition $$M = \begin{pmatrix}[c|c] A & B \\
\hline
B^t & C \\
\end{pmatrix} = \begin{pmatrix}[c|c] 1 & BC^{-1} \\
\hline
0 & 1 \\
\end{pmatrix}
\begin{pmatrix}[c|c] A - BC^{-1}B^t & 0 \\
\hline
0 & C \\
\end{pmatrix}
\begin{pmatrix}[c|c] 1 & 0 \\
\hline
C^{-1}B^t & 1 \\
\end{pmatrix}
\, .$$ In particular, $M$ is invertible if and only if $A-BC^{-1}B^t$ is invertible, and if these conditions are satisfied we have $$M^{-1} = \begin{pmatrix}[c|c] (A-BC^{-1}B^t)^{-1} & -(A-BC^{-1}B^t)^{-1}BC^{-1} \\
\hline
-C^{-1}B^t(A-BC^{-1}B^t)^{-1} & C^{-1} + C^{-1}B^t(A-BC^{-1}B^t)^{-1}BC^{-1} \\
\end{pmatrix} \, .$$ Also, if $M$ is positive semidefinite, then so is the Schur complement $A - BC^{-1}B^t$.
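The block inversion formula can be verified directly in the simplest case of $1$-by-$1$ blocks (our own numerical check, with sample entries):

```python
# M = [[A, B], [B, C]] with C invertible; Schur complement S = A - B*C^{-1}*B.
# We build M^{-1} from the displayed formula and check M * M^{-1} = identity.

A, B, C = 5.0, 2.0, 1.0
S = A - B * C ** -1 * B          # Schur complement of C in M: 5 - 4 = 1

inv = [[S ** -1,                 -(S ** -1) * B * C ** -1],
       [-(C ** -1) * B * S ** -1, C ** -1 + C ** -1 * B * S ** -1 * B * C ** -1]]

M = [[A, B], [B, C]]
for i in range(2):
    for j in range(2):
        entry = sum(M[i][k] * inv[k][j] for k in range(2))
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-12
```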
Proof of the technical lemma
----------------------------
First we observe that, if $k=0$, then $\varphi$ is a continuous function on a compact set, hence is bounded. Moreover, the function $f$ is zero. Thus the statements are trivially true and we are reduced to the case $k>0$ and hence $g>0$.
Assume that we have already shown that ${\lvert\varphi-f\rvert}$ is bounded on ${{\mathbb{R}}}_{>\kappa'}^k \times K$. Then, for each $(x_1,\ldots,x_k,\lambda) \in {{\mathbb{R}}}_{>\kappa'}^k \times K$ we have $$\lim_{\mu \to \infty} \frac{1}{\mu} \varphi(\mu x_1,\ldots,\mu
x_k,\lambda) = \lim_{\mu \to \infty} \frac{1}{\mu} f(\mu
x_1,\ldots,\mu x_k) \, .$$ The latter limit exists and is equal to $f(x_1,\ldots,x_k)$ by weight-one-homogeneity of $f$. Thus the recession function of $\varphi$ exists and agrees with $f$. In consequence, in order to prove Theorem \[thm:main\_technical\].\[main\_technical\_bounddiff\] and \[thm:main\_technical\].\[main\_technical\_fbound\] we only need to show the boundedness of ${\lvert\varphi-f\rvert}$ and of $f$ on the required subsets.
We next show that we can assume a simplifying hypothesis.
\[def:1\] We say that the set of symmetric positive semidefinite matrices ${A}_{1},\dots, {A}_{k}$ satisfies the *flag condition* if ${\operatorname{Ker}}({A}_i) \subseteq {\operatorname{Ker}}({A}_{i+1})$, for $i=1,\dots,k-1$.
Consider the subset $$U = \{ 0 < x_1 \leq x_2 \leq \cdots \leq x_k \} \subset {{\mathbb{R}}}_{>0}^k \, .$$ Since $${{\mathbb{R}}}_{>\kappa}^k = \bigcup_{\sigma \in {{\mathfrak{S}}}_k} \left( \sigma^{-1}U \cap {{\mathbb{R}}}_{>\kappa}^k \right)$$ and $$\Delta^0 = \bigcup_{\sigma \in {{\mathfrak{S}}}_k} \left( \sigma^{-1}U \cap \Delta^0 \right) \, ,$$ by symmetry it is enough to prove the boundedness of $|\varphi-f|$ in $U \cap {{\mathbb{R}}}_{>\kappa}^k$ and of $f$ in $U \cap \Delta^0$. Writing $y_1=x_1$, $y_i=x_i-x_{i-1}$ for $i=2,\ldots,k$ we find that $x_i = \sum_{j=1}^i y_j$ and that $U \cap {{\mathbb{R}}}_{>\kappa}^k$ is parametrized by the set $y_1 > \kappa$, $y_2, \ldots, y_k \geq 0$.
Note that $$\sum_{i=1}^k x_i {A}_i = \sum_{i=1}^k y_i \sum_{j=i}^k {A}_j
\quad\text{and}\quad
\sum_{i=1}^k x_i {A}_i{c}_i = \sum_{i=1}^k y_i \sum_{j=i}^k
{A}_j{c}_j.$$
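These two summation identities are a simple Abel-summation rearrangement; the following quick check (our own illustration, in the scalar case $g=1$, $k=3$ with sample values) confirms them numerically.

```python
# Change of variables y1 = x1, y_i = x_i - x_{i-1} on an increasing tuple x;
# check that sum x_i A_i = sum y_i Atilde_i with Atilde_i = sum_{j >= i} A_j,
# and similarly for the sums involving the c_i.

x = [1.5, 2.0, 3.5]                       # a point of U: increasing entries
A = [4.0, 1.0, 2.0]                       # sample "matrices" (scalars here)
c = [0.5, -1.0, 3.0]

y = [x[0]] + [x[i] - x[i - 1] for i in (1, 2)]
Atilde = [sum(A[j] for j in range(i, 3)) for i in range(3)]
Actilde = [sum(A[j] * c[j] for j in range(i, 3)) for i in range(3)]

assert abs(sum(x[i] * A[i] for i in range(3)) -
           sum(y[i] * Atilde[i] for i in range(3))) < 1e-12
assert abs(sum(x[i] * A[i] * c[i] for i in range(3)) -
           sum(y[i] * Actilde[i] for i in range(3))) < 1e-12
```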
Writing $\tilde{{A}}_i = \sum_{j=i}^k {A}_j$ we have that $\mathrm{Ker}\, \tilde{{A}}_i \subseteq \mathrm{Ker}\,
\tilde{{A}}_{i+1}$. Moreover we have ${\operatorname{Im}}(\tilde{{A}}_{i})=\sum_{j=i}^{k} {\operatorname{Im}}( {A}_j )$.
We first observe that, if ${A}$ is a symmetric positive semidefinite real matrix, then ${A}x=0$ if and only if $x^t{A}x=0$. Indeed, clearly ${A}x=0$ implies $x^{t}{A}x=0$. Conversely, assume that $x^{t}{A}x=0$ and let $y$ be any vector. Then, for all $\lambda \in {{{\mathbb{R}}}}$, $$0\le (y+\lambda x)^{t}{A}(y+\lambda x)=y^{t}{A}y+2\lambda y^{t}{A}x$$ which implies that $y^{t}{A}x=0$. Therefore ${A}x=0$.
We show that this observation implies that ${\operatorname{Ker}}\tilde
{A}_{i}=\bigcap_{j=i}^{k}{\operatorname{Ker}}{A}_{j}$. We have $x\in {\operatorname{Ker}}\tilde
{A}_{i}$ if and only if $$0=x^{t}\tilde {A}_{i}x=\sum_{j=i}^{k}x^{t}{A}_{j}x.$$ Since the matrices ${A}_{j}$ are positive semidefinite this implies that $x^{t}{A}_{j}x=0$, $j=i,\dots,k$. Therefore $x\in
\bigcap_{j=i}^{k}{\operatorname{Ker}}{A}_{j}$. The converse is clear. As a result $$\mathrm{Ker}\, \tilde{{A}}_i =\bigcap_{j=i}^{k}{\operatorname{Ker}}{A}_{j} \subseteq \bigcap_{j=i+1}^{k}{\operatorname{Ker}}{A}_{j} = \mathrm{Ker}\,
\tilde{{A}}_{i+1}.$$
Since, for a symmetric positive semidefinite matrix ${A}$, the image ${\operatorname{Im}}({A})$ is the orthogonal complement of ${\operatorname{Ker}}({A})$ we deduce $${\operatorname{Im}}(\tilde{{A}}_{i})={\operatorname{Ker}}(\tilde{{A}}_{i})^{\perp}=\Big(\bigcap_{j=i}^{k}{\operatorname{Ker}}( {A}_{j})\Big)^{\perp}=\sum_{j=i}^{k}{\operatorname{Ker}}( {A}_{j})^{\perp}=
\sum_{j=i}^{k}{\operatorname{Im}}( {A}_{j}) \, .$$ This proves the lemma.
It follows from the Lemma that there exist vectors $\tilde{{c}}_i \in {{\mathbb{R}}}^g$ such that $$\sum_{j=i}^k {A}_j{c}_j = \tilde{{A}}_i \tilde{{c}}_i \, .$$ Replacing ${A}_i$ by $\tilde{{A}}_i$, $x_i$ by $y_i$ and ${c}_i$ by $\tilde{{c}}_i$ we are reduced to proving the boundedness of ${\lvert\varphi-f\rvert}$ on ${{\mathbb{R}}}_{>\kappa} \times {{\mathbb{R}}}_{\ge 0}^{k-1} \times K$ and of $f$ on the set $$\{ (x_1,\ldots,x_k) \in {{\mathbb{R}}}_{\ge 0}^k : \, x_1 >0 \, , \, x_i \geq 0
\ \textrm{for all} \ i>1,\ \sum _{i=1}^{k}(k-i+1)x_{i}=1 \}$$ under the extra hypothesis that the matrices ${A}_{1},\dots,{A}_{k}$ satisfy the flag condition from Definition \[def:1\]. Clearly, by the homogeneity of $f$ it is enough to prove the boundedness of $ f$ on the set $$H=\{ (x_1,\ldots,x_k) \in {{\mathbb{R}}}_{\ge 0}^k : \, x_1 >0 \, , \, x_i \geq 0
\ \textrm{for all} \ i>1,\ \sum _{i=1}^{k}x_{i}=1 \}.$$
From now on we assume the flag condition and we write $r_i = \mathrm{rk}({A}_i)$. Then $r=r_1 \ge \cdots \ge r_k\ge 1$. Thanks to the flag condition, we can assume furthermore that the basis of ${{{\mathbb{R}}}}^{g}$ has been chosen in such a way that $${A}'_i = \begin{pmatrix}[c|c] {A}_i''& 0\\
\hline
0 & 0\\
\end{pmatrix},$$ with ${A}_i''\in S^{++}_{r_{i}}({{{\mathbb{R}}}})$.
The following is our main estimate.
\[main\_estimate\] There exists a constant $c$ such that for all $1 \leq \alpha, \beta \leq r$ and all $(x_1,\ldots,x_k)
\in {{\mathbb{R}}}_{>0} \times {{\mathbb{R}}}_{\geq 0}^{k-1}$ we have the following bound on the $\alpha, \beta$-entry in the inverse of the $r$-by-$r$ matrix $\sum_{i=1}^k x_i {A}'_i$: $$\left|\Big( \sum_{i=1}^k x_i {A}'_i \Big)^{-1}_{\alpha \beta}\right| \leq
\frac{c}{\displaystyle \sum_{j \colon r_j \geq \min(\alpha,\beta)} x_j} \leq
\frac{c}{x_1} .$$
This follows immediately, via the formula $\big(A^{-1}\big)_{\alpha\beta} = {\operatorname{cof}}_{\beta,\alpha}(A)/{\operatorname{det}}(A)$, from two intermediate results:
There exists a constant $c_{1}>0$ such that for all $(x_1,\ldots,x_k)
\in {{\mathbb{R}}}_{>0} \times {{\mathbb{R}}}_{\geq 0}^{k-1}$ we have the bound $${\operatorname{det}}\Big(\sum_{i=1}^k x_i {A}_i' \Big)\ge c_{1}
\prod_{j=1}^r \sum_{i:r_i \ge j}x_i > 0.$$
To prove this claim, define the $r$-by-$r$ matrix $$J_i = \begin{pmatrix}[c|c] {\operatorname{Id}}_{r_i}& 0\\
\hline
0 & 0\\
\end{pmatrix}.$$ Since ${A}''_{i}$ is positive definite, there exists $\epsilon >0$ such that for all $i$, we have that ${A}_i' - \epsilon J_i$ is positive semidefinite. Then $$\sum_i x_i{A}_i' = \sum_i x_i({A}_i' - \epsilon J_i) + \sum_i x_i \epsilon J_i,$$ so we find $${\operatorname{det}} \Big(\sum_i x_i{A}_i'\Big) \ge {\operatorname{det}} \Big(\sum_i x_i
\epsilon J_i\Big) = \epsilon^r \prod_{j=1}^r \sum_{i:r_i \ge j}x_i >
0$$ as required. The second intermediate result is as follows:
There exists a constant $c_{2} > 0$ such that for all $1 \le \alpha,
\beta \le r$ and all $(x_1,\ldots,x_k) \in {{\mathbb{R}}}_{>0} \times
{{\mathbb{R}}}_{\geq 0}^{k-1}$ we have the following bound on the cofactors of the matrix $\sum_{i=1}^k x_i {A}'_i$: $$\left|{\operatorname{cof}}_{\alpha, \beta}\Big(\sum_{i=1}^k x_i{A}'_i\Big)
\right|\le c_{2} \prod_{\stackrel{\alpha '=1}{\alpha ' \neq
\min(\alpha, \beta)}}^r \sum_{i : r_i \ge \alpha ' } x_i.$$
To prove this second claim, write $A=\sum_i x_i {A}_i'$. Then there is a constant $c_3$ such that for $1
\leq \alpha' ,\beta' \leq r$ one has $$\big| A_{\alpha' ,\beta' } \big| \leq c_{3} \sum_{i : r_i \geq \max(\alpha' ,\beta' )} x_i \leq
c_{3} \sum_{i : r_i \geq \alpha' } x_i .$$ Let $\sigma \colon \{ 1,\ldots,\hat{\alpha},\ldots,r\} {\xrightarrow{\sim}}\{ 1,\ldots,\hat{\beta},\ldots,r\}$ be a bijection (the $\hat{}$ means “omit”). Then $$\prod_{\alpha ' \neq \alpha} \big| A_{\alpha ',\sigma(\alpha
')}\big | \leq
c_{3}^{r-1} \prod_{\alpha ' \neq \alpha} \sum_{i : r_i \geq \alpha '} x_i$$ and since $A_{\alpha ',\sigma(\alpha ')}=A_{\sigma(\alpha '),\alpha '}$ we also have $$\prod_{\alpha ' \neq \alpha} \big| A_{\alpha ',\sigma(\alpha ')}\big| \leq
c_{3}^{r-1} \prod_{\alpha ' \neq \beta} \sum_{i : r_i \geq \alpha '} x_i \, .$$ Choosing the smaller upper bound of the two we find $$\prod_{\alpha ' \neq \alpha}\big| A_{\alpha ',\sigma(\alpha ')} \big| \leq
c_{3}^{r-1} \prod_{\alpha ' \neq \min(\alpha,\beta)} \sum_{i : r_i \geq \alpha '} x_i$$ and hence $$\big|{\operatorname{cof}}_{\alpha, \beta}(A)\big| \leq (r-1)!c_{3}^{r-1}
\prod_{\alpha ' \neq \min(\alpha,\beta)} \sum_{i : r_i \geq \alpha '} x_i \, .$$ This proves the second claim and, consequently, Lemma \[main\_estimate\].
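As a quick numerical illustration of the resulting estimate (not part of the proof), the sketch below checks the bound on the inverse entries for two arbitrarily chosen $2\times 2$ matrices with ranks $r_1=2$, $r_2=1$; the constant $c=5$ is an ad hoc choice for this example, not the constant produced by the proof.

```python
def inv2(M):
    # inverse of a 2x2 matrix
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

A1 = [[2.0, 1.0], [1.0, 2.0]]   # rank 2, positive definite
A2 = [[1.0, 0.0], [0.0, 0.0]]   # rank 1 "flag" block
ranks = [2, 1]
c = 5.0                          # ad hoc constant for this example

for x1 in [0.01, 1.0, 50.0]:
    for x2 in [0.0, 0.5, 200.0]:
        M = [[x1*A1[i][j] + x2*A2[i][j] for j in range(2)] for i in range(2)]
        Minv = inv2(M)
        for a in range(2):
            for b in range(2):
                # sum over j with r_j >= min(alpha, beta), 1-indexed
                denom = sum(x for x, r in [(x1, ranks[0]), (x2, ranks[1])]
                            if r >= min(a, b) + 1)
                assert abs(Minv[a][b]) <= c / denom <= c / x1 + 1e-12
```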
From Lemma \[main\_estimate\], together with the observation that the $\alpha$-th entry of ${A}'_i{c}'_i$ vanishes unless $r_i \ge \alpha$, we deduce the existence of a constant $c_{4}>0$ such that, for all $1\le \alpha ,\beta \le r$, $$\begin{aligned}
\left|\left( \sum x_i {A}'_i {c}'_i \right)_\alpha \left( \sum x_i
{A}'_i\right)^{-1}_{\alpha,\beta} \left( \sum x_i {A}'_i
{c}'_i \right)_{\beta} \right|
& \leq c_{4} \cdot \frac{\left(\sum_{j \colon r_j \geq \alpha} x_j\right) \left( \sum_{i
\colon r_i \geq \beta} x_i \right) }{\sum_{j \colon r_j \geq
\min(\alpha,\beta)} x_j } \\
& = c_{4} \cdot \sum_{j \colon r_j \geq \max(\alpha,\beta)} x_j
\end{aligned}$$ and hence $$0 \leq f = \sum_{\alpha,\beta} \left( \sum x_i {A}'_i {c}'_i \right)_\alpha \left( \sum x_i {A}'_i\right)^{-1}_{\alpha,\beta} \left( \sum x_i {A}'_i {c}'_i \right)_{\beta} \leq c_{4} \sum_{\alpha,\beta}
\sum_{j \colon r_j \geq \max(\alpha,\beta)} x_j \, .$$ This is clearly bounded on $H$. This proves Theorem \[thm:main\_technical\].\[main\_technical\_fbound\].
We start by noting that $$P = \begin{pmatrix}[c|c] \sum x_i {A}_i' + {B}_{11} & {B}_{12}\\
\hline
{B}_{21} & {B}_{22} \\
\end{pmatrix} \, ,$$ with ${B}_{22}$ invertible. Moreover, as $P$ is invertible, so is the Schur complement $\sum x_i {A}'_i + {B}_{11} -
{B}_{12}{B}_{22}^{-1}{B}_{21} $ of ${B}_{22}$ in $P$. If we put $$Q = \left( \sum x_i {A}'_i + {B}_{11} - {B}_{12}{B}_{22}^{-1}{B}_{21} \right)^{-1}$$ then we get $$\label{oominverse} P^{-1} = \begin{pmatrix}[c|c] Q& -Q{B}_{12}{B}_{22}^{-1} \\
\hline
-{B}_{22}^{-1}{B}_{21}Q & {B}_{22}^{-1} + {B}_{22}^{-1}{B}_{21}Q{B}_{12}{B}_{22}^{-1} \\
\end{pmatrix} \, .$$ Write $A = \sum x_i {A}'_i$ and $M= {B}_{11} - {B}_{12}{B}_{22}^{-1}{B}_{21}$ so that $Q = (A+M)^{-1}$. Recall that $A$ is invertible, so that $Q=(\mathrm{Id}_r+A^{-1}M)^{-1}A^{-1}$.
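The block-inverse formula (\[oominverse\]) is straightforward to verify in the simplest case where all four blocks are scalars, so that the Schur complement is just a number. The sketch below does this with arbitrary sample values (a numerical illustration only).

```python
# Scalar 1+1 block version of the formula for P^{-1}; values are samples.
p11, p12, p21, p22 = 3.0, 1.0, 1.0, 2.0    # symmetric: p12 == p21
Q = 1.0 / (p11 - p12 * p21 / p22)          # inverse of the Schur complement
Pinv = [[Q, -Q * p12 / p22],
        [-p21 * Q / p22, 1.0/p22 + p21 * Q * p12 / p22**2]]
P = [[p11, p12], [p21, p22]]

# check that P * Pinv is the identity matrix
prod = [[sum(P[i][k] * Pinv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```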
\[claim:seriesQ\] There exists a $\kappa' > \kappa$ such that the series $$A^{-1} - A^{-1}MA^{-1} + A^{-1}MA^{-1}MA^{-1} + \cdots + (-1)^m (A^{-1}M)^mA^{-1} + \cdots$$ converges to $Q$ on ${{\mathbb{R}}}_{>\kappa'} \times {{\mathbb{R}}}_{\geq 0}^{k-1} \times K$.
The entries of the matrix $M$ are continuous functions on the compact set $K$, hence bounded. Let $c$ be the constant of Lemma \[main\_estimate\], choose $$\kappa' > \max( c r^2 \max_{\alpha,\beta}|M_{\alpha\beta}|,c,\kappa )$$ and put $\epsilon = c r^2\max_{\alpha,\beta}|M_{\alpha\beta}|/\kappa'$. Note that $0
\le \epsilon < 1$. Moreover, by Lemma \[main\_estimate\] and the condition $x_1 > \kappa'$, $$\left( {\lvert(A^{-1}M)^mA^{-1}\rvert} \right)_{\alpha \beta}
\leq \frac{c}{x_1} \frac{\big(c r^{2}\max_{\alpha,\beta}|M_{\alpha \beta}|\big)^{m}}{\kappa'^m} \le \epsilon^m .$$ It follows that the series converges absolutely. By construction, the limit of the series is $(A+M)^{-1}=Q$, finishing the proof of the claim.
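In the scalar case the claim reduces to the geometric series for $(A+M)^{-1}$; the following sketch (with arbitrary sample values of $A$ and $M$, chosen so that $|M/A|<1$) illustrates the convergence.

```python
A, M = 5.0, 1.0                     # sample scalars with |M/A| < 1
Q = 1.0 / (A + M)
partial, term = 0.0, 1.0 / A        # term_m = (-1)^m (A^{-1} M)^m A^{-1}
for m in range(60):
    partial += term
    term *= -M / A                  # next term of the alternating series
assert abs(partial - Q) < 1e-12
```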
Write $M_{1}=(\mathrm{Id}_r+MA^{-1})^{-1}$ and $M_{2}=(\mathrm{Id}_r+A^{-1}M)^{-1}$. Then $Q = A^{-1}M_1 = M_2
A^{-1}$. An argument similar to that of Claim \[claim:seriesQ\] shows that the entries of $M_1$ and $M_2$ are bounded on the set ${{\mathbb{R}}}_{> \kappa'} \times {{\mathbb{R}}}_{\ge 0}^{k-1} \times K$.
We deduce from Lemma \[main\_estimate\] that there is a constant $c_{2}$ such that $${\lvert Q_{\alpha \beta}\rvert} \le \frac{c_{2}}{\sum_{j \colon r_j \geq
\min(\alpha,\beta)}x_j}$$ on the same set. It follows that $$\left| \Big( Q (\sum x_i {A}'_i {c}'_i) \Big)_\beta\right| \quad \textrm{and}\quad
\left| \Big( (\sum x_i {A}'_i {c}'_i)^t Q \Big)_\alpha\right|$$ are bounded on ${{\mathbb{R}}}_{>\kappa'} \times {{\mathbb{R}}}_{\ge 0}^{k-1} \times
K$. Moreover, since $Q-A^{-1} = A^{-1}M_3 A^{-1}$, where $M_3$ again has bounded entries, we deduce that there is another constant $c_{3}$ such that $$\left|\left( Q- A^{-1} \right)_{\alpha \beta} \right|\le \frac{c_{3}}{ \left( \sum_{j \colon r_j \ge \alpha} x_j \right)\left( \sum_{i \colon r_i \ge \beta} x_i\right) },$$ and consequently $$\left| \left( \sum x_i {A}'_i {c}'_i \right)^t \left( Q-A^{-1}
\right) \left( \sum x_i {A}'_i {c}'_i \right) \right|$$ is bounded. Finally, to prove that $|\varphi-f|$ is bounded we compute $$\begin{gathered}
\varphi-f= \left( \sum x_i {A}'_i {c}'_i \right)^t \left(
Q-A^{-1} \right) \left( \sum x_i {A}'_i {c}'_i
\right)
+ 2 {a}_{1}^{t}Q\left( \sum x_i {A}'_i {c}'_i
\right)\\ + {a}_{1}^{t}Q{a}_{1}
- 2 {a}_{2}^{t}{B}_{22}^{-1}{B}_{21}Q\left( \sum x_i {A}'_i {c}'_i
\right)\\
- 2 {a}_{2}^{t}{B}_{22}^{-1}{B}_{21}Q{a}_1
+ {a}_{2}^{t}({B}_{22}^{-1} + {B}_{22}^{-1}{B}_{21}Q{B}_{12}{B}_{22}^{-1}){a}_{2}\end{gathered}$$ and we use the previously obtained bounds. This proves Theorem \[thm:main\_technical\].\[main\_technical\_bounddiff\].
From now on we assume that $k=1$ so we have $\varphi \colon {{\mathbb{R}}}_{>\kappa}
\times K \to {{\mathbb{R}}}$ and $f \colon {{\mathbb{R}}}_{>0} \to
{{\mathbb{R}}}$. Explicitly, $$\varphi(x_1,\lambda) = ({A}_1x_1{c}_1 + {a})^t P^{-1}({A}_1x_1{c}_1+{a}) \,$$ with $P = {A}_1x_1 + {B}$, and $f={c}_1^t {A}_1 {c}_1x_1$. Recall that we write $\varphi_0 = \varphi-f$. Put $w_0 = {a}-{B}{c}_1$.
\[simplifyg\] We have $$\varphi_0(x_1, \lambda) = 2{a}^t {c}_1 - {c}_1^t{B}{c}_1 + w_0^tP^{-1}w_0.$$
We compute $$\begin{split}
\varphi_0(x_1, \lambda) & = ({A}_1x_1{c}_1 + {a})^t
P^{-1}({A}_1x_1{c}_1+{a}) -
{c}_1^t {A}_1 {c}_1x_1 \\
& = (w_{0}+P {c}_{1})^{t} P^{-1} (w_{0}+P
{c}_{1})-{c}_1^t {A}_1 {c}_1x_1\\
&= w_{0}^{t} P^{-1} w_{0}+ 2 {c}_{1}^{t}w_0+ {c}_1^t P {c}_1-{c}_1^t
{A}_1 {c}_1x_1\\
&= w_{0}^{t} P^{-1} w_{0}+2 {c}_{1}^{t}{a}-{c}_1^t {B}{c}_1.
\end{split}$$
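For $g = k = 1$ all quantities in Lemma \[simplifyg\] are scalars and the identity can be checked directly; the sketch below does this for arbitrary sample data (a numerical illustration, not part of the proof).

```python
A1, c1, a, B = 2.0, 3.0, 1.0, 4.0   # arbitrary scalar sample data
for x1 in [1.0, 10.0, 100.0]:
    P = A1 * x1 + B
    phi = (A1 * x1 * c1 + a)**2 / P          # phi(x1)
    f = c1**2 * A1 * x1                      # the recession part
    w0 = a - B * c1
    rhs = 2*a*c1 - c1**2*B + w0**2 / P       # right-hand side of the lemma
    assert abs((phi - f) - rhs) < 1e-8
```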
We continue to assume that $k=1$. It follows that ${A}_1'$ is invertible.
\[expandoominverse\] In the above notation and with $k=1$, we have $$P^{-1} = \begin{pmatrix}[c|c] 0&0\\
\hline
0 & {B}_{22}^{-1}\\
\end{pmatrix} + \frac{1}{x_1}
\begin{pmatrix}[c|c] {A}'^{-1}_1 & -{A}'^{-1}_1 {B}_{12}{B}_{22}^{-1} \\
\hline
-{B}_{22}^{-1}{B}_{21} {A}'^{-1}_1 & {B}_{22}^{-1} {B}_{21} {A}'^{-1}_1 {B}_{12} {B}_{22}^{-1} \\
\end{pmatrix}
+ O(x_1^{-2})$$ and $$P^{-1}{A}_1 = \frac{1}{x_1} \begin{pmatrix}[c|c] {\operatorname{Id}}_r&0\\
\hline
-{B}_{22}^{-1}{B}_{21} & 0 \\
\end{pmatrix} + O(x_1^{-2})$$ as $x_1 \to \infty$, where the implicit constants are uniform in $K$.
From equation (\[oominverse\]) we obtain $$P^{-1} =
\begin{pmatrix}[c|c] 0&0\\
\hline
0 & {B}_{22}^{-1}\\
\end{pmatrix} + \begin{pmatrix}[c|c] Q & -Q{B}_{12}{B}_{22}^{-1} \\
\hline
-{B}_{22}^{-1}{B}_{21} Q & {B}_{22}^{-1} {B}_{21} Q {B}_{12} {B}_{22}^{-1} \\
\end{pmatrix} \, .$$ Also recall that $Q = (\mathrm{Id}_r + A^{-1}M)^{-1} A^{-1}$ with $A =
{A}_1'x_1$ and $M$ bounded. This yields $Q =x_1^{-1}
{A}'^{-1}_1+ O(x_1^{-2})$ as $x_1 \to \infty$. The first estimate readily follows. Upon recalling that $${A}_1 = \begin{pmatrix}[c|c] {A}_1'&0\\
\hline
0 & 0 \\
\end{pmatrix}$$ the second estimate also follows.
To finish the proof of Theorem \[thm:main\_technical\].\[main\_technical\_k=1\], note that combining Lemma \[simplifyg\] and Lemma \[expandoominverse\] gives $$\varphi_0(x_1,\lambda) = 2{a}^t {c}_1-{c}_1^t{B}{c}_1 + w_0^t \begin{pmatrix}[c|c] 0&0\\
\hline
0 & {B}_{22}^{-1}\\
\end{pmatrix} w_0 + O(x_1^{-1})$$ as $x_1 \to \infty$. From this it is immediate that $\varphi_0$ extends continuously to a function from ${\overline{{{\mathbb{R}}}_{>\kappa}}} \times K$ to ${{\mathbb{R}}}$. Next, from Lemma \[simplifyg\] we have $$\frac{ \partial \varphi_0}{\partial x_1} = -w_0^t P^{-1}
{A}_1 P^{-1} w_0 , \quad \frac{ \partial^2
\varphi_0}{\partial x_1^2} =
2w_0^t P^{-1} {A}_1 P^{-1} {A}_1 P^{-1} w_0 .$$ Combining this with Lemma \[expandoominverse\] we find the estimates $$\frac{ \partial \varphi_0}{\partial x_1} = O(x_1^{-2}) , \quad
\frac{ \partial^2 \varphi_0}{\partial x_1^2} = O(x_1^{-3}) ,$$ completing the proof of Theorem \[thm:main\_technical\].\[main\_technical\_k=1\].
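In the scalar case ($g=k=1$) these decay rates are visible directly: with $P = A_1x_1 + B$ one has $\partial\varphi_0/\partial x_1 = -w_0^2A_1/P^2$ and $\partial^2\varphi_0/\partial x_1^2 = 2w_0^2A_1^2/P^3$. The sketch below (arbitrary sample values, $B>0$) checks the $O(x_1^{-2})$ and $O(x_1^{-3})$ bounds numerically.

```python
A1, B, w0 = 2.0, 4.0, -11.0   # arbitrary scalar sample data, B > 0
for x1 in [10.0, 100.0, 1000.0]:
    P = A1 * x1 + B
    d1 = -w0**2 * A1 / P**2          # first derivative of phi_0
    d2 = 2 * w0**2 * A1**2 / P**3    # second derivative of phi_0
    # since P > A1*x1, both x1^2*|d1| and x1^3*|d2| stay bounded:
    assert abs(d1) * x1**2 <= w0**2 / A1
    assert abs(d2) * x1**3 <= 2 * w0**2 / A1
```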
On the recession function of a normlike function {#propertiesnormlike}
------------------------------------------------
Let $f \colon {{\mathbb{R}}}^k_{>0} \to {{\mathbb{R}}}$ be the recession function of the normlike function $\varphi$ associated to $(({A}_i), ({c}_i), {a}, {B})$ as above. The purpose of this section is to list a number of useful properties of $f$.
\[prop:convex\] The function $f$ is convex, that is, for all $x, y \in {{\mathbb{R}}}_{>0}^k$ and all $\lambda \in [0,1]$ we have $f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y)$.
Example 3.4 on p. 90 of [@boyd] states that for each $m \ge 1$ the function $h_m
\colon {{\mathbb{R}}}^m \times S^{++}_{m}({{\mathbb{R}}}){\rightarrow}{{{\mathbb{R}}}}$ given by $
h_m(x,Y) = x^t Y^{-1} x$ is convex. The function $f \colon {{\mathbb{R}}}_{>0}^k \to {{\mathbb{R}}}$ is the composition of $h_r$ with the linear map $${{\mathbb{R}}}_{>0}^k \to {{\mathbb{R}}}^r \times S_r^{++}({{\mathbb{R}}}) , \quad
(x_1,\ldots,x_k) \mapsto \left( \sum_{i=1}^k x_i {A}_i'{c}_i'\ ,\,
\sum_{i=1}^k x_i{A}'_i \right).$$ Since a linear map followed by a convex function is again convex, we deduce that $f$ is convex.
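The cited convexity is the classical "quadratic-over-linear" fact; in the one-dimensional case it says that $h(x,y)=x^2/y$ is jointly convex on ${{\mathbb{R}}} \times {{\mathbb{R}}}_{>0}$. The following randomized sketch spot-checks midpoint convexity (a numerical illustration only; the sample ranges are arbitrary).

```python
import random

random.seed(0)

def h(x, y):
    # quadratic-over-linear function, jointly convex for y > 0
    return x * x / y

for _ in range(1000):
    x0, x1 = random.uniform(-5, 5), random.uniform(-5, 5)
    y0, y1 = random.uniform(0.1, 5), random.uniform(0.1, 5)
    t = random.random()
    mid = h(t*x0 + (1-t)*x1, t*y0 + (1-t)*y1)
    assert mid <= t*h(x0, y0) + (1-t)*h(x1, y1) + 1e-9
```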
\[extendconvex\] The function $f$ extends to a continuous function ${\overline{f}} \colon
{{\mathbb{R}}}^k_{\ge 0} \to {{\mathbb{R}}}_{\ge 0}$. The function ${\overline{f}}$ is homogeneous of weight one and convex.
By Theorem \[thm:main\_technical\].\[main\_technical\_fbound\] we know that the function $f$ is bounded on the open standard simplex $\Delta^0$. Define $${\overline{{\overline{f}}}} \colon \Delta {\rightarrow}{{\mathbb{R}}}_{\ge 0}$$ by the formula $${\overline{{\overline{f}}}}(x_1, \ldots, x_k)= {\operatorname{inf}}_{(p_l)_l {\rightarrow}(x_1, \ldots, x_k)} {\operatorname{liminf}}_{l {\rightarrow}\infty} f(p_l);$$ here the infimum is over sequences in $\Delta^0$ tending to the point $(x_1, \ldots, x_k)$. This function ${\overline{{\overline{f}}}}$ is well-defined because $f$ is bounded on $\Delta^0$. It follows easily from the definition of ${\overline{{\overline{f}}}}$ that ${\overline{{\overline{f}}}}$ is convex and lower semi-continuous. Since $\Delta$ is a convex polytope, it follows from [@Roc70 Theorem 10.2] that ${\overline{{\overline{f}}}}$ is continuous. Now extend ${\overline{{\overline{f}}}}$ to ${{\mathbb{R}}}_{\ge 0}^k \setminus \{0\}$ by homogeneity. By sending in addition $0$ to $0$ we obtain the required continuous and convex function ${\overline{f}} \colon {{\mathbb{R}}}^k_{\ge 0} \to
{{\mathbb{R}}}_{\ge 0}$.
We can make the function ${\overline{f}}$ explicit as follows. Let $I {\subseteq}\{1, \ldots, k\}$ be any subset, and set $J = \{1, \ldots, k \} \setminus I$. We consider the restriction of ${\overline{f}}$ to the subset ${{\mathbb{R}}}_{> 0}^I
{\subseteq}{{\mathbb{R}}}_{\ge 0}^k$ given by setting $x_j$ equal to zero for all $j
\in J$. Let $$r_I = {\operatorname{rk}}\left( \sum_{i \in I} x_i {A}_i\right) \quad \text{for any choice of } x_i > 0 \, , \ i \in I \, ;$$ this rank is independent of the choice, since the image of a sum of positive semidefinite matrices with positive coefficients is the sum of the images. For $i \in I$ write $${A}_i = \begin{pmatrix}[c|c] {A}_i''& 0\\
\hline
0 & 0\\
\end{pmatrix}$$ where ${A}_i''$ has size $r_I$, and similarly $${c}_i = {\left(\begin{matrix}{c}_i''\\
\hline
\star\end{matrix}\right)},$$ where ${c}_i''$ has length $r_I$.
Note that, if $I \not = \emptyset$, then $r_I \ge 1$. Let $K
\subset {{\mathbb{R}}}_{>0}^J$ be an arbitrary compact subset. Write $x_I =
(x_i)_{i \in I}$ and $x_J = (x_j)_{j \in J}$. We define the function $$f_I \colon {{\mathbb{R}}}_{>0}^I \times K \to {{\mathbb{R}}} \, , \quad
(x_I;x_J) \mapsto f(x_1,\ldots,x_k) \, .$$ Write $${a}(x_J) = \sum_{j \in J} x_j{A}_j' {c}'_j \, , \quad {B}(x_J) =
\sum_{j \in J} {A}'_j x_j \, ,$$ then we see that $$f_I(x_I;x_J) = \left( \sum_{i \in I} x_i {A}'_i{c}'_i + {a}(x_J) \right)^t
\left( \sum_{i \in I} x_i {A}_i' + {B}(x_J) \right)^{-1} \left(
\sum_{i \in I} x_i {A}'_i{c}'_i + {a}(x_J) \right)$$ and hence by Theorem \[thm:main\_technical\] $f_I$ has a recession function ${\operatorname{rec}} f_I \colon {{\mathbb{R}}}^I_{>0} \to {{\mathbb{R}}}$ which can be written $${\operatorname{rec}} f_I(x_I) = \left(\sum_{i\in I}x_i{A}''_i{c}''_i \right)^t
\left(\sum_{i\in I}x_i{A}''_i \right)^{-1} \left(\sum_{i\in
I}x_i{A}''_i{c}''_i \right),$$ when $I\not = \emptyset$, and ${\operatorname{rec}} f_\emptyset=0$. Note that ${\operatorname{rec}} f_I$ is independent of the choice of $K$. Also note that, by Theorem \[thm:main\_technical\], ${\lvert f_I-{\operatorname{rec}} f_I \rvert}$ is bounded on ${{\mathbb{R}}}_{>0}^I \times K$.
\[restrict\_to\_faces\] Let $I {\subseteq}\{1, \ldots, k\}$ be any subset. We have $${\overline{f}}|_{{{\mathbb{R}}}_{>0}^I} = {\operatorname{rec}} f_I.$$
When $I=\emptyset$ the equality is trivially true. We assume that $I\not=\emptyset$. Choose $c \in {{\mathbb{R}}}^J_{>0}$ and $x_I \in {{\mathbb{R}}}^I_{>0}$ arbitrarily. By Theorem \[thm:main\_technical\] there exists a constant $\delta >0$ depending on $c$ and $x_I$ such that for all $\lambda \in {{\mathbb{R}}}_{>0}$ we have $${\lvert({\operatorname{rec}} f_I)(\lambda x_I) - f_I(\lambda x_I;c) \rvert} \leq \delta \, .$$ We deduce that for all $\lambda \in {{\mathbb{R}}}_{>0}$ we have $$\label{eq:1}
{\lvert({\operatorname{rec}} f_I)(x_I) - f (x_I,\frac{c}{\lambda}) \rvert} \leq
\frac{\delta}{\lambda} .$$
As ${\overline{f}}$ extends $f$ continuously we have $$\lim_{\lambda \to \infty} f(x_I,\frac{c}{\lambda}) = {\overline{f}}|_{{{\mathbb{R}}}_{>0}^I}(x_I)$$ independently of the choice of $c$. Combining with the bound (\[eq:1\]), we find that $$({\operatorname{rec}}f_I)(x_I) = {\overline{f}}|_{{{\mathbb{R}}}_{>0}^I}(x_I) ,$$ as required.
A special case of interest is when $|I|=1$. For each $1 \le i \le k$, set $${A}_i = \begin{pmatrix}[c|c] {A}_i^{{\operatorname{e}}}& 0\\
\hline
0 & 0\\
\end{pmatrix}$$ where ${A}_i^{{\operatorname{e}}}$ has size $r_i = {\operatorname{rk}} {A}_i$ and hence is positive definite; here ${\operatorname{e}}$ is short for “essential”. Similarly set $${c}_i = {\left(\begin{matrix}{c}_i^{{\operatorname{e}}}\\
\hline
\star\end{matrix}\right)},$$ where ${c}_i^{{\operatorname{e}}}$ has length $r_i$. Define $$\mu_i = {c}_i^t {A}_i {c}_i = ({c}_i^{{\operatorname{e}}})^t{A}_i^{{\operatorname{e}}}{c}_i^{{\operatorname{e}}} \, .$$ Then $\mu_i \geq 0$ and we have for all $x_i >0$ $${\overline{f}}(0,\ldots, 0, x_i, 0, \ldots, 0) = ({\operatorname{rec}} f_{\{i\}}) (x_i)
= (x_i {A}_i^{{\operatorname{e}}} {c}_i^{{\operatorname{e}}})^t (x_i {A}_i^{{\operatorname{e}}})^{-1} (x_i {A}_i^{{\operatorname{e}}} {c}_i^{{\operatorname{e}}}) = x_i \mu_i
.$$ In particular, the function ${\overline{f}}(0,\ldots, 0, x_i, 0, \ldots, 0) $ is homogeneous linear in $x_i$, and $$\mu_i = {\overline{f}}( 0,\ldots, 0, 1, 0, \ldots, 0 ) \, .$$ We call $\mu_1,\ldots,\mu_k \geq 0$ the *coefficients* associated to $\varphi$.
Proofs of the main results {#sec:proof-main-results}
==========================
In this section we prove our main results. We also reprove Lear’s result in our situation. We will continue to work with the “diagonal case” where we consider the pullback Poincaré bundle ${{\mathcal{P}}}_\nu$ associated to a single section $\nu$ of our family $\pi \colon Y \to
X$. As was explained at the beginning of Section \[sec:norm-section\], by the biextension property of the Poincaré bundle this is sufficient for the purpose of proving the main results as stated in the introduction.
Singularities of the biextension metric {#singularity}
---------------------------------------
In this section we will prove Theorem \[singbiext\].
Following Theorem \[asympt\], take
- a small enough $\epsilon >0$,
- matrices $${A}_1, \ldots, {A}_k \in S_g({{\mathbb{R}}}) \cap M_g({{\mathbb{Z}}})$$ of positive rank,
- vectors $${c}_1, \ldots, {c}_k \in {{\mathbb{Q}}}^g$$ such that ${A}_i {c}_i \in {{\mathbb{Z}}}^g$ for $i=1,\ldots,k$,
- bounded holomorphic maps $\alpha\colon \Delta_\epsilon^n {\rightarrow}{{\mathbb{C}}}^g$ and $\psi\colon \Delta_\epsilon^n {\rightarrow}{{\mathbb{P}}}^g$,
such that the multi-valued period mapping $$(\Omega,\delta) \colon U_\epsilon \cap X {\rightarrow}\mathcal{M} = {{\mathbb{H}}}_g \times {{\mathbb{C}}}^g$$ of the variation of mixed Hodge structures $\hh(\nu)$ on $U_\epsilon$ is given by the formula $$\underline{q} = (q_1, \ldots, q_n ) \mapsto \left(
\sum_{j=1}^k{A}_j\frac{\log q_j}{2\pi i} + \psi(\underline{q}),
\sum_{j=1}^k{A}_j{c}_j\frac{\log q_j}{2\pi i} +
\alpha(\underline{q}) \right)$$ (recall that $U_\epsilon$ was defined in Section \[sec:statement\_of\_main\]). Put ${a}= 2\pi {\operatorname{Im}}\alpha$, ${B}= 2\pi {\operatorname{Im}}\psi$, and define $\kappa \in
{{\mathbb{R}}}$ via $\kappa = -\log \epsilon$. As above define the function $\varphi \colon {{\mathbb{R}}}_{>\kappa}^k \times
\Delta_{\epsilon}^n \to {{\mathbb{R}}}_{\ge 0}$ via $$\varphi (x_1,\ldots,x_k; \underline{q} ) =
\left(\sum_{i=1}^kx_i{A}_i{c}_i + {a}\right)^t
\left(\sum_{i=1}^kx_i{A}_i + {B}\right)^{-1}
\left(\sum_{i=1}^kx_i{A}_i{c}_i + {a}\right).$$
Choose any $0<\epsilon '<\epsilon $. The restriction of $\varphi$ to ${{\mathbb{R}}}_{>\kappa}^k \times \overline{\Delta}_{\epsilon'}^n$ is then a normlike function of dimension $k$. Let $f \colon {{\mathbb{R}}}^k_{>0} \to {{\mathbb{R}}}_{\ge 0}$ be the associated recession function $f = {\operatorname{rec}} \varphi$. Recalling the explicit expression (\[recessionexplicit\]) for $f$, the conditions $${A}_i \in S_g({{\mathbb{R}}}) \cap M_g({{\mathbb{Z}}}) \, , \quad
{A}_i {c}_i \in {{\mathbb{Z}}}^g,$$ for each $i=1,\ldots,k$ imply that $f$ is the quotient of two homogeneous polynomials in ${{\mathbb{Z}}}[x_1,\ldots,x_k]$. In particular $f \in {{\mathbb{Q}}}(x_1,\ldots, x_k)$. It is clear that $f$ is homogeneous of weight one, and by Proposition \[prop:convex\] the function $f$ is convex when seen as a real-valued function on ${{\mathbb{R}}}_{>0}^k$.
Let $s$ be a local generating section of ${{\mathcal{P}}}_\nu$ over $U_\epsilon \cap X$. Following Corollary \[explicitnorm\] we have $$-\log{\lvert\lverts\rvert\rvert} = -\log{\lverth\rvert} +
\varphi(-\log|q_1|,\ldots,-\log|q_k|;\underline{q})$$ on $U_\epsilon \cap X$ with $h$ a meromorphic function on $U_\epsilon$, holomorphic on $U_\epsilon \cap X$. As $s$ is locally generating over $U_\epsilon \cap X$ we have that $h$ has no zeroes or poles on $U_\epsilon \cap X$. Hence there is a linear form $l \in {{\mathbb{Z}}}[x_1,\ldots,x_k]$ and a holomorphic map $u \colon U_\epsilon \to {{\mathbb{C}}}^*$ such that $$-\log|h| = l(-\log|q_1|,\ldots,-\log|q_k|) + \log|u|$$ on $U_\epsilon \cap X$. The image of $\overline
U_{\epsilon '}$ under the map $u$ is compact.
Put $f_s=f+l$ in ${{\mathbb{Q}}}(x_1,\ldots,x_k)$. Then $f_s$ is again homogeneous of weight one and convex as a function on ${{\mathbb{R}}}_{>0}^k$. Our claim is that $f_s$ satisfies all the requirements of Theorem \[singbiext\]. We need to show first of all that $-\log {\lvert\lverts\rvert\rvert} - f_s(-\log|q_1|,\ldots,-\log|q_k|)$ is bounded on $\overline{U}_{\epsilon'} \cap X$ and extends continuously over $\overline{U}_{\epsilon'} \setminus D^{\mathrm{sing}}$.
In order to see this, put $\varphi_0=\varphi-f$ on ${{\mathbb{R}}}_{>\kappa}^k \times
\Delta_\epsilon^n$. Then $$-\log {\lvert\lverts\rvert\rvert}(\underline{q}) =
f_s(-\log|q_1|,\ldots,-\log|q_k|) + \log|u| +
\varphi_0(-\log|q_1|,\ldots,-\log|q_k|;\underline{q})$$ on $U_\epsilon \cap X$. Note that $\log|u|$ extends in a continuous and bounded manner over the whole of $\overline{U}_{\epsilon'}$. We are reduced to showing that $\varphi_0(-\log|q_1|,\ldots,-\log|q_k|;\underline{q})$ is bounded on $\overline{U}_{\epsilon'} \cap X$ and extends continuously over $\overline{U}_{\epsilon'} \setminus D^{\mathrm{sing}}$.
For this we invoke Theorem \[thm:main\_technical\].\[main\_technical\_bounddiff\]. This readily gives the boundedness of $\varphi_0$ via the map $$(-\log|\cdot|,\mathrm{id}) \colon (\Delta_\epsilon^*)^k \times
\Delta_\epsilon^n \to {{\mathbb{R}}}_{>\kappa}^k \times \Delta_\epsilon^n .$$ Let $p \in (D \setminus D^\mathrm{sing})\cap \overline{U}_{\epsilon'}$. Up to a change in the order of the variables, we can assume that the coordinates of $p$ satisfy $q_1=0$, $q_i \neq 0$ for $i=2,\ldots,
k$. We take a small polydisk $V_{\epsilon''} \subset
\overline{U}_{\epsilon'}$ of small radius $\epsilon''$ with center at $p$ such that $V_{\epsilon''} \cap X$ can be identified with $\Delta_{\epsilon''}^* \times \Delta_{\epsilon''}^{n-1}$ and hence $V_{\epsilon''} \cap D$ can be identified with the divisor $q_1=0$ on $\Delta_{\epsilon''}^n$. Write $$\underline{r} = (r_2,\ldots,r_k) =
(-\log|q_2|,\ldots,-\log|q_k|)$$ for $\underline{q} \in V_{\epsilon''}$; then $\underline{r}$ can be assumed to move through a compact subset $K' \subset
{{\mathbb{R}}}^{k-1}$. Put $K'' = K' \times \overline{\Delta}_{\epsilon'}^n$. We define functions $\varphi' \colon {{\mathbb{R}}}_{>\kappa} \times K'' \to {{\mathbb{R}}}_{\geq 0}$ and $f' \colon {{\mathbb{R}}}_{>\kappa} \times K'' \to {{\mathbb{R}}}_{\ge 0}$ via $$\varphi'(x_1;\underline{r},\underline{q}) = \varphi(x_1,\underline{r};\underline{q}) \, , \quad f'(x_1;\underline{r}) = f(x_1,\underline{r}) \, .$$ Then both $\varphi'$, $f'$ are normlike of dimension one. Write $${A}_1 = \begin{pmatrix}[c|c] {A}_1'& 0\\
\hline
0 & 0\\
\end{pmatrix} \, , \quad {A}_1' = \begin{pmatrix}[c|c] {A}_1''& 0\\
\hline
0 & 0\\
\end{pmatrix} \, ,$$ with ${A}_1'$ positive semidefinite of size $r$, and ${A}_1''={A}_1^{{\operatorname{e}}}$ positive definite of size and rank $r_1$. Then it is readily verified that both ${\operatorname{rec}} f'$ and ${\operatorname{rec}} \varphi'$ are equal to the linear function $x_1\mu_1 = x_1 {c}_1^t {A}_1 {c}_1 = {\overline{f}}(x_1,0,\ldots,0)$. Note that $$\varphi_0(-\log|q_1|,\ldots,-\log|q_k|;\underline{q})
= \varphi'(-\log|q_1|;\underline{r},\underline{q}) - f'(-\log|q_1|;\underline{r})$$ on $V_{\epsilon''} \cap X$. We are done once we show that $ \varphi' - f'$ extends continuously over ${\overline{ {{\mathbb{R}}}_{>\kappa} }} \times K''$. Following Theorem \[thm:main\_technical\].\[main\_technical\_k=1\] we have that both $ \varphi' - {\operatorname{rec}} \varphi'$ and $f' - {\operatorname{rec}} f'$ extend continuously over ${\overline{ {{\mathbb{R}}}_{>\kappa} }} \times K''$. As ${\operatorname{rec}} \varphi' = {\operatorname{rec}} f'$ we find the required extension result.
The second item of Theorem \[singbiext\] is clear. As $f_s$ is up to a linear form the recession function of a normlike function we have that $f_s$ is convex, and by Proposition \[extendconvex\] that $f_s$ extends as a convex, continuous homogeneous weight one function ${\overline{f}}_s \colon {{\mathbb{R}}}_{\geq 0}^k \to {{\mathbb{R}}}$. This finally proves items (3) and (4) of Theorem \[singbiext\].
The Lear extension made explicit
--------------------------------
Write $U=U_{\epsilon'}$, $V=V_{\epsilon''}$ to reduce notation. A closer look at the above proof shows that a Lear extension ${\left[
{{\mathcal{P}}}_\nu,{\lvert\lvert-\rvert\rvert} \vphantom{{
{{\mathcal{P}}}_\nu,{\lvert\lvert-\rvert\rvert} }^{\sum}}\right]}_U$ of ${{\mathcal{P}}}_\nu$ exists: let $\mu_1,\ldots,\mu_k \in
{{\mathbb{Q}}}$ be the coefficients of $\varphi$ (see end of Section \[propertiesnormlike\] for the definition), and $\nu_i = \mathrm{ord}_{D_i} h$ for $i=1,\ldots,k$, and $a_i = \mu_i + \nu_i$. Here $D_i$ is the divisor on ${\overline{X}}$ given locally on $U$ by $q_i=0$. We obtain from the above proof that $$\label{definepsi} -\log {\lvert\lverts\rvert\rvert}(\underline{q}) =
-a_1 \log|q_1| + \psi_1(\underline{q})$$ on $V \cap X$ where $\psi_1(\underline{q})$ extends continuously over $V$. This is precisely what is needed to show the extendability of ${{\mathcal{P}}}_\nu|_{V \cap X}$ as a continuously metrized ${{\mathbb{Q}}}$-line bundle over $V$. Varying $p$ over $D \setminus D^{\mathrm{sing}}$ we get the existence of the desired continuous extension of ${{\mathcal{P}}}_\nu$ over ${\overline{X}} \setminus D^{\mathrm{sing}}$. This reproves Lear’s result in our situation.
We can be more precise here. Let $s$ be a rational section of ${{\mathcal{P}}}_\nu$ on $X$. Then $s$ can also be seen as a rational section of ${\left[ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert} \vphantom{{ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$. We can compute the global ${{\mathbb{Q}}}$-divisor ${\operatorname{div}}_{{\overline{X}}}(s)$ that represents the Lear extension ${\left[ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert} \vphantom{{ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert} }^{\sum}}\right]}_{{\overline{X}}}$ of ${{\mathcal{P}}}$ over ${\overline{X}}$. We do this after a little digression.
We say that $p \in {\overline{X}}$ is of depth $k$ if $p$ is on precisely $k$ of the irreducible divisors $D_i$. The set $\Sigma_k$ of points of depth $k$ on ${\overline{X}}$ is a locally closed subset of ${\overline{X}}$ and for $k \ge 1$ they yield a stratification of $D = {\overline{X}} \setminus
X$. For $p \in \Sigma_k$ take a coordinate neighborhood $U \subset {\overline{X}}$ such that $p=(0,\ldots,0)$ and $D \cap U$ is given by the equation $q_1\cdots q_k=0$. Assume that $p$ is away from $\overline{{\operatorname{div}}_X(s)}$, the closure in ${\overline{X}}$ of the support of the divisor ${\operatorname{div}}_X(s)$ of $s$ on $X$. Shrinking $U$ if necessary, we may assume that ${\overline{U}}\cap
\overline{{\operatorname{div}}_X(s)}=\emptyset$. Then Theorem \[singbiext\] yields an associated homogeneous weight-one function $f_{p,s} \in {{\mathbb{Q}}}(x_1,\ldots,x_k)$.
\[locconstant\] The map $\Sigma_k \setminus
\overline{{\operatorname{div}}_X(s)} {\rightarrow}{{\mathbb{Q}}}(x_1,\ldots,x_k)$ given by $p
\mapsto f_{p,s}$ is locally constant.
Take $p$, $U$ as above and let $y=(0,\ldots,0,y_{k+1},\ldots,y_n)\in U$ be another point of depth $k$. Let $q_i'=q_i$ for $i=1,\ldots, k$, $q_i'=q_i-y_i$ for $i=k+1,\ldots,n$. Then $\underline{q}'$ are coordinates centered around $y$ and we have $$\begin{split}
-\log {\lvert\lverts\rvert\rvert} & = f_{p,s}(-\log|q_1|,\ldots,-\log|q_k|) +
\psi_p(\underline{q}) \\
& = f_{y,s}(-\log|q_1'|,\ldots,-\log|q_k'|) + \psi_y(\underline{q}') \\
& = f_{y,s}(-\log|q_1|,\ldots,-\log|q_k|) + \psi_y(\underline{q}')
\end{split}$$ on $U \cap X$, where $\psi_p$, $\psi_y$ are bounded on $U \cap X$ and the last step uses $q_i'=q_i$ for $i=1,\ldots,k$. We find that $f_{p,s}-f_{y,s}$ is bounded on ${{\mathbb{R}}}_{>\kappa}^k$ and, being homogeneous of weight one, it vanishes identically.
In order to compute the divisor ${\operatorname{div}}_{{\overline{X}}}(s)$ that represents the Lear extension of ${{\mathcal{P}}}_\nu$ over ${\overline{X}}$ we are interested in the behavior of the function $
f_s \colon \Sigma_1 \setminus {\overline{\mathrm{div}_X(s)}} \to
{{\mathbb{Q}}}(x)
$ obtained from Lemma \[locconstant\] by restricting to $k=1$. Note that $\Sigma_1 = D \setminus D^{\mathrm{sing}}$. Let $D=\bigcup
_{\alpha =1}^{d}D_{\alpha }$ be the decomposition of $D$ into irreducible components. Take any irreducible component $D_{\alpha }$. Since $D_{\alpha}\setminus (D^{\mathrm{sing}}\cup
{\overline{\mathrm{div}_X(s)}})$ is connected, we deduce from Lemma \[locconstant\] that the function $$f_{s,\alpha } \colon D_\alpha \setminus (D^{\mathrm{sing}}\cup
{\overline{\mathrm{div}_X(s)}}) \to {{\mathbb{Q}}}(x)$$ is constant. Its value is a homogeneous linear function which we write as $f_{s,\alpha}(x)=a_\alpha x$, with $a_\alpha \in {{\mathbb{Q}}}$. In this notation we find:
\[learext\_explicit\] Let $s$ be any nonzero rational section of ${{\mathcal{P}}}$ on $X$. Let $L={\left[ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert} \vphantom{{ {{\mathcal{P}}},{\lvert\lvert-\rvert\rvert} }^{\sum}}\right]}_{{\overline{X}}}$ be the Lear extension of ${{\mathcal{P}}}$ over ${\overline{X}}$. Then $L$ is represented by the ${{\mathbb{Q}}}$-divisor $$\mathrm{div}_{{\overline{X}}}(s)={\overline{\mathrm{div}_{X}(s)}} + \sum_{\alpha=1}^d a_\alpha D_\alpha$$ on ${\overline{X}}$.
Local integrability
-------------------
Our next task is to investigate $\partial \bar{\partial} \log {\lvert\lverts\rvert\rvert}$ over curves.
We use the estimates from Theorem \[thm:main\_technical\].\[main\_technical\_k=1\]. We assume $k=n=1$, but otherwise keep the notation and assumptions from Section \[singularity\]. In particular we have the normlike function $\varphi(x_1,q_1)$ on ${{\mathbb{R}}}_{>\kappa}
\times \Delta_\epsilon$ and the associated recession function $f={\operatorname{rec}} \varphi$ on ${{\mathbb{R}}}_{>0}$. Put $\varphi_0=\varphi-f$. Put $\varphi_1(q_1)=\varphi_0(-\log|q_1|,q_1)$. By Corollary \[explicitnorm\] on $U_\epsilon \cap X$, noting that $f$ is linear, we have $$-\log {\lvert\lverts\rvert\rvert}(q_1) = -\log|h|(q_1) + \varphi_1(q_1)$$ for some meromorphic function $h$. Note that $$\partial \varphi_1 = -\frac{1}{2}\frac{\partial \varphi_0}{\partial
x_1} \frac{d q_1}{q_1}
+ \frac{\partial \varphi_0}{\partial q_1} dq_1 .$$ Here $\partial \varphi_0/\partial q_1$ is smooth and bounded on ${\overline{U_{\epsilon'}}}$, and by Theorem \[thm:main\_technical\].\[main\_technical\_k=1\] we have a constant $c_1$ such that $$\left| \frac{\partial \varphi_0}{\partial x_1} \right| \leq c_1 \cdot
x_1^{-2} \, .$$ Hence for a smooth vector field $T$ with bounded coefficients we find a constant $c_2$ such that $${\lvert \partial \varphi_1 (T) \rvert} \leq c_2 \cdot \frac{1}{(-\log|q_1|)^2|q_1|}$$ on $U_\epsilon \cap X$. A similar argument yields $${\lvert \bar{\partial} \varphi_1 (T) \rvert} \leq c_2 \cdot \frac{1}{(-\log|q_1|)^2|q_1|}$$ on $U_\epsilon \cap X$. In particular, there is a constant $c_3$ such that $$\left\| \int_{\partial U_{\epsilon }}\partial \varphi_1\right\|\le c_3
\frac{\epsilon }{(\log \epsilon )^2 \epsilon } = \frac{c_3}{(\log \epsilon )^2}\, .$$ Since this bound tends to zero as $\epsilon \to 0$, the residue ${\operatorname{res}}_{0}(\partial \varphi_{1})$ of $\partial \varphi_{1}$ at zero is zero.
Next, there exists a smooth $(1,1)$-form $\zeta$ on $U_\epsilon$ such that $$\partial \bar{\partial} \varphi_1 = \frac{1}{4} \frac{\partial^2 \varphi_0}{\partial x_1^2} \frac {1}{|q_1|^2} dq_1 d {\overline{q_1}} + \zeta \, .$$ By Theorem \[thm:main\_technical\].\[main\_technical\_k=1\] we have a constant $c_4$ such that $$\left| \frac{\partial^2 \varphi_0}{\partial x_1^2} \right| \leq c_4 \cdot x_1^{-3} \, .$$ Hence for smooth vector fields $T, U$ with bounded coefficients we find a constant $c_5$ and an estimate $${\lvert \partial \bar{\partial} \varphi_1 (T,U) \rvert} \leq c_5 \cdot \frac{1}{(-\log|q_1|)^3|q_1|^2}$$ on $U_\epsilon \cap X$. This shows that $\partial \bar{\partial} \varphi_1$ is locally integrable on $U_\epsilon$.
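To see why the last bound gives local integrability, note that in polar coordinates $q_1=re^{i\theta}$ the estimate contributes an integrand of order $1/(r(-\log r)^3)$ near $r=0$, which has the elementary antiderivative $1/(2(\log r)^2)$ and hence finite total mass. The following numerical sanity check of this antiderivative is purely illustrative and not part of the proof:

```python
import math

def integrand(r):
    # the bound 1/((-log r)^3 r^2) times the polar area element r dr
    return 1.0 / (r * (-math.log(r)) ** 3)

def antiderivative(r):
    # F(r) = 1/(2 (log r)^2); one checks F'(r) = integrand(r) for 0 < r < 1
    return 1.0 / (2.0 * math.log(r) ** 2)

# finite-difference check that F' matches the integrand at sample radii
for r in [1e-6, 1e-3, 0.1, 0.4]:
    h = r * 1e-6
    fd = (antiderivative(r + h) - antiderivative(r - h)) / (2 * h)
    assert abs(fd - integrand(r)) < 1e-4 * abs(integrand(r))

# since F(r) -> 0 as r -> 0, the improper integral over (0, eps] equals F(eps)
eps = 0.5
print(antiderivative(eps))
```

The vanishing of $F(r)=1/(2(\log r)^2)$ as $r\to 0$ is exactly what makes the singularity integrable despite the blow-up of the pointwise bound.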
Effectivity of the height jump divisor
--------------------------------------
In this section we prove Theorem \[theorem:effectivity\]. We continue again with the notation as in Section \[singularity\]. In particular we have $U=U_\epsilon$, $s$ a locally generating section of $\mathcal{P}_\nu$ on $U \cap X$, and $f_s \colon {{\mathbb{R}}}_{>0}^k \to {{\mathbb{R}}}$ the associated homogeneous weight one function such that $$-\log\|s\| - f_s(-\log|q_1|,\ldots,-\log|q_k|)$$ is bounded on $U \cap X$ and extends continuously over ${\overline{X}} \setminus D^\mathrm{sing}$. Moreover $f_s$ extends as a convex homogeneous weight one function ${\overline{f}}_s \colon {{\mathbb{R}}}_{\geq 0}^k \to {{\mathbb{R}}}$ (cf. Theorem \[singbiext\]). It is clear that a convex homogeneous weight one function is subadditive, hence we have the estimate $$\label{subadditivity}
{\overline{f}}_s(x_1, \ldots, x_k) \leq \sum_{i=1}^k {\overline{f}}_s(0,\ldots,0,x_i,0,\ldots,0)$$ on ${{\mathbb{R}}}^k_{\geq 0}$.
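As a concrete illustration of the subadditivity estimate (an example of ours, not taken from the text), the Euclidean norm is a convex, homogeneous weight one function on ${{\mathbb{R}}}^k_{\geq 0}$, and inequality (\[subadditivity\]) can be checked for it directly:

```python
import math
import random

# Illustrative example: the Euclidean norm is convex and homogeneous of
# weight one, so it must satisfy the subadditivity estimate
#   f(x_1,...,x_k) <= sum_i f(0,...,0,x_i,0,...,0).
def f(x):
    return math.sqrt(sum(t * t for t in x))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(0, 10) for _ in range(4)]
    coordinate_sum = sum(f([0] * i + [xi] + [0] * (3 - i))
                         for i, xi in enumerate(x))
    assert f(x) <= coordinate_sum + 1e-12   # subadditivity on random samples

# homogeneity of weight one: f(t x) = t f(x)
x = [1.0, 2.0, 3.0, 4.0]
assert abs(f([5 * t for t in x]) - 5 * f(x)) < 1e-9
print("subadditivity verified on random samples")
```

For the Euclidean norm the right-hand side is the $\ell^1$-norm, so the inequality reduces to the familiar comparison $\|x\|_2 \leq \|x\|_1$ on the positive orthant.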
Now let ${\overline{\phi}}\colon{\overline{C}}{\rightarrow}{\overline{X}}$ be a map from a smooth curve, sending a point $0$ in ${\overline{C}}$ to $p=(0,\ldots,0)$, and such that there exists an open neighbourhood $V$ of $0$ in ${\overline{C}}$ such that ${\overline{\phi}}$ maps $V$ into $U$. We also assume that ${\overline{\phi}}$ does not map $V$ into $D$. Then ${\overline{\phi}}$ is given locally at $0 \in {\overline{C}}$ by $${\overline{\phi}}(t) = (t^{m_1}u_1, \ldots, t^{m_i}u_i, \ldots) \, ,$$ where $t$ is a local coordinate on ${\overline{C}}$ at $0$, the $m_i$ are non-negative integers, and $u_i$ are units. Write $\phi$ for the restriction of ${\overline{\phi}}$ to $V \setminus \{0\}$.
\[learpullback\] We have an equality of ${{\mathbb{Q}}}$-divisors on $V$: $${\operatorname{div}}\left( \phi^*s\right)|_V = {\overline{f}}_s(m_1, \ldots, m_k)\cdot[0] \, ,$$ where $\phi^*s$ is viewed as a rational section of the Lear extension ${\left[\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}) \vphantom{{\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert})}^{\sum}}\right]}_V$.
It suffices to show that $$-\log {\lvert\lvert\phi^*s\rvert\rvert} \sim -{\overline{f}}_s(m_1,\ldots,m_k)\log|t|$$ on $V \setminus \{ 0\}$, where $\sim$ denotes that the difference is bounded and extends continuously over $V$. As by Theorem \[singbiext\] $$-\log {\lvert\lverts\rvert\rvert} - f_s(-\log|q_1|,\ldots,-\log|q_k|)$$ is bounded on $U \cap X$ we obtain the boundedness by pullback along $\phi$. The continuous extendability over $V$ then follows from the boundedness combined with the existence of a Lear extension for $\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert})$.
\[pullbacklear\] We have an equality of ${{\mathbb{Q}}}$-divisors on $V$: $$\phi^*({\operatorname{div}}_{{\overline{X}}}(s)) = \sum_{i = 1}^k{\overline{f}}_s(0,\ldots,0,m_i,0,\ldots,0)\cdot [0] \, ,$$ where $s$ is viewed as a rational section of the Lear extension ${\left[{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_U$.
This follows immediately from Corollary \[learext\_explicit\].
Combining Propositions \[learpullback\] and \[pullbacklear\] one sees that the line bundle $${\left[\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}) \vphantom{{\phi^*({{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert})}^{\sum}}\right]}_{{\overline{C}}}^{\otimes -1} \otimes {\overline{\phi}}^*{\left[{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert} \vphantom{{{{\mathcal{P}}}_\nu, {\lvert\lvert-\rvert\rvert}}^{\sum}}\right]}_{{\overline{X}}}$$ has a canonical non-zero rational section, whose divisor is $$\left( -{\overline{f}}_s(m_1, \ldots, m_k) + \sum_{i=1}^k {\overline{f}}_s(0,\ldots,0,m_i,0,\ldots,0) \right) \cdot[0]$$ on $V$, which is indeed independent of the choice of rational section $s$. This divisor is effective by the subadditivity of $f_s$ expressed by inequality (\[subadditivity\]). In particular the section is global.
[99]{}
O. Amini, S. Bloch, J. Burgos Gil, J. Fresán, *Feynman amplitudes and limits of heights*. Preprint `arxiv:1512.04862`.
M. Asakura, *Motives and algebraic de Rham cohomology*. In: The arithmetic and geometry of algebraic cycles (Banff, AB, 1998), 133–154, CRM Proc. Lecture Notes 24, 2000.
O. Biesel, D. Holmes, R. de Jong, *Néron models and the height jump divisor*. Preprint, `arxiv:1412.8207`.
S. Boyd, L. Vandenberghe, *Convex Optimization*. Cambridge University Press, 2009.
P. Brosnan, G. Pearlstein, *Jumps in the archimedean height*. Unpublished manuscript, 2006.
J. I. Burgos Gil, J. Kramer, U. Kühn, *The singularities of the invariant metric on the line bundle of Jacobi forms*. Preprint, `arxiv:1405.3075`. To appear in: M. Kerr and G. Pearlstein (eds.), Recent Advances in Hodge Theory, Cambridge Univ. Press.
P. Deligne, *Le déterminant de la cohomologie*. In: Current trends in arithmetical algebraic geometry (Arcata, Calif., 1985), Contemp. Math. 67 (1987), 93–177.
R. Hain, *Biextensions and heights associated to curves of odd genus*. Duke Math. J. 61 (1990), 859–898.
R. Hain, *Normal functions and the geometry of moduli spaces of curves*. In: G. Farkas and I. Morrison (eds.), Handbook of Moduli, Volume I. Advanced Lectures in Mathematics, Volume XXIV, International Press, Boston, 2013.
T. Hayama, G. Pearlstein, *Asymptotics of degenerations of mixed Hodge structures*. Preprint, `arxiv:1403.1971`. To appear in Adv. Math.
D. Holmes, R. de Jong, *Asymptotics of the Néron height pairing*. Math. Res. Lett. 22 no. 5 (2015), 1337–1371.
K. Kato, C. Nakayama, S. Usui, *SL(2)-orbit theorem for degeneration of mixed Hodge structure*. J. Algebraic Geom. 17 (2008), no. 3, 401–479.
D. Lear, *Extensions of normal functions and asymptotics of the height pairing*. PhD thesis, University of Washington, 1990.
G. Pearlstein, C. Peters, *Differential geometry of the mixed Hodge metric*. Preprint, `arxiv:1407.4082`.
G. Pearlstein, *$\mathrm{SL}_2$-orbits and degenerations of mixed Hodge structure*. J. Diff. Geometry 74 (2006), 1–67.
G. Pearlstein, *Variations of mixed Hodge structure, Higgs fields, and quantum cohomology*. manuscripta math. 102 (2000), no. 3, 269–310.
M. Saito, *Modules de Hodge polarisables*. Publ. Res. Inst. Math. Sci. 24 (1988), 849–995.
M. Saito, *Mixed Hodge modules*. Publ. Res. Inst. Math. Sci. 26 (1990), 221–333.
C. Peters, J. Steenbrink, *Mixed Hodge structures*. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, 52. Springer-Verlag, Berlin, 2008.
R. T. Rockafellar, *Convex analysis*, Princeton Math. Series, vol. 28, Princeton Univ. Press, 1970.
J. Steenbrink, S. Zucker, *Variation of mixed Hodge structure. I*. Invent. Math. 80 (1985), no. 3, 489–542.
---
abstract: 'Every pseudo-Anosov mapping class $\varphi$ defines an associated veering triangulation ${\tau}_\varphi$ of a punctured mapping torus. We show that generically, ${\tau}_\varphi$ is not geometric. Here, the word “generic” can be taken either with respect to random walks in mapping class groups or with respect to counting geodesics in moduli space. Tools in the proof include Teichmüller theory, the Ending Lamination Theorem, study of the Thurston norm, and rigorous computation.'
address:
- |
Department of Mathematics\
Temple University\
1805 N. Broad St\
Philadelphia, PA 19122
- |
Department of Mathematics\
Rice University MS-136\
1600 Main St.\
Houston, TX 77251
author:
- David Futer
- 'Samuel J. Taylor'
- William Worden
bibliography:
- 'biblio.bib'
title: Random veering triangulations are not geometric
---
[^1] [^2]
[^1]: Futer was partially supported by NSF grants DMS–1408682 and DMS–1907708.
[^2]: Taylor was partially supported by NSF grants DMS–1400498 and DMS–1744551.
---
abstract: 'We connect the Aspinwall-Morrison calculation to Gromov-Witten theory.'
author:
- Artur Elezi
title: 'The Aspinwall-Morrison calculation and Gromov-Witten theory'
---
**Introduction**
================
[I. A bit of history.]{} (See [@[5]] for a good reference on the history of the problem.) One of the problems in the old and recent story of mirror symmetry has been the issue of multiple covers on a Calabi-Yau 3-fold $X$. In the pre-Gromov-Witten era, this problem can be explained in terms of topological field theories.
Let $X$ be a Calabi-Yau threefold and $H_1,H_2,H_3\in H^2(X)$. The corresponding 3-point correlator in the A-model of $X$ is a path integral that can be expressed as follows: $$\langle H_1,H_2,H_3 \rangle=\int_{X}H_1H_2H_3+\sum_{\beta\in H_2(X)}N_{\beta}(H_1,H_2,H_3)q^{\beta}.$$ We explain the notation. The parameter $q=(q_1,...,q_k)$ is a local coordinate on the Kähler moduli space of $X$. Let $(d_1,...,d_k)$ be the coordinates of $\beta$ with respect to an integral basis of the Mori cone of $X$. Then $q^{\beta}:=q_1^{d_1}\cdot \cdot \cdot q_k^{d_k}$.
The path integral is not a well defined notion, but beyond that, and probably more importantly, there is no rigorous definition of $N_{\beta}(H_1,H_2,H_3)$ in the framework of topological field theories. Let $Z_i$ for $i=1,2,3$ be a cycle whose fundamental class is Poincaré dual to $H_i$. Heuristically, the “invariant” $N_{\beta}(H_1,H_2,H_3)$ is described as the “number” of holomorphic maps in $$\{f:\PP^1\rightarrow X~|~f_*([\PP^1])=\beta,f(0)\in Z_1, f(1)\in Z_2, f(\infty)\in Z_3\}.$$ This is certainly not precise, for there may be infinitely many such maps. Let $C\subset X$ be a smooth rational curve. Fix an isomorphism $g:\PP^1\rightarrow C$. For any degree $k$ multiple cover $f:\PP^1\rightarrow \PP^1$ the composition $g\circ f:\PP^1\rightarrow C$ satisfies $(g\circ f)_*([\PP^1])=k[C]$. One would then naturally ask:
What is the contribution of the space of degree $k$ multiple covers of $C$ to the “invariant” $N_{k[C]}(H_1,H_2,H_3)$?
Since this question is about the numbers $N_{k[C]}(H_1,H_2,H_3)$, it is not a well posed one. It can be made precise in the framework of Gromov-Witten theory.
The answer was conjectured in [@[4]] by looking at the classical example of a Calabi-Yau. If $X$ is a quintic threefold then $H^2(X)$ is one dimensional. Let $H$ be its generator. The 3-point correlator of the quintic can be calculated explicitly: $$\langle H,H,H \rangle=5+\sum_{d=1}^{\infty}n_dd^3\frac{q^d}{1-q^d},$$ where $n_d$ is the virtual number of degree $d$ rational curves (instantons) in the quintic. The instanton number $n_d$ agrees with the number of degree $d$ rational curves in the quintic if every rational curve of degree $d$ is smooth, isolated and with normal bundle $N=\mathcal O(-1)\oplus \mathcal O(-1)$. This is not the case for there are $6$-nodal rational plane quintic curves on a generic quintic threefold (see [@[12]]), hence a rigorous definition of the instanton numbers $n_d$ did not exist.
The last equation can be transformed as follows: $$\langle H,H,H \rangle=5+\sum_{d=1}^{\infty}(\sum_{k|d}n_kk^3)q^d.$$ By comparing it to the equation (1) we can see that: $$N_d(H,H,H)=\sum_{k|d}n_kk^3.$$ It appears that each degree $k$ rational curve $C$ in the quintic 3-fold $X$ contributes $$\int_{C}H\cdot \int_{C}H\cdot \int_{C}H$$ to $N_d(H,H,H)$ for any $d$ such that $k|d$.
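The rearrangement above is the formal power series identity $\sum_k n_k k^3 q^k/(1-q^k)=\sum_d(\sum_{k|d}n_kk^3)q^d$, obtained from the geometric expansion $q^k/(1-q^k)=\sum_{m\geq 1}q^{km}$. A quick truncated-series check, using arbitrary made-up coefficients $n_k$ (purely illustrative; these are not actual instanton numbers):

```python
import random

# Verify, as formal power series truncated at order N, that
#   sum_k n_k k^3 q^k/(1-q^k) = sum_d (sum_{k|d} n_k k^3) q^d,
# using the expansion q^k/(1-q^k) = q^k + q^{2k} + q^{3k} + ...
N = 40
random.seed(1)
n = {k: random.randint(1, 100) for k in range(1, N + 1)}  # arbitrary test data

lhs = [0] * (N + 1)
for k in range(1, N + 1):
    for m in range(k, N + 1, k):        # multiples of k: the expansion of q^k/(1-q^k)
        lhs[m] += n[k] * k ** 3

rhs = [0] * (N + 1)
for d in range(1, N + 1):
    rhs[d] = sum(n[k] * k ** 3 for k in range(1, d + 1) if d % k == 0)

assert lhs == rhs
print("coefficient identity holds up to order", N)
```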
For a general Calabi-Yau $X$, the (pre Gromov-Witten) multiple cover formula can be formulated as follows:
Let $C\subset X$ be a smooth, rational curve such that $N_{C/X}=\mathcal O_C(-1)\oplus \mathcal O_C(-1)$. The contribution of degree $k$ multiple covers of $C$ in $N_{k[C]}(H_1,H_2,H_3)$ is: $$\int_{C}H_1\cdot \int_{C}H_2\cdot \int_{C}H_3.$$
It was in this form that the multiple cover formula was proven by Aspinwall and Morrison in [@[1]] and by Voisin in [@[10]].
It follows from the above equation that: $$\begin{aligned}
& & N_{\beta}(H_1,H_2,H_3)=\sum_{\beta=d\gamma}n_{\gamma}\int_{\gamma}H_1 \int_{\gamma}H_2 \int_{\gamma}H_3 \nonumber \\ & & =(\sum_{\beta=d\gamma}n_{\gamma}d^{-3})\int_{\beta}H_1 \int_{\beta}H_2 \int_{\beta}H_3\end{aligned}$$ where $n_{\gamma}$ is the virtual number (instantons) of rational curves of type $\gamma$ in $X$.
A rigorous definition of $N_{\beta}$ and $n_{\beta}$ requires a new conceptual framework which is now known as Gromov-Witten theory. Let $X$ be a smooth, projective manifold and $\beta\in H_2(X)$. There is a moduli stack $\overline M_{0,n}(X,\beta)$ which parametrizes pointed, stable maps of degree $\beta$. Universal properties of these moduli stacks imply the existence of several natural maps: $$\begin{aligned}
& & e=(e_1,e_2,...,e_n):\overline M_{0,n}(X,\beta)\rightarrow X^n,~ \pi_n:\overline M_{0,n}(X,\beta)\rightarrow \overline M_{0,n-1}(X,\beta) \nonumber \\ & & \pi:\overline M_{0,n}(X,\beta)\rightarrow \overline M_{0,0}(X,\beta),~\hat{\pi}:\overline M_{0,n}(X,\beta)\rightarrow \overline M_{0,n}.\end{aligned}$$ The map $e$ evaluates the pointed, stable map at the marked points, $\pi_n$ forgets the last marked point and collapses the unstable components of the source curve, $\pi$ forgets the marked points and $\hat{\pi}$ forgets the map and stabilizes the pointed source curve. The expected dimension of $\overline M_{0,n}(X,\beta)$ is $\text{dim}~X+\int_{\beta}(-K_X)+n-3$. The dimension of the moduli stack of stable maps may be greater than the expected dimension. In this case, a Chow class of the expected dimension has been constructed. It plays the role of the fundamental class, hence it is called the virtual fundamental class and denoted by $[\overline M_{0,n}(X,\beta)]^{\text{vir}}$ (see [@[11]], [@[13]]).
Let $X$ be a Calabi-Yau threefold and $H_1,H_2,H_3\in H^2(X)$. In the Gromov-Witten setting: $$N_{\beta}(H_1,H_2,H_3):=\int_{[\overline M_{0,3}(X,\beta)]^{\text{vir}}}e_1^*(H_1)e_2^*(H_2)e_3^*(H_3).$$ The expected dimension of $\overline M_{0,0}(X,\beta)$ is zero. Let: $$N_{\beta}:=\text{deg}([\overline M_{0,0}(X,\beta)]^{\text{vir}})$$ By the divisor axiom: $$N_{\beta}(H_1,H_2,H_3)=N_{\beta}\int_{\beta}H_1\int_{\beta}H_2\int_{\beta}H_3.$$
Let $C\subset X$ be a smooth rational curve with $N_{C/X}=\mathcal O_C(-1)\oplus \mathcal O_C(-1)$. The moduli space $\overline M_{0,0}(X,d[C])$ contains a component of positive dimension, namely $\overline M_{0,0}(C,d)$. The dimension of this component is $2d-2$. Consider the following diagram: $$\begin{CD}
\overline M_{0,1}(C,d)@>e_1>>C \\
@VV \pi V \\
\overline M_{0,0}(C,d)
\end{CD}$$
The sheaf: $$V_d:=R^1\pi_*(\mathcal O_C(-1)\oplus \mathcal O_C(-1))$$ is locally free of rank $2d-2$. Let $\EE_d$ be its top Chern class. An assertion of Kontsevich in [@[6]], which was proven by Behrend in [@[2]], states that the part of $[\overline M_{0,0}(X,\beta)]^{\text{vir}}$ supported in $\overline M_{0,0}(C,d)$ is Poincaré dual to $\EE_d$. The multiple cover formula in this context says that: $$\int_{\overline M_{0,0}(C,d)}\EE_d=\displaystyle d^{-3}$$ i.e. the curve $C$ contributes $d^{-3}$ to $N_{d[C]}$.
The multiple cover formula in this form was proven by Kontsevich [@[6]], Lian-Liu-Yau [@[7]], Manin [@[8]] and Pandharipande [@[9]].
By the divisor axiom, the multiple cover formula in this context follows from: $$\int_{\overline M_{0,3}(C,d)}e_1^*(h)e_2^*(h)e_3^*(h)\pi^*(\EE_d)=1$$ The instanton numbers $n_{\gamma}$ are defined inductively by: $$N_{\beta}=\sum_{\beta=k\gamma}n_{\gamma}k^{-3}$$ The point of this introduction is that the Aspinwall-Morrison calculation deals with concepts and questions that were not well defined at the time. Hence their calculation, although useful and convincing, is incomplete. The purpose of this paper is to relate the two calculations, hence justifying the Aspinwall-Morrison calculation and closing this historic chapter in the subject.
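The inductive definition of the instanton numbers can be made concrete: in the relation $N_{\beta}=\sum_{\beta=k\gamma}n_{\gamma}k^{-3}$ the $k=1$ term is $n_{\beta}$ itself, so one solves for $n_{\beta}$ recursively in terms of lower classes. A sketch in the one-parameter case, with made-up rational input data (the values of $N_d$ and $n_d$ here are hypothetical test data, not Gromov-Witten invariants of any actual threefold):

```python
from fractions import Fraction

# One-parameter case of the inductive definition:
#   N_d = sum_{k | d} n_{d/k} k^{-3},
# solved for n_d: the k = 1 term isolates n_d, the rest is already known.
def instanton_numbers(N):
    n = {}
    for d in sorted(N):
        tail = sum(n[d // k] * Fraction(1, k ** 3)
                   for k in range(2, d + 1) if d % k == 0)
        n[d] = N[d] - tail
    return n

# round-trip check: build N from chosen n, then recover n
n_true = {1: Fraction(5), 2: Fraction(7), 3: Fraction(-2),
          4: Fraction(11), 6: Fraction(3)}
divisors = {d: [k for k in range(1, d + 1) if d % k == 0] for d in n_true}
N = {d: sum(n_true[d // k] * Fraction(1, k ** 3) for k in divisors[d])
     for d in divisors}
recovered = instanton_numbers(N)
assert recovered == n_true
print("instanton inversion round-trips")
```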
We show in passing the connection between the two formulations of the multiple cover formula for the quintic threefold: $$N_d(H,H,H)=d^3N_d=d^3\sum_{k|d}n_k(\frac{k}{d})^3=\sum_{k|d}n_kk^3$$
[**II. A review of the Aspinwall-Morrison calculation.**]{} Consider a Calabi-Yau threefold $X$ and a rational curve $C\subset X$ such that $N_{C/X}=\mathcal O_C(-1)\oplus \mathcal O_C(-1)$. Let: $$N_d(C):=\{f:\PP^1\rightarrow X ~|~ f(\PP^1)=C, \text{deg}f=d\}$$ be the space of parameterized maps from $\PP^1$ to $X$. Since $C$ is isolated, $N_d(C)$ is a component of the space of all maps from $\PP^1$ to $X$.
At a moduli point $[f]$, the tangent space and the obstruction space are given respectively by $H^0(f^*(T_X))$ and $H^1(f^*(T_X))$, i.e. locally $N_d(C)$ is given by dim $H^1(f^*(T_X))$ equations in the tangent space. The virtual dimension is: $$\text{dim}~H^0(f^*(T_X))-\text{dim}~H^1(f^*(T_X))=3.$$
The space $N_d(C)$ compactifies to $\overline N_d(C)=\PP^{2d+1}$. Let $\overline \Gamma$ be the compactification of the universal graph $\Gamma\subset N_d(C)\times \PP^1\times C$ and $H$ the hyperplane class in $\overline N_d(C)$.
The dimension of $H^1(f^*(T_X))$ is $2d-2$ for any $f$. These vector spaces fit together to form a bundle $\mathcal U_d$ over $N_d(C)$. Let $p_i$ be the $i$-th projection on $\overline N_d(C)\times \PP^1\times C$. The bundle $\mathcal U_d$ extends to: $$U_d:=R^1{p_1}_*(p_3^*(T_X|C)|_{\overline \Gamma})$$ over $\overline N_d(C)$. A calculation in [@[1]] yields $U_d=\mathcal O(-1)^{\oplus (2d-2)}$. Based primarily on considerations from topological field theories, Aspinwall and Morrison asserted that the cycle corresponding to the degree $d$ multiple covers of $C$ is Poincaré dual to $c_{\text{top}}(U_d)=H^{2d-2}$. We will see that this is consistent with the notion of the virtual fundamental class.
Let $H_i\in H^2(X)$ for $i=1,2,3$ and $Z_i$ their Poincaré duals. The space: $$\{f\in N_d(C)~|~f(0)=0\}$$ gives rise to a linear subspace of $\overline N_d(C)$. Therefore: $$\begin{aligned}
& & \#\{f\in N_d(C)~|~f(0)=0,f(1)=1,f(\infty)=\infty\}\nonumber \\ & & =\int_{\overline N_d(C)}H\cdot H\cdot H\cdot c_{\text{top}}U_d=1.\end{aligned}$$
It follows that the contribution of $N_d(C)$ to: $$\#\{f:\PP^1\rightarrow X~|~f_*[\PP^1]=d[C],f(0)\in Z_1, f(1)\in Z_2, f(\infty)\in Z_3\}$$ is $$\int_{C}H_1\cdot \int_{C}H_2\cdot \int_{C}H_3.$$
We emphasize that the multiple cover formula in this approach follows from: $$\int_{\overline N_d(C)}H\cdot H\cdot H\cdot c_{\text{top}}U_d=\int_{\overline N_d(C)}H^{2d+1}=1.$$
[**III. The connection to the Gromov-Witten theory**]{}. The main result in this paper is the following:
There exists a birational morphism: $$\alpha:\overline M_{0,3}(C,d)\rightarrow {\overline N}_d(C)$$ such that:
1. $\alpha_*(e_i^*(h))=H$ for $i=1,2,3.$
2. $\alpha_*(e_1^*(h)e_2^*(h)e_3^*(h))=H^3$
3. $\alpha_*(e_1^*(h)e_2^*(h)e_3^*(h)\pi^*(\EE_d))=H^{2d+1}$.
This proposition implies that the equations $(15)$ and $(25)$ are equivalent, hence connecting the Aspinwall-Morrison calculation to the Gromov-Witten theory.
[**Acknowledgements**]{}. The problem was suggested to the author by Sheldon Katz (see also the note in [@[3]]) who was very helpful through this work. We would also like to thank Jun Li for fruitful discussions on the subject.
**Relation of the Aspinwall-Morrison formula with Gromov-Witten invariants**
============================================================================
The space of nonparameterized degree $d$ maps $f:\PP^1\rightarrow \PP^n$ has two particular compactifications that have been employed successfully, especially in proving mirror theorems for projective spaces: the nonlinear sigma model (or the graph space): $$M^n_d:=\overline M_{0,0}(\PP^n\times \PP^1,(d,1))$$ and the linear sigma model: $$N^n_d:=\PP(H^0(\mathcal O_{\PP^1}(d))^{\oplus (n+1)}).$$ Elements of $N^n_d$ are $(n+1)$-tuples $[P_0,...,P_n]$ of degree $d$ polynomials in two variables $w_0,w_1$. The linear sigma model $N^n_d$ is a projective space via the identification $[P_0,...]=[\sum_{i}a_iw_0^iw_1^{d-i},...]=[a_0,...,a_d,...]$. Note that $N^1_d=\overline N_d(C)$ for $C\simeq \PP^1$. Let $H$ be the hyperplane class in $N^n_d$.
There exists a birational morphism $\phi:M^n_d\rightarrow N^n_d$. We describe this morphism set-theoretically. Let $(C',f)\in M^n_d$. There is a unique component $C_0$ of $C'$ that is mapped with degree $1$ to $\PP^1$. Let $C_1,...,C_r$ be the irreducible components of the rest of the curve and $q_i=(c_i,d_i)$ the nodes of $C'$ on $C_0$. Let $d_i$ be the degree of the map $p_2\circ f:C'\rightarrow \PP^n$ on $C_i$ for $i=0,1,...,r$. Let $R(w_0,w_1)=\prod_{i=1}^{r}(c_iw_1-d_iw_0)^{d_i}$. If the restriction of the map $p_2\circ f$ to $C_0$ is given by $[Q_0,...,Q_n]$ then: $$\phi(C',f):=[RQ_0,...,RQ_n].$$ A proof of the fact that $\phi$ is a morphism is given by J. Li in [@[7]].
The first step in connecting the Aspinwall-Morrison calculation to Gromov-Witten invariants is showing that $M^n_d$ and $N^n_d$ are birational models for $\overline M_{0,3}(\PP^n,d)$.
There exists a birational map $\psi:\overline M_{0,3}(\PP^n,d)\rightarrow M^n_d$.
[**Proof**]{}. Consider the following diagram: $$\begin{CD}
\overline M_{0,4}(\PP^n,d)@>(\hat{\pi},e_4)>>\overline M_{0,4}\times \PP^n \\
@VV \pi_4 V \\
\overline M_{0,3}(\PP^n,d).
\end{CD}$$ Since $\overline M_{0,4}\simeq \PP^1$ and $e_4$ is stable in the fibers of $\pi_4$, the above diagram exhibits a stable family of maps of degree $(1,d)$ parametrized by $\overline M_{0,3}(\PP^n,d)$. Universal properties of $M^n_d$ yield a morphism: $$\psi:\overline M_{0,3}(\PP^n,d)\rightarrow M^n_d.$$ The map $\psi$ is an isomorphism on the smooth locus, hence it is a birational map.$\dagger$
Let $\pi_4:\overline M_{0,4}\rightarrow \overline M_{0,3}=\{pt\}$ be the map that forgets the last marked point and $\sigma_i$ be the section of the i-th marked point for $i=1,2,3$. Choose coordinates on $\overline M_{0,4}\simeq \PP^1$ such that the images of these three sections are respectively $0=[1,0],\infty=[0,1],1=[1,1]$. Let $$\alpha:=\phi\circ \psi:\overline M_{0,3}(\PP^n,d)\rightarrow N^n_d.$$
Let $h$ be the hyperplane class of $\PP^n$.
1. $\alpha_*(e_i^*(h))=H$ for $i=1,2,3$.
2. $\alpha_*(e_1^*(h)e_2^*(h)e_3^*(h))=H^3$
[**Proof**]{}. Let $$\nu_1:N^n_d\dashrightarrow \PP^n$$ be a rational map defined by $$\nu_1([P_0,P_1,...,P_n])=[P_0(1,0),P_1(1,0),...,P_n(1,0)].$$ This map is defined in the complement $U$ of a codimension $n+1$ linear subspace $P(W_1)$ of $N^n_d$. Clearly $\nu_1^*(h)=H$ on $U$. The preimage $D_{1,23}$ of $P(W_1)$ in $\overline M_{0,3}(\PP^n,d)$ is a sum of $d$ boundary divisors $D(\{x_1\},\{x_2,x_3\},d_1,d_2)$ with $d_1>0$ and $d_1+d_2=d$. The evaluation map $e_1$ over $U$ factors through the rational map $\nu_1$. It follows that $$e_1^*(h)=\alpha^*(H)+D_1,$$ where $D_1$[^1] is a divisor supported in $D_{1,23}$. Using the evaluations at $1$ and $\infty$ on $N^n_d$, we obtain: $$e_2^*(h)=\alpha^*(H)+D_2$$ and $$e_3^*(h)=\alpha^*(H)+D_3,$$ where $D_2$ is a divisor supported in $D_{2,13}$ and $D_3$ is supported in $D_{3,12}$.
The $\psi$-image of $D(\{x_1\},\{x_2,x_3\},d_1,d_2)$ does not detect the movement of the marking $x_1$ along its incident component, hence it is a codimension $2$ cycle in $M^n_d$. It follows that $\psi_*(D_1)=0$. Similarly $\psi_*(D_2)=0$ and $\psi_*(D_3)=0$. Both $\psi$ and $\phi$ are birational hence by the projection formula: $$\alpha_*(e_i^*(h))=H$$ for $i=1,2,3$.
Let $D'\in D_{1,23},D''\in D_{2,13},D'''\in D_{3,12}$ be irreducible boundary divisors. The intersection of any two of them either is $0$ or its image is a codimension $4$ cycle in $M^n_d$. It follows that: $$\psi_*(D'D'')=\psi_*(D'D''')=\psi_*(D''D''')=0.$$ Notice also that: $$D'D''D'''=0.$$ The projection formula yields: $$\psi_*(e_1^*(h)e_2^*(h)e_3^*(h))=\psi_*(\prod_{i}(\psi^*(\phi^*(H))+D_i))=\prod_{i}(\phi^*(H))=\phi^*(H^3).$$ The lemma follows from the fact that $\phi$ is a birational map.$\dagger$
Return now to the case $n=1$ of our interest.
Let $\rho:M^1_d\rightarrow \overline M_{0,0}(C,d)$ be the natural morphism. The composition: $$\rho\circ \psi:\overline M_{0,3}(C,d)\rightarrow \overline M_{0,0}(C,d)$$ is the map $\pi$ that forgets the $3$ marked points and stabilizes the source curve. Recall Kontsevich’s obstruction bundle $V_d$ on $\overline M_{0,0}(C,d)$. Its fiber is $H^1(C', f^*(\mathcal O(-1)\oplus \mathcal O(-1)))$. Its top Chern class is $\EE_d$. We are now ready to exhibit the connection between the Aspinwall-Morrison calculation and Gromov-Witten invariants.
$\alpha_*\left(e_1^*(h)e_2^*(h)e_3^*(h)\pi^*(\EE_d)\right)=H^{2d+1}.$
[**Proof**]{}. Let $E_d$ be the top Chern class of the bundle $\rho^*(V_d)$ on $M^1_d$. Recall from part II of the introduction that $H^{2d-2}$ is the top Chern class of the Aspinwall-Morrison obstruction bundle $U_d$ on $N^1_d$. It is shown in [@[7]] that $\phi_*(E_d)=H^{2d-2}$. On the other hand $\psi^*(E_d)=\pi^*(\EE_d)$. But $\psi$ is birational, hence by the projection formula $\psi_*(\pi^*(\EE_d))=E_d$.
We compute: $$\begin{aligned}
& & \alpha_*(\prod_{i}e_i^*(h)\EE_d)=\alpha_*(\prod_{i}e_i^*(h)\psi^*(E_d))=\phi_*(\psi_*(\prod_{i}e_i^*(h))E_d) \nonumber \\ & & =\phi_*(\phi^*(H^3)E_d)=H^3\phi_*(E_d)=H^3H^{2d-2}=H^{2d+1}.\end{aligned}$$ The proposition is proven.$\dagger$
The last proposition yields: $$\int_{\overline M_{0,3}(C,d)}\prod_{i=1}^{3}e_i^*(h)\EE_d=\int_{\overline N_d(C)}\alpha_*(\prod_{i=1}^{3}e_i^*(h)\psi^*(E_d))=\int_{\overline N_d(C)}H^{2d+1}=1,$$
i.e. the Aspinwall-Morrison calculation is a pushforward of Kontsevich’s calculation from $\overline M_{0,3}(C,d)$ to the projective space $\overline N_d(C)$.
[99]{}

P. S. Aspinwall and D. Morrison, [*Topological field theory and rational curves*]{}, Comm. Math. Phys. 151 (1993), no. 2, 245–262.

K. Behrend, [*Gromov-Witten invariants in algebraic geometry*]{}, Invent. Math. [**127**]{} (1997), 601–617.

K. Behrend and B. Fantechi, [*The intrinsic normal cone*]{}, Invent. Math. 128 (1997), 45–88.

J. Bryan, S. Katz, N. C. Leung, [*Multiple covers and the integrality conjecture for rational curves in Calabi-Yau threefolds*]{}, preprint 1999, math.AG/9911056.

P. Candelas, X. de la Ossa, P. Green and L. Parkes, [*A pair of Calabi-Yau manifolds as an exactly soluble superconformal theory*]{}, Nucl. Phys. [**B359**]{} (1991), 21–74.

D. Cox and S. Katz, [*Mirror Symmetry and Algebraic Geometry*]{}, AMS Surveys and Monographs in Mathematics, AMS, Providence, RI, 1999.

M. Kontsevich, [*Enumeration of rational curves via torus actions*]{}, in [*The moduli space of curves*]{} (R. Dijkgraaf, C. Faber, and G. van der Geer, eds.), Birkhäuser, 1995, 335–368.

J. Li and G. Tian, [*Virtual moduli cycles and Gromov-Witten invariants of algebraic varieties*]{}, J. Amer. Math. Soc. 11 (1998), 119–174.

B. Lian, K. Liu, S.-T. Yau, [*Mirror Principle I*]{}, Asian J. Math. 1 (1997), no. 4, 729–763.

Yu. I. Manin, [*Generating functions in algebraic geometry and sums over trees*]{}, in [*The moduli space of curves*]{} (R. Dijkgraaf, C. Faber, and G. van der Geer, eds.), Birkhäuser, 1995, 401–417.

R. Pandharipande, [*Hodge integrals and degenerate contributions*]{}, preprint 1998, math.AG/9811140.

I. Vainsencher, [*Enumeration of n-fold tangent hyperplanes to a surface*]{}, J. Algebraic Geom. 4 (1995), 503–526.

C. Voisin, [*A mathematical proof of a formula of Aspinwall and Morrison*]{}, Compositio Math. 104 (1996), no. 2, 135–151.
E-mail: [email protected]
Address: Department of Mathematics, Stanford University, Stanford CA, 94305.
[^1]: It can be shown that $D_1=-\sum_{d_1}d_1D(\{x_1\},\{x_2,x_3\},d_1,d-d_1)$ but this is not important in this paper.
---
abstract: |
In [@CHMY04], we studied $p$-mean curvature and the associated $p$-minimal surfaces in the Heisenberg group from the viewpoint of PDE and differential geometry. In this paper, we look into the problem through the variational formulation. We study a generalized $p$-area and associated ($p$-) minimizers in general dimensions.
We prove the existence and investigate the uniqueness of minimizers. Since this is reduced to solving a degenerate elliptic equation, we need to consider the effect of the singular set and this requires a careful study. We define the notion of weak solution and prove that in a certain Sobolev space, a weak solution is a minimizer and vice versa. We also give many interesting examples in dimension 2. An intriguing point is that, in dimension 2, a $C^2$-smooth solution from the PDE viewpoint may not be a minimizer. However, this statement is true for higher dimensions due to the relative smallness of the size of the singular set.
address:
- ' Institute of Mathematics, Academia Sinica, Taipei, Taiwan, R.O.C.'
- ' Department of Mathematics, Princeton University, Princeton, NJ 08544, U.S.A.'
author:
- 'Jih-Hsin Cheng'
- 'Jenn-Fang Hwang'
- Paul Yang
title: 'Existence and uniqueness for p-area minimizers in the Heisenberg group'
---
[^1]
[^2]
[^3]
Introduction and statement of the results
=========================================
The $p$-minimal (or X-minimal, H-minimal in the terminology of some authors, e.g., [@FSS01], [@GN96], [@Pau01]) surfaces have been studied extensively in the framework of geometric measure theory. Starting from the work [@CHMY04], we studied the subject from the viewpoint of partial differential equations and that of differential geometry (we use the term $p$-minimal since this is the notion of minimal surfaces in pseudohermitian geometry; “$p$” stands for “pseudohermitian”).
Let $\Omega $ be a bounded domain in $R^{2n}.$ Let $\vec{X}$ $=$ $(x_{1},$ $x_{1^{\prime }},$ $x_{2},$ $x_{2^{\prime }},$ $\ldots,$ $x_{n},$ $x_{n^{\prime }})$ $\in \Omega .$ For a graph $(\vec{X},$ $u(\vec{X}))$ in the Heisenberg group of dimension $2n+1$ with prescribed $p$-mean curvature $H$ $=$ $H(\vec{X}),$ the equation for $u$ $:$ $\Omega \subset R^{2n}$ $\rightarrow $ $R$ reads
$$div\frac{\nabla u-\vec{X}^{\ast }}{|\nabla u-\vec{X}^{\ast }|}=H
\label{eqn1.1}$$
where $\vec{X}^{\ast }$ $=$ $(x_{1^{\prime }},$ $-x_{1},$ $%
x_{2^{\prime }},$ $-x_{2},...,$ $x_{n^{\prime }},$ $-x_{n})$ (see (\[eqn2.10\]) in Section 2 for a geometric interpretation)$.$ In general, for a vector field $\vec{G}$ $=$ $(g_{1},g_{2},...,g_{2n})$ on $%
\Omega \subset R^{2n},$ we define $\vec{G}^{\ast }$ $\equiv $ $(g_{2},$ $%
-g_{1},$ $g_{4},$ $-g_{3},$ $...,$ $g_{2n},$ $-g_{2n-1}).$ The equation (\[eqn1.1\]) is the Euler-Lagrange equation (away from the singular set [$\nabla u-\vec{X}^{\ast }=0$]{}) of the following energy functional (called the $p$-area of the graph defined by $u$ if $H$ $=$ $0,$ see Section 2):
$$\mathcal{X}(u)=\int_{\Omega }\{|\nabla u-\vec{X}^{\ast }|+Hu\}dx_{1}\wedge
dx_{1^{\prime }}\wedge ...\wedge dx_{n}\wedge dx_{n^{\prime }}.
\label{eqn1.2}$$
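The $\ast $ operation is a pointwise rotation of each coordinate pair $(g_{2k-1},g_{2k})$ by a right angle, so $(\vec{G}^{\ast })^{\ast }=-\vec{G}$ and $\vec{G}\cdot \vec{G}^{\ast }=0$; in particular, for $\vec{F}$ $=$ $-\vec{X}^{\ast }$ one has $\vec{F}^{\ast }=\vec{X}$ and hence $div\vec{F}^{\ast }=2n>0$, the condition appearing in Theorem B below. A short numerical sanity check of these identities (this sketch is not part of the paper's argument; the sample point is arbitrary):

```python
def star(G):
    # the * operation of the text: (g1, g2, g3, g4, ...) -> (g2, -g1, g4, -g3, ...)
    out = []
    for k in range(0, len(G), 2):
        out += [G[k + 1], -G[k]]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

X = [0.3, -1.2, 2.0, 0.7]    # a sample point of R^4 (n = 2), chosen arbitrarily
F = [-g for g in star(X)]    # the field F = -X* evaluated at this point
# star(star(G)) = -G, G is orthogonal to star(G), and star(F) = X,
# so div F* = div X = 2n > 0 (the hypothesis of Theorem B below)
```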
Since we consider the variation over the whole domain, the singular set will cause the main difficulty in the study. In order to explain this, we generalize $\mathcal{X}(\cdot )$ by considering an arbitrary vector field $\vec{F}$ $=$ $\vec{F}(\vec{X})$ instead of $-\vec{X}^{\ast }$ in the following form:
$$\mathcal{F}_{q}(u)\equiv \int_{\Omega }\{|\nabla u+\vec{F}%
|^{q}+qHu\}dx_{1}\wedge dx_{2}\wedge ...\wedge dx_{m} \label{eqn1.3}$$
for $1\leq q<\infty$, where $\Omega$ $\subset$ $R^{m}$. Let $S(u)$ denote the singular set of $u$, consisting of the points where $\nabla u+\vec{F}$ $=$ $0.$ Let $%
u_{\varepsilon }$ $=$ $u+\varepsilon \varphi .$ It is easy to compute (see Section 3 for the case $q$ $=$ $1$) the first variation of $\mathcal{F}_{q}:$ (omitting the Euclidean volume element)
$$\begin{aligned}
&&\frac{d\mathcal{F}_{q}(u_{\varepsilon })}{d\varepsilon }|_{\varepsilon
=0\pm } \label{eqn1.3''}\\
&=&c_{q}\int_{S(u)}|\nabla \varphi |^{q}+\int_{\Omega \backslash
S(u)}q|\nabla u+\vec{F}|^{q-2}(\nabla u+\vec{F})\cdot \nabla \varphi
+\int_{\Omega }qH\varphi \notag\end{aligned}$$
where $c_{q}$ $=$ $\pm 1$ for $q$ $=$ $1$ and $c_{q}$ $=$ $0$ for $%
1<q<\infty .$
For $q$ $=$ $1$, can we ignore the term $\pm \int_{S(u)}|\nabla \varphi |$? A recent paper of Balogh answered this question completely. In [@Bal03] Balogh studied the size of the singular set $S(u)$ (called the characteristic set in [@Bal03]). He showed (Theorem 3.1(2) in [@Bal03]) that for $\vec{F}$ $=$ $-\vec{X}^{\ast }$ in $R^{2n}$, $S(u)$ has locally finite $n$-dimensional Hausdorff measure if $u$ $\in $ $C^{2}$. We obtained the same result as Lemma 5.4 in [@CHMY04] by a different argument (we used only elementary linear algebra and the implicit function theorem in the proof; also we were not aware of [@Bal03] at the time [@CHMY04] was written). In this paper, we generalize this result to the situation of general $\vec{F}$ (see Theorem D below and its proof in Section 6). For $u$ $\in $ $C^{1,1}$ and $\vec{F}$ $=$ $-\vec{X}^{\ast }$ in $R^{2n}$, Balogh showed (Theorem 3.1(1) in [@Bal03]) that $dim_{E}S(u)$ $<$ $2n-\delta$ where $dim_{E}$ denotes the Hausdorff dimension with respect to the Euclidean metric and $\delta$ depends on the Lipschitz constant of $\nabla u$. He also proved the existence of $u$ $\in$ $\cap_{0<\alpha <1}C^{1,\alpha }$ such that $S(u)$ has positive Lebesgue measure for any $\vec{F}$ $\in $ $C^{1}(\Omega )$ where $\Omega$ $\subset$ $R^{m}$ is a given bounded domain (Theorem 4.1(2) in [@Bal03]). In this paper, we consider functions $u$ of class $W^{1,1}$, so that the size of $S(u)$ may be large according to Balogh. Therefore, for the case of $q$ $=$ $1$ in (\[eqn1.3”\]), we cannot neglect the contribution of the singular set when defining the weak solutions (see Definition 3.2) to the Euler-Lagrange equation of $\mathcal{F}_{q}$:
$$div\frac{\nabla u+\vec{F}}{|\nabla u+\vec{F}|^{2-q}}=H. \label{eqn1.4}$$
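To illustrate how the singular set can sit inside the domain, take the hypothetical test function $u(x_{1},x_{1^{\prime }})$ $=$ $x_{1}x_{1^{\prime }}$ on $R^{2}$ with $\vec{F}$ $=$ $-\vec{X}^{\ast }$: then $\nabla u-\vec{X}^{\ast }$ $=$ $(0,2x_{1})$, so $S(u)$ is the line $\{x_{1}=0\}$, which indeed has locally finite $1$-dimensional ($n=1$) Hausdorff measure, consistent with the result of Balogh quoted above. A quick numerical check of this computation (not from the paper):

```python
def singular_vector(x1, x1p):
    # grad u - X* for the hypothetical test function u(x1, x1') = x1*x1'
    # and X* = (x1', -x1) in R^2 (n = 1)
    grad_u = (x1p, x1)
    x_star = (x1p, -x1)
    return (grad_u[0] - x_star[0], grad_u[1] - x_star[1])

# grad u - X* = (0, 2*x1), so S(u) is exactly the line {x1 = 0}:
on_line = [singular_vector(0.0, t) for t in (-2.0, 0.5, 3.0)]
off_line = [singular_vector(s, t) for s, t in ((1.0, 2.0), (-0.5, 0.0))]
```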
Equation (\[eqn1.4\]) has been studied in various situations. For $%
\vec{F}$ $=$ $0,$ $H$ $=$ $0,$ (\[eqn1.4\]) is known to be the $q$-harmonic equation for $1<q<\infty $, while it is the equation associated to the least gradient problem for $q$ $=$ $1$ (see, for instance, [@SWZ92], [@Juu04], [@JL04], etc.)$.$ Geometrically there is a dichotomy for the 1-form $\Theta $ $\equiv $ $dz+F_{I}dx_{I}$ associated to the vector field $\nabla u+\vec{F},$ where $\vec{F}$ $=$ $(F_{I}).$ Namely, the hyperplane distribution defined by the kernel of $\Theta $ might be either integrable or (completely) nonintegrable ($\Theta $ is called a contact form in the latter case). When $\vec{F}$ $=$ $0,$ this is the integrable case. For the nonintegrable case (e.g. $\vec{F}$ $=$ $-\vec{X}^{\ast }),$ the quantity on the left-hand side of (\[eqn1.4\]) with $q$ $=$ $1$ can be realized as the $p$-mean curvature of the graph defined by $u$ in pseudohermitian geometry (see (\[eqn2.12\])).
We study equation (\[eqn1.4\]) with $q$ $=$ $1$:
$$div\frac{\nabla u+\vec{F}}{|\nabla u+\vec{F}|}=H \label{eqn1.4'}$$
**Definition 1.1.** Let $\Omega$ be a domain in $R^{m}$, $m\geq 1$. We say $u\in C^{2}(\Omega )$ is a $C^{2}$ smooth solution to (\[eqn1.4’\]) if and only if (\[eqn1.4’\]) holds in $\Omega \backslash
S(u).$
In [@CHMY04] and [@CH04], we considered $C^{2}$-smooth solutions $u$ to (\[eqn1.1\]) (i.e., (\[eqn1.4’\]) with $\vec{F}$ $=$ $-\vec{X}^{\ast }$) with $H$ $=$ $0$ in dimension $2$ and, among other things, we proved a Bernstein-type theorem. Later in [@GP05] the authors obtained a similar Bernstein-type theorem through a different approach. The description of the singular set for a $C^{2}$-smooth solution to (\[eqn1.1\]) occupies a central position in [@CHMY04]. As a geometric application, we can show the nonexistence of $C^{2}$-smooth, closed surfaces of genus $\geq 2$ with bounded $p$-mean curvature in any pseudohermitian 3-manifold. In [@CHMY04] we also proved a uniqueness theorem for $C^{2}$-smooth solutions for the Dirichlet problem of (\[eqn1.4’\]) in $R^{2n}$. Recently Ritoré and Rosales proved a rigidity result for $C^{2}$-smooth surfaces of nonzero constant $p$-mean curvature and an Alexandrov-type theorem in the 3-dimensional Heisenberg group (see Theorem 6.1 and Theorem 6.10 in [@RR05], respectively).
In this paper we consider $W^{1,1}$ minimizers for
$$\mathcal{F}(u)\equiv \int_{\Omega }\{|\nabla u+\vec{F}|+Hu\}dx_{1}\wedge
dx_{2}\wedge ...\wedge dx_{m} \label{eqn1.3'}$$
((\[eqn1.3\]) with $q$ $=$ $1).$ In Section 3 we define weak solutions and show that, in the space $W^{1,1}$, a minimizer for (\[eqn1.3’\]) is a weak solution to the equation (\[eqn1.4’\]) and vice versa (see Theorem 3.3). In order to overcome the difficulty caused by singular sets which are not negligible, we introduce the notion of a “regular value”. Suppose $u$ $\in
$ $W^{1,1},$ $\varphi $ $\in $ $W_{0}^{1,1}.$ Define $u_{\varepsilon }$ $%
\equiv $ $u$ $+$ $\varepsilon \varphi $ for $\varepsilon $ $\in $ $R.$ We prove that there are at most countably many $\varepsilon $’s for which
$$\int_{S(u_{\varepsilon })}\mid \nabla \varphi \mid \neq 0$$
(cf. (\[eqn1.3”\]) for $q$ $=$ $1).$ We call such an $\varepsilon $ singular, otherwise regular. That is, the above integral vanishes for almost all (regular) $\varepsilon $ (see Lemma 3.1)$.$ So we do not need to worry about the size of the singular set for regular $\varepsilon $’s. The idea of considering regular values plays a central role both in the proof of the equivalence between minimizers and weak solutions and in the proof of the uniqueness theorems in Section 5.
In Section 4 we prove the existence of a Lipschitz continuous minimizer for $%
\mathcal{F(\cdot )}$ with a given boundary value in the case of $H$ $=$ $0$ under the following condition on $\vec{F}$:
$$\partial _{K}F_{I}=\partial _{I}f_{K},\ \ I,K=1,...,m \label{eqn1.5}$$
for $C^{1}$-smooth functions $f_{K}$’s (cf. (\[eqn4.11\])). We require $\Omega $ to be a p-convex domain (see Definition 4.1).
**Theorem A.** *Let* $\Omega $ *be a p-convex bounded domain in* $R^{m},m\geq 2$, *with* $\partial \Omega \in C^{2,\alpha }$* *$(0<\alpha <1)$*.* *Let* $\varphi \in C^{2,\alpha }(\bar{%
\Omega}).$* Suppose* $\vec{F}$* *$\in $* *$%
C^{1,\alpha }(\bar{\Omega})$* satisfies the condition (\[eqn1.5\]) (or (\[eqn4.11\])) for* $C^{1,\alpha }$*-smooth and bounded* $f_{K}$*’s in* $\Omega .$* Then there exists a Lipschitz continuous minimizer* $%
u$* *$\in $ $C^{0,1}(\bar{\Omega})$ *for* $\mathcal{F}(\cdot
) $* with* $H$ $=$ $0$ *such that* $u=\varphi $* on* $%
\partial \Omega .$
We note that a $C^{2}$-smooth bounded domain with positively curved (positive principal curvatures) boundary is p-convex. Also condition (\[eqn1.5\]) includes the case $\vec{F}$ $=$ $-\vec{X}^{\ast }$. In fact, we can determine all the solutions to (\[eqn1.5\]) (see (\[eqn4.11”\])). We notice that, for $n=1$, Pauls ([@Pau01]) proved the existence of a continuous $W^{1,p}$ minimizer for $\mathcal{X}(\cdot )$ under the assumption that the graph of the prescribed boundary function $\varphi $ satisfies the bounded slope condition (see [@GT83]).
The idea of the proof of Theorem A is to invoke Theorem 11.8 in [@GT83] for a family of elliptic approximating equations (see also [@Pau01]). Namely we first solve the Dirichlet problem for the following equations:
$$\begin{aligned}
Q_{\varepsilon }u &\equiv &div(\frac{\nabla u+\vec{F}}{\sqrt{\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}}})=0\text{ \ \ in }\Omega ,\\
u &=&\varphi \text{ \ on }\partial \Omega \end{aligned}$$
(see (\[eqn4.1\])). We end up obtaining a uniform $C^{1}$ bound for solutions to the above equations, and a subsequence of solutions converges to a Lipschitz continuous minimizer as $\varepsilon$ $\rightarrow$ $0$. In Section 4 we give the details of the proof.
In Section 5 we tackle the problem of uniqueness of minimizers in the Heisenberg group of arbitrary dimension (see Theorem B). We also generalize the comparison principle in [@CHMY04] (cf. Theorem C, Theorem C$^{\prime }$ there) to a weak version and for a wide class of $\vec{F}$’s (see Theorem C below).
**Theorem B.** *Let* $\Omega $ *be a bounded domain in* $%
R^{2n}$*. Let* $u,v$* *$\in $* *$W^{1,2}(\Omega )$* be two minimizers for* $\mathcal{F}(\cdot )$* such that* $%
u-v$* *$\in $* *$W_{0}^{1,2}(\Omega )$*. Suppose* $H$ $\in $ $L^{\infty }(\Omega )$ and $\vec{F}$* *$\in W^{1,2}(\Omega
)$* satisfies* $div\vec{F}^{\ast }$* *$>$* *$0$* (a.e.). Then* $u\equiv v$* in* $\Omega
$* (a.e.).*
We remark that in the specific case $\vec{F}$ $=$ $-\vec{X}^{\ast }$ the assumptions in Theorem B are satisfied. On the other hand, the condition $div \vec{F}^{\ast }>0$ is essential in Theorem B. Let $\Omega =B_{2}-{\bar B}_{1}\subset R^{2}$ where $B_{r}$ denotes the open ball of radius $r$. Consider the case $\vec{F}=0$ and $H=\frac{1}{r}$. Let $u=f(r),v=g(r)$, and $f\neq g$ with the properties that $f(1)=g(1)$, $f(2)=g(2)$, and $f^{\prime}>0,g^{\prime}>0$ for $1\leq r \leq 2$. Then it is easy to see that $u$ and $v$ are two minimizers for the associated $\mathcal{F}(\cdot )$ (see also page 162 in [@CHMY04]).
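The non-uniqueness in this example can also be seen directly: for radial $u$ $=$ $f(r)$ with $f^{\prime }>0$, $\vec{F}$ $=$ $0$, and $H$ $=$ $\frac{1}{r}$, the functional reduces to $2\pi \int_{1}^{2}(rf^{\prime }(r)+f(r))dr$ $=$ $2\pi \lbrack rf(r)]_{1}^{2}$, which depends only on the boundary values of $f$. A numerical check of this computation (the two increasing profiles below are hypothetical choices with the same boundary values $f(1)=1$, $f(2)=2$):

```python
import math

def annulus_functional(f, fp, n=20000):
    # F(u) = integral of (|grad u| + H*u) over the annulus 1 < r < 2 for radial
    # u = f(r), with F = 0 and H = 1/r; in polar coordinates this is
    # int_1^2 (|f'(r)| + f(r)/r) * 2*pi*r dr = 2*pi * int_1^2 (|f'(r)|*r + f(r)) dr
    a, b = 1.0, 2.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):          # midpoint rule
        r = a + (i + 0.5) * h
        total += (abs(fp(r)) * r + f(r)) * h
    return 2.0 * math.pi * total

# two different increasing profiles with the same boundary values f(1)=1, f(2)=2:
F1 = annulus_functional(lambda r: r, lambda r: 1.0)
F2 = annulus_functional(lambda r: 1.0 + (r * r - 1.0) / 3.0, lambda r: 2.0 * r / 3.0)
# for increasing f the integrand is (r*f)', so both equal 2*pi*[r*f(r)]_1^2 = 6*pi
```

Since both profiles realize the same value $6\pi $, each is a minimizer, and uniqueness fails once $div\vec{F}^{\ast }>0$ is dropped.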
**Theorem C.** *Let* $\Omega $* be a bounded domain in* $%
R^{2n}.$* Let* $\vec{F}$* (a vector field)* $\in
W^{1,2}(\Omega )$* satisfy* $div\vec{F}^{\ast }$* *$>$* *$0$* (a.e.).* *Suppose* $u,v\in
W^{1,2}(\Omega )$* satisfy the following conditions:*
$$\begin{aligned}
divN(u) &\geq &divN(v)\text{ in }\Omega \text{ (in the weak sense);} \\
u &\leq &v\text{ on }\partial \Omega .\end{aligned}$$
*Then* $u\leq v$* in* $\Omega .$
In Section 6, we study the relation between $C^{2}$-smooth solutions and minimizers. In [@CHMY04] (Theorem B there), we proved that if $u$ is a $C^{2}$-smooth solution to (\[eqn1.1\]) in dimension 2 with $H$ bounded near a singular point $p_{0}$, then either $p_{0}$ is isolated in $S(u)$ or there exists a small neighborhood $B$ of $p_{0}$ which intersects $S(u)$ in exactly a $C^{1}$-smooth curve $\Gamma$ through $p_{0}$ (the condition on $H$ can be weaker). Moreover, $\Gamma$ divides $B$ into two disjoint nonsingular domains $B^{+}$ and $B^{-}$, and $N(u)(p_{0}^{+})$ $\equiv $ $\lim_{p\in B^{+}\rightarrow
p_{0}}N(u)(p)$ and $N(u)(p_{0}^{-})$ $\equiv $ $\lim_{p\in
B^{-}\rightarrow p_{0}}N(u)(p)$ exist. Also $N(u)(p_{0}^{+})$ $=$ $-N(u)(p_{0}^{-})$ (see Proposition 3.5 in [@CHMY04]). In Section 6 and the first part of Section 7 (see Proposition $6.2$, Theorem 6.3, (\[eqn6.4\]), and (\[eqn6.5\])), we will generalize such a situation and give a criterion for $u$ to be a minimizer. In particular, suppose $u$ is $C^{2}$-smooth. Then Proposition $6.2$ or Theorem 6.3 gives a criterion for $u$ to be a minimizer in the situation $H_{m-1}(S(u))$ $>$ $0$ while if $H_{m-1}(S(u))$ $=$ $0$, $u$ must be a minimizer (see Lemma 6.1).
In [@Pau01], Pauls constructed two different $C^{2}$ (in fact $C^{\infty }$) smooth solutions to the $p$-minimal surface equation ((\[eqn1.1\]) with $H$ $=$ $0$ or (\[eqn1.4’\]) with $H$ $=$ $0$ and $\vec{F}$ $=$ $-\vec{X}%
^{\ast })$ with the same $C^{\infty }$-smooth boundary value and the same $p$-area in $\Omega$ $\subset$ $R^{2}$. These two solutions do not satisfy the criterion in Proposition $6.2$ or Theorem 6.3, hence none of them is a minimizer. We can also see this fact according to Theorem B (uniqueness of minimizers). In Section 7 we construct the actual minimizer for Pauls’ example (see Example 7.3). In dimensions higher than 2, the situation is quite different. The size of the singular set can be relatively small under a suitable condition on $\vec{%
F}$ $=$ $(F_{I}).$ For $x\geq 0$, let $[x]$ denote the largest integer less than or equal to $x.$ In Section 6 we obtain an estimate for the size of the singular set and a condition on $\vec{F}$ for a $C^{2}$-smooth solution to (\[eqn1.4’\]) to be a minimizer (see Theorems D and E below). Recall that $dim_{E}$ denotes the Hausdorff dimension with respect to the Euclidean metric.
**Theorem D.** *Let* $\Omega $* be a domain in* $R^{m}.$* Suppose* $u$* *$\in $* *$C^{2}(\Omega )$* and* $F_{I}$* *$\in $* *$C^{1}(\Omega ).$* Then for any* $p$* *$\in $* *$\Omega ,$* there exists a neighborhood* $V$* of* $p$* in* $\Omega $* such that* $S(u)$* *$\cap $* *$V$* is a submanifold of* $V$ *satisfying*
$$dim_{E}\mathit{(S(u)\cap V)\leq m-[}\frac{rank\text{ }(\partial _{J}F_{I}-\partial _{I}F_{J})(p)+1}{2}\mathit{%
].} \label{eqn1.5'}$$
**Theorem E.** *Let* $\Omega $* be a bounded domain in* $%
R^{m}$, $m\geq 2$. * Suppose* $u$* *$\in $* *$C^{2}(\Omega )$* *$\cap $* *$C^{0}(\bar{\Omega})$* is a* $C^{2}$*-smooth solution to (\[eqn1.4’\]) with* $H$* *$\in $* *$C^{0}(\Omega \backslash S(u))$* *$\cap $* *$%
L^{\infty }(\Omega )$ *and* $F_{I}$ $\in $ $C^{1}(\Omega ).$* Suppose there holds*
$$\mathit{\lbrack }\frac{rank\text{ }(\partial _{J}F_{I}-\partial _{I}F_{J})+1}{2}\mathit{]\geq 2}
\label{eqn1.6}$$
*for all* $p$ $\in $ $\Omega .$ *Then* $u$* is a weak solution to (\[eqn1.4’\]) and a minimizer for (\[eqn1.3’\]) if in addition* $u$ $\in $ $W^{1,1}(\Omega ).$
**Corollary F.** *Let* $\Omega $* be a bounded domain in* $%
R^{2n}.$* Suppose* $u$* *$\in $* *$C^{2}(\Omega )$* *$\cap $* *$C^{0}(\bar{\Omega})$* is a* $C^{2}$*-smooth solution to the* $p$*-minimal surface equation ((\[eqn1.1\]) with* $H$ $=$ $0$*). Then in dimension* $\geq $ $4$ $(n\geq 2),$ $u$* is a weak solution to the* $p$*-minimal surface equation and a minimizer for (\[eqn1.2\]) with* $H$ $=$ $0$ *if in addition* $u$ $\in $ $W^{1,1}(\Omega ).$
In Section 8 we study the uniqueness of solutions to elliptic approximating equations $Q_{\varepsilon }u$ $=$ $H$ (see (\[eqn4.1\])), ${\varepsilon }>0$. Since this is an elliptic equation for a given ${\varepsilon }>0$, the uniqueness of solutions follows essentially from standard elliptic theory (see e.g. [@GT83]). But for the reader’s convenience, we include a proof in the Appendix.
We became aware of the paper [@Pau05] while this work was being done. After this paper was submitted, we were informed of the work [@RR05]. Some problems related to this paper were studied in [@Pau05] and [@RR05]. We are grateful to Andrea Malchiodi for many discussions, in particular, in the study of Example 7.3. We would also like to thank the referee for stimulating comments and pointing out many grammatical errors.
Hypersurfaces in the Heisenberg group
=====================================
In this section we introduce some basic notions for a hypersurface in a pseudohermitian manifold. By viewing the Heisenberg group or $R^{2n+1}$ as a suitable pseudohermitian manifold, we give geometric interpretations of $%
(1.1)$ and $(1.4)$.
Let $(M,J,\Theta )$ be a $(2n+1)$-dimensional pseudohermitian manifold with an integrable $CR$ structure $J$ and a global contact form $\Theta $ such that the bilinear form $G\equiv \frac{1}{2}d\Theta (\cdot ,J\cdot )$ is positive definite on the contact bundle $\xi \equiv \ker \Theta $ ([@Lee86]). The metric $G$ is usually called the Levi metric. Consider a hypersurface $\Sigma $ $\subset $ $M.$ A point $p$ $\in $ $\Sigma $ is called singular if $\xi $ coincides with $T\Sigma $ at $p.$ Otherwise, $p$ is called nonsingular and $\mathcal{V}$ $\equiv $ $\xi \cap T\Sigma $ is $%
2n-1$ dimensional in this case. There is a unique (up to sign) unit vector $%
N $ $\in $ $\xi $ that is perpendicular to $\mathcal{V}$ with respect to the Levi metric $G.$ We call $N$ the Legendrian normal or the $p$-normal (“$p$” stands for “pseudohermitian”). Suppose that $\Sigma $ bounds a domain $%
\Omega $ in $M.$ We define the $p$-area $2n$-form $\mathcal{A}$ by computing the first variation, away from the singular set, of the standard volume in the direction of the $p$-normal $N$ ($\mathcal{A}$ will be computed below for the case of the Heisenberg group):
$$\delta _{fN}\int_{\Omega }\Theta \wedge (d\Theta )^{n}=c(n)\int_{\Sigma }f%
\mathcal{A} \label{eqn2.1}$$
where $f$ is a $C^{\infty }$-smooth function on $\Sigma $ with compact support away from the singular points, and $c(n)$ $=$ $2^{n}n!$ is a normalization constant. The sign of $N$ is determined by requiring that $%
\mathcal{A}$ is positive with respect to the induced orientation on $\Sigma
. $ So we can talk about the $p$-area of $\Sigma $ by integrating $\mathcal{A%
}$ over $\Sigma $ (which from now on need not be closed)$.$ Then we define the $p$-mean curvature $H$ of $\Sigma $ as the first variation of the $p$-area in the direction of $N:$ (the support of $f$ now is also assumed to be away from the boundary of $\Sigma $)$$\delta _{fN}\int_{\Sigma }\mathcal{A=-}\int_{\Sigma }fH\mathcal{A}.
\label{eqn2.2}$$
Consider the Heisenberg group viewed as a (flat) pseudohermitian manifold $%
(R^{2n+1},$ $\Theta _{0},$ $J_{0}).$ Here $\Theta _{0}$ $\equiv $ $dz+$ $%
\sum_{j=1}^{n}(x_{j}dx_{j^{\prime }}-x_{j^{\prime }}dx_{j})$ at a point $(%
\vec{X},$ $z)$ $\equiv $ $(x_{1},$ $x_{1^{\prime }},$ ..., $x_{n},$ $%
x_{n^{\prime }},$ $z)$ $\in $ $R^{2n+1}$ and $J_{0}(\mathring{e}_{j})$ $%
\equiv $ $\mathring{e}_{j^{\prime }},$ $J_{0}(\mathring{e}_{j^{\prime }})$ $%
\equiv $ $-\mathring{e}_{j}$ where $$\mathring{e}_{j}\equiv \frac{\partial }{\partial x_{j}}+x_{j^{\prime }}\frac{%
\partial }{\partial z},\text{ \ }\mathring{e}_{j^{\prime }}\equiv \frac{%
\partial }{\partial x_{j^{\prime }}}-x_{j}\frac{\partial }{\partial z}
\label{eqn2.3}$$
$j=1,2,...,n,$ span $\xi _{0}$ $\equiv $ $\ker \Theta _{0}.$ Let $%
\Sigma $ be a graph defined by $z=u(\vec{X}).$ Note that $\mathring{e}_{j}$’s and $\mathring{e}_{j^{\prime }}$’s form an orthonormal basis with respect to the Levi metric $G_{0}$ $=$ $(\sum_{j=1}^{n}dx_{j}$ $\wedge $ $%
dx_{j^{\prime }})$ $(\cdot ,$ $J_{0}\cdot ).$ Observe that an element $v$ $=$ $\sum_{j=1}^{n}(a_{j}\mathring{e}_{j}$ $+$ $b_{j^{\prime }}\mathring{e}%
_{j^{\prime }})$ $\in $ $\xi _{0}\cap T\Sigma $ satisfies $d(z-u(\vec{X}))$ $%
(v)$ $=$ $0.$ It follows that $$\sum_{j=1}^{n}[(u_{x_{j}}-x_{j^{\prime }})a_{j}+(u_{x_{j^{\prime
}}}+x_{j})b_{j^{\prime }}]=0. \label{eqn2.4}$$
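As a sanity check on (\[eqn2.3\]), one can verify numerically that $\Theta _{0}$ annihilates the $\mathring{e}_{j}$’s and $\mathring{e}_{j^{\prime }}$’s. The sketch below (not part of the paper) uses the coordinate ordering $(x_{1},$ $x_{1^{\prime }},$ $...,$ $x_{n},$ $x_{n^{\prime }},$ $z)$ of the text and an arbitrarily chosen sample point:

```python
def theta0(p):
    # coefficients of Theta0 = dz + sum_j (x_j dx_{j'} - x_{j'} dx_j) at the
    # point p, in coordinates ordered (x_1, x_{1'}, ..., x_n, x_{n'}, z)
    n = (len(p) - 1) // 2
    c = [0.0] * len(p)
    for j in range(n):
        c[2 * j] = -p[2 * j + 1]   # coefficient of dx_j
        c[2 * j + 1] = p[2 * j]    # coefficient of dx_{j'}
    c[-1] = 1.0                    # coefficient of dz
    return c

def e_field(p, j, primed):
    # e_j = d/dx_j + x_{j'} d/dz ;  e_{j'} = d/dx_{j'} - x_j d/dz   (0-based j)
    v = [0.0] * len(p)
    if primed:
        v[2 * j + 1] = 1.0
        v[-1] = -p[2 * j]
    else:
        v[2 * j] = 1.0
        v[-1] = p[2 * j + 1]
    return v

def pair(form, vec):
    # pairing of a 1-form with a tangent vector in these coordinates
    return sum(a * b for a, b in zip(form, vec))

p = [0.5, -1.0, 2.0, 0.3, 7.0]   # an arbitrary point of R^5 (n = 2)
```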
Let $N\equiv $ $-D^{-1}\sum_{j=1}^{n}[(u_{x_{j}}-x_{j^{\prime }})\mathring{%
e}_{j}+(u_{x_{j^{\prime }}}+x_{j})\mathring{e}_{j^{\prime }}]$ where $D$ $%
\equiv $ $(\sum_{j=1}^{n}[(u_{x_{j}}-x_{j^{\prime
}})^{2}\ +\ (u_{x_{j^{\prime }}}+x_{j})^{2}])^{1/2}.$ It is easy to see that $N$ is perpendicular to $\xi _{0}\cap T\Sigma $ by (2.4) and that it has unit length with respect to $G_{0}$; hence $N$ is the $p$-normal (that the associated $\mathcal{%
A}$ is positive will be shown below). We can now compute $\Theta _{0}\wedge
(d\Theta _{0})^{n}$ $=$ $c(n)$ $dz$ $\wedge $ $dx_{1}$ $\wedge $ $%
dx_{1^{\prime }}$ $\wedge $ $...$ $\wedge $ $dx_{n}$ $\wedge $ $%
dx_{n^{\prime }}$ and
$$\iota _{N}\{\Theta _{0}\wedge (d\Theta
_{0})^{n}\}=-c(n)D^{-1}\{(I)+(II)+(III)\} \label{eqn2.5}$$
where $\iota _{N}$ means taking the interior product with $N$ and ($d\hat{x}_{I}$ deleted)
$$\begin{aligned}
(I) &=&\sum_{j=1}^{n}[(u_{x_{j}}-x_{j^{\prime }})x_{j^{\prime
}}-(u_{x_{j^{\prime }}}+x_{j})x_{j}]dx_{1}\wedge dx_{1^{\prime }}\wedge
...\wedge dx_{n}\wedge dx_{n^{\prime }} \\
(II) &=&-\sum_{j=1}^{n}(u_{x_{j}}-x_{j^{\prime }})dz\wedge dx_{1}\wedge
dx_{1^{\prime }}...d\hat{x}_{j}\wedge dx_{j^{\prime }}...\wedge dx_{n}\wedge
dx_{n^{\prime }} \\
(III) &=&\sum_{j=1}^{n}(u_{x_{j^{\prime }}}+x_{j})dz\wedge dx_{1}\wedge
dx_{1^{\prime }}...dx_{j}\wedge d\hat{x}_{j^{\prime }}...\wedge dx_{n}\wedge
dx_{n^{\prime }}.\end{aligned}$$
It follows that
$$\begin{aligned}
&&\delta _{fN}\int_{\Omega }\Theta _{0}\wedge (d\Theta _{0})^{n}
\label{eqn2.6} \\
&=&\int_{\Omega }L_{fN}\{\Theta _{0}\wedge (d\Theta _{0})^{n}\}=\int_{\Omega
}d(\iota _{fN}\{\Theta _{0}\wedge (d\Theta _{0})^{n}\}) \notag \\
&=&\int_{\Sigma }f\iota _{N}\{\Theta _{0}\wedge (d\Theta _{0})^{n}\} \notag\end{aligned}$$
by the formula $L_{v}$ $=$ $\iota _{v}\circ d$ $+$ $d\circ \iota
_{v}$ and Stokes’ theorem. Substituting (2.5) into (2.6) and comparing (2.6) with (2.1) gives
$$\mathcal{A=-}D^{-1}\{(I)+(II)+(III)\} \label{eqn2.7}$$
which simplifies to $Ddx_{1}\wedge dx_{1^{\prime }}\wedge ...\wedge
dx_{n}\wedge dx_{n^{\prime }}$ on $\Sigma $ ($z=u(\vec{X})).$ Next we compute
$$\delta _{fN}\int_{\Sigma }\mathcal{A=}\int_{\Sigma }L_{fN}\mathcal{A=}%
\int_{\Sigma }\iota _{fN}\circ d\mathcal{A}. \label{eqn2.8}$$
Here we have used Stokes’ theorem and the condition that the support of $f$ is away from the singular set and the boundary of $\Sigma .$ Noting that $D$ $=$ $|\nabla u-\vec{X}^{\ast }|$ where $\vec{X}^{\ast }$ $=$ $(x_{1^{\prime }},$ $-x_{1},$ $x_{2^{\prime }},$ $-x_{2},...,$ $x_{n^{\prime
}},$ $-x_{n})$, we can easily deduce that $d(D^{-1}(I))$ $=$ $0$ and
$$d\{D^{-1}[(II)+(III)]\}=(div\frac{\nabla u-\vec{X}^{\ast }}{|\nabla u-\vec{X}%
^{\ast }|})dz\wedge dx_{1}\wedge dx_{1^{\prime }}\wedge ...\wedge
dx_{n}\wedge dx_{n^{\prime }}.$$
It follows from (2.7) and (2.5) that
$$\iota _{N}\circ d\mathcal{A}=-(div\frac{\nabla u-\vec{X}^{\ast }}{|\nabla u-%
\vec{X}^{\ast }|})\mathcal{A}. \label{eqn2.9}$$
Substituting (2.9) into (2.8) and comparing (2.8) with (2.2), we obtain the following expression for the $p$-mean curvature $H_{\Sigma }$ of the graph $\Sigma $ $=$ $\{(%
\vec{X},$ $u(\vec{X}))\}:$$$H_{\Sigma }=div\frac{\nabla u-\vec{X}^{\ast }}{|\nabla u-\vec{X}^{\ast }|}.
\label{eqn2.10}$$
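As a consistency check of (\[eqn2.10\]): for the flat graph $u\equiv 0$ in the case $n=1$, the field $(\nabla u-\vec{X}^{\ast })/|\nabla u-\vec{X}^{\ast }|$ $=$ $(-x_{1^{\prime }},x_{1})/r$ is divergence-free away from the origin (the only singular point), so the plane $z=0$ is $p$-minimal. A finite-difference verification of this elementary computation (not from the paper):

```python
def N_field(x, y):
    # (grad u - X*)/|grad u - X*| for the graph u = 0 with n = 1:
    # grad u - X* = (-x_{1'}, x_1), nonzero away from the origin
    r = (x * x + y * y) ** 0.5
    return (-y / r, x / r)

def p_mean_curvature(x, y, h=1e-5):
    # central-difference approximation of div N_field, i.e. of (2.10) for u = 0
    dx = (N_field(x + h, y)[0] - N_field(x - h, y)[0]) / (2.0 * h)
    dy = (N_field(x, y + h)[1] - N_field(x, y - h)[1]) / (2.0 * h)
    return dx + dy
```

Indeed $\partial _{x}(-y/r)=xy/r^{3}$ and $\partial _{y}(x/r)=-xy/r^{3}$ cancel, and the numerical divergence vanishes at any nonsingular point.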
Next we consider a general vector field $\vec{F}$ $=$ $(F_{I})$ instead of $-\vec{X}^{\ast }.$ Let $\Theta _{\vec{F}}$ $\equiv $ $dz$ $+$ $%
\sum_{I}F_{I}dx_{I}$ where $I$ ranges over $1,$ $1^{\prime },$ $...,$ $n,$ $%
n^{\prime }.$ Assume that $\Theta _{\vec{F}}$ is a contact form, i.e., $%
\Theta _{\vec{F}}$ $\wedge $ $(d\Theta _{\vec{F}})^{n}$ $\neq $ $0$ everywhere (satisfied for $\vec{F}$ $=$ $-\vec{X}^{\ast }$ as shown previously). For instance, the condition is equivalent to $\partial F_{1^{\prime }}/\partial x_{1}$ $-$ $\partial
F_{1}/\partial x_{1^{\prime }}$ $\neq $ $0$ in the case $n$ $=$ $1.$ Define
$$e_{I}=\frac{\partial }{\partial x_{I}}-F_{I}\frac{\partial }{\partial z},%
\text{ \ }I=1,1^{\prime },...,n,n^{\prime }. \label{eqn2.11}$$
It is easy to see that $\Theta _{\vec{F}}$ annihilates the $e_{I}$’s. Define the $CR$ structure $J_{\vec{F}}$ on the contact bundle $\ker \Theta _{%
\vec{F}}$ by $J_{\vec{F}}(e_{j})$ $=$ $e_{j^{\prime }}$ and $J_{\vec{F}%
}(e_{j^{\prime }})$ $=$ $-e_{j}$ for $j$ $=$ $1,$ $2,$ $...,$ $n.$ For the $%
2 $-dimensional case ($n$ $=$ $1$), we can find a nonvanishing scalar function $\lambda $ ($=$ $2(\partial F_{1^{\prime }}/\partial x_{1}$ $-$ $%
\partial F_{1}/\partial x_{1^{\prime }})^{-1}$) such that $\{e_{1},$ $%
e_{1^{\prime }}\}$ forms an orthonormal basis with respect to the Levi metric $G_{\vec{F}} $ associated to $(J_{\vec{F}},$ $\lambda \Theta _{\vec{F}%
}).$ Let $\psi $ $\equiv $ $z$ $-$ $u(x_{1},x_{1^{\prime }})$ be a defining function for the graph of $u.$ By a formula in Section 2 of [@CHMY04], we can compute the $p$-mean curvature $H_{\vec{F}}$ with respect to the pseudohermitian structure $(J_{\vec{F}},$ $\lambda \Theta _{\vec{F}})$ as follows:
$$\begin{aligned}
H_{\vec{F}} &=&-div_{b}\frac{\nabla _{b}\psi }{|\nabla _{b}\psi |_{G_{\vec{F}%
}}} \label{eqn2.12} \\
&=&-e_{1}(\frac{e_{1}\psi }{D_{\vec{F}}})-e_{1^{\prime }}(\frac{e_{1^{\prime
}}\psi }{D_{\vec{F}}}) \notag \\
&=&\frac{\partial }{\partial x_{1}}(\frac{u_{x_{1}}+F_{1}}{|\nabla u+\vec{F}|%
})+\frac{\partial }{\partial x_{1^{\prime }}}(\frac{u_{x_{1^{\prime
}}}+F_{1^{\prime }}}{|\nabla u+\vec{F}|}) \notag \\
&=&div\frac{\nabla u+\vec{F}}{|\nabla u+\vec{F}|}. \notag\end{aligned}$$
Here we have used $|\nabla _{b}\psi |_{G_{\vec{F}}}$ $=$ $\sqrt{%
(e_{1}\psi )^{2}+(e_{1^{\prime }}\psi )^{2}}$ $=$ $|\nabla u+\vec{F}|$ by (\[eqn2.11\]).
Minimizers in the Heisenberg group
==================================
In this section we deduce some properties of a minimizer in the Heisenberg group. In fact we consider a more general area functional (this is just (\[eqn1.3’\])):
$$\mathcal{F}(u)\equiv \int_{\Omega }\{\mid \nabla u+\vec{F}\mid +Hu\}
\label{eqn3.1}$$
where $\Omega \subset R^{m}$ is a bounded domain, $\vec{F}$ is an arbitrary (say, $L^{1})$ vector field on $\Omega ,$ and $H$ $\in $ $%
L^{\infty }(\Omega )$ (we omit the Euclidean volume element).
**Definition 3.1.** $u\in W^{1,1}(\Omega )$ is called a minimizer for $%
\mathcal{F}(u)$ $\equiv $ $\int_{\Omega }\{|\nabla u+\vec{F}|$ $+$ $Hu\}$ if $\mathcal{F}(u)$ $\leq $ $\mathcal{F}(u+\varphi )$ for any $\varphi \in
W_{0}^{1,1}(\Omega )$, where $\vec{F}$ $\in$ $L^{1}(\Omega )$ and $H$ $\in $ $%
L^{\infty }(\Omega )$.
We are going to investigate the first variation of $\mathcal{F}$. Let $u,\varphi \in
W^{1,1}(\Omega )$ and $u_{\varepsilon }\equiv u+\varepsilon \varphi $ for $%
\varepsilon \in R.$ It follows that $u_{\varepsilon }-u_{\hat{\varepsilon}%
}=(\varepsilon -\hat{\varepsilon})\varphi .$ Let $S(u_{\varepsilon })$, the singular set of $u_{\varepsilon }$, denote the set of points where $\nabla
u_{\varepsilon }+\vec{F}=0.$ So from (\[eqn3.1\]) (noting that $\mid \nabla u_{\varepsilon }+\vec{F}\mid$ $=$ $\mid \varepsilon -\hat{\varepsilon}\mid$ $\mid \nabla \varphi \mid$ on $S(u_{\hat{\varepsilon}})$) we have
$$\begin{aligned}
\mathcal{F}(u_{\varepsilon }) &=&\mid \varepsilon -\hat{\varepsilon}\mid
\int_{S(u_{\hat{\varepsilon}})}\mid \nabla \varphi \mid +\int_{\Omega
\backslash S(u_{\hat{\varepsilon}})}\mid \nabla u_{\varepsilon }+\vec{F}\mid
\label{eqn3.2} \\
&&+\int_{\Omega }Hu_{\hat{\varepsilon}}+\int_{\Omega }(\varepsilon -\hat{%
\varepsilon})H\varphi . \notag\end{aligned}$$
Since $\mid \nabla u_{\varepsilon }+\vec{F}\mid ^{2}-\mid \nabla
u_{\hat{\varepsilon}}+\vec{F}\mid ^{2}=2(\varepsilon -\hat{\varepsilon}%
)(\nabla u_{\hat{\varepsilon}}+\vec{F})\cdot \nabla \varphi +(\varepsilon -%
\hat{\varepsilon})^{2}\mid \nabla \varphi \mid ^{2},$ we compute from (\[eqn3.2\])
$$\begin{aligned}
\frac{\mathcal{F}(u_{\varepsilon })-\mathcal{F}(u_{\hat{\varepsilon}})}{%
\varepsilon -\hat{\varepsilon}} &=&\frac{|\varepsilon -\hat{\varepsilon}|}{%
\varepsilon -\hat{\varepsilon}}\int_{S(u_{\hat{\varepsilon}})}|\nabla
\varphi |+\int_{\Omega \backslash S(u_{\hat{\varepsilon}})}\frac{2(\nabla u_{%
\hat{\varepsilon}}+\vec{F})\cdot \nabla \varphi +(\varepsilon -\hat{%
\varepsilon})|\nabla \varphi |^{2}}{|\nabla u_{\varepsilon }+\vec{F}%
|+|\nabla u_{\hat{\varepsilon}}+\vec{F}|} \\
&&+\int_{\Omega }H\varphi .\end{aligned}$$
Note that the integrand of the middle term on the right-hand side of the above formula actually equals $(\varepsilon -\hat{\varepsilon})^{-1}$ $(\mid \nabla u_{\varepsilon
}+\vec{F}\mid -$ $\mid \nabla u_{\hat{\varepsilon}}+\vec{F}\mid )$, whose absolute value is less than or equal to $\mid \nabla \varphi \mid .$ Therefore, by Lebesgue’s dominated convergence theorem, we can easily take the limit as $\varepsilon \rightarrow \hat{\varepsilon}\pm $ ($+$: the right-hand limit; $-$: the left-hand limit)$,$ and obtain
$$\frac{d\mathcal{F}(u_{\hat{\varepsilon}\pm })}{d\varepsilon }=\pm \int_{S(u_{%
\hat{\varepsilon}})}\mid \nabla \varphi \mid +\int_{\Omega \backslash S(u_{%
\hat{\varepsilon}})}N(u_{\hat{\varepsilon}})\cdot \nabla \varphi
+\int_{\Omega }H\varphi \label{eqn3.3}$$
where $N(v)\equiv $ $\frac{\nabla v+\vec{F}}{|\nabla v+\vec{F}|}$ is defined on $\Omega \backslash S(v).$ Note that $N(u_{\hat{\varepsilon}%
})\cdot \nabla \varphi $ $\in L^{1}(\Omega \backslash S(u_{\hat{\varepsilon}%
}))$ since $|N(u_{\hat{\varepsilon}})\cdot \nabla \varphi |$ $\leq $ $|N(u_{%
\hat{\varepsilon}})|$ $|\nabla \varphi |$ $=$ $|\nabla \varphi |$ and $%
\nabla \varphi $ $\in $ $L^{1}(\Omega )$ by the assumption. Also from the above argument, we have the estimate
$$\frac{\mid \mathcal{F}(u_{\varepsilon })-\mathcal{F}(u_{\hat{\varepsilon}})\mid }
{\mid \varepsilon -\hat{\varepsilon}\mid }\leq \int_{\Omega }\mid \nabla
\varphi \mid +||H||_{\infty }\int_{\Omega }|\varphi |.$$
Namely, $\mathcal{F}(u_{\varepsilon })$ is Lipschitz continuous in $\varepsilon $ for $\varphi \in W^{1,1}(\Omega ).$ Let $\kappa (\varepsilon
) $ denote the Lebesgue measure of the set $S(u_{\varepsilon })\cap \{\nabla
\varphi \neq 0\}.$ We claim that there are at most countably many $%
\varepsilon$’s with $\kappa (\varepsilon )>0$ for a fixed $\varphi
.$ First observe that $S(u_{\varepsilon _{1}})\cap S(u_{\varepsilon _{2}})$ $%
\subset $ $\{\nabla \varphi =0\},$ and hence ($S(u_{\varepsilon _{1}})\cap
\{\nabla \varphi \neq 0\})$ $\cap $ ($S(u_{\varepsilon _{2}})\cap \{\nabla
\varphi \neq 0\})$ = $\emptyset .$ Let $|\Omega |$ denote the volume of the bounded domain $\Omega .$ So, for any given positive integer $n$, the number of $\varepsilon $ such that $%
\kappa (\varepsilon )>\frac{1}{n}$ is at most $%
[n|\Omega |]+1$ where $[x]$ denotes the largest integer less than or equal to $x.$ Therefore there are at most countably many $\varepsilon$’s with $\kappa (\varepsilon )>0.$ We call such an $\varepsilon $ singular, otherwise regular (i.e., $\kappa (\varepsilon )=0)$. By (\[eqn3.3\]), we obtain (\[eqn3.4\]) in the following Lemma.
**Lemma 3.1**. *(1)* $\mathcal{F}(u_{\varepsilon })$* is Lipschitz continuous in* $\varepsilon $* for* $\varphi \in W^{1,1}(\Omega ).$* (2) There are at most countably many singular* $\varepsilon$*’s. (3) For a regular* $\varepsilon ,$* *$%
\int_{S(u_{\varepsilon })}\mid \nabla \varphi \mid =0,$* *$\frac{d%
\mathcal{F}(u_{\varepsilon })}{d\varepsilon }$* exists, and*
$$\frac{d\mathcal{F}(u_{\varepsilon })}{d\varepsilon }\mathit{=}\int_{\Omega
\backslash S(u_{\varepsilon })}\mathit{N(u}_{\varepsilon }\mathit{)\cdot
\nabla \varphi +}\int_{\Omega }H\varphi \mathit{.} \label{eqn3.4}$$
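Formula (\[eqn3.4\]) can be tested numerically in a one-dimensional toy case with empty singular set: take $\Omega =(0,1)$, $u(x)=x-x^{2}/2$, $F(x)=x$ (so $u^{\prime }+F\equiv 1$ and $N(u)\equiv 1$), $H\equiv 1$, and $\varphi (x)=x(1-x)$; then (\[eqn3.4\]) predicts $\frac{d\mathcal{F}}{d\varepsilon }=\int_{0}^{1}\varphi ^{\prime }+\int_{0}^{1}\varphi =0+\frac{1}{6}$. All choices in the following sketch are hypothetical illustrations, not taken from the paper:

```python
def F_functional(eps, n=20000):
    # 1-D analogue of (3.1): integral over (0,1) of |u_eps' + F| + H*u_eps,
    # midpoint rule, with u(x) = x - x^2/2, F(x) = x (so u' + F = 1, no
    # singular set), H = 1, and u_eps = u + eps*phi, phi(x) = x*(1 - x)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        up_eps = (1.0 - x) + eps * (1.0 - 2.0 * x)        # u_eps'
        u_eps = (x - 0.5 * x * x) + eps * x * (1.0 - x)   # u_eps
        total += (abs(up_eps + x) + u_eps) * h
    return total

d = 1e-6
num_deriv = (F_functional(d) - F_functional(-d)) / (2.0 * d)
# (3.4) predicts int N(u)*phi' + int H*phi = 0 + 1/6 since N(u) = 1 here
```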
Next for $\varepsilon _{2},\varepsilon _{1}$ regular with $\varepsilon
_{2}>\varepsilon _{1},$ we compute the difference of $\frac{d\mathcal{F}%
(u_{\varepsilon })}{d\varepsilon }$ for $\varepsilon =\varepsilon
_{2},\varepsilon _{1}$ by (\[eqn3.4\]). Using $\kappa (\varepsilon _{j})$ $%
=$ $0,$ $j=1,2$ to shrink the domain of the integral, we obtain
$$\frac{d\mathcal{F}(u_{\varepsilon _{2}})}{d\varepsilon }-\frac{d\mathcal{F}%
(u_{\varepsilon _{1}})}{d\varepsilon }=\int_{\Omega \backslash \lbrack
S(u_{\varepsilon _{2}})\cup S(u_{\varepsilon _{1}})]}[N(u_{\varepsilon
_{2}})-N(u_{\varepsilon _{1}})]\cdot \nabla \varphi \geq 0. \label{eqn3.5}$$
Here we have used Lemma $5.1^{\prime}$ (which also holds for $u,v\in W^{1,1}$) in [@CHMY04] to conclude the last inequality in (\[eqn3.5\]), by noting that $\nabla \varphi $ $=$ $(\varepsilon _{2}-\varepsilon _{1})^{-1}(\nabla
u_{\varepsilon _{2}}-\nabla u_{\varepsilon _{1}}).$ We have the following result.
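The sign in (\[eqn3.5\]) rests on the pointwise monotonicity of the normalization map $\vec{p}\mapsto \vec{p}/|\vec{p}|$. The following randomized spot check (an illustration, not a proof) verifies $(\vec{p}/|\vec{p}|-\vec{q}/|\vec{q}|)\cdot (\vec{p}-\vec{q})\geq 0$:

```python
import random

def unit(p):
    # p -> p/|p|, the map whose monotonicity gives the sign in (3.5)
    norm = sum(x * x for x in p) ** 0.5
    return [x / norm for x in p]

random.seed(0)
for _ in range(1000):
    p = [random.uniform(-1, 1) for _ in range(3)]
    q = [random.uniform(-1, 1) for _ in range(3)]
    Np, Nq = unit(p), unit(q)
    # (N(p) - N(q)) . (p - q) >= 0, up to rounding
    dot = sum((a - b) * (c - d) for a, b, c, d in zip(Np, Nq, p, q))
    assert dot >= -1e-12
```

Indeed $(\vec{p}/|\vec{p}|-\vec{q}/|\vec{q}|)\cdot (\vec{p}-\vec{q})=|\vec{p}|+|\vec{q}|-(\vec{p}\cdot \vec{q})(|\vec{p}|^{-1}+|\vec{q}|^{-1})\geq 0$ by the Cauchy-Schwarz inequality.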
**Lemma 3.2.** *(1)* $\frac{d\mathcal{F}(u_{\varepsilon })}{%
d\varepsilon }$* is an increasing function of* $\varepsilon $* for* $\varepsilon $* regular. (2) Let* $\varepsilon _{j}$*,* $j=1,2,...,$* be a sequence of decreasing (increasing, respectively) regular numbers tending to* $\hat{\varepsilon}$* (*$%
\hat{\varepsilon}$* may be singular) as* $j\rightarrow \infty $*. Then we have*$$\lim_{j\rightarrow \infty }\frac{d\mathcal{F}(u_{\varepsilon _{j}})}{%
d\varepsilon }=\frac{d\mathcal{F}(u_{\hat{\varepsilon}+})}{d\varepsilon }%
\text{ \ (}=\frac{d\mathcal{F}(u_{\hat{\varepsilon}-})}{d\varepsilon },\text{
respectively).} \label{eqn3.6}$$
Note that we have the precise expressions for the right-hand limit $%
\frac{d\mathcal{F}(u_{\hat{\varepsilon}+})}{d\varepsilon }$ and the left-hand limit $\frac{d\mathcal{F}(u_{\hat{\varepsilon}-})}{d\varepsilon }$ at $\hat{\varepsilon}$ in (\[eqn3.3\]).
**Proof.** (1) follows from (\[eqn3.5\]). To prove (2), first observe that $\int_{S(u_{\varepsilon _{j}})}|\nabla \varphi |$ $=0$ by the definition of $\varepsilon _{j}$ being regular. Therefore we have
$$\int_{\cup _{j=1}^{\infty }S(u_{\varepsilon _{j}})}|\nabla \varphi |=0.
\label{eqn3.7}$$
Let $S_{\infty }\equiv \cup _{j=1}^{\infty }S(u_{\varepsilon _{j}}).$ Since $|N(u_{\varepsilon _{j}})|$ $\leq $ $1,$ we estimate $\mid
\int_{S_{\infty }}N(u_{\varepsilon _{j}})\cdot \nabla \varphi \mid \leq
\int_{S_{\infty }}|\nabla \varphi |=0$ by (\[eqn3.7\]). So we obtain
$$\int_{S_{\infty }}N(u_{\varepsilon _{j}})\cdot \nabla \varphi =0.
\label{eqn3.8}$$
It then follows from (\[eqn3.4\]) and (\[eqn3.8\]) that
$$\begin{aligned}
\frac{d\mathcal{F}(u_{\varepsilon _{j}})}{d\varepsilon } &=&\int_{\Omega
\backslash S(u_{\varepsilon _{j}})}N(u_{\varepsilon _{j}})\cdot \nabla
\varphi +\int_{\Omega }H\varphi \label{eqn3.9} \\
&=&\int_{\Omega \backslash S_{\infty }}N(u_{\varepsilon _{j}})\cdot \nabla
\varphi +\int_{\Omega }H\varphi . \notag\end{aligned}$$
On the other hand, observe that $\lim_{j\rightarrow \infty
}N(u_{\varepsilon _{j}})=N(u_{\hat{\varepsilon}})$ in $\Omega \backslash
\lbrack S_{\infty }\cup S(u_{\hat{\varepsilon}})]$ and
$$N(u_{\varepsilon _{j}})=\frac{(\nabla u_{\hat{\varepsilon}}+\vec{F}%
)+(\varepsilon _{j}-\hat{\varepsilon})\nabla \varphi }{|(\nabla u_{\hat{%
\varepsilon}}+\vec{F})+(\varepsilon _{j}-\hat{\varepsilon})\nabla \varphi |}=%
\frac{(\varepsilon _{j}-\hat{\varepsilon})\nabla \varphi }{|\varepsilon _{j}-%
\hat{\varepsilon}||\nabla \varphi |} \label{eqn3.10}$$
in $S(u_{\hat{\varepsilon}})\backslash S_{\infty }.$ Now we compute
$$\begin{aligned}
&&\int_{\Omega \backslash S_{\infty }}N(u_{\varepsilon _{j}})\cdot \nabla
\varphi \label{eqn3.11} \\
&=&(\int_{S(u_{\hat{\varepsilon}})\backslash S_{\infty }}+\int_{\Omega
\backslash \lbrack S_{\infty }\cup S(u_{\hat{\varepsilon}})]})N(u_{%
\varepsilon _{j}})\cdot \nabla \varphi \notag \\
&=&\frac{\varepsilon _{j}-\hat{\varepsilon}}{|\varepsilon _{j}-\hat{%
\varepsilon}|}\int_{S(u_{\hat{\varepsilon}})\backslash S_{\infty }}|\nabla
\varphi |+\int_{\Omega \backslash \lbrack S_{\infty }\cup S(u_{\hat{%
\varepsilon}})]}N(u_{\varepsilon _{j}})\cdot \nabla \varphi \notag \\
&\rightarrow &\pm \int_{S(u_{\hat{\varepsilon}})}|\nabla \varphi
|+\int_{\Omega \backslash S(u_{\hat{\varepsilon}})}N(u_{\hat{\varepsilon}%
})\cdot \nabla \varphi \notag\end{aligned}$$
as $j\rightarrow \infty $ ($+$ for decreasing $\varepsilon _{j}$; $%
-$ for increasing $\varepsilon _{j}$)$.$ Here we have used (\[eqn3.10\]) and Lebesgue’s dominated convergence theorem. By (\[eqn3.9\]), (\[eqn3.11\]), and in view of (\[eqn3.3\]), we have proved (\[eqn3.6\]).
Q.E.D.
**Definition 3.2.** Let $\Omega \subset R^{m}$ be a bounded domain. Let $\vec{F}$ be an $L^{1}_{loc}$ vector field on $\Omega .$ Let $H$ $%
\in $ $L^{1}_{loc}(\Omega ).$ We say $u\in W^{1}(\Omega )$ is a weak solution to the equation (\[eqn1.4'\]), i.e., $divN(u)$ $=$ $H$ in $\Omega ,$ if and only if for any $\varphi \in C^{\infty}_{0}(\Omega ),$ there holds$$\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi +\int_{\Omega }H\varphi \geq 0. \label{eqn3.12}$$
Recall that $N(u)\equiv $ $\frac{\nabla u+\vec{F}}{|\nabla u+\vec{F}|%
}$, $S(u)\equiv \{\nabla u+\vec{F}=0\},$ and $N(u)\cdot \nabla \varphi $ $%
\in L^{1}(\Omega \backslash S(u))$ since $|N(u)\cdot \nabla \varphi |$ $\leq
$ $|N(u)|$ $|\nabla \varphi |$ $=$ $|\nabla \varphi |$ and $\nabla \varphi $ $\in $ $L^{1}(\Omega )$ by assumption. Note that with $\varphi $ replaced by $-\varphi $ in (\[eqn3.12\]), we also have $%
-\int_{S(u)}|\nabla \varphi |$ $+$ $\int_{\Omega \backslash S(u)}N(u)\cdot
\nabla \varphi $ $+$ $\int_{\Omega }H\varphi $ $\leq $ $0.$ Moreover, if the $(m-1)$-dimensional Hausdorff measure of $S(u)$ vanishes, then the equality holds in (\[eqn3.12\]). We remark that in Definition 3.2 for the case $H$ $\in $ $L^{\infty}(\Omega )$, the space $C^{\infty}_{0}(\Omega )$ of test functions can be replaced by $W^{1,1}_{0}(\Omega )$ since the former is dense in the latter in the $W^{1,1}$ norm ([@GT83]). Note that in the definition of a minimizer, we require $u$ $\in$ $W^{1,1}(\Omega )$, $\vec{F}$ $\in$ $L^{1}(\Omega )$, and $H$ $\in $ $L^{\infty}(\Omega )$ while for the definition of a weak solution, $u$ can be in a larger space $W^{1}(\Omega )$, $\vec{F}$ $\in$ $L^{1}_{loc}(\Omega )$, and $H$ $\in $ $L^{1}_{loc}(\Omega )$.
**Theorem 3.3.** *Let *$u\in W^{1,1}(\Omega ),$ $\vec{F}$ $\in$ $L^{1}(\Omega ),$ *and* $H$ $\in $ $L^{\infty}(\Omega )$.* Then* $%
u$* is a minimizer for* $\mathcal{F}(\cdot )$* if and only if* $u$* is a weak solution to the equation* $divN(u)$* *$=$* *$H$*.*
**Proof.** Suppose $u$ is a minimizer for $\mathcal{F}(u).$ Then $\frac{%
d\mathcal{F}(u_{0+})}{d\varepsilon }$ $\geq $ $0,$ and hence (\[eqn3.12\]) follows from (\[eqn3.3\]) (letting $\hat{\varepsilon}=0$ in (\[eqn3.3\])). So $u$ is a weak solution. Conversely, suppose $u$ is a weak solution. Since $\mathcal{F}(u_{\varepsilon })$ is Lipschitz continuous in $%
\varepsilon $ by Lemma 3.1 (1), $\frac{d\mathcal{F}(u_{\varepsilon })}{%
d\varepsilon }$ exists a.e. (in fact at least for regular $\varepsilon $) and it is integrable. Moreover, we have$$\mathcal{F}(u+\varphi )-\mathcal{F}(u)=\int_{0}^{1}\frac{d\mathcal{F}%
(u_{\varepsilon })}{d\varepsilon }d\varepsilon . \label{eqn3.13}$$
On the other hand, from Lemma 3.2 and the definition of weak solution (Definition 3.2), we obtain that $\frac{d\mathcal{F}(u_{\varepsilon })}{d\varepsilon
}$ $\geq $ $0$ for any regular $\varepsilon \in \lbrack 0,1]$ in view of (\[eqn3.3\]) (take $\hat{\varepsilon}=0)$. By Lemma 3.1 (2), $\frac{d%
\mathcal{F}(u_{\varepsilon })}{d\varepsilon }$ $\geq $ $0$ a.e. It follows from (\[eqn3.13\]) that $\mathcal{F}(u+\varphi )$ $\geq $ $\mathcal{F}(u).$ That is to say, $u$ is a minimizer for $\mathcal{F}(u).$
Q.E.D.
Existence of minimizers: proof of Theorem A
===========================================
Let $\Omega $ be a bounded domain in $R^{m},m\geq 2$. Consider the following elliptic approximation $u=u_{\varepsilon }$ ($\varepsilon $ $>$ $0)$ (a geometric interpretation can be found in [@Pau01]) with given boundary value $\varphi $ ($\in C^{2,\alpha }(\bar{\Omega}),$ $0$ $<$ $%
\alpha $ $<$ $1,$ say):
$$\begin{aligned}
Q_{\varepsilon }u &\equiv &div(\frac{\nabla u+\vec{F}}{\sqrt{\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}}})=0\text{ \ \ in }\Omega , \label{eqn4.1} \\
u &=&\varphi \text{ \ on }\partial \Omega \notag\end{aligned}$$
where $\vec{F}=(F_{I})$, $I=1,...,m$. In the case of $m=2n$, $I$ ranges over $1$, $1^{\prime}$, ..., $n$, $n^{\prime}$ (e.g., $F_{I}=$ $-x_{I^{\prime }}$ for the case of a p-minimal surface. Here we use the convention that $x_{j^{{\prime}{\prime}}}$ $=$ $-x_{j}$, $j$ $=$ $1$, ..., $n$). We will make use of Theorem 11.8 in [@GT83] to solve (\[eqn4.1\]) in $C^{2,\alpha }(\bar{\Omega})$ (then a subsequence of $u_{\varepsilon }$ will converge to what we want)$.$ First we check that $Q_{\varepsilon }$ is elliptic. A direct computation shows that (summation convention applies)
$$\begin{aligned}
Q_{\varepsilon }u &=&\frac{u_{II}(\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2})-(u_{I}+F_{I})(u_{J}+F_{J})u_{IJ}}{[\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2}]^{3/2}} \label{eqn4.2} \\
&&+\frac{(\varepsilon ^{2}+|\nabla u+\vec{F}|^{2})\partial
_{I}F_{I}-(u_{I}+F_{I})(u_{J}+F_{J})\partial _{I}F_{J}}{[\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}]^{3/2}} \notag \\
&=&a_{IJ}({\varepsilon },x,\nabla u)u_{IJ}+b({\varepsilon },x,\nabla u) \notag\end{aligned}$$
where $$a_{IJ}({\varepsilon },x,\nabla u)=\frac{\delta _{IJ}(\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2})-(u_{I}+F_{I})(u_{J}+F_{J})}{[\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2}]^{3/2}} \label{eqn4.3}$$
and
$$b({\varepsilon },x,\nabla u)=\frac{(\varepsilon ^{2}+|\nabla u+\vec{F}|^{2})\partial
_{I}F_{I}-(u_{I}+F_{I})(u_{J}+F_{J})\partial _{I}F_{J}}{[\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}]^{3/2}}.$$
For $0$ $\neq $ $(p_{I})$ $\in R^{m},$ we compute from (\[eqn4.3\]) that
$$\begin{aligned}
a_{IJ}p_{I}p_{J} &=&\frac{(\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2})p_{I}^{2}-(u_{I}+F_{I})(u_{J}+F_{J})p_{I}p_{J}}{[\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}]^{3/2}} \label{eqn4.4} \\
&\geq &\frac{\varepsilon ^{2}p_{I}^{2}}{[\varepsilon ^{2}+|\nabla u+\vec{F}%
|^{2}]^{3/2}}>0. \notag\end{aligned}$$
Here we have used Cauchy’s inequality $|(\nabla u+%
\vec{F})\cdot (p_{I})|^{2}$ $\leq $ $|\nabla u+\vec{F}|^{2}p_{I}^{2}$ (noting that $p_{I}^{2}$ means the sum $\Sigma _{I}p_{I}^{2}$). It follows from (\[eqn4.2\]) and (\[eqn4.4\]) that $Q_{\varepsilon }$ is elliptic.
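The lower bound (\[eqn4.4\]) can be spot-checked numerically. The sketch below (illustrative only) draws random vectors standing in for $\nabla u+\vec{F}$ and $(p_{I})$ and verifies $a_{IJ}p_{I}p_{J}\geq \varepsilon ^{2}|p|^{2}/D^{3}>0$, with $D^{2}=\varepsilon ^{2}+|\nabla u+\vec{F}|^{2}$:

```python
import random

random.seed(1)
eps = 0.5
for _ in range(500):
    v = [random.uniform(-2, 2) for _ in range(4)]  # stands for grad u + F
    p = [random.uniform(-2, 2) for _ in range(4)]
    D2 = eps**2 + sum(x * x for x in v)
    # a_IJ p_I p_J computed from (4.3):
    # [D^2 |p|^2 - ((grad u + F) . p)^2] / D^3
    quad = (D2 * sum(x * x for x in p)
            - sum(vi * pi for vi, pi in zip(v, p)) ** 2) / D2**1.5
    lower = eps**2 * sum(x * x for x in p) / D2**1.5   # the bound in (4.4)
    assert quad >= lower - 1e-9
```

The inequality is exactly the Cauchy estimate used in the text: $(v\cdot p)^{2}\leq |v|^{2}|p|^{2}$, so the numerator is at least $(D^{2}-|v|^{2})|p|^{2}=\varepsilon ^{2}|p|^{2}$.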
To apply Theorem 11.8 in [@GT83], we need to get an a priori estimate in the $C^{1}(\bar{\Omega})$-norm at least. Suppose $u_{\varepsilon }$ is a $%
C^{2,\alpha }(\bar{\Omega})$ solution to the equation $Q_{\varepsilon }u$ $=$ $%
0 $ (assuming $\vec{F}\in C^{1,\alpha }(\bar{\Omega});$ later replacing $%
\vec{F}$ by $\sigma \vec{F}$), $u$ $=$ $\sigma \varphi $ on $\partial \Omega
,$ $0$ $\leq $ $\sigma $ $\leq $ $1.$ In the case of $F_{I}$ $=$ $%
-x_{I^{\prime }},$ $\partial _{I}F_{I}$ $=$ $0,$ $(u_{I}+F_{I})(u_{J}+F_{J})%
\partial _{I}F_{J}$ $=$ $-(u_{J^{\prime }}+F_{J^{\prime }})(u_{J}+F_{J})$ $=$ $0,$ and hence $b(\varepsilon ,x,\nabla u)$ $=$ $0.$ Since $Q_{\varepsilon }$ is elliptic, it follows from the maximum principle (see e.g. Problem 10.1 in [@GT83]) that
$$\sup_{\Omega }\text{ }|u_{\varepsilon }|\leq \sup_{\partial \Omega }\text{ }%
|u_{\varepsilon }|=\sup_{\partial \Omega }\text{ }|\sigma \varphi |\leq
\sup_{\partial \Omega }\text{ }|\varphi |. \label{eqn4.5}$$
Note that the right-hand side is independent of $\varepsilon .$ For a general $\vec{F},$ we will invoke the comparison principle for a second order, quasilinear operator with a “tail” term (namely, Theorem 10.1 in [@GT83]). First we can find the comparison functions as shown below. Let $||\ \ ||_{\infty}$ denote the supremum norm. Let $B_{R}$ denote the ball of radius $R$, centered at the origin.
**Lemma 4.1**. *Let* $\Omega $ $\subset B_{R}$ $\subset R^{m}$ *be a bounded domain. Suppose* $\vec{F}$ $\in $ $%
C^{1}(\Omega )$ *is such that* $F_{I}$ *and* $\partial _{I}F_{J}$ *are all bounded in* $\Omega .$ *Then there are* $C^{\infty }$*-smooth functions*
$$w=e^{x_{1}+\kappa R}+ e^{x_{2}+\kappa R},\ w^{\prime}=-e^{x_{1}+{\kappa}^{\prime}R}- e^{x_{2}+{\kappa}^{\prime}R}$$
*in* $R^{m}$*, where* $\kappa =\ \kappa (\varepsilon ,\ R,\ ||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty})$ $>$ $0$ *and* ${\kappa}^{\prime} =\ {\kappa}^{\prime} (\varepsilon ,\ R,\ ||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty})$ $>$ $0$*, such that* $Q_{\varepsilon }w$ $>$ $0$ *and* $%
Q_{\varepsilon }w^{\prime }$ $<$ $0$ *in* $\Omega .$ *Moreover, we can choose* $\kappa$ *and* ${\kappa}^{\prime}$ *independent of* $\varepsilon$ *(but depending on* $\varepsilon_{0}$*) for* $0<\varepsilon \leq {\varepsilon}_{0}$*, a positive constant.*
**Proof**. Let $w$ have the above expression with $\kappa $ to be determined later. Let $%
w_{1}$ $\equiv $ $\partial _{x_{1}}w,$ $w_{11}$ $\equiv $ $\partial
_{x_{1}}^{2}w,$ $w_{12}$ $\equiv $ $\partial _{x_{2}}\partial _{x_{1}}w,$ and so on. It follows that
$$\begin{aligned}
w_{11} &=&w_{1}=e^{x_{1}+\kappa R},\text{ }w_{22}=w_{2}=e^{x_{2}+\kappa R},\text{ and} \label{eqn4.6}
\\
w_{IJ} &=&0,\text{ otherwise.} \notag\end{aligned}$$
In view of (\[eqn4.2\]) with $u$ replaced by $w,$ we compute the dominating term (as will become clear below) in the numerator, which is cubic in $w,$ as follows:
$$\begin{aligned}
&&(\sum_{J}w_{J}^{2})(\sum_{I}w_{II})-\sum_{I,J}w_{I}w_{J}w_{IJ}
\label{eqn4.7} \\
&=&w_{1}^{2}w_{22}+w_{2}^{2}w_{11}\text{ \ (by
(\ref{eqn4.6}))} \notag \\
&=&e^{2x_{1}+x_{2}+3\kappa R}+e^{2x_{2}+x_{1}+3\kappa R}%
\text{ \ (by (\ref{eqn4.6})).} \notag\end{aligned}$$
It is easy to see that any other term in the expansion of the numerator is bounded by either $c_{1}e^{2\kappa R},$ $c_{2}e^{\kappa R}$ or $%
c_{3}$ for $\kappa $ large$.$ Here $c_{i}\ =\ c_{i}(\varepsilon ,\ R,\ ||F_{I}||_{\infty},\ ||\partial _{I}F_{J}||_{\infty})$, $i=1,\ 2,\ 3$, are independent of $\kappa .$ Therefore we have $Q_{\varepsilon }w$ $>$ $0$ in $\Omega $ by (\[eqn4.7\]) for a large $\kappa$ $=$ $\kappa (\varepsilon ,\ R,\ ||F_{I}||_{\infty},\ ||\partial _{I}F_{J}||_{\infty})$. Moreover, $\kappa $ is independent of $\varepsilon$ for $0<\varepsilon \leq {\varepsilon}_{0}$, a positive constant. Similarly, we can find ${\kappa}^{\prime} =\ {\kappa}^{\prime} (\varepsilon ,\ R,\ ||F_{I}||_{\infty},\ ||\partial _{I}F_{J}||_{\infty})\ >0$ such that $Q_{\varepsilon }w^{\prime }$ $<$ $0$ in $\Omega $.
Q.E.D.
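As a purely illustrative numerical check of Lemma 4.1 (not part of the proof), one can evaluate $Q_{\varepsilon }w$ by central differences in the plane $(m=2)$. The bounded field $\vec{F}=(\cos x_{2},\sin x_{1})$, together with $R=1$, $\varepsilon =0.5$, and $\kappa =10$, are all hypothetical choices made up for the demo:

```python
import math

# Evaluate Q_eps w = div((grad w + F)/sqrt(eps^2 + |grad w + F|^2))
# for w = exp(x1 + kappa R) + exp(x2 + kappa R) and the made-up field
# F = (cos x2, sin x1), by central differences of the normalized field.
eps, R, kappa, h = 0.5, 1.0, 10.0, 1e-4

def N(x1, x2):
    # grad w is computed exactly; only the divergence is differenced
    v1 = math.exp(x1 + kappa * R) + math.cos(x2)
    v2 = math.exp(x2 + kappa * R) + math.sin(x1)
    d = math.sqrt(eps**2 + v1 * v1 + v2 * v2)
    return v1 / d, v2 / d

def Q(x1, x2):
    return ((N(x1 + h, x2)[0] - N(x1 - h, x2)[0])
            + (N(x1, x2 + h)[1] - N(x1, x2 - h)[1])) / (2 * h)

# sample points inside the ball B_R
for x1, x2 in [(0.0, 0.0), (-0.7, 0.5), (0.6, -0.6), (-0.6, -0.6)]:
    assert Q(x1, x2) > 0.0   # Q_eps w > 0 once kappa is large enough
```

The positivity reflects (\[eqn4.7\]): the cubic term $w_{1}^{2}w_{2}+w_{2}^{2}w_{1}$ dominates every remaining term of the numerator for large $\kappa $.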
**Proposition 4.2.** *Let* $\Omega \subset B_{R} \subset R^{m}$ *be a bounded domain. Let* $\vec{F}$ $\in $ $C^{1}(\Omega )$ *be such that* $F_{I}$ *and* $%
\partial _{I}F_{J}$ *are all bounded in* $\Omega .$ *Suppose* $u_{\varepsilon }$ $\in $ $C^{2}(\Omega )\cap C^{0}(\bar{\Omega})$ *satisfies (\[eqn4.1\]), i.e.,* $Q_{\varepsilon }u_{\varepsilon }$ $=$ $0$ *in* $\Omega $ *and* $%
u_{\varepsilon }$ $=$ $\sigma \varphi $ $\in $ $C^{0},$ $0$ $\leq $ $\sigma $ $\leq $ $1,$ *on* $\partial \Omega $. *Then there exists a constant* $C$ $=$ $ C(\varepsilon ,\ R,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\varphi ||_{\infty})$ *(independent of* $\sigma$*) such that*
$$\sup_{\Omega }|u_{\varepsilon }|\leq C. \label{eqn4.8}$$
*Moreover,* *the bounds hold uniformly* *for* $0<\varepsilon \leq {\varepsilon}_{0}$*, a positive constant.*
**Proof.** Let $w$, $w^{\prime }$ be the comparison functions as in Lemma 4.1. On $\partial \Omega ,$ $w$ $\leq $ $\sigma \varphi +C_{1}$ $%
= $ $u_{\varepsilon }+C_{1}$ for some constant $C_{1}$ $=$ $C_{1}(\varepsilon ,\ R,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\varphi ||_{\infty})$, independent of $\sigma$ and, for $0<\varepsilon \leq {\varepsilon}_{0}$ (a positive constant), also independent of $\varepsilon$. On the other hand, we have $Q_{\varepsilon }w$ $>$ $0=Q_{\varepsilon }(u_{\varepsilon }+C_{1})$ in $\Omega $ by Lemma 4.1 and the observation that $Q_{\varepsilon }(u_{\varepsilon }+C_{1})$ $=$ $%
Q_{\varepsilon }u_{\varepsilon }$. Now we apply the comparison principle for quasilinear operators (e.g. Theorem 10.1 in [@GT83]) to conclude that
$$w\leq u_{\varepsilon }+C_{1}\text{ in }\Omega . \label{eqn4.9}$$
Similarly, there is a constant $C_{2}$ $=$ $C_{2}(\varepsilon ,\ R,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\varphi ||_{\infty})$, independent of $\sigma$ and of $\varepsilon$ for $0<\varepsilon \leq {\varepsilon}_{0}$, such that $w^{\prime }$ $\geq $ $\sigma \varphi -C_{2}$ $=$ $%
u_{\varepsilon }-C_{2}$ on $\partial \Omega $ and $Q_{\varepsilon }w^{\prime
}$ $<$ $0$ $=$ $Q_{\varepsilon }(u_{\varepsilon }-C_{2})$ in $\Omega .$ So we obtain from the comparison principle that
$$w^{\prime }\geq u_{\varepsilon }-C_{2}\text{ in }\Omega . \label{eqn4.10}$$
Thus (\[eqn4.8\]) follows from (\[eqn4.9\]) and (\[eqn4.10\]).
Q.E.D.
For the gradient estimate, we will reduce the problem to a gradient estimate at the boundary. We need to require a condition on $\vec{F}.$ Suppose there are $C^{1}$-smooth functions $f_{K}$’s ($K$ $=$ $1,$ $...,$ $m)$ in $\Omega $ such that
$$\partial _{K}F_{I}=\partial _{I}f_{K}. \label{eqn4.11}$$
We remark that if both $F_{I}$ and $G_{I}$ satisfy the condition (\[eqn4.11\]), so does $F_{I}$ $+$ $G_{I}$. In fact, we can write down all the (local) solutions to (\[eqn4.11\]). It is easy to see from (\[eqn4.11\]) that $\partial _{K}$ $(\partial _{J}F_{I}$ $-$ $\partial _{I}F_{J})$ $=$ $0$ for all $I,$ $J,$ $K$ $=$ $1,$ $...,$ $m.$ It follows that
$$\partial _{J}F_{I}-\partial _{I}F_{J}=C_{IJ} \label{eqn4.11'}$$
where the constants $C_{IJ}$ satisfy the skew-symmetric relation $C_{IJ}=-C_{JI}.$ Since the left-hand side of (\[eqn4.11'\]) is linear in $\vec{%
F},$ the general solutions are the solutions to $\partial _{J}F_{I}$ $-$ $%
\partial _{I}F_{J}$ $=$ $0$ plus a special solution. Let $\omega $ $\equiv $ $\sum_{I}F_{I}dx_{I}.$ Then $d\omega $ $=$ $0$ if $\partial _{J}F_{I}$ $-$ $%
\partial _{I}F_{J}$ $=$ $0.$ So locally there is a function $g$ such that $%
\omega $ $=$ $dg.$ Hence $F_{I}$ $=$ $\partial _{I}g.$ On the other hand, we observe that $\tilde{F}_{I}$ $\equiv $ $\frac{1}{2}%
\sum_{K}C_{IK}x_{K}$ is a special solution to (\[eqn4.11'\]). So the general solutions to (\[eqn4.11'\]) are
$$F_{I}=\partial _{I}g+\frac{1}{2}\sum_{K}C_{IK}x_{K}. \label{eqn4.11''}$$
It is then easy to verify that $\vec{F}$ $=$ $(F_{I})$ having the form (\[eqn4.11''\]) are also solutions to (\[eqn4.11\]) for $f_{K}$ $%
=$ $\partial _{K}g$ $+$ $\frac{1}{2}\sum_{J}C_{JK}x_{J}.$
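This verification can be spot-checked numerically. In the sketch below, the potential $g$ and the skew-symmetric matrix $C_{IJ}$ are both made up for the demo; we confirm $\partial _{K}F_{I}=\partial _{I}f_{K}$ by central differences:

```python
# Spot check of (4.11''): with a made-up potential g and skew-symmetric
# constant matrix C, the field F_I = d_I g + (1/2) sum_K C_{IK} x_K
# satisfies d_K F_I = d_I f_K for f_K = d_K g + (1/2) sum_J C_{JK} x_J.
h, m = 1e-5, 3
C = [[0.0, 2.0, -1.0], [-2.0, 0.0, 3.0], [1.0, -3.0, 0.0]]  # C_IJ = -C_JI

def dg(x, i):
    # exact gradient of g(x) = x0^2 x1 + x1 x2^3
    return (2 * x[0] * x[1], x[0]**2 + x[2]**3, 3 * x[1] * x[2]**2)[i]

def F(x, i):
    return dg(x, i) + 0.5 * sum(C[i][k] * x[k] for k in range(m))

def f(x, k):
    return dg(x, k) + 0.5 * sum(C[j][k] * x[j] for j in range(m))

def deriv(fun, x, k, i):
    # central difference of fun(., i) in the x_k direction
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    return (fun(xp, i) - fun(xm, i)) / (2 * h)

x = [0.3, -0.7, 1.1]
for I in range(m):
    for K in range(m):
        # d_K F_I = d_K d_I g + C_IK / 2 = d_I f_K
        assert abs(deriv(F, x, K, I) - deriv(f, x, I, K)) < 1e-6
```

Both sides reduce to $\partial _{K}\partial _{I}g+\frac{1}{2}C_{IK}$, so the identity holds exactly; the tolerance only absorbs finite-difference error.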
**Proposition 4.3.** *Let* $\Omega \subset R^{m}$ *be a bounded domain. Let* $\vec{F}$ $=$ $(F_{I})$ $\in $ $C^{1}(\Omega )$ *satisfy the condition (\[eqn4.11\]), for all* $I,K$ $=$ $1,$ $...,$ $%
m$*, where all* $f_{K}$*’s are bounded. Suppose* $u_{\varepsilon }$ $%
\in $ $C^{2}(\bar{\Omega})$ *satisfies the equation* $Q_{\varepsilon
}u_{\varepsilon }$ $=$ $H_{0}$*, a constant,* *in* $\Omega $. *Then we have*
$$\sup_{\Omega }|\partial _{K}u_{\varepsilon }|\leq \sup_{\partial \Omega
}|\partial _{K}u_{\varepsilon }|+2||f_{K}||_{\infty}. \label{eqn4.12}$$
**Proof.** Write $\nabla u+\vec{F}=(u_{I}+F_{I}).$ Let $D_{\varepsilon
}(u)\equiv \sqrt{\varepsilon ^{2}+|\nabla u+\vec{F}|^{2}}.$ Compute (summing over $J$ while fixing $I$ and $K$)
$$\begin{aligned}
&&\partial _{K}\frac{u_{I}+F_{I}}{D_{\varepsilon }(u)} \label{eqn4.13} \\
&=&\frac{u_{IK}+\partial _{K}F_{I}}{D_{\varepsilon }(u)}-\frac{%
(u_{I}+F_{I})(u_{J}+F_{J})(u_{JK}+\partial _{K}F_{J})}{D_{\varepsilon
}^{3}(u)} \notag \\
&=&\frac{\delta _{IJ}-\nu _{I}(u)\nu _{J}(u)}{D_{\varepsilon }(u)}\partial
_{J}(u_{K}+f_{K}) \notag\end{aligned}$$
where $\nu _{I}(u)\equiv (u_{I}+F_{I})/D_{\varepsilon }(u)$ and we have used the condition (\[eqn4.11\]). Now for $v\in C_{0}^{2}(\Omega ),$ we compute
$$\begin{aligned}
0&=&\int_{\Omega }(Q_{\varepsilon }u_{\varepsilon }-H_{0})\partial
_{K}v=\int_{\Omega }\partial _{I}\frac{(u_{\varepsilon })_{I}+F_{I}}{%
D_{\varepsilon }(u_{\varepsilon })}\partial _{K}v\text{ (summing over }I%
\text{)} \label{eqn4.14} \\
&=&-\int_{\Omega }\frac{(u_{\varepsilon })_{I}+F_{I}}{D_{\varepsilon
}(u_{\varepsilon })}\partial _{I}\partial _{K}v \notag \\
&=&\int_{\Omega }\partial _{K}\frac{(u_{\varepsilon })_{I}+F_{I}}{%
D_{\varepsilon }(u_{\varepsilon })}\partial _{I}v\text{ \ (}\partial
_{I}\partial _{K}=\partial _{K}\partial _{I}\text{)} \notag \\
&=&\int_{\Omega }\{a_{IJ}(\varepsilon ,x,\nabla u_{\varepsilon })\partial
_{J}[(u_{\varepsilon })_{K}+f_{K}]\}\partial _{I}v\text{ (summing over }I%
\text{ and }J\text{)} \notag\end{aligned}$$
by (\[eqn4.13\]) with $u$ replaced by $u_{\varepsilon }.$ Here $%
a_{IJ}(\varepsilon ,x,\nabla u_{\varepsilon })$ $=$ $[\delta _{IJ}-\nu
_{I}(u_{\varepsilon })\nu _{J}(u_{\varepsilon })]/D_{\varepsilon
}(u_{\varepsilon })$ (cf. (\[eqn4.3\])). It is then easy to see that (\[eqn4.14\]) holds also for $v\in C_{0}^{1}(\Omega )$ (use the regularization $%
v_{h}$ of (7.13) in [@GT83] to approximate $v$). So $(u_{\varepsilon
})_{K}+f_{K}$ is a weak solution to the equation $Lw$ $\equiv $ $\partial
_{I}\{a_{IJ}(\varepsilon ,x,\nabla u_{\varepsilon })\partial _{J}w\}$ $=$ $0$ (cf. (8.2) in [@GT83])$.$ By (\[eqn4.4\]), this is an elliptic equation in divergence form$.$ So by the maximum principle (e.g. Theorem 8.1 in [@GT83] with $b^{i}=$ $c^{i}=$ $d=$ $0$ and $a_{IJ}$ bounded), we have$$\sup_{\Omega }|(u_{\varepsilon })_{K}+f_{K}|\leq \sup_{\partial \Omega
}|(u_{\varepsilon })_{K}+f_{K}|.$$
Then (\[eqn4.12\]) follows.
Q.E.D.
For a general $\vec{F},$ the bound for $\nabla u_{\varepsilon }$ may depend on $\varepsilon $ if we invoke the maximum principle for a more general situation (for instance, Theorem 8.16 in [@GT83]).
To perform the boundary gradient estimate, we need a comparison function to apply the comparison principle. Let $\Omega $ $\subset $ $R^{m}$ be a bounded domain with coordinates denoted by $x_{1},$ $x_{2},$ $%
...,$ $x_{m}.$ We call a coordinate system orthonormal if it is obtained by a translation and a rotation from $x_{1},$ $x_{2},$ $...,$ $x_{m}.$ We define a certain notion of convexity for $\Omega $ as follows.
**Definition 4.1.** We call $\Omega $ $\subset $ $R^{m}$ parabolically convex or p-convex in short if for any $p\in \partial \Omega ,$ there exists an orthonormal coordinate system $(\tilde{x}_{1},$ $\tilde{x}_{2},$ $...,$ $\tilde{x}_{m})$ with the origin at $p$ and $%
\Omega \subset \{a\tilde{x}_{1}^{2}-\tilde{x}_{2}<0\}$ where $a>0$ is independent of $p.$
Note that a $C^{2}$-smooth bounded domain with positively curved (positive principal curvatures) boundary is p-convex.
**Proposition 4.4**. *Let* $\Omega $ $\subset $ $R^{m}$ *be a p-convex bounded domain. Suppose* $u_{\varepsilon }$ $\in $ $C^{2}(\Omega )\cap C^{1}(%
\bar{\Omega})$ *satisfies* $Q_{\varepsilon }u_{\varepsilon }$ $=$ $0$ *in* $%
\Omega $ *and* $u_{\varepsilon }$ $=$ $\sigma \varphi $ $\in $ $C^{2}(\bar{%
\Omega})$ *on* $\partial \Omega $ *with* $\vec{F}$ $\in $ $C^{1}(\bar{\Omega})$ *for* $0$ $\leq $ $\sigma $ $\leq $ $1$. *Then there exists a constant* $C$ $=$ $C(\varepsilon ,\ a,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\partial _{I}\varphi ||_{\infty},$ $||\partial _{I}\partial _{J}\varphi ||_{\infty})$ *(independent of* $\sigma$*)* *such that*
$$\sup_{\partial \Omega }|\nabla u_{\varepsilon }|\leq C. \label{eqn4.15}$$
*Moreover,* *the bounds hold uniformly* *for* $0<\varepsilon \leq {\varepsilon}_{0}$*, a positive constant.*
**Proof**. Given $p$ $\in $ $\partial \Omega ,$ we have an orthonormal coordinate system $(\tilde{x}_{1},$ $\tilde{x}_{2},$ $...,$ $\tilde{x}_{m})$ as in the definition of p-convexity. Consider the comparison function $w$ $=$ $\alpha G$ $+$ $\sigma
\varphi $ where $G$ is the function $\tilde{G}\equiv $ $a\tilde{x}_{1}^{2}-%
\tilde{x}_{2}$ viewed as a function of $(x_{I}),$ $I=1,\ 2,\ ...,\ m,$ for large $\alpha $ to be determined. In view of the invariance of $Q_{\varepsilon }(u)$ under the coordinate changes of translations and rotations, we compute ($\tilde{Q}_{\varepsilon },$ $\tilde{D%
}_{\varepsilon }$ being the corresponding operator, quantity of $%
Q_{\varepsilon },$ $D_{\varepsilon }$ with respect to $(\tilde{x}_{I}),$ respectively)
$$\begin{aligned}
Q_{\varepsilon }(w) &=&\tilde{Q}_{\varepsilon }(\tilde{w})\text{ \ (}\tilde{w%
}\text{ is }w\text{ viewed as a function of }(\tilde{x}_{I}))
\label{eqn4.16} \\
&=&\frac{P(\tilde{G})\alpha ^{3}+A\alpha ^{2}+B\alpha +E}{\tilde{D}%
_{\varepsilon }(\tilde{w})^{3}} \notag\end{aligned}$$
by (\[eqn4.2\]) where $P(\tilde{G})$ is the corresponding quantity of $P(G)$ $\equiv $ $G_{x_{1}}^{2}G_{x_{2}x_{2}}$ $-$ $2G_{x_{1}}G_{x_{2}}G_{x_{1}x_{2}}$ $+$ $%
G_{x_{2}}^{2}G_{x_{1}x_{1}}$ with respect to $(\tilde{x}_{I}),$ and $A$ is a function of $a,\ F_{I},\ \partial _{I}F_{J},\ \partial _{I}\varphi ,\ \partial _{I}\partial _{J}\varphi $ while $B,$ $E$ are functions of $\varepsilon ,\ a,\ F_{I},\ \partial _{I}F_{J},\ \partial _{I}\varphi ,\ \partial _{I}\partial _{J}\varphi .$ Moreover, a direct computation shows that $%
P(\tilde{G})$ $=$ $2a.$ Since $a>0,$ $Q_{\varepsilon }(w)$ $\geq $ ($\leq $, respectively) $0$ $=$ $Q_{\varepsilon }(u_{\varepsilon })$ for positive (negative, respectively) large $\alpha$ $=$ $\alpha (\varepsilon ,\ a,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\partial _{I}\varphi ||_{\infty},$ $||\partial _{I}\partial _{J}\varphi ||_{\infty})$ by (\[eqn4.16\]). Note that $\alpha $ is independent of $%
\sigma $ and independent of $\varepsilon $ for $0<\varepsilon \leq {\varepsilon}_{0}.$ On the other hand, $w$ $=$ $\alpha G$ $+$ $%
\sigma \varphi $ $\leq $ ($\geq $, respectively) $\sigma \varphi $ $=$ $%
u_{\varepsilon }$ on $\partial \Omega $ since $G$ $\leq $ $0$ on $\bar{\Omega%
}$ by the p-convexity. Therefore $w$ $\leq $ ($\geq $, respectively)$%
u_{\varepsilon }$ in $\Omega $ by the comparison principle for second order quasilinear operators (see e.g. Theorem 10.1 in [@GT83]). Noting that $%
G(p)=0 $ and hence $w(p)$ $=$ $\sigma \varphi (p)$ $=$ $u_{\varepsilon }(p)$, we then have
$$\frac{\partial u_{\varepsilon }}{\partial \nu }\leq (\geq ,\text{
respectively})\frac{\partial w}{\partial \nu } \label{eqn4.17}$$
where $\nu $ $=$ $-{\partial}_{{\tilde{x}}_{2}}$ at $p.$ Observe that $%
\frac{\partial w}{\partial \nu }$ (for either positive or negative $\alpha )$ is bounded by a constant depending on $\varepsilon ,\ a,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\partial _{I}\varphi ||_{\infty},$ $||\partial _{I}\partial _{J}\varphi ||_{\infty}$, but independent of $\sigma $, $p$ (moreover, the bounds hold for $0<\varepsilon \leq {\varepsilon}_{0}$, a positive constant), so is $\frac{\partial u_{\varepsilon }}{\partial \nu }$ by (\[eqn4.17\]). Since $u_{\varepsilon }$ $=$ $\sigma \varphi $ on $\partial \Omega $ and $u_{\varepsilon }$ $-$ $\sigma\varphi$ $\in$ $C^{1}(\bar{\Omega})$, we can easily show that the derivatives of $u_{\varepsilon }$ $-$ $\sigma\varphi$ in the $\tilde{x}_{1}$, $\tilde{x}_{3}$, $...,$ $\tilde{x}_{m}$ (except $\tilde{x}_{2}$) directions all vanish at $p$. It follows that in the $\tilde{x}_{j}$ $(j\neq 2)$ direction, the derivative of $u_{\varepsilon }$ is the same as the derivative of $\sigma \varphi $. So of course it is bounded by $||\nabla \varphi ||_{\infty}$ (note that $0$ $\leq $ $\sigma $ $\leq $ $1$)$.$ Altogether we have proved (\[eqn4.15\]).
Q.E.D.
**Proof of Theorem A.**
In order to apply Theorem 11.8 in [@GT83] to solve the Dirichlet problem (\[eqn4.1\]), we consider a family of equations:
$$\begin{aligned}
Q_{\varepsilon ,\sigma }u &\equiv &div\frac{\nabla u+\sigma \vec{F}}{\sqrt{%
\varepsilon ^{2}+|\nabla u+\sigma \vec{F}|^{2}}}=0\text{ in }\Omega
\label{eqn4.18} \\
u &=&\sigma \varphi \text{ \ on }\partial \Omega ,\text{ \ }0\leq \sigma
\leq 1. \notag\end{aligned}$$
Express $Q_{\varepsilon ,\sigma }u$ $=$ $a_{IJ}(\varepsilon ,x,\nabla u;\sigma
)u_{IJ}$ $+$ $b(\varepsilon ,x,\nabla u;\sigma )$ where $a_{IJ}(\varepsilon ,x,\nabla u;\sigma )$ and $%
b(\varepsilon ,x,\nabla u;\sigma )$ are given by (\[eqn4.3\]) with $\vec{F}$ replaced by $\sigma \vec{F}.$ It is then easy to check that the conditions (i), (ii), (iii) on page 287 of [@GT83] are satisfied. To have an a priori Hölder estimate for $\nabla u,$ we invoke Theorem 13.2 in [@GT83]. Comparing (\[eqn4.18\]) with (13.2) in [@GT83] gives
$$\mathbf{A(}x,u,\nabla u)=\frac{\nabla u+\sigma \vec{F}}{\sqrt{\varepsilon
^{2}+|\nabla u+\sigma \vec{F}|^{2}}},B(x,u,\nabla u)=0.$$
Following pages 319-320 of [@GT83], we find $\bar{a}^{IJ}$ $%
\equiv $ $D_{p_{J}}A^{I}$ $=$ $a_{IJ}(\varepsilon ,x,\nabla u;\sigma )$ and $\lambda
(\varepsilon ,x,u,$ $\nabla u)$ $=$ $\varepsilon ^{2}/[\varepsilon ^{2}+|\nabla u+\sigma
\vec{F}|^{2}]^{3/2}$ by (\[eqn4.4\])$.$ Therefore we can take $\lambda
_{K} $ $=$ $\varepsilon ^{2}/[\varepsilon ^{2}+(K+C)^{2}]^{3/2}$ in (13.4) of [@GT83], in which $K$ $\equiv $ $|u|_{1;\Omega }$ (see page 53 in [@GT83] for the notation) and $C$ $\equiv $ $||\vec{F}||_{\infty}$. Similarly we estimate
$$|D_{p_{J}}A^{I}|=|a_{IJ}(\varepsilon ,x,\nabla u;\sigma )|\leq \frac{1}{\sqrt{\varepsilon
^{2}+|\nabla u+\sigma \vec{F}|^{2}}}\leq \frac{1}{\varepsilon }.$$
So we can take $\Lambda _{K}$ $=$ $\varepsilon ^{-1}.$ Since both $%
D_{z}A^{I}$ and $B$ vanish, we compute
$$\begin{aligned}
|\delta _{J}A^{I}|+|B| &=&|D_{x_{J}}A^{I}| \\
&=&\frac{|(\varepsilon ^{2}+|\nabla u+\sigma \vec{F}|^{2})\partial
_{J}(\sigma F_{I})-(u_{I}+\sigma F_{I})(u_{L}+\sigma F_{L})\partial
_{J}(\sigma F_{L})|}{[\varepsilon ^{2}+|\nabla u+\sigma \vec{F}|^{2}]^{3/2}}
\\
&\leq &\frac{(\frac{3}{2}+n)\sup_{K,I}|\partial _{K}F_{I}|}{\sqrt{%
\varepsilon ^{2}+|\nabla u+\sigma \vec{F}|^{2}}}.\end{aligned}$$
Therefore we can take an upper bound $\mu _{K}$ $=$ $\varepsilon
^{-1}(\frac{3}{2}+n)\sup_{J,I}|\partial _{J}F_{I}|.$ Now by Theorem 13.2 in [@GT83], we have an a priori Hölder bound for $\nabla u$ in terms of $%
n,$ $K$ $(\equiv $ $|u|_{1;\Omega }),$ $\Lambda _{K}/\lambda _{K},$ $\mu
_{K}/\lambda _{K},$ size of $\Omega ,$ and $|\varphi |_{2;\Omega }.$ On the other hand, we observe that Lemma 4.1, Propositions 4.2-4.4 still hold for $%
Q_{\varepsilon ,\sigma }$ instead of $Q_{\varepsilon }.$ So we have an a priori $C^{1}$ bound for solutions of (\[eqn4.18\]), independent of $%
\sigma $ and $\varepsilon $ (for $0<\varepsilon \leq {\varepsilon}_{0}$). Altogether we have obtained an a priori $%
C^{1,\beta }(\bar{\Omega})$ ($\beta >0)$ bound for solutions of (\[eqn4.18\]), independent of $\sigma $ (but depending on $\varepsilon $). By Theorem 11.8 in [@GT83], we obtain
**Theorem 4.5**. *Let* $\Omega $ *be a p-convex bounded domain in* $R^{m},m\geq 2,$ *with* $\partial \Omega \in C^{2,\alpha }$* *$(0<\alpha <1)$*.* *Let* $\varphi \in C^{2,\alpha }(\bar{%
\Omega}).$* Suppose* $\vec{F}$* *$\in $* *$%
C^{1,\alpha }(\bar{\Omega})$* satisfies the condition (\[eqn4.11\]) for* $C^{1,\alpha }$*-smooth and bounded* $f_{K}$*’s in* $\Omega .$ *Then there exists a solution* $u_{\varepsilon }$ $\in $ $%
C^{2,\alpha }(\bar{\Omega})$ *of the Dirichlet problem:* $Q_{\varepsilon
}(u)=0 $ *in* $\Omega ,$ $u=\varphi $ *on* $\partial \Omega $ *for given* $%
\varepsilon $ $>$ $0.$
**Proof of Theorem A (continued).**
Propositions 4.2-4.4 tell us that there exists a constant $C$ $=$ $C(\varepsilon ,\ a,\ R,$ $||F_{I}||_{\infty},$ $||\partial _{I}F_{J}||_{\infty},$ $||\varphi ||_{\infty},$ $||\partial _{I}\varphi ||_{\infty},$ $||\partial _{I}\partial _{J}\varphi ||_{\infty},$ $||f_{I}||_{\infty})$ such that
$$\sup_{\Omega }|u_{\varepsilon }|+\sup_{\Omega }|\nabla u_{\varepsilon }|\leq
C. \label{eqn4.19}$$
Moreover, the bounds hold uniformly for $0<\varepsilon \leq {\varepsilon}_{0}$, a positive constant.
In view of (\[eqn4.19\]) we can find a subsequence $u_{\varepsilon _{j}}$ $(0<\varepsilon _{j}\leq \varepsilon _{0},\ \varepsilon _{j}\rightarrow 0)$ converging to $u_{0}$ in $C^{0}$ by the Arzelà-Ascoli theorem. Then the Lipschitz continuity of $u_{0}$ follows by taking the limit of the difference quotients ($x\neq y$)
$$\begin{aligned}
\frac{|u_{\varepsilon _{j}}(x)-u_{\varepsilon _{j}}(y)|}{|x-y|}
\ (\leq C). \end{aligned}$$
Next we claim that $u_{0}$ is a minimizer for $\mathcal{F(\cdot )}$ (see (\[eqn1.3\])) such that $u_{0}$ $=$ $\varphi $ on $\partial \Omega .$ Observe that $W^{1,q}(\Omega )$ is compactly imbedded in $L^{1}(\Omega )$ (e.g., Theorem 7.26 in [@GT83]). So we may as well assume that $%
u_{\varepsilon _{j}}$ converges to $u_{0}$ in $L^{1}(\Omega ).$ Also note that $|\vec{p}+\vec{F}|$ is convex in $\vec{p}$ since $|\lambda \vec{p}_{1}$ $+$ $(1-\lambda )\vec{p}_{2}$ $+$ $\vec{F}|$ $=$ $|\lambda (\vec{p}_{1}+\vec{%
F})$ $+$ $(1-\lambda )(\vec{p}_{2}+\vec{F})|$ $\leq $ $\lambda |\vec{p}_{1}+%
\vec{F}|$ $+$ $(1-\lambda )|\vec{p}_{2}+\vec{F}|$ for $0$ $\leq $ $\lambda $ $\leq $ $1.$ We can therefore apply Theorem 4.1.2 in [@Morrey66] to conclude the lower semicontinuity of $\mathcal{F(\cdot )}$ (see (\[eqn1.3\]))$:$
$$\mathcal{F(}u_{0}\mathcal{)\leq }\lim \inf_{j\rightarrow \infty }\mathcal{F(}%
u_{\varepsilon _{j}}). \label{eqn4.20}$$
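The convexity of $\vec{p}\mapsto |\vec{p}+\vec{F}|$ invoked above is just the triangle inequality applied to $\lambda (\vec{p}_{1}+\vec{F})+(1-\lambda )(\vec{p}_{2}+\vec{F})$. A quick numerical sanity check of this inequality (ours, not part of the proof) in Python:

```python
import random

def norm(v):
    return sum(x * x for x in v) ** 0.5

def convexity_gap(p1, p2, F, lam):
    """RHS minus LHS of |lam*p1 + (1-lam)*p2 + F|
    <= lam*|p1+F| + (1-lam)*|p2+F|; should be >= 0."""
    mix = [lam * a + (1 - lam) * b + f for a, b, f in zip(p1, p2, F)]
    lhs = norm(mix)
    rhs = lam * norm([a + f for a, f in zip(p1, F)]) \
        + (1 - lam) * norm([b + f for b, f in zip(p2, F)])
    return rhs - lhs

random.seed(0)
for _ in range(1000):
    p1 = [random.uniform(-5, 5) for _ in range(3)]
    p2 = [random.uniform(-5, 5) for _ in range(3)]
    F = [random.uniform(-5, 5) for _ in range(3)]
    lam = random.random()
    assert convexity_gap(p1, p2, F, lam) >= -1e-12
```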
Now for $v$ $\in $ $W^{1,1}$ with $v$ $-$ $\varphi $ $\in $ $%
W_{0}^{1,1},$ we estimate
$$\begin{aligned}
\mathcal{F(}u_{\varepsilon _{j}}) &\equiv &\int_{\Omega }\mid \nabla
u_{\varepsilon _{j}}+\vec{F}\mid \text{ (omitting volume element)}
\label{eqn4.21} \\
&\leq &\int_{\Omega }\sqrt{\varepsilon _{j}^{2}+|\nabla u_{\varepsilon _{j}}+%
\vec{F}|^{2}} \notag \\
&\leq &\int_{\Omega }\sqrt{\varepsilon _{j}^{2}+|\nabla v+\vec{F}|^{2}}
\notag \\
&\leq &\varepsilon _{j}\text{ }vol(\Omega )+\int_{\Omega }|\nabla v+\vec{F}|
\notag\end{aligned}$$
where we have used the fact that the Dirichlet solution $%
u_{\varepsilon _{j}}$ $\in$ $C^{2}(\bar{\Omega})$ is also a minimizer for $\mathcal{F}_{\varepsilon
_{j}}(u)$ $\equiv $ $\int_{\Omega }\sqrt{\varepsilon _{j}^{2}+|\nabla u+\vec{%
F}|^{2}}$. Taking the limit infimum of (\[eqn4.21\]) and making use of (\[eqn4.20\]), we finally obtain that $\mathcal{F(}u_{0}\mathcal{)}$ $%
\mathcal{\leq }$ $\int_{\Omega }|\nabla v+\vec{F}|$ $\equiv $ $\mathcal{F(}v%
\mathcal{)}$. That is to say, $u_{0}$ is a minimizer for $\mathcal{F(}\cdot
\mathcal{)}$.
Q.E.D.
Uniqueness of minimizers: proof of Theorems B and C
===================================================
Recall (see Section 3) that $\Omega \subset R^{m}$ denotes a bounded domain and $\mathcal{F}(u)$ $\equiv $ $\int_{\Omega }\{|\nabla u+\vec{F}|$ $+$ $Hu\}$ for $u\in W^{1,1}(\Omega )$, $\vec{F}$ $\in $ $L^{1}(\Omega )$, and $H$ $\in$ $L^{\infty}(\Omega )$. We will prove that two ($W^{1,1}$) minimizers for $\mathcal{F}(u)$ with the same "boundary value" have the same normal vector almost everywhere.
**Theorem 5.1.** *Let* $u,v$ $\in $ $W^{1,1}(\Omega )$ *be two minimizers for* $\mathcal{F}(u)$ *such that* $u-v$ $\in $ $W_{0}^{1,1}(\Omega ).$ *Let* $u_{\varepsilon }$ $\equiv $ $u+\varepsilon (v-u).$ *Then for any pair of regular* $\varepsilon _{1},$ $\varepsilon _{2}$ $\in $ $[0,1],$ *there holds* $N(u_{\varepsilon _{1}})=N(u_{\varepsilon _{2}})$ *in* $\Omega \backslash \lbrack S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon _{2}})]$ *(a.e.).*
**Proof.** By (\[eqn3.13\]) with $\varphi =v-u,$ we have
$$0=\mathcal{F}(v)-\mathcal{F}(u)=\int_{0}^{1}\frac{d\mathcal{F}%
(u_{\varepsilon })}{d\varepsilon }d\varepsilon . \label{eqn5.1}$$
As in the proof of Theorem 3.3, the same argument shows that $\frac{d%
\mathcal{F}(u_{\varepsilon })}{d\varepsilon }\geq 0$ for any regular $%
\varepsilon $ $\in $ $[0,1].$ In view of (\[eqn5.1\]) and Lemma 3.2(1), $\frac{d\mathcal{F}%
(u_{\varepsilon })}{d\varepsilon }$ $=$ $0$ for any regular $\varepsilon $ $%
\in $ $[0,1].$ It follows from (\[eqn3.4\]) that $\int_{\Omega \backslash
S(u_{\varepsilon })}N(u_{\varepsilon })\cdot \nabla (v-u)$ $=$ $0$. Therefore for any pair of regular $\varepsilon _{1},$ $\varepsilon _{2}$ $%
\in $ $[0,1],$ there holds
$$\int_{\Omega \backslash \lbrack S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon
_{2}})]}[N(u_{\varepsilon _{2}})-N(u_{\varepsilon _{1}})]\cdot \nabla
(v-u)=0. \label{eqn5.2}$$
Here we have used $\int_{S(u_{\varepsilon _{1}})\backslash
S(u_{\varepsilon _{2}})}N(u_{\varepsilon _{2}})\cdot \nabla (v-u)$ $=$ $0$ and $\int_{S(u_{\varepsilon _{2}})\backslash S(u_{\varepsilon
_{1}})}N(u_{\varepsilon _{1}})\cdot \nabla (v-u)$ $=$ $0$ by observing that for $j=1,2,$ $|N(u_{\varepsilon _{j}})\cdot \nabla (v-u)|$ $\leq $ $|\nabla
(v-u)|$ and $\int_{S(u_{\varepsilon _{j}})}|\nabla (v-u)|=0$ from the definition of $\varepsilon _{j}$ being regular. Write $v-u$ $=$ $%
(u_{\varepsilon _{2}}-u_{\varepsilon _{1}})/(\varepsilon _{2}-\varepsilon
_{1})$ for $\varepsilon _{2}\neq \varepsilon _{1}.$ By Lemma 5.1’ in [@CHMY04], the integrand in (\[eqn5.2\]) is
$$\frac{|\nabla u_{\varepsilon _{2}}+\vec{F}|+|\nabla u_{\varepsilon _{1}}+%
\vec{F}|}{2(\varepsilon _{2}-\varepsilon _{1})}|N(u_{\varepsilon
_{2}})-N(u_{\varepsilon _{1}})|^{2}.$$
It then follows that $N(u_{\varepsilon _{1}})=N(u_{\varepsilon _{2}})$ in $%
\Omega \backslash \lbrack S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon
_{2}})].$
Q.E.D.
For a vector field $\vec{G}$ $=$ $(g_{1},g_{2},...,g_{2n})$ on $%
\Omega \subset R^{2n},$ we recall that $\vec{G}^{\ast }$ $\equiv $ $(g_{2},$ $%
-g_{1},$ $g_{4},$ $-g_{3},$ $...,$ $g_{2n},$ $-g_{2n-1}).$
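The defining property $\vec{G}^{\ast }\cdot \vec{G}=0$, used repeatedly below, follows from pairwise cancellation in each coordinate block $(g_{2I-1},g_{2I})$. A minimal numerical illustration (ours, not from the paper):

```python
def star(G):
    """The * operation on a vector in R^{2n}:
    (g1, g2, ..., g_{2n}) -> (g2, -g1, g4, -g3, ..., g_{2n}, -g_{2n-1})."""
    assert len(G) % 2 == 0
    out = []
    for i in range(0, len(G), 2):
        out.extend([G[i + 1], -G[i]])
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

G = [1.0, 2.0, -3.0, 0.5, 4.0, -1.0]     # a sample vector in R^6 (n = 3)
assert dot(star(G), G) == 0.0            # G* . G = 0, pointwise
assert star(star(G)) == [-g for g in G]  # (G*)* = -G
```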
**Lemma 5.2.** *Let* $u,v$ $\in $ $W^{1,1}(\Omega )$ *where the domain* $\Omega $ *is contained in* $R^{2n}.$ *Let* $u_{\varepsilon }$ $\equiv $ $u+\varepsilon (v-u).$ *Suppose* $N(u_{\varepsilon _{1}})=N(u_{\varepsilon _{2}})$ *in* $\Omega \backslash \lbrack S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon _{2}})]$ *for a pair* $\varepsilon _{1},$ $\varepsilon _{2}$ *such that* $\varepsilon _{1}$ $\neq $ $\varepsilon _{2}$*. Then for* $j=1,2,$ *there holds*
$$(\nabla u_{\varepsilon _{j}}+\vec{F})^{\ast }\cdot (\nabla v-\nabla u)=0%
\text{ in }\Omega \text{ (a.e.)}\mathit{.} \label{eqn5.3}$$
**Proof.** We will prove (\[eqn5.3\]) only for $j=1$ (a similar argument works for $j=2$). For $p$ $\in $ $S(u_{\varepsilon _{1}}),$ $%
\nabla u_{\varepsilon _{1}}+\vec{F}=0.$ So (\[eqn5.3\]) holds obviously. For $p$ $\in $ $S(u_{\varepsilon _{2}}),$ (\[eqn5.3\]) also holds by observing that $\nabla v-\nabla u$ $=$ $[(\nabla u_{\varepsilon _{1}}+\vec{F}
$ $)-$ $(\nabla u_{\varepsilon _{2}}+\vec{F})]/(\varepsilon _{1}-\varepsilon
_{2})$ $=$ $(\nabla u_{\varepsilon _{1}}+\vec{F}$ $)/(\varepsilon
_{1}-\varepsilon _{2})$ and $\vec{G}^{\ast }\cdot \vec{G}$ $=$ $0.$ For the remaining case: $p$ $\in $ $\Omega \backslash \lbrack S(u_{\varepsilon
_{1}})\cup S(u_{\varepsilon _{2}})],$ we observe that for $j=1,2,$
$$N(u_{\varepsilon _{j}})^{\ast }\cdot \nabla u_{\varepsilon _{j}}=\frac{\vec{F%
}^{\ast }\cdot \nabla u_{\varepsilon _{j}}}{|\nabla u_{\varepsilon _{j}}+%
\vec{F}|}=\vec{F}^{\ast }\cdot N(u_{\varepsilon _{j}}). \label{eqn5.4}$$
Here we have used the property $\vec{G}^{\ast }\cdot \vec{G}$ $=$ $0$ twice. Since $N(u_{\varepsilon _{1}})=N(u_{\varepsilon _{2}})$ in $\Omega
\backslash \lbrack S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon _{2}})]$ by assumption (hence $N(u_{\varepsilon _{1}})^{\ast }=N(u_{\varepsilon
_{2}})^{\ast }$ also), we take the difference of (\[eqn5.4\]) for $j=1$ and $j=2$ to obtain
$$N(u_{\varepsilon _{1}})^{\ast }\cdot (\nabla u_{\varepsilon _{2}}-\nabla
u_{\varepsilon _{1}})=0. \label{eqn5.5}$$
Formula (\[eqn5.3\]) for $j=1$ on $\Omega \backslash \lbrack
S(u_{\varepsilon _{1}})\cup S(u_{\varepsilon _{2}})]$ then follows from (\[eqn5.5\]) by noting that $v-u$ $=$ $(u_{\varepsilon _{2}}-u_{\varepsilon
_{1}})/(\varepsilon _{2}-\varepsilon _{1}).$
Q.E.D.
We will use the following general criterion to prove the uniqueness of minimizers and, later, a comparison principle for weak sub- and supersolutions.
**Theorem 5.3.** *Let* $\Omega $ *be a bounded domain in* $R^{2n}.$ *Let* $w\in W_{0}^{1,p}(\Omega )$, $\sigma \in W^{1,q}(\Omega )$, *where* $1\leq p<\infty$, $q=\frac{p}{p-1}$ ($q=\infty$ for $p=1$). *Let* $\vec{F}$ *be a vector field in* $W^{1,1}(\Omega )\cap L^{q}(\Omega )$ *satisfying* $div\vec{F}^{\ast }$ $>$ $0$ *(a.e.) or* $div\vec{F}^{\ast }$ $<$ $0$ *(a.e.). Suppose* $(\nabla \sigma +\vec{F})^{\ast }\cdot \nabla w$ $=$ $0$ *in* $\Omega $ *(a.e.). Then* $w$ $\equiv $ $0$ *in* $\Omega $ *(a.e.).*
**Proof.** Take $\omega _{j}$ $\in $ $C_{0}^{\infty }(\Omega )$ with $\omega _{j}$ $\rightarrow $ $w$ in $W^{1,p}$ and $\vec{F}_{\bar{k}}\in C^{\infty }(\Omega )$ with $\vec{F}_{\bar{k}}$ $\rightarrow$ $\vec{F}$ in $W^{1,1}\cap L^{q}$. Suppose $\omega _{j}$ does not vanish identically. Then there exists a decreasing sequence of positive numbers $a_{i}$ converging to $0$ such that $\Omega _{j,i}$ $\equiv $ $\{|\omega _{j}|$ $>a_{i}\}$ $\subset \subset $ $\Omega $ is not empty for large $i$ and $\partial \Omega _{j,i}$ is $C^{\infty }$-smooth (by Sard’s theorem; note that $|\omega _{j}|$ is $C^{\infty }$-smooth where $\omega _{j}$ $\neq $ $0$). Also we take $v_{k}$ $\in $ $C^{\infty }(\Omega )$ with $v_{k}$ $\rightarrow $ $\sigma $ in $W^{1,q}.$ Consider
$$I_{j,i,k,\bar{k}}\equiv \int_{\partial \Omega _{j,i}}|\omega _{j}|\text{ }(\nabla
v_{k}+\vec{F}_{\bar{k}})^{\ast }\cdot \nu \label{eqn5.6}$$
where $\nu $ denotes the outward unit normal to $\partial \Omega _{j,i}.$ We first compute
$$\begin{aligned}
\int_{\partial \Omega _{j,i}}|\omega _{j}|\text{ }(\nabla v_{k}+\vec{F}_{\bar{k}}%
)^{\ast }\cdot \nu &=&a_{i}\int_{\partial \Omega _{j,i}}(\nabla v_{k}+\vec{F}_{\bar{k}}%
)^{\ast }\cdot \nu \label{eqn5.7} \\
&=&a_{i}\int_{\Omega _{j,i}}div[(\nabla v_{k})^{\ast }+\vec{F}^{\ast }_{\bar{k}}]
\notag \\
&=&a_{i}\int_{\Omega _{j,i}}div\vec{F}^{\ast }_{\bar{k}} \notag\end{aligned}$$
Here we have used Green’s theorem for the second equality and $div(\nabla
v_{k})^{\ast }=0$ for the third equality in (\[eqn5.7\]). It follows from (\[eqn5.7\]) that
$$\lim_{i\rightarrow \infty }I_{j,i,k,{\bar{k}}}=0. \label{eqn5.8}$$
On the other hand, a similar reasoning gives
$$\begin{aligned}
I_{j,i,k,{\bar{k}}} &=&\int_{\Omega _{j,i}}\nabla |\omega _{j}|\cdot (\nabla v_{k}+%
\vec{F}_{\bar{k}})^{\ast }+|\omega _{j}|\text{ }div[(\nabla v_{k})^{\ast }+\vec{F}%
^{\ast }_{\bar{k}}] \label{eqn5.9} \\
&=&\int_{\Omega _{j,i}}\nabla |\omega _{j}|\cdot (\nabla v_{k}+\vec{F}_{\bar{k}}%
)^{\ast }+|\omega _{j}|\text{ }div\vec{F}^{\ast }_{\bar{k}}. \notag\end{aligned}$$
Observe that $\cup _{i}\Omega _{j,i}$ $=$ $\{|\omega _{j}|$ $>$ $0\}$ $=$ $%
\Omega \backslash \{\omega _{j}$ $=0\},$ $(\Omega \backslash \{\omega _{j}$ $%
=0\})\backslash \Omega _{j,i}$ $=$ $\cup _{l=i}^{\infty }(\Omega _{j,l+1}$ $\backslash \Omega _{j,l})$, $\vec{F}_{\bar{k}}$ $\in$ $W^{1,1}(\Omega )$, and hence
$$\begin{aligned}
&&(\int_{\Omega _{j,i}}-\int_{\Omega \backslash \{\omega _{j}=0\}})\{\nabla
|\omega _{j}|\cdot (\nabla v_{k}+\vec{F}_{\bar{k}})^{\ast }+|\omega _{j}|\text{ }div%
\vec{F}^{\ast }_{\bar{k}}\} \label{eqn5.10} \\
&=&-\Sigma _{l=i}^{\infty }\int_{\Omega _{j,l+1}\backslash \Omega
_{j,l}}\{\nabla |\omega _{j}|\cdot (\nabla v_{k}+\vec{F}_{\bar{k}})^{\ast }+|\omega
_{j}|\text{ }div\vec{F}^{\ast }_{\bar{k}}\} \notag \\
&=&-\Sigma _{l=i}^{\infty }\text{ }(I_{j,l+1,k,{\bar{k}}}-I_{j,l,k,{\bar{k}}})=I_{j,i,k,{\bar{k}}} \notag\end{aligned}$$
by (\[eqn5.8\]). It follows from (\[eqn5.9\]), (\[eqn5.10\]) that
$$\begin{aligned}
0 &=&\int_{\Omega \backslash \{\omega _{j}=0\}}\nabla |\omega _{j}|\cdot
(\nabla v_{k}+\vec{F}_{\bar{k}})^{\ast }+|\omega _{j}|\text{ }div\vec{F}^{\ast }_{\bar{k}}
\notag \\
&=&\int_{\Omega }\nabla |\omega _{j}|\cdot (\nabla v_{k}+\vec{F}_{\bar{k}})^{\ast
}+|\omega _{j}|\text{ }div\vec{F}^{\ast }_{\bar{k}}. \notag\end{aligned}$$
Here we have used $\nabla |\omega _{j}|=0$ if $\omega _{j}=0$ (p.152 in [@GT83]). Letting ${\bar{k}}$ $\rightarrow $ $\infty $ in the above formula gives
$$\begin{aligned}
0 &=&\int_{\Omega }\nabla |\omega _{j}|\cdot
(\nabla v_{k}+\vec{F})^{\ast }+|\omega _{j}|\text{ }div\vec{F}^{\ast }.
\label{eqn5.11} \end{aligned}$$
Letting $k$ $\rightarrow $ $\infty $ in the first term of (\[eqn5.11\]), we then estimate by using the assumption $\nabla
w\cdot (\nabla \sigma +\vec{F})^{\ast }$ $=$ $0$
$$\begin{aligned}
&&\int_{\Omega }\nabla |\omega _{j}|\cdot (\nabla \sigma +\vec{F})^{\ast }
\label{eqn5.12} \\
&=&\int_{\{\omega _{j}>0\}}(\nabla \omega _{j}-\nabla w)\cdot (\nabla \sigma
+\vec{F})^{\ast }-\int_{\{\omega _{j}<0\}}(\nabla \omega _{j}-\nabla w)\cdot
(\nabla \sigma +\vec{F})^{\ast } \notag \\
&\longrightarrow &0\text{ \ \ as \ }j\rightarrow \infty . \notag\end{aligned}$$
Here we have used $\omega _{j}$ $\rightarrow w$ in $W^{1,p}$ and $%
(\nabla \sigma +\vec{F})^{\ast }$ $\in L^{q}(\Omega )$ by assumption. For the second term of (\[eqn5.11\]), we have $$\lim_{j\rightarrow \infty }\int_{\Omega }|\omega _{j}|\text{ }div\vec{F}%
^{\ast }=\int_{\Omega }|w|\text{ }div\vec{F}^{\ast }>0\text{ or }<0
\label{eqn5.13}$$
if $w\neq 0$ (noting that $div\vec{F}^{\ast }>0$ or $<0$ by assumption)$.$ By (\[eqn5.11\]), (\[eqn5.12\]), and (\[eqn5.13\]), we reach a contradiction. Therefore $w\equiv 0$ in $\Omega $ (a.e.).
Q.E.D.
**Remark.** If $\vec{F}$ does not satisfy the condition in Theorem 5.3, then the theorem may not hold as shown by the following examples. Let $%
\Omega $ $=$ $(0,\pi )$ $\times $ $(0,\pi )$ $\subset $ $R^{2}.$ Let $w$ $=$ $\sin x\sin y$ $\in $ $W_{0}^{1,2}.$ Then $\nabla w$ $=$ $(\cos x\sin y,$ $%
\sin x\cos y).$ Take $\sigma $ $=$ $0$ and $\vec{F}$ $=$ $(\cos x\sin y,$ $%
\sin x\cos y).$ It is easy to see that $\vec{F}^{\ast }$ $=$ $(\sin x\cos y,$ $-\cos x\sin y),$ $div\vec{F}^{\ast }$ $=$ $0,$ and $(\nabla \sigma +\vec{F}%
)^{\ast }\cdot \nabla w$* *$=$* *$\vec{F}^{\ast }\cdot
\nabla w$* *$=$ $0.$ With the same $\sigma $ $(=0)$ and $w$ as above, we can also take $\vec{F}$ $=$ $\sin x$ $(\cos x\sin y,$ $\sin x\cos
y).$ Then still $(\nabla \sigma +\vec{F})^{\ast }\cdot \nabla w$* *$%
= $* *$\vec{F}^{\ast }\cdot \nabla w$* *$=$ $0$ while $div%
\vec{F}^{\ast }$ $=$ $\cos x$ $\sin x$ $\cos y$ has no definite sign in $%
\Omega .$
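The explicit computations in this remark can be checked directly. The following numerical sketch (ours, not from the paper) verifies, on $\Omega =(0,\pi )\times (0,\pi )$ with $w=\sin x\sin y$ and $\sigma =0$, that $\vec{F}^{\ast }\cdot \nabla w=0$ in both examples, that $div\vec{F}^{\ast }=0$ in the first, and that $div\vec{F}^{\ast }=\cos x\sin x\cos y$ in the second:

```python
import math
import random

def grad_w(x, y):
    # gradient of w = sin(x)sin(y)
    return (math.cos(x) * math.sin(y), math.sin(x) * math.cos(y))

def star(v):
    return (v[1], -v[0])  # (g1, g2) -> (g2, -g1) in R^2

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def div_numeric(F, x, y, h=1e-6):
    # central-difference divergence of a planar vector field
    return ((F(x + h, y)[0] - F(x - h, y)[0])
            + (F(x, y + h)[1] - F(x, y - h)[1])) / (2 * h)

F1 = lambda x, y: grad_w(x, y)                                  # first example
F2 = lambda x, y: tuple(math.sin(x) * c for c in grad_w(x, y))  # second example
F1s = lambda x, y: star(F1(x, y))
F2s = lambda x, y: star(F2(x, y))

random.seed(1)
for _ in range(200):
    x = random.uniform(0.1, math.pi - 0.1)
    y = random.uniform(0.1, math.pi - 0.1)
    assert abs(dot(star(F1(x, y)), grad_w(x, y))) < 1e-12   # F* . grad w = 0
    assert abs(dot(star(F2(x, y)), grad_w(x, y))) < 1e-12
    assert abs(div_numeric(F1s, x, y)) < 1e-6               # div F1* = 0
    # div F2* = cos(x) sin(x) cos(y): changes sign across x = pi/2
    assert abs(div_numeric(F2s, x, y)
               - math.cos(x) * math.sin(x) * math.cos(y)) < 1e-6
```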
**Proof of Theorem B.**
The proof follows from Theorem 5.1, Lemma 5.2, and Theorem 5.3 with $p=q=2$, $%
\sigma $ $=$ $u_{\varepsilon _{1}}$, and $w$ $=$ $v-u$.
Q.E.D.
Next we want to prove a comparison principle for weak sub- and supersolutions (a comparison principle for $C^{2}$-smooth functions has been studied in [@CHMY04]; see Theorem C and Theorem C’ there). First we need to define the relevant differential inequalities in a weak sense. Let $\Omega \subset R^{m}$ denote a bounded domain. Recall that $N(u)\equiv $ $\frac{\nabla u+\vec{F}}{|\nabla u+\vec{F}|}$ is defined on $\Omega \backslash S(u)$ (where $\vec{F}$, say, is an $L^{1}_{loc}$ vector field in $\Omega $).
**Definition 5.1.** Let $H$ $\in$ $L^{1}_{loc}(\Omega )$. We say $u\in W^{1,1}(\Omega )$ satisfies $%
divN(u)\geq H$ ($\leq H,$ respectively) in the weak sense in $\Omega $ if and only if for any $\varphi \in C_{0}^{\infty}(\Omega )$ and $\varphi \geq 0,$ there holds
$$\begin{aligned}
-\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi +\int_{\Omega }H\varphi &\leq &0 \label{eqn5.14} \\
(\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi +\int_{\Omega }H\varphi &\geq &0,\text{ respectively).}
\label{eqn5.15}\end{aligned}$$
Recall that we defined the weak solution to $divN(u)$ $=$ $H$ in Section 3 (see (\[eqn3.12\])). The following result justifies the above definitions.
**Proposition 5.4.** *Let* $H$ $\in$ $L^{\infty}(\Omega ).$ *Then* $u\in W^{1,1}(\Omega )$ *satisfies* $divN(u)$ $\geq $ $H$ *and* $divN(u)$ $\leq $ $H$ *in the weak sense if and only if* $u\in W^{1,1}(\Omega )$ *is a weak solution to the equation* $divN(u)$ $=$ $H.$
**Proof.** Since $C_{0}^{\infty}(\Omega )$ is dense in $W_{0}^{1,1}(\Omega )$, (\[eqn5.14\]) and (\[eqn5.15\]) hold for every $\varphi \in C_{0}^{\infty}(\Omega )$ if and only if they hold for every $\varphi \in W_{0}^{1,1}(\Omega )$. Write $\varphi =\varphi ^{+}-\varphi ^{-}$ for $\varphi \in
W_{0}^{1,1}(\Omega )$ where $\varphi ^{+}$ $\equiv $ $\max \{\varphi ,0\}$ and $\varphi ^{-}$ $\equiv $ $\max \{-\varphi ,0\}.$ Express
$$\begin{aligned}
&&\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi +\int_{\Omega }H\varphi \label{eqn5.16} \\
&=&\{\int_{S(u)}|\nabla \varphi ^{+}|+\int_{\Omega \backslash S(u)}N(u)\cdot
\nabla \varphi ^{+}+\int_{\Omega }H\varphi ^{+}\} \notag \\
&&-\{-\int_{S(u)}|\nabla \varphi ^{-}|+\int_{\Omega \backslash
S(u)}N(u)\cdot \nabla \varphi ^{-}+\int_{\Omega }H\varphi ^{-}\}. \notag\end{aligned}$$
Note that $\varphi ^{+}\geq 0$ and $\varphi ^{-}\geq 0.$ Now suppose $u$ satisfies $divN(u)$ $\leq $ $H$ and $divN(u)$ $\geq $ $H$ in the weak sense. Then the right-hand side of (\[eqn5.16\]) is nonnegative by our definitions. So the left-hand side of (\[eqn5.16\]) is nonnegative, i.e., (\[eqn3.12\]) holds. Conversely, suppose $u$ is a weak solution to $divN(u)$ $=$ $H.$ That is to say, the left-hand side of (\[eqn5.16\]) is nonnegative (note that $\varphi $ is not restricted to be nonnegative here). By taking $\varphi \geq 0$, i.e., $\varphi ^{-}=0$ ($\varphi \leq 0$, i.e., $\varphi ^{+}=0,$ respectively$)$ in (\[eqn5.16\]), we obtain (\[eqn5.15\]) ((\[eqn5.14\]), respectively).
Q.E.D.
**Definition 5.2.** $u,v\in W^{1,1}(\Omega )$ satisfy $divN(u)\geq
divN(v)$ in $\Omega $ in the weak sense if and only if for any $\varphi \in
W_{0}^{1,1}(\Omega )$ and $\varphi \geq 0,$ there holds
$$-\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi \leq +\int_{S(v)}|\nabla \varphi |+\int_{\Omega \backslash
S(v)}N(v)\cdot \nabla \varphi . \label{eqn5.17}$$
**Definition 5.3.** $u,v\in W^{1,1}(\Omega )$ satisfy $u\leq v$ on $%
\partial \Omega $ if and only if $(u-v)^{+}$ $\equiv $ $\max (u-v,0)$ $\in
W_{0}^{1,1}(\Omega ).$
**Theorem 5.5.** *Suppose* $u,v\in W^{1,1}(\Omega )$ *satisfy the following conditions:*
$$\begin{aligned}
divN(u) &\geq &divN(v)\text{ in }\Omega \text{ (in the weak sense);} \\
u &\leq &v\text{ on }\partial \Omega .\end{aligned}$$
*Then* $N(u)=N(v)$* on* $\{u>v\}\backslash \lbrack S(u)\cup
S(v)].$
**Proof.** Let $\varphi =(u-v)^{+}.$ The condition $u\leq v$ on $%
\partial \Omega $ implies that $\varphi $ $\in $ $W_{0}^{1,1}(\Omega ).$ Let $v_{\varepsilon }\equiv v+\varepsilon \varphi .$ From Lemma 3.2 (1), $\frac{d%
\mathcal{F}(v_{\varepsilon })}{d\varepsilon }$ is increasing in regular $%
\varepsilon .$ It follows that $\frac{d\mathcal{F}(v_{0+})}{d\varepsilon }%
\leq \frac{d\mathcal{F}(v_{1-})}{d\varepsilon }$ by Lemma 3.2 (2). In view of the formula (\[eqn3.3\]), we have
$$+\int_{S(v)}|\nabla \varphi |+\int_{\Omega \backslash S(v)}N(v)\cdot \nabla
\varphi \leq -\int_{S(v_{1})}|\nabla \varphi |+\int_{\Omega \backslash
S(v_{1})}N(v_{1})\cdot \nabla \varphi . \label{eqn5.18}$$
Observe that $v_{1}=u$ on $\{u>v\}$ and $\varphi =0$ on $\{u\leq v\}.$ So the right-hand side of (\[eqn5.18\]) equals the left-hand side of (\[eqn5.17\]). It follows that
$$-\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi =+\int_{S(v)}|\nabla \varphi |+\int_{\Omega \backslash
S(v)}N(v)\cdot \nabla \varphi . \label{eqn5.19}$$
Write$$\begin{aligned}
&&\int_{\Omega \backslash S(u)}N(u)\cdot \nabla \varphi -\int_{\Omega
\backslash S(v)}N(v)\cdot \nabla \varphi \label{eqn5.20} \\
&=&\int_{\Omega \backslash \lbrack S(u)\cup S(v)]}(N(u)-N(v))\cdot \nabla
\varphi \notag \\
&&+\int_{S(v)\backslash S(u)}N(u)\cdot \nabla \varphi -\int_{S(u)\backslash
S(v)}N(v)\cdot \nabla \varphi . \notag\end{aligned}$$
We claim $$-\int_{S(u)}|\nabla \varphi |-\int_{S(u)\backslash S(v)}N(v)\cdot \nabla
\varphi =0. \label{eqn5.21}$$
Since $\varphi =0$ on $\{u\leq v\},$ we only have to discuss the case that $%
u>v.$ In this case, $\varphi =u-v$ and hence $\nabla \varphi $ $=$ ($\nabla
u $ $+$ $\vec{F})$ $-$ $(\nabla v$ $+$ $\vec{F})$ $=$ $-(\nabla v$ $+$ $\vec{%
F})$ in $S(u)$ (and $=0$ in $S(u)\cap S(v))$. So $N(v)\cdot \nabla \varphi $ $= $ $\frac{\nabla v+\vec{F}}{|\nabla v+\vec{F}|}\cdot \lbrack -(\nabla v$ $%
+ $ $\vec{F})]$ $=$ $-|\nabla v$ $+$ $\vec{F}|$ in $S(u)\backslash S(v).$ It is now clear that (\[eqn5.21\]) holds. Similarly there also holds
$$-\int_{S(v)}|\nabla \varphi |+\int_{S(v)\backslash S(u)}N(u)\cdot \nabla
\varphi =0. \label{eqn5.22}$$
Combining (\[eqn5.19\]), (\[eqn5.20\]), (\[eqn5.21\]), (\[eqn5.22\]) gives
$$\int_{\Omega \backslash \lbrack S(u)\cup S(v)]}(N(u)-N(v))\cdot \nabla
\varphi =0. \label{eqn5.23}$$
By Lemma 5.1’ in [@CHMY04] (which works also for $u,v$ $\in $ $W^{1,1}(\Omega
) $), we have
$$(N(u)-N(v))\cdot \nabla \varphi =\frac{|\nabla u+\vec{F}|+|\nabla v+\vec{F}|%
}{2}|N(u)-N(v)|^{2} \label{eqn5.24}$$
on $\{u>v\}\backslash \lbrack S(u)\cup S(v)]$ (where $\varphi =u-v$)$.$ Noting that $\varphi =0$ on $\{u\leq v\}$ and substituting (\[eqn5.24\]) into (\[eqn5.23\]), we finally obtain $N(u)$ $=$ $N(v)$ on $%
\{u>v\}\backslash \lbrack S(u)\cup S(v)].$
Q.E.D.
We can now prove the comparison principle for weak sub- and supersolutions.
**Proof of Theorem C.**
By Theorem 5.5 and Lemma 5.2 (switching the roles of $u$ and $v$ and taking $\Omega $ $=$ $\{u>v\}$, $\varepsilon _{1}=0,$ $\varepsilon _{2}=1$), we obtain ($\nabla v+\vec{F})^{\ast }$ $\cdot $ $\nabla (u-v)^{+}$ $=$ $0$. Then we apply Theorem 5.3 (with $p=q=2$, $\sigma $ $=$ $v$, and $w$ $=$ $(u-v)^{+}$) to conclude that $(u-v)^{+}$ $=$ $0$ in $\Omega .$ That is to say, $u\leq v$* *in* *$\Omega .$
Q.E.D.
When a smooth solution is a minimizer
=====================================
In this section we determine when a smooth solution is a minimizer. We will prove Theorem D, Theorem E, and Corollary F. We first prove a result for the case $H_{m-1}(S(u))$ $=$ $0$, in which a $C^{2}$-smooth solution must be a weak solution.
**Lemma 6.1**. *Let* $\Omega $ *be a bounded domain in* $R^{m}.$ *Suppose* $u$ $\in $ $C^{1}(\Omega )$ $\cap $ $C^{2}(\Omega \backslash S(u))$ $\cap $ $C^{0}(\bar{\Omega})$ *satisfies (\[eqn1.4’\]) in* $\Omega \backslash S(u)$ *with* $\vec{F}$ $\in$ $C^{1}(\Omega \backslash S(u))$ *and* $H$ $\in $ $C^{0}(\Omega \backslash S(u))$ $\cap $ $L^{1}_{loc}(\Omega ).$ *Suppose* $H_{m-1}(S(u)),$ *the* $(m-1)$*-dimensional Hausdorff measure of* $S(u),$ *vanishes. Then* $u$ *is a weak solution to (\[eqn1.4’\]) and a minimizer for (\[eqn1.3’\]) if, in addition,* $u$ $\in $ $W^{1,1}(\Omega )$ *and* $H$ $\in$ $L^{\infty}(\Omega )$*.*
**Proof.** By Theorem 3.3, it suffices to prove that (\[eqn3.12\]) holds for any $\varphi $ $\in $ $C_{0}^{\infty }(\Omega )$. That is,
$$\int_{S(u)}|\nabla \varphi |+\int_{\Omega \backslash S(u)}N(u)\cdot \nabla
\varphi +\int_{\Omega }H\varphi \geq 0.$$
Write $\Omega $ $%
=$ $\Omega _{+}$ $\cup $ $\Omega _{0}$ $\cup $ $\Omega _{-}$ where $\Omega
_{+}$ $\equiv $ $\{\varphi $ $>$ $0\},$ $\Omega _{-}$ $\equiv $ $\{\varphi $ $<$ $0\},$ and $\Omega _{0}$ $\equiv $ $\{\varphi $ $=$ $0\}.$ If $\Omega
_{+}$ $\neq $ $\emptyset ,$ then there exists a sequence of $\varepsilon
_{j} $ $>$ $0$ approaching $0,$ such that $\Omega _{\varepsilon _{j}}$ $%
\equiv $ $\{\varphi $ $>$ $\varepsilon _{j}\}$ $\neq $ $\emptyset ,$ $\cup
_{j=1}^{\infty }\Omega _{\varepsilon _{j}}$ $=$ $\Omega _{+}$ and $\partial
\Omega _{\varepsilon _{j}}$ are $C^{\infty }$-smooth by Sard’s theorem. Since $u$* *$\in $* *$C^{1}(\Omega ),$ $S(u)$ $\cap $ $\bar{%
\Omega}_{\varepsilon _{j}}$ is compact. Together with the condition $%
H_{m-1}(S(u))$ $=$ $0,$ for any $\alpha $ $>$ $0,$ we can find a finite cover of balls $B_{r_{k}}(p_{k})$ of center $p_{k}$ and radius $r_{k},$ $%
k=1, $ $2$, ...,$K$ for $S(u)$ $\cap $ $\bar{\Omega}_{\varepsilon _{j}}$ such that
$$\sum_{k=1}^{K}H_{m-1}(\partial B_{r_{k}}(p_{k}))<\alpha . \label{eqn7.1}$$
On the other hand, we compute by the divergence theorem and the equation (\[eqn1.4’\])
$$\begin{aligned}
&&\int_{\partial (\Omega _{\varepsilon _{j}}\backslash \cup
B_{r_{k}}(p_{k}))}(\varphi -\varepsilon _{j})N(u)\cdot \nu \label{eqn7.2} \\
&=&\int_{\Omega _{\varepsilon _{j}}\backslash \cup B_{r_{k}}(p_{k})}\nabla
\varphi \cdot N(u)+(\varphi -\varepsilon _{j})H. \notag\end{aligned}$$
Since $\varphi -\varepsilon _{j}$ $=$ $0$ on $\partial \Omega
_{\varepsilon _{j}},$ we can estimate the boundary term in (\[eqn7.2\]) as follows:
$$\begin{aligned}
&\mid &\int_{\partial (\Omega _{\varepsilon _{j}}\backslash \cup
B_{r_{k}}(p_{k}))}(\varphi -\varepsilon _{j})N(u)\cdot \nu \mid
\label{eqn7.3} \\
&\leq &\{\max_{\Omega }|\varphi -\varepsilon _{j}|\}H_{m-1}(\cup
_{k=1}^{K}\partial B_{r_{k}}(p_{k})) \notag \\
&\leq &\alpha \max_{\Omega }|\varphi -\varepsilon _{j}| \notag\end{aligned}$$
by (\[eqn7.1\]) and the fact that $|N(u)|$ $=$ $|\nu |$ $=$ $1.$ Letting $%
\alpha \rightarrow 0$ in (\[eqn7.3\]) gives
$$\int_{\Omega _{\varepsilon _{j}}\backslash S(u)}\nabla \varphi \cdot
N(u)+(\varphi -\varepsilon _{j})H=0 \label{eqn7.4}$$
in view of (\[eqn7.2\]). Letting $\varepsilon _{j}$ $\rightarrow
$ $0$ in (\[eqn7.4\]), we obtain
$$\int_{\Omega _{+}\backslash S(u)}\nabla \varphi \cdot N(u)+\int_{\Omega
_{+}}\varphi H=0 \label{eqn7.5}$$
by noting that the volume of $\{0$ $<$ $\varphi $ $\leq $ $%
\varepsilon _{j}\}$ tends to $0$ as $\varepsilon _{j}$ $\rightarrow $ $0.$ Similarly we also have$$\int_{\Omega _{-}\backslash S(u)}\nabla \varphi \cdot N(u)+\int_{\Omega
_{-}}\varphi H=0. \label{eqn7.6}$$
On the other hand, it is obvious that the integral of $\varphi H$ over $\Omega _{0}$ vanishes since $\varphi $ $=$ $0$ on $\Omega _{0}.$ Observing that $\nabla \varphi $ $=$ $0$ a.e. on $\Omega _{0}$ in view of Lemma 7.7 in [@GT83], we conclude that$$\int_{\Omega _{0}\backslash S(u)}\nabla \varphi \cdot N(u)=0. \label{eqn7.7}$$
It now follows from (\[eqn7.5\]), (\[eqn7.6\]), and ([eqn7.7]{}) that $$\int_{\Omega \backslash S(u)}\nabla \varphi \cdot N(u)+\int_{\Omega }\varphi
H=0 \label{eqn7.8}$$
for $\varphi $ $\in $ $C_{0}^{\infty }(\Omega ).$ Comparing (\[eqn7.8\]) with (\[eqn3.12\]) and noting that the first integral of (\[eqn3.12\]) is zero by $%
H_{m-1}(S(u)) $ $=$ $0,$ we have completed the proof.
Q.E.D.
**Proof of Theorem D.**
Write $\nabla u$ $+$ $\vec{F}$ $=$ $%
(u_{I}+F_{I})_{I=1}^{m}.$ Consider the map $G:$ $p\in \Omega $ $\rightarrow $ $((u_{I}+F_{I})(p))_{I=1}^{m}.$ Computing the differential $dG$ of $G$ at a singular point $p$ (where $G(p)$ $=$ $0),$ we obtain $(\partial _{J}u_{I}$ $+
$ $\partial _{J}F_{I})$ in matrix form (note that $G$ $\in$ $C^{1}$). From elementary linear algebra we compute
$$\begin{aligned}
&&rank\text{ }(\partial _{J}u_{I}+\partial _{J}F_{I})+rank\text{ }(\partial
_{I}u_{J}+\partial _{I}F_{J}) \label{eqn7.9} \\
&\geq &rank\text{ }\{(\partial _{J}u_{I}+\partial _{J}F_{I})-(\partial
_{I}u_{J}+\partial _{I}F_{J})\} \notag \\
&=&rank\text{ }(\partial _{J}F_{I}-\partial _{I}F_{J}). \notag\end{aligned}$$
Observing that $rank$ $(\partial _{J}u_{I}+\partial _{J}F_{I})$ $=$ $rank$ $(\partial _{I}u_{J}+\partial _{I}F_{J})$ (the transpose has the same rank)$,$ we can deduce from (\[eqn7.9\]) that $rank$ $dG(p)$ $\geq $ $%
\mathit{[}\frac{rank\text{ }(h_{JI}(p))+1}{2}\mathit{]}$ where $h_{JI}$ $%
\equiv $ $(\partial _{J}F_{I}-\partial _{I}F_{J}).$ It follows that
$$dim(Ker\text{ }dG(p))\leq m-\mathit{[}\frac{rank\text{ }(h_{JI}(p))+1}{2}\mathit{%
].} \label{eqn7.10}$$
Then by the implicit function theorem there exists an open neighborhood $V$ of $p$ in $\Omega$ such that $G^{-1}(0)\cap V=S(u)\cap V$ is a submanifold of $V$ whose (Euclidean) dimension $dim_{E}$ is bounded by the right-hand side of (\[eqn7.10\]).
Q.E.D.
**Proof of Theorem E.**
It suffices to prove that $H_{m-1}(S(u))$ $=$ $0$ in view of Lemma 6.1. Combining (\[eqn1.5’\]) and (\[eqn1.6\]), we bound $dim_{E}S(u)$ by $m-2.$ It follows that $H_{m-1}(S(u))$ $=$ $0$.
Q.E.D.
**Proof of Corollary F.**
For $m=2n$, $\vec{F}=-\vec{X}^{\ast }$, we compute $rank\text{ }(h_{JI})=2n$. Therefore (\[eqn1.6\]) reduces to $n\geq 2$; hence $m\geq 4$.
Q.E.D.
We remark that the condition (\[eqn1.6\]) does not hold in dimension $m=2$. So $H_{1}(S(u))$ may not vanish, and a $C^2$-smooth solution may not be a minimizer in this case (see Example 7.4). We will discuss the general situation $H_{m-1}(S(u))$ $>$ $0$ below.
First we will give a criterion for, in particular, a $C^{2}$-smooth solution to be a minimizer. Let $\Omega $ be a domain in $R^{m}$. Let $\Gamma $ $\subset $ $\Omega $ be an $(m-1)$-dimensional, orientable, $C^{1}$-smooth submanifold. Let $B$ $\subset\subset $ $\Omega $ be an open neighborhood of a point in $\Gamma $ with $C^{1}$-smooth boundary and compact closure $\bar{B}$. Suppose $\Gamma \cap B$ divides $B$ into two disjoint parts (note that $\Gamma $ may or may not contain some singular points). That is, $B\backslash \Gamma $ $=$ $B\backslash $ $(\Gamma \cap B)$ $=$ $B^{+}\cup B^{-}$ where $B^{+}$ and $B^{-}$ are disjoint domains (nonempty, open, and connected) (see Figure 1(a) or Figure 1(b) below). Suppose $u$ is $C^{2}$-smooth in $\Omega \backslash \Gamma $ and has no singular points in $\Omega \backslash \Gamma $. Let $\vec{F}$ $\in$ $C^{1}(\Omega )$ for simplicity. Suppose also that $N^{+}(u)$ and $%
N^{-}(u) $ (restrictions of $N(u)$ to $B^{+}$ and $B^{-},$ respectively) are continuous up to $\Gamma \cap B$, i.e., $N^{+}(u)$ $\in$ $C^{0}(\bar{B^{+}})$, $N^{-}(u)$ $\in$ $C^{0}(\bar{B^{-}})$, so that $divN^{\pm }(u)$ $=$ $H$ in $B^{\pm
},$ respectively$.$ Let $\nu ^{+}$ and $\nu ^{-}$ denote the outward unit normals to $\Gamma \cap B$ with respect to $B^{+}$ and $B^{-},$ respectively$%
.$ Note that $\nu ^{+}$ $=$ $-\nu ^{-}.$
**Proposition 6.2.*** Suppose we have the situation described above. Then* $u$* is a weak solution to (\[eqn1.4’\])* *on* $B$* with* $H$* *$\in$ $C^{0}(B\backslash \Gamma )$ $\cap$ $L^{\infty}(B)$* if and only if along* $\Gamma \cap B,$ *there holds*
$$\mathit{(N}^{+}\mathit{(u)-N}^{-}\mathit{(u))\cdot \nu }^{+}\mathit{=(N}^{+}%
\mathit{(u)-N}^{-}\mathit{(u))\cdot \nu }^{-}\mathit{=0.} \label{eqn6.1}$$
Note that for $u$ $\in$ $W^{1,1}(B)$, $u$ is a weak solution to (\[eqn1.4’\]) if and only if $u$ is a minimizer for (\[eqn1.3’\]) in view of Theorem 3.3.
**Proof**. Using the divergence theorem, we compute
$$\begin{aligned}
&&\int_{B\backslash \Gamma }N(u)\cdot \nabla \varphi +H\varphi
=(\int_{B^{+}}+\int_{B^{-}})(N(u)\cdot \nabla \varphi +H\varphi ) \label{eqn6.3} \\
&=&\int_{\partial B^{+}}\varphi N^{+}(u)\cdot \nu ^{+}+\int_{\partial
B^{-}}\varphi N^{-}(u)\cdot \nu ^{-} \notag \\
&=&\int_{\Gamma \cap B}\varphi (N^{+}(u)-N^{-}(u))\cdot \nu ^{+}. \notag\end{aligned}$$
Here we have used $\nu ^{-}$ $=$ $-\nu ^{+}$ and $divN(u)$ $=$ $H$ in both $B^{+}$ and $B^{-}.$ Observing that $H_{m}(S(u)\cap B)$ $=$ $0$ since $H_{m}(S(u)\cap B)$ $\leq$ $H_{m}(\Gamma \cap B)$ $=$ $0$, we conclude from (\[eqn3.12\]) (also $\varphi $ replaced by $-\varphi )$ that $u$ is a weak solution to (\[eqn1.4’\]) if and only if
$$\int_{B\backslash \Gamma }N(u)\cdot \nabla \varphi +H\varphi =0 \label{eqn6.2}$$
for all $\varphi $ $\in $ $C_{0}^{\infty}(B)$. On the other hand, (\[eqn6.2\]) holds if and only if (\[eqn6.1\]) holds by (\[eqn6.3\]).
Q.E.D.
In order to obtain a criterion for a more general situation, we extend Proposition 6.2 as follows. Let $\Omega $ $\subset $ $R^{m}$ be a bounded domain. Let $A$ $\subset $ $\Gamma $ $\subset $ $\Omega $ such that $\Gamma $ is relatively closed in $\Omega $, $H_{m-1}(\bar{A})$ $=$ $0$, and $\Gamma \backslash A$ is a $C^{1}$-smooth $(m-1)$-dimensional manifold. Suppose $\Omega \backslash \Gamma$ $=$ ${\cup}_{j=1}^{\infty}{\Omega}_{j}$, the union of at most countably many domains ${\Omega}_{j}$. For each $j$, we have $\partial{\Omega}_{j}$ $\subset$ $\partial\Omega \cup \Gamma$. We can view $\Omega\backslash\Gamma$ as the collection of domains ${\Omega}_{j}$ obtained by cutting $\Omega$ apart along ${\Gamma}$, with $\Gamma \backslash A$ viewed as the union of two copies of itself. Let $\nu _{j}$ denote the outward unit normal to $\partial{\Omega}_{j}$. Then $\nu _{j}$ exists at any point $p$ $\in$ $\partial{\Omega}_{j}$ $\cap$ $(\Gamma \backslash A)$. At $p$, there is another index $l$ ($l$ may equal $j$) such that $\nu _{l}$ $=$ $-\nu _{j}$. Let $\vec{F}$ $\in$ $C^{1}(\Omega \backslash \Gamma)$ for simplicity and $H$ $\in$ $C^{0}(\Omega\backslash\Gamma )$ $\cap$ $L^{1}_{loc}(\Omega )$. Suppose $u$ $\in$ $C^{1}(\Omega\backslash\Gamma )$ has no singular points in $\Omega\backslash\Gamma$. Let $N_{j}(u)$ denote the restriction of $N(u)$ to ${\Omega}_{j}$.
**Theorem 6.3**. *Suppose we have the situation described above. Furthermore, suppose* $N_{j}(u)$ $\in$ $C^{0}({\Omega}_{j}$ $\cup$ $(\partial{\Omega}_{j}$ $\cap$ $(\Gamma \backslash A)))$ $\cap$ $C^{1}({\Omega}_{j})$ *satisfies* $divN_{j}(u)$ $=$ $H$ *in* ${\Omega}_{j}$ for any $j$. *Then* $u$ *is a weak solution to (\[eqn1.4’\]) in* $\Omega$ *if and only if for each* $p$ $\in$ $\Gamma \backslash A$*, there exist* $j$, $l$ *as described above, such that at* $p$, *there holds*
$$(N_{j}(u)-N_{l}(u))\cdot \nu _{j}=(N_{j}(u)-N_{l}(u))\cdot \nu _{l}=0.$$
We should remind the reader that for $u$ $\in$ $W^{1,1}(\Omega )$ and $H$ $\in$ $L^{\infty}(\Omega )$, $u$ is a weak solution to (\[eqn1.4’\]) if and only if $u$ is a minimizer for (\[eqn1.3’\]) in view of Theorem 3.3.
**Proof**. Let $U$ $\subset \subset $ $\Omega $ have compact closure in $\Omega $, and suppose that the boundary $\partial U$ is $C^{1}$-smooth. For $\varphi $ $\in $ $C_{0}^{\infty}(\Omega )$ with support contained in $U,$ we compute
$$\begin{aligned}
\int_{U}N(u)\cdot \nabla \varphi +H\varphi &=&\int_{U\backslash \Gamma
}N(u)\cdot \nabla \varphi +H\varphi \label{eqn6.3'} \\
&=&\int_{\partial (U\backslash \Gamma )}\varphi N(u)\cdot \nu \notag \\
&=&\sum_{(j,l)}\int_{\partial{\Omega}_{j}\cap
(\Gamma \backslash A)\cap U}\varphi (N_{j}(u)\cdot {\nu}_{j}+N_{l}(u)\cdot
{\nu}_{l}) \notag \\
&=&\sum_{(j,l)}\int_{\partial{\Omega}_{j}\cap
(\Gamma \backslash A)\cap U}\varphi (N_{j}(u)-N_{l}(u))\cdot {\nu}_{j}. \notag\end{aligned}$$
For the last equality we have used $\nu_{l}$ $=$ $-\nu_{j}$. Now observe that $u$ is a weak solution in $\Omega $ if and only if the first term of (\[eqn6.3’\]) vanishes for any $\varphi $ $\in $ $C_{0}^{\infty }(\Omega )$ and associated $U.$ On the other hand, this is equivalent to concluding that $(N_{j}(u)$ $-$ $N_{l}(u))$ $\cdot $ $\nu_{j}$ $=$ $0$ on $\Gamma\backslash A$ by (\[eqn6.3’\]).
Q.E.D.
We remark that it is possible that $u$ $\in$ $C^{1}\backslash C^{2}$ while $N(u)$ $\in$ $C^{1}$ in the nonsingular domain. For instance, let $u$ $=$ $xy+g(y)$ with $g$ $\in$ $C^{1}\backslash C^{2}$. Take $\vec{F}$ $=$ $-\vec{X}^{\ast }$. We can then compute $N(u)$ $=$ $(0,\pm 1)$ in the nonsingular domain defined by $2x+g'(y)$ $\neq$ $0$.
We also remark on how the second equality in (\[eqn6.3’\]) is deduced. First note that $\nu$ may fail to exist at points of $A$, even though $H_{m-1}(\bar{A})$ $=$ $0$. To handle this, for any $\varepsilon $ $>$ $0,$ we can find a finite open cover $\cup _{j=1}^{k}D_{j}$ $\supset $ $\bar{A}$ such that $\sum_{j=1}^{k}H_{m-1}(\partial D_{j})$ $<$ $\varepsilon .$ By the divergence theorem we have
$$\int_{(U\backslash \Gamma )\backslash \cup _{j=1}^{k}D_{j}}N(u)\cdot \nabla
\varphi +H\varphi =\int_{\partial \lbrack (U\backslash \Gamma )\backslash
\cup _{j=1}^{k}D_{j}]}\varphi N(u)\cdot \nu .$$
Passing to the limit as $\varepsilon $ $\rightarrow $ $0$ and observing that the integrands are bounded (since $|N(u)|=1),$ we obtain
$$\int_{U\backslash \Gamma }N(u)\cdot \nabla \varphi +H\varphi =\int_{\partial
(U\backslash \Gamma )}\varphi N(u)\cdot \nu .$$
The idea of the above argument was used in [@CF74]; we have displayed it in the proof of Lemma 6.1, and we used a similar argument in the proof of Theorem 5.2 in [@CHMY04]. We remark that Pauls obtained a similar result (for $m=2$, $\vec{F}=-\vec{X}^{\ast }$, and $H$ $=$ $0$) as Theorem C in [@Pau05]. Ritoré and Rosales also obtained a similar result for $C^{2}$-smooth minimizers (for $m=2$, $\vec{F}=-\vec{X}^{\ast }$, and $H$ $=$ $constant$) as Theorem 4.15 in [@RR05].
Examples
========
We shall give examples of Lipschitz (continuous) minimizers in dimension 2.
**Definition 7.1.** A $p$-area minimizer or a $p$-minimizer in short is a minimizer for (\[eqn1.2\]) with $H$ $=$ $0$.
Throughout this section, we will always work in the situation where $m=2$, $\vec{F}=-\vec{X}^{\ast }$, and $H$ $=$ $0$. Recall that the integral curves of $N^{\perp }(u)$ are straight lines (see Section 4 in [@CHMY04]), called the characteristic lines, segments, or rays. We call the angle between $\Gamma $ (oriented) and a characteristic ray (with direction $N^{\perp }(u))$ in $B^{+}$ ($B^{-},$ respectively) touching a point $p$ $\in $ $\Gamma $ the incident (reflected, respectively) angle at $p.$ Therefore geometrically (\[eqn6.1\]) is equivalent to saying that at $p$ $\in $ $\Gamma \cap B,$ either $N^{+}(u)=N^{-}(u)$ (see Figure 1(b)) or $N^{+}(u)\neq N^{-}(u)$ which implies
$$\textit{The incident angle}=\textit{The reflected angle.}
\label{eqn6.4}$$
(see Figure 1(a))$.$ Suppose $u$ $\in $ $C^{2}$ at a point $p$ $%
\in $ $\Gamma \cap B$ and $\Gamma $ is a singular curve. Recall that if the characteristic line segments $%
\Gamma _{+}$ and $\Gamma _{-}$ in $B^{+}$ and $B^{-}$ respectively meet at $%
p,$ then $\Gamma _{+}\cup \{p\}\cup \Gamma _{-}$ must form a straight line segment according to (the proof of) Proposition 3.5 in [@CHMY04]. Therefore by (\[eqn6.4\]) (note that $N^{+}(u)\ =\ -N^{-}(u)$ at $p$ in this situation), we can conclude that
$$\Gamma _{+}\textit{ and }\Gamma _{-}\textit{ are perpendicular to }\Gamma \textit{
at }p\textit{ if }u\in C^{2}\textit{ at }p. \label{eqn6.5}$$
The constraint (\[eqn6.5\]) gives a necessary and sufficient condition for a $C^{2}$-smooth solution of (\[eqn1.1\]) with $H$ $=$ $0$ to be a $p$-minimizer. We can have a function $u$ $\in $ $C^{2}(\Omega )$ which satisfies the $p$-minimal surface equation $divN(u)$ $=$ $0$ in $\Omega \backslash S(u),$ but is not a weak solution or a $p$-minimizer.
**Example 7.1**. Consider $\vec{F}$ $=$ $(-y,x)$ in the following $N(u)$’s.
\(a) By taking $a=\cos \vartheta ,$ $b=\sin \vartheta ,$ and $g(-bx+ay)$ $=$ ($\cot \vartheta )$ $(-bx+ay)^{2}$ in (1.2) of [@CHMY04] for $0$ $<$ $%
\vartheta $ $<$ $\frac{\pi }{2}$, we obtain $u(x,y)$ $=$ $-xy+y^{2}\cot
\vartheta .$ This is a $C^{2}$ smooth solution to $divN(u)$ $=$ $0$ in $%
R^{2}\backslash S(u)$ for $\vec{F}$ $=$ $(-y,x)$ by a direct computation. We can easily determine the singular set $S(u)$ $\equiv $ $\{u_{x}-y$ $=$ $0,$ $%
u_{y}+x$ $=$ $0\}$ $=$ $\{y=0\}.$ On the other hand, $N^{\perp }(u)$ $=$ $%
(\cos \vartheta ,$ $\sin \vartheta )$ which is not perpendicular to the $x$-axis $\{y=0\}$ (see Figure 2(a))$.$ So in view of (\[eqn6.5\]), this $u$ is not a $p$-minimizer on any bounded domain $\Omega $ containing part of the $x$-axis.
(b) Let $u(x,y)$ $=$ $-xy+y^{2}\cot \vartheta $ for $y>0;$ $=$ $%
-xy+y^{2}\cot \eta $ for $y<0;$ $=0$ for $y=0$ where $0$ $<$ $\vartheta
,\eta $ $<$ $2\pi ,$ $\vartheta \neq \pi ,$ $\eta \neq \pi .$ We compute
$$\begin{aligned}
N^{\perp }(u) &=&(\frac{\cos \vartheta }{\sin \vartheta }|\sin \vartheta
|,|\sin \vartheta |)\text{ for }y>0; \label{eqn6.6} \\
N^{\perp }(u) &=&(-\frac{\cos \eta }{\sin \eta }|\sin \eta |,-|\sin \eta |)%
\text{ for }y<0. \notag\end{aligned}$$
Observe that (\[eqn6.4\]) (or (\[eqn6.1\])) holds if and only if $\vartheta +\eta $ $=$ $2\pi $ (see Figure 2(b)) by (\[eqn6.6\])$.$ Therefore we conclude that $u$ is a ($C^{1,1}$-smooth) $p$-minimizer on any bounded domain in $R^{2}$ if and only if $\vartheta +\eta $ $=$ $2\pi $ in view of (\[eqn6.4\]).
**Example 7.2**. Let $u(x,y)$ $=$ $xy$ for $y>0$, and $u=0$ for $y\leq 0.$ Consider the case of $\vec{F}$ $=$ $(-y,x).$ Compute
$$\begin{aligned}
N^{\perp }(u) &=&(1,0)\text{ for }x>0,y>0;\text{ }N^{\perp }(u)=(-1,0)\text{
for }x<0,y>0. \\
N^{\perp }(u) &=&\frac{(x,y)}{\sqrt{x^{2}+y^{2}}}\text{ for }y<0\end{aligned}$$
(see Figure 3). Observe that the positive $y$-axis $\{x=0,$ $y>0\}$ is a singular curve where (\[eqn6.5\]) holds true. Also on the $x$-axis $%
\{y=0\}$ except the origin, $N^{\perp }(u)$ is continuous and hence (\[eqn6.1\]) holds true (note that the $x$-axis is not a singular curve, but is a curve where $u$ is not $C^{1}$ smooth). Applying Theorem 6.3 with $\Gamma$ $=$ $\{x=0,$ $y > 0\}$ $\cup$ $\{ y=0\}$, we conclude that $u$ is a (Lipschitz) $p$-minimizer on any bounded domain $\Omega$ $\subset$ $R^{2}$.
{width="7cm"}\
We remark that it is not possible to construct a Lipschitz $p$-minimizer having a loop consisting of characteristic lines (see Figure 4 for an example). Indeed, by contradiction, suppose that the loop consists of three characteristic lines $\gamma _{1},$ $\gamma _{2},$ and $\gamma _{3}$ as indicated in Figure 4$.$ Let $\Delta $ denote the region surrounded by $\gamma _{1},$ $\gamma _{2},$ and $\gamma _{3}.$ We integrate the contact form $\Theta $ $\equiv $ $du$ $+$ $xdy$ $-$ $ydx$ over the loop as follows:
$$\begin{aligned}
0 &=&\int_{\gamma _{1}\cup \gamma _{2}\cup \gamma _{3}}\Theta \text{ (}%
\gamma _{1},\gamma _{2},\text{ and }\gamma _{3}\text{ being Legendrian)} \\
&=&\int_{\Delta }d\Theta \text{ \ (Stokes' Theorem)} \\
&=&2\int_{\Delta }dx\wedge dy=2\text{ Area}(\Delta )\neq 0.\end{aligned}$$
This contradiction confirms our claim.
{width="5cm"}
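The last step rests on the planimeter identity $\oint (x\,dy-y\,dx)$ $=$ $2\,$Area for a positively oriented loop. As a quick computational sketch (not part of the argument; the polygon and function names are illustrative), the identity can be verified exactly for polygonal loops:

```python
def loop_integral(vertices):
    """Line integral of x dy - y dx around a closed polygon.
    On the straight segment from p to q, a linear parametrization
    gives the exact value p_x q_y - p_y q_x."""
    total = 0.0
    n = len(vertices)
    for k in range(n):
        x0, y0 = vertices[k]
        x1, y1 = vertices[(k + 1) % n]
        total += x0 * y1 - y0 * x1
    return total

def shoelace_area(vertices):
    """Enclosed area of the polygon (shoelace formula)."""
    return abs(loop_integral(vertices)) / 2.0

triangle = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]  # area 6
result = loop_integral(triangle)
print(result)  # 12.0, i.e. 2 * Area
```

Since the integral over a loop enclosing positive area can never vanish, one obtains exactly the contradiction used above.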
**Example 7.3**. There can be two distinct $C^{2}$-smooth $p$-minimal graphs (i.e., satisfying (\[eqn1.1\]) on nonsingular domain) having the same boundary value and the same $p$-area, but neither of them is a $p$-minimizer. Consider $u$ $=$ $x^{2}+xy,$ $v$ $=$ $xy+1-y^{2}$ (first given in [@Pau01]). We can easily verify that $u$ and $v$ satisfy (\[eqn1.1\]) with $H$ $=$ $0$ on their respective nonsingular domains and have the same value on the unit circle in the $xy$-plane. But they do not satisfy (\[eqn6.5\]). So by Proposition $6.2$ or Theorem 6.3, neither of them can be a $p$-minimizer. Compute the $p$-area (see (\[eqn1.2\])) of $u$ and $%
v$ over the unit disc $\Delta $ as follows:
$$\begin{aligned}
\mathcal{X(}u\mathcal{)} &=&\int_{\Delta }\sqrt{8}|x|dxdy=\frac{8\sqrt{2}}{3}%
, \\
\mathcal{X}(v) &=&\int_{\Delta }2|x-y|dxdy=\frac{8\sqrt{2}}{3}.\end{aligned}$$
So they have the same $p$-area. By the uniqueness of $p$-minimizers (see Theorem B), we also conclude that neither $u$ nor $v$ can be the $p$-minimizer.
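Both integrals are elementary, and can also be confirmed numerically. The following sketch (not part of the original argument; the grid resolution and tolerance are arbitrary choices) approximates them by a midpoint Riemann sum over the unit disc and compares both with $8\sqrt{2}/3$ $\approx$ $3.7712$.

```python
import math

def disc_integral(f, n=800):
    """Midpoint Riemann sum of f over the unit disc x^2 + y^2 <= 1."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        for j in range(n):
            y = -1.0 + (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += f(x, y)
    return total * h * h

# integrands |grad u + F| and |grad v + F| from Example 7.3
p_area_u = disc_integral(lambda x, y: math.sqrt(8.0) * abs(x))
p_area_v = disc_integral(lambda x, y: 2.0 * abs(x - y))
exact = 8.0 * math.sqrt(2.0) / 3.0  # approximately 3.7712
print(p_area_u, p_area_v, exact)
```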
We are going to describe what the (unique) $p$-minimizer looks like on $%
\Delta $ with the boundary value (or curve) $\rho (\theta )$ $\equiv $ $\cos
^{2}\theta $ $+$ $\cos \theta $ $\sin \theta $ ($\theta $ is the standard angle parameter for $\partial \Delta ).$ Let $(\alpha ,$ $\beta ,$ $\gamma )$ be a point of a line segment $\tilde{L}$ $\subset $ $\bar{\Delta}\times R$ meeting the boundary curve with the projection $L$ $\subset $ $\bar{\Delta}$ passing through the origin$.$ Suppose $\theta ^{\prime }$ is the angle between the positive $x$-axis and part of $L$, lying in the upper half plane. Then we have
$$\begin{aligned}
\alpha &=&t\cos \theta ^{\prime },\text{ }\beta =t\sin \theta ^{\prime }
\label{eqn6.19} \\
\gamma &=&\cos ^{2}\theta ^{\prime }+\cos \theta ^{\prime }\sin \theta
^{\prime } \notag\end{aligned}$$
for $-1$ $\leq $ $t$ $\leq $ $1$ (note that $\rho (\pi +\theta )$ $%
=$ $\rho (\theta )).$ Suppose the contact plane passing through $(\alpha ,$ $%
\beta ,$ $\gamma )$ intersects the boundary curve $\rho $ at $(\cos \theta ,$ $\sin \theta ,$ $\rho (\theta ))$. Then we have the following relation:
$$\rho (\theta )-\rho (\theta ^{\prime })+t\sin (\theta -\theta ^{\prime })=0
\label{eqn6.20}$$
by observing that $z-\gamma +x(y-\beta )-y(x-\alpha )=0$ is the equation for such a contact plane in $R^{3}$ with coordinates $(x,y,z).$ By elementary trigonometry for the above specific $\rho ,$ we can reduce (\[eqn6.20\]) to
$$\frac{\sqrt{2}}{2}[\sin (2\theta +\frac{\pi }{4})-\sin (2\theta ^{\prime }+%
\frac{\pi }{4})]+t\sin (\theta -\theta ^{\prime })=0. \label{eqn6.21}$$
The idea is to choose $\theta ^{\prime }$ such that $\sin (2\theta
^{\prime }+\frac{\pi }{4})$ $=$ $0.$ Then we solve (\[eqn6.21\]) for $%
\theta $ (perhaps we have multiple solutions)$.$ Keeping $\tilde{L}$ or $L$ associated to $%
\theta ^{\prime }$ as the singular set in mind, we connect $(\alpha ,$ $\beta ,$ $\gamma )$ $\in $ $\tilde{L}$ to a point of the boundary curve, associated to $\theta ,$ by a line segment. Since these line segments are Legendrian, their union forms a Legendrian ruled surface, hence a $p$-minimal surface ([@CHMY04]). Moreover, if two characteristic lines (i.e., above Legendrian lines projected to the $xy$-plane) meet at a point of $\tilde{L},$ condition (\[eqn6.4\]) holds. So in this way we can construct the $p$-minimizer by Proposition $6.2$ or Theorem 6.3. We give more details below.
First solving $\sin (2\theta ^{\prime }+\frac{\pi }{4})$ $=$ $0$ gives $%
\theta ^{\prime }$ $=$ $(n-\frac{1}{4})\frac{\pi }{2}$ where $n$ is an integer. There are two such $\theta ^{\prime }$’s modulo an integral multiple of $\pi ,$ namely $\theta ^{\prime }$ $=$ $\frac{3}{8}\pi $ and $%
\theta ^{\prime }$ $=$ $\frac{7}{8}\pi .$ We take $\theta ^{\prime }$ $=$ $%
\frac{3}{8}\pi $ (it turns out that $\theta ^{\prime }$ $=$ $\frac{7}{8}\pi $ won’t give rise to a $p$-minimal graph in the following argument). So (\[eqn6.21\]) is reduced to $$\frac{\sqrt{2}}{2}\sin 2(\theta -\frac{3}{8}%
\pi ). \label{eqn6.22}$$
(note that for $\theta ^{\prime }$ $=$ $\frac{7}{8}\pi $ we have $%
-t$ instead of $t$ in (\[eqn6.22\])). By the double angle formula, we deduce from (\[eqn6.22\]) that $$(a)\text{ }\cos (\theta -\frac{3}{8}\pi )=\frac{t}{\sqrt{2}}\text{ ;}(b)%
\text{ }\sin (\theta -\frac{3}{8}\pi )=0. \label{eqn6.23}$$
The solutions to $(b)$ of (\[eqn6.23\]) are $\frac{3}{8}\pi $ $%
+n\pi $ for any integer $n,$ which we ignore. We have two solutions $\theta
_{1},$ $\theta _{2}$ (modulo an integral multiple of $2\pi )$ to $(a)$ of (\[eqn6.23\]) for a given $t$ with the relation
$$\theta _{1}-\frac{3}{8}\pi =\frac{3}{8}\pi -\theta _{2}. \label{eqn6.24}$$
When $t$ runs from $-1$ to $1,$ $\theta _{1}$ runs from $\frac{9}{8%
}\pi $ to $\frac{5}{8}\pi $ clockwise while $\theta _{2}$ runs from $-\frac{3%
}{8}\pi $ to $\frac{1}{8}\pi $ counterclockwise (See Figure 5 below).
{width="6cm"}
Denote the line segments between $(\alpha ,$ $\beta )$ $\in $ $L$ ($\theta
^{\prime }$ $=$ $\frac{3}{8}\pi $) and the boundary point $(\cos \theta _{j},
$ $\sin \theta _{j}),$ $j$ $=$ $1,$ $2,$ by $\Gamma _{t}^{1},$ $\Gamma
_{t}^{2},$ respectively. $\Gamma _{t}^{1}$ and $\Gamma _{t}^{2}$ are the $xy$-plane projections of two Legendrian lines $\tilde{\Gamma}_{t}^{1},$ $\tilde{%
\Gamma}_{t}^{2}$ connecting $(\alpha ,$ $\beta ,$ $\gamma )$ $\in $ $\tilde{L%
}$ to $(\cos \theta _{j},$ $\sin \theta _{j},$ $\rho (\theta _{j})),$ $j$ $=$ $1,$ $2,$ respectively. We will define a graph $\check{u}$ over $\bar{\Delta}%
,$ whose restriction to the region $\Omega $ $\equiv $ $L$ $\cup $ $(\cup
_{j=1,2;-1\leq t\leq 1}\Gamma _{t}^{j})$ is $\tilde{L}$ $\cup $ $(\cup
_{j=1,2;-1\leq t\leq 1}\tilde{\Gamma}_{t}^{j}).$ We can parametrize $\tilde{%
\Gamma}_{t}^{2},$ say, in the following form (see (4.9) in [@CHMY04]):
$$\begin{aligned}
x &=&s(\sin \eta (t))+\alpha (t) \\
y &=&-s(\cos \eta (t))+\beta (t) \\
z &=&s[\beta (t)\sin \eta (t)+\alpha (t)\cos \eta (t)]+\gamma (t).\end{aligned}$$
Here $\eta (t)$ $=$ $\frac{\pi }{2}$ $+$ $\theta _{2}(t)$ $-$ $%
\delta (t)$ in which $\cos (\theta _{2}(t)-\frac{3}{8}\pi )=\frac{t}{\sqrt{2}%
}$ (see $(a)$ of (\[eqn6.23\])) and $\tan \delta (t)$ $=$ $t\sqrt{1-t^{2}/2%
}/(1-t^{2}/\sqrt{2})$ by elementary plane geometry (we leave the details to the reader). On the other hand, (\[eqn6.4\]) holds along $L$ due to (\[eqn6.24\]). Therefore $\check{u}$ $\in $ $C^{1,1}$ is a weak solution to $%
(1.1)$ with $H$ $=$ $0$ over the region $\Omega .$ The remaining domain $%
\bar{\Delta}\backslash \Omega $ consists of four small fan-shaped regions (see Figure 5). For each of such regions, we can connect two points on the boundary curve, indicated by $\theta ^{\prime }$ and $\theta $ which are related by (\[eqn6.21\]) with $t$ $=$ $1.$ Thus we obtain a family of Legendrian line segments whose lengths are getting smaller when both $\theta
^{\prime }$ and $\theta $ tend to some critical value (e.g., for the fan-shaped region between $\frac{1}{8}\pi $ and $\frac{3}{8}\pi ,$ the critical value is $\frac{1}{4}\pi $ by solving $\frac{d\theta }{d\theta
^{\prime }}$ $=$ $0).$ These Legendrian line segments form a portion of the graph $\check{u}$ over $\bar{\Delta}\backslash \Omega .$ So $\check{u}$ is a $C^{2}$-smooth $p$-minimal graph over $\bar{\Delta}\backslash \Omega .$ Altogether $\check{u}$ $\in $ $C^{1,1}(\bar{\Delta})$ is the (unique) $p$-minimizer by Theorem 3.3.
Appendix: uniqueness of solutions to (\[eqn4.1\])
=================================================
The existence of solutions to (\[eqn4.1\]) is asserted in Theorem 4.5. In this section we are going to prove the uniqueness. In fact we can obtain more general results. First we define
$$N_{\varepsilon }(u)\equiv \frac{\nabla u+\vec{F}}{\sqrt{\varepsilon
^{2}+|\nabla u+\vec{F}|^{2}}}. \label{eqn8.1}$$
Let $\vec{\alpha}$ $\equiv $ $(\varepsilon ,\nabla u+\vec{F})$ $%
\in $ $R\times R^{m}$ $=$ $R^{m+1}.$ Denote $|\vec{\alpha}|$ as $\alpha .$ Similarly let $\vec{\beta}$ $\equiv $ $(\varepsilon ,\nabla v+\vec{F})$ and $%
\beta \equiv |\vec{\beta}|$.
**Lemma 8.1.** *Let* $u,$ $v$ $\in $ $W^{1}(\Omega )$ *where* $\Omega $ $\subset $ $R^{m}$ ($m\geq 1$) *is an arbitrary domain. Then*
$$(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot (\nabla u-\nabla v)\geq \frac{\alpha +\beta }{2}\mid N_{\varepsilon }(u)-N_{\varepsilon }(v)\mid ^{2}.
\label{eqn8.2}$$
*Moreover, the equality holds for* $\varepsilon =0.$ *When* $\varepsilon >0,$ $(N_{\varepsilon }(u)-N_{\varepsilon }(v))$ $\cdot $ $(\nabla u-\nabla v)$ $=$ $0$ *if and only if* $\nabla u$ $=$ $\nabla v.$
**Proof.** We compute
$$\begin{aligned}
&&(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot (\nabla u-\nabla v)
\label{eqn8.3} \\
&=&(\frac{\nabla u+\vec{F}}{\alpha }-\frac{\nabla v+\vec{F}}{\beta })\cdot
\{(\nabla u+\vec{F})-(\nabla v+\vec{F})\} \notag \\
&=&\{\frac{(\varepsilon ,\nabla u+\vec{F})}{\alpha }-\frac{(\varepsilon
,\nabla v+\vec{F})}{\beta }\}\cdot \{(\varepsilon ,\nabla u+\vec{F}%
)-(\varepsilon ,\nabla v+\vec{F})\} \notag \\
&=&\{\frac{\vec{\alpha}}{\alpha }-\frac{\vec{\beta}}{\beta }\}\cdot \{\vec{%
\alpha}-\vec{\beta}\}=(\alpha +\beta )(1-\cos \theta ) \notag\end{aligned}$$
where $\vec{\alpha}\cdot \vec{\beta}=\alpha \beta \cos \theta .$ On the other hand, we can estimate
$$\begin{aligned}
& & \mid N_{\varepsilon }(u)-N_{\varepsilon }(v)\mid ^{2} \label{eqn8.4} \\
&\leq &\mid \frac{\vec{\alpha}}{\alpha }-\frac{\vec{\beta}}{\beta }\mid
^{2}=2(1-\cos \theta ). \notag\end{aligned}$$
Now (\[eqn8.2\]) follows from (\[eqn8.3\]) and (\[eqn8.4\]). Observing that the equality in (\[eqn8.4\]) holds for $\varepsilon $ $=$ $%
0,$ we obtain the equality in (\[eqn8.2\]) for $\varepsilon $ $=$ $0.$ Suppose $(N_{\varepsilon }(u)$ $-$ $N_{\varepsilon }(v))$ $\cdot $ $(\nabla
u $ $-$ $\nabla v)$ $=$ $0.$ By (\[eqn8.2\]) we have $N_{\varepsilon }(u)$ $= $ $N_{\varepsilon }(v).$ Taking the modulus of this equality gives $|\nabla u+\vec{F}|$ $=$ $|\nabla v+\vec{F}|$ if $%
\varepsilon $ $>$ $0.$ It follows that $\nabla u$ $=$ $\nabla v.$
Q.E.D.
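Inequality (\[eqn8.2\]) and the equality claim for $\varepsilon$ $=$ $0$ can be sanity-checked numerically. The sketch below is illustrative only (the helper `check_ineq` and its parameters are ours, not part of the paper): it samples random values of $\nabla u+\vec{F}$ and $\nabla v+\vec{F}$ and tests both claims directly from the definitions of $\vec{\alpha}$ and $\vec{\beta}$.

```python
import random

def check_ineq(eps, m=3, trials=1000, tol=1e-9):
    """Check (N_eps(u) - N_eps(v)) . (grad u - grad v) >=
    (alpha + beta)/2 * |N_eps(u) - N_eps(v)|^2 on random data,
    with alpha = |(eps, grad u + F)|, beta = |(eps, grad v + F)|,
    and that equality holds when eps = 0."""
    random.seed(0)
    for _ in range(trials):
        a = [random.uniform(-5.0, 5.0) for _ in range(m)]  # grad u + F
        b = [random.uniform(-5.0, 5.0) for _ in range(m)]  # grad v + F
        alpha = (eps * eps + sum(x * x for x in a)) ** 0.5
        beta = (eps * eps + sum(x * x for x in b)) ** 0.5
        Nu = [x / alpha for x in a]
        Nv = [x / beta for x in b]
        lhs = sum((p - q) * (x - y) for p, q, x, y in zip(Nu, Nv, a, b))
        rhs = (alpha + beta) / 2.0 * sum((p - q) ** 2 for p, q in zip(Nu, Nv))
        assert lhs >= rhs - tol
        if eps == 0:
            assert abs(lhs - rhs) < tol
    return True

print(check_ineq(1.0), check_ineq(0.0))  # True True
```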
We remark that the equality in (\[eqn8.2\]) for $\varepsilon $ $=$ $0$ has been obtained as Lemma $5.1^{\prime }$ in [@CHMY04]. Recall $%
Q_{\varepsilon }u$ $\equiv $ $divN_{\varepsilon }(u)$ (see (\[eqn4.1\]), (\[eqn8.1\])). Note that for $\vec{F}$ $=$ $0,$ $\varepsilon $ $=$ $1,$ $Q_{\varepsilon }u$ is the Riemannian mean curvature of the graph defined by $u.$ In this case, the above inequality has been obtained in [@Mik79], [@Hw88], and [@CK91] independently.
**Definition 8.1.** Let $\Omega \subset R^{m}$ be a bounded domain and $\varepsilon$ $>$ $0$. Suppose $u,$ $v$ $\in $ $W^{1}(\Omega )$ and $\vec{F}$ is measurable$.$ We say $Q_{\varepsilon }u$ $-$ $Q_{\varepsilon }v$ $\geq $ $0$ ($\leq $ $0,$ respectively) weakly if for any $\varphi $ $\in $ $C_{0}^{1}(\Omega ),$ $%
\varphi $ $\geq $ $0,$ there holds
$$\int_{\Omega }(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot \nabla \varphi
\leq 0\text{ (}\geq 0,\text{ respectively}). \label{eqn8.5}$$
Note that $N_{\varepsilon }(u)$ and $N_{\varepsilon }(v)$ are integrable since they are bounded by $1.$ We have the following comparison principle for $Q_{\varepsilon }.$
**Theorem 8.2.** *Let* $\Omega \subset R^{m}$ *be a bounded domain and* $\varepsilon $ $>$ $0$. *Suppose* $u,$ $v$ $\in $ $C^{1}(\Omega )$ $\cap $ $C^{0}(\bar{\Omega})$ *satisfy* $Q_{\varepsilon }u$ $-$ $Q_{\varepsilon }v$ $\geq $ $0$ ($\leq $ $0,$ *respectively) weakly and* $u$ $-$ $v$ $\leq $ $0$ ($\geq 0,$ *respectively*) *on* $\partial \Omega .$ *Then* $u$ $-$ $v$ $\leq $ $0$ ($\geq 0,$ *respectively*) *in* $\Omega .$
**Proof.** Given $a$ $>$ $0,$ we choose a function $f_{a}$ $\in $ $%
C^{1}(R)$ with the property that $f_{a}$ $\equiv $ $0$ in $(-\infty ,$ $a],$ $f_{a}$ $>$ $0$, and $f_{a}^{\prime }$ $>$ $0$ in $(a,$ $\infty ).$ Observe that $f_{a}(u-v)$ $\in $ $C_{0}^{1}(\Omega )$ (i.e., $f_{a}(u-v)$ $\in $ $%
C^{1}$ and has compact support in $\Omega $) by the assumption $u$ $-$ $v$ $\leq $ $0$ on $\partial \Omega $. It follows from (\[eqn8.5\]) that
$$\begin{aligned}
0 &\geq &\int_{\Omega }(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot \nabla
(f_{a}(u-v)) \label{eqn8.6} \\
&=&\int_{\{u-v>a\}}(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot
f_{a}^{\prime }(u-v)(\nabla u-\nabla v) \notag \\
&\geq &0\text{ \ (by (\ref{eqn8.2})).} \notag\end{aligned}$$
Therefore we have $(\nabla u$ $-$ $\nabla v)$ $\cdot $ $(N_{\varepsilon }(u)$ $-$ $N_{\varepsilon }(v))$ $=$ $0$ in $\{u$ $-$ $v$ $>$ $a\}$ since $f_{a}^{\prime }(u-v)$ $>$ $0$ and $(N_{\varepsilon }(u)$ $-$ $N_{\varepsilon }(v))$ $\cdot $ $(\nabla u-\nabla v)\geq $ $0$ in (\[eqn8.6\]). It follows that $\nabla u$ $=$ $\nabla v$ in $\{u$ $-$ $v$ $>$ $a\}$ by Lemma 8.1. Thus we obtain $u$ $-$ $v$ $\equiv $ $a$ in $\{u$ $-$ $v$ $>$ $a\}$ (on each component of this open set $u$ $-$ $v$ is constant, and it equals $a$ on the component's boundary by continuity), a contradiction. So $\{u$ $-$ $v$ $>$ $a\}$ is empty. Since $a$ $>$ $0$ is arbitrary, we conclude that $\{u$ $-$ $v$ $>$ $0\}$ is empty. So $u$ $-$ $v$ $\leq $ $0$ in $\Omega .$
Q.E.D.
We remark that basically the above result can be deduced from Theorem 10.7 in [@GT83].
**Corollary 8.3.** *Let* $\Omega \subset R^{m}$ *be a bounded domain. Let* $\varepsilon $ $>$ $0.$ *Suppose* $u,$ $v$ $\in $ $C^{2}(\Omega )$ $\cap $ $C^{0}(\bar{\Omega})$ *and* $\vec{F}\in C^{1}(\Omega )$ *satisfy* $Q_{\varepsilon }u$ $=$ $Q_{\varepsilon }v$ *in* $\Omega $ *and* $u$ $=$ $v$ *on* $\partial \Omega .$ *Then* $u\equiv v$ *in* $\Omega .$
**Theorem 8.4.** *Let* $\Omega \subset R^{m}$ *be a bounded domain and* $\varepsilon $ $>$ $0$. *Suppose* $u,$ $v$ $\in $ $W^{1,1}(\Omega )$ *satisfy* $Q_{\varepsilon }u$ $-$ $Q_{\varepsilon }v$ $\geq $ $0$ ($\leq $ $0,$ *respectively) weakly and* $(u$ $-$ $v)^{+}$ ($(u$ $-$ $v)^{-},$ *respectively)* $\in $ $W_{0}^{1,1}(\Omega ).$ *Then* $u$ $-$ $v$ $\leq $ $0$ ($\geq $ $0,$ *respectively) in* $\Omega .$
**Proof.** First we observe that (\[eqn8.5\]) still holds for $\varphi $ $\in $ $W_{0}^{1,1}(\Omega ),$ $\varphi $ $\geq $ $0.$ It follows that
$$\begin{aligned}
0 &\geq &\int_{\Omega }(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot \nabla
(u-v)^{+} \label{eqn8.7} \\
&=&\int_{\{u-v>0\}}(N_{\varepsilon }(u)-N_{\varepsilon }(v))\cdot \nabla (u-v) \notag \\
&\geq &0 \notag\end{aligned}$$
by (\[eqn8.2\]). Therefore if $u$ $-$ $v$ $>$ $0,$ $\nabla (u-v)^{+}$ $=$ $\nabla (u-v)$ $=$ $0$ by Lemma 7.6 in [@GT83], (\[eqn8.7\]), and Lemma 8.1. Also if $u$ $-$ $v$ $\leq $ $0,$ $\nabla (u-v)^{+}$ $=$ $0$ by Lemma 7.6 in [@GT83]. Altogether we have shown that $\nabla (u-v)^{+}$ $=$ $0$ in $\Omega .$ Now applying the Sobolev inequality to $(u-v)^{+}$ $\in $ $W_{0}^{1,1}(\Omega ),$ we obtain $(u-v)^{+}$ $=$ $0$ in $\Omega .$ That is, $u$ $-$ $v$ $\leq $ $0$ in $\Omega .$
Q.E.D.
We remark that the proof of Theorem 8.4 is based on the idea of the proof of Theorem 8.1 in [@GT83].
[99]{} Balogh, Z. M., Size of characteristic sets and functions with prescribed gradient, J. reine angew. Math., 564 (2003) 63-83.
Cheng, J.-H. and Hwang, J.-F., Properly embedded and immersed minimal surfaces in the Heisenberg group, Bull. Aus. Math. Soc., 70 (2004) 507-520.
Cheng, J.-H., Hwang, J.-F., Malchiodi, A., and Yang, P., Minimal surfaces in pseudohermitian geometry, Annali della Scuola Normale Superiore di Pisa, Classe di Scienze (5), 4 (2005) 129-177.
Concus, P. and Finn, R., On capillary free surfaces in the absence of gravity, Acta Math., 132 (1974) 177-198.
Collin, P. and Krust, R., Le Problème de Dirichlet pour l’équation des surfaces minimales sur des domaines non bornés, Bull. Soc. Math. France, 119 (1991) 443-462.
Franchi, B., Serapioni, R., and Serra Cassano, F., Rectifiability and perimeter in the Heisenberg group, Math. Ann. 321 (2001) 479-531.
Garofalo, N. and Nhieu, D.-M., Isoperimetric and Sobolev inequalities for Carnot-Caratheodory spaces and the existence of minimal surfaces, Comm. Pure Appl. Math., 49 (1996) 1081-1144.
Garofalo, N. and Pauls, S. D., The Bernstein problem in the Heisenberg group, preprint, 2005.
Gilbarg, D. and Trudinger, N. S., Elliptic partial differential equations of second order, 2nd ed., G.M.W. 224, Springer-Verlag, 1983.
Hwang, J.-F., Comparison principles and Liouville theorems for prescribed mean curvature equation in unbounded domains, Ann. Scuola Norm. Sup. Pisa, 15 (1988) 341-355.
Juutinen, P. and Lindqvist, P., A theorem of Radó’s type for the solutions of a quasi-linear equation, Math. Res. Lett., 11 (2004) 31-34.
Juutinen, P., P-harmonic approximation of functions of least gradient, to appear in Indiana Univ. Math. J.
Lee, J. M., The Fefferman metric and pseudohermitian invariants, Trans. Amer. Math. Soc., 296 (1986) 411-429.
Miklyukov, V. M., On a new approach to Bernstein’s theorem and related questions for equations of minimal surface type, Mat. Sb., 108(150) (1979) 268-289; English transl. in Math. USSR Sb., 36 (1980) 251-271.
Morrey, C. B., Multiple integrals in the calculus of variations, GMW 130, Springer-Verlag New York Inc. 1966.
Pauls, S. D., Minimal surfaces in the Heisenberg group, Geometriae Dedicata, 104 (2004) 201-231.
Pauls, S. D., H-minimal graphs of low regularity in $H^{1}$, Comment. Math. Helv., 81 (2006) 337-384.
Ritoré, M. and Rosales, C., Area-stationary surfaces in the Heisenberg group $H^{1}$, arXiv: math.DG/0512547 v1.
Sternberg, P., Williams, G., and Ziemer, W. P., Existence, uniqueness, and regularity for functions of least gradient, J. reine angew. Math., 430 (1992) 35-60.
|
---
abstract: 'Manual determination of plant phenotypic properties such as plant architecture, growth, and health is very time consuming and sometimes destructive. Automatic image analysis has become a popular approach. This research aims to identify the position (and number) of leaves from a temporal sequence of high-quality indoor images consisting of multiple views, focusing in particular on images of maize. The procedure applies segmentation to the images, using the convex hull to pick the best view at each time step, followed by a skeletonization of the corresponding image. To remove skeleton spurs, a discrete skeleton evolution pruning process was applied. Pre-existing statistics regarding maize development were incorporated to help differentiate between true leaves and false leaves. Furthermore, for each time step, leaves were matched to those of the previous and next three days using the graph-theoretic Hungarian algorithm. This matching algorithm can be used both to remove false positives and to predict true leaves, even if they were completely occluded in the image itself. The algorithm was evaluated using an open dataset consisting of $13$ maize plants across 27 days from two different views. The total number of true leaves in the dataset was $1843$, and our proposed technique detects a total of $1690$ leaves including $1674$ true leaves, and only $16$ false leaves, giving a recall of $90.8\%$, and a precision of $99.0\%$.'
author:
- |
Nazifa Azam Khan$^1$, Oliver A.S. Lyon$^2$, Mark Eramian$^1$, Ian McQuillan$^1$\
[[email protected]]{}, [[email protected]]{}, [[email protected]]{}, [[email protected]]{}\
$^1$Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada\
$^2$School of Computing, Queen’s University, Kingston, ON, Canada
bibliography:
- 'plant.bib'
title: 'A Novel Technique Combining Image Processing, Plant Development Properties, and the Hungarian Algorithm, to Improve Leaf Detection in Maize'
---
Introduction {#intro}
============
Agriculture is the backbone of the world economy, and a significant number of countries’ economies are highly dependent on it. Plant diseases, undesirable growth, nutritional deficiency, and disorders in plants not only affect the quality and quantity of agricultural profits, but also play a vital role in food crises. Thus, monitoring the condition of plants is a fundamental step in successful cultivation of crops and plant breeding. Indeed, plant breeding, with the assistance of high-throughput phenotyping, is helping to cultivate crops under extreme climates, and to create novel plant varieties [@article; @high2019]. This can ultimately contribute towards a greater quantity and quality of food for feeding the ever-growing population. Until recently, the observation and analysis of plant growth, disease detection, and phenotypic properties were done entirely manually by experts, in a time-intensive and largely intuitive fashion. Thus, the potential of using image processing in plant research to automate phenotypic inspection has long been recognised as an important step forward [@intro_1]. Now, the food industry ranks among the top industries using image processing [@intro_2] to help evaluate food quality and consistency while eliminating the subjectivity of manual inspections [@intro_3].
Computer vision can be used to extract useful information from plant images [@plant], and to identify phenotypic traits throughout a plant’s life [@crisp]. Various types of digital cameras are used to acquire richer information about plants of interest [@translating; @plant_model; @plant_phenome]. Extracting meaningful phenotypes from plant image sequences is broadly classified into two categories: holistic and component-based [@automated]. Holistic plant phenotyping considers the whole plant as a single object and gives metrics that quantify the basic geometric properties of the plant (e.g., height, width, plant aspect ratio, etc.). Component-based analysis tries to identify the specific distinguishing components of a plant (leaves, stems, flowers, etc.), their positions, and sizes [@unl].
***Problem overview:*** Our goal is to reconstruct and predict maize plant growth properties, topology, numbers and positions of leaves, and their emergence, from indoor time sequence plant images. Maize is a globally-grown annual cereal crop, and one of the top three most important cereal crops in the world [@maize_2016; @individual; @agronomic]. Therefore, maize has a vital role to play in our agricultural economy, and automated prediction of maize plant growth, topology, components, disease, and architecture is important.
Automatic determination of plant topology and architecture is highly dependent on accurate plant skeletons. A skeleton is a thin, sometimes one-pixel-wide, representation of an object that captures the object’s topology; it is also often useful for feature extraction. After fifty years of research, there is still no skeletonization algorithm that is ideal for every individual area of application [@lam1992]. Obtaining accurate plant skeletons from images is a difficult problem, as skeletons are sensitive to small changes in the image, leading to extraneous branches and incorrectly joined segments (errors in topology) [@branches]. Extra branches, also called spurs, are especially common in plant skeletons and form due to noise in images [@cai]. Spurs are often incorrectly interpreted as leaves. The complex geometry of plants, their thin structures, and missing information due to self-occlusion, make skeleton extraction and pruning extremely challenging tasks [@ayan]. Occlusion can occur frequently in 2D images, both partially and totally. Partial occlusion occurs when a part of a component is occluded in an image, e.g. part of a leaf hiding its branching point. Total occlusion occurs when a leaf is totally obscured by other components of the plant. For total occlusion, there is no obvious way to tell from the image itself that a component is present.
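The spur-removal step can be sketched as a graph operation. The toy example below is a simplified stand-in for discrete skeleton evolution pruning (the graph encoding, threshold, and function names are ours, not pipeline code): terminal branches shorter than a threshold are deleted repeatedly from a skeleton stored as an undirected graph.

```python
from collections import defaultdict

def prune_spurs(edges, min_len=3):
    """Iteratively remove terminal branches (spurs) shorter than
    min_len edges from an undirected skeleton graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for leaf in [n for n in list(adj) if len(adj[n]) == 1]:
            if leaf not in adj or len(adj[leaf]) != 1:
                continue  # removed earlier in this pass
            # walk from the endpoint towards the nearest junction
            path, prev, cur = [leaf], leaf, next(iter(adj[leaf]))
            while cur in adj and len(adj[cur]) == 2:
                nxt = next(n for n in adj[cur] if n != prev)
                path.append(cur)
                prev, cur = cur, nxt
            if len(path) < min_len:  # short terminal branch: a spur
                for node in path:
                    for nb in list(adj[node]):
                        adj[nb].discard(node)
                    del adj[node]
                changed = True
    return {u: set(vs) for u, vs in adj.items() if vs}

# stem 0-1-2-3-4-5, a real leaf branch 2-6-7-8, and a 1-edge spur 3-9
skeleton = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5),
            (2, 6), (6, 7), (7, 8), (3, 9)]
pruned = prune_spurs(skeleton, min_len=2)
print(sorted(pruned))  # [0, 1, 2, 3, 4, 5, 6, 7, 8] -- spur 9 removed
```

In a real setting the threshold would come from the pruning criterion itself (e.g., the reconstruction weights used by discrete skeleton evolution) rather than a fixed edge count, since a fixed count would also delete genuinely short leaves.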
***Contribution:*** This study proposes a novel technique to improve detection of leaves and topology in maize. The proposed method initially obtains the plant skeleton with image processing algorithms, and then applies statistics on maize development available in the literature to the skeleton to improve the predicted topology. Lastly, the Hungarian algorithm is applied to match the leaves in each day’s image with those in the previous and next days’ images, matching skeleton components between days. The Hungarian algorithm, also known as the Munkres algorithm, operates on weighted bipartite graphs and determines the one-to-one mapping between two given sets of vertices for which the matched edges have the smallest total weight [@hungarian]. Despite there being an exponential number of such mappings, the optimal solution can be found in polynomial time. This can be used to find the best matching between the leaves in one image of a plant and those of the same plant on another day, ideally matching the same leaves together [@shortest]. This can both discard leaves detected from erroneous spurs, and properly predict components even if they are completely occluded. In this way, the analyses are not completely dependent on the skeletonization techniques and pruning strategies. This contributes not only to leaf counting, but also to inference of plant topology.
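As a toy illustration of this matching step, the minimum-total-weight assignment can be computed with SciPy's implementation of the Hungarian algorithm; the cost matrix below is hypothetical, not taken from the paper's data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = leaves on day i, columns = leaves on day i+1.
# Entry [r, c] is the dissimilarity between leaf r and leaf c (e.g. a distance).
cost = np.array([
    [1.0, 9.0, 7.0],
    [8.0, 2.0, 6.0],
])

# Optimal one-to-one matching, found in polynomial time despite the
# exponential number of possible matchings.
row_ind, col_ind = linear_sum_assignment(cost)
total = cost[row_ind, col_ind].sum()
```

Here leaf 0 of day $i$ is matched to leaf 0 of day $i+1$, leaf 1 to leaf 1, and the third leaf of day $i+1$ is left unmatched, as in the leaf-matching use case described above.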
While the analysis was carried out using images of maize, certain aspects of the analysis would be generalizable to time sequence images from other plant species. For example, the use of the Hungarian algorithm to match different components of the same plant between days is an interesting approach generally. Furthermore, a priori knowledge regarding plant development in a given species can be used to override the classification of components identified by the computer vision algorithms.
Dataset {#data}
=======
An open dataset from the University of Nebraska-Lincoln was used [@dataset]. This dataset, called UNL-CPPD-I, has images of 13 different maize plants (with different genotypes). Plants were imaged once per day for 27 days using the visible light camera of the UNL Lemnatec Scanalyzer 3D high-throughput phenotyping facility [@unl]. Images were taken from two orthogonal side views, at 0 degrees and 90 degrees, denoted *view-0 image* and *view-90 image*, respectively. The 0-degree orientation is not always fixed across days; thus the best view for segmenting leaves differs from day to day, even for the same plant.
Maize has multiple stages of development; vegetative, transitional, reproductive, and seed [@bonnett]. All images in the dataset are only from the vegetative stage. During this stage, the tip of the main stem is short, leaves are arranged in an alternate phyllotaxy (each leaf develops on the opposite side of the previous leaf, forming a left-right alternating pattern), and leaves arise at a certain distance from the top of the stem. A limited number of axillary buds can develop, but ears do not develop until further stages. Hence, at this stage, the topology is dominated by the alternating leaf pattern.
The dataset also contains ground-truth annotated images with the visible leaves marked. Note that if a leaf is not visible in a given image, then it is not annotated in the ground-truth. This is immediately evident because the number of annotated leaves from the two views can differ substantially. While this is advantageous from the perspective of identifying leaves on an individual image, it does hinder the evaluation of leaf identification procedures that try to identify leaves even if they are occluded, which is our desired goal.
The imaging started on October 10, 2015, 2 days after seed planting. The dataset contains 700 images. A detailed description about the imaging setup, dataset organization, and their genotypes is given in [@unl].
Methodology {#methods}
===========
This section discusses the methods and algorithm implementations. Each phase is described in its own subsection: image segmentation, view selection, plant skeletonization, a threshold-based pruning method, spur removal based on statistics from the literature on maize, and the use of the Hungarian matching algorithm to improve leaf counting. Certain thresholds calculated here are appropriate for indoor time-sequence images of maize, and would likely need to be adjusted for other species and setups. However, the process used to derive the thresholds can be applied elsewhere, along with the aforementioned generalizable elements.
Segmentation
------------
The first step is obtaining the plant area from the available images with image segmentation techniques. Background subtraction was used to extract the foreground, which, in this case, is the plant itself. Background subtraction involves removing the background of the image, which consists of the imaging chambers of the Lemnatec Scanalyzer 3D high-throughput plant phenotyping system. This has a fixed background that remains static over the period of interest for the image sequence [@unl] (Figure \[fig\_met\_1\]). Then, the Otsu thresholding algorithm [@otsu] was used on the grayscale image of the foreground image to obtain the segmented image. Figure \[fig\_met\_5\] shows an example of Plant\_001-9 at day 15 from view-90 (\[fig\_met\_2\]), its foreground after background subtraction (\[fig\_met\_3\]), and the resulting segmented plant image (\[fig\_met\_4\]).
Preliminary inspection of foreground image histograms showed that any threshold smaller than $0.27$ would label background pixels as foreground (Figure \[fig\_met\_7\]). However, for some images, the detected threshold was smaller than $0.27$ due to the light affecting the background. Therefore, the threshold used was the larger of $0.27$ and the value detected by the Otsu algorithm. At this stage, there were some images where these thresholds were capturing some pixels from the plant tub (Figure \[fig\_met\_10\]). Hence, another level of thresholding was performed by calculating the excess green ($2G-R-B$) of the foreground image. The pixels retained by the initial thresholding were then thresholded again on the excess green image with the threshold $t = \max(0.1, \min(t_o,0.5))$, where $t_o$ is the Otsu threshold and $t$ is the final threshold value. Figure \[fig\_met\_11\] shows how the second-level thresholding removed the tub pixels.
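The two-level thresholding logic described above can be sketched as follows; the function and variable names are ours, and only the threshold constants come from the text:

```python
import numpy as np

def first_level_threshold(t_otsu, floor=0.27):
    # First pass: use the larger of 0.27 and the Otsu-detected threshold.
    return max(floor, t_otsu)

def second_level_threshold(t_otsu, lo=0.1, hi=0.5):
    # Second pass on the excess-green image: t = max(0.1, min(t_otsu, 0.5)).
    return max(lo, min(t_otsu, hi))

def excess_green(rgb):
    # rgb: float array, shape (H, W, 3), channels in [0, 1]; ExG = 2G - R - B.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b
```

The clamping in `second_level_threshold` keeps an unreliable Otsu value from either passing tub pixels (too low) or discarding plant pixels (too high).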
View selection {#view}
--------------
For each plant and day, a view selection process was applied to select the view (either 0 degrees or 90 degrees) in which the leaves, stem, and buds are most clearly visible. It is best to analyze the plant captured from the viewpoint at which as many leaves as possible are visible. Hence, we compute the area of the convex hull of the binarized plant image for both views (a similar process was also used in [@unl]). The view with the larger convex hull was selected. For example, Figures \[fig\_met\_12\] and \[fig\_met\_13\] show the binary images of a maize plant on day 24 from both views. It is apparent that the area of the convex hull from side view 90 is larger.
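A minimal sketch of this convex-hull-based view selection, assuming binary foreground masks as input (the helper names are ours, not from the paper):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(mask):
    # Area of the convex hull of the foreground (True) pixels of a binary mask.
    pts = np.argwhere(mask)
    if len(pts) < 3:
        return 0.0
    return ConvexHull(pts).volume  # for 2D input, .volume is the polygon area

def select_view(mask0, mask90):
    # Return the label of the view whose foreground convex hull is larger.
    return "view-0" if hull_area(mask0) >= hull_area(mask90) else "view-90"
```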
Skeletonization {#skel}
---------------
Skeletons are typically computed by either morphological thinning, computing the medial axis, geometric methods, or the fast marching distance transform. Morphological thinning takes a region, and gradually reduces the boundaries of that region until they are only separated by one pixel. The results of morphological thinning are similar to those of the medial axis transformation, which finds medial points by determining the set of points that are local maxima in terms of distance from the edge of the shape. Although these methods are straightforward, they require intensive heuristics to ensure connectivity of the skeleton in the case of complex dynamic structures such as plants [@unl].
After extensive preliminary testing, it was observed that different skeletonization algorithms work better in specific ranges of days since emergence. Performance in this preliminary testing was assessed based on the leaf count, the spur count, and visually on how accurately the skeleton branching points and tips are positioned. Branching points are the starting points of leaves on the plant stem, and end-points are the leaf tips.
Two different skeletonization methods were applied on different time intervals from emergence. The first skeletonization method is the fast parallel thinning algorithm [@zhang]. This approach works by making successive passes of the image and removing pixels on object borders; this continues until no more pixels can be removed. The image is correlated with a mask that assigns each pixel a number in the range $0 \dots 255$ corresponding to each possible pattern of its 8 neighbouring pixels. A lookup table is then used to assign each pixel a value of 0, 1, 2, or 3, which are selectively removed during the iterations [@zhang]. This approach has the advantages of contour noise immunity and a good effect in thinning crossing lines [@chen]. Some of the earlier days’ images have branches where lines representing leaves cross the stem. From a 1-pixel wide skeleton, a branching point was determined as a pixel with 3 or more neighbours, and a leaf end-point was the pixel with one neighbour. In earlier days’ images, there are frequently overlapping lines in the skeleton, and the prediction needs to be able to properly classify the portions of the crossed lines [@saha]. In our testing, the fast parallel algorithm performs better at classifying these crossed lines. Hence, for skeletons from days 1 through 10 from emergence, this approach was applied.
However, this process causes numerous branching points in the skeleton near skeleton points that have more than three neighbours [@lam], which occurs often in later days. Most of the images at later days have occlusions and curvatures in some leaves. Thus, an algorithm that is better in terms of noise sensitivity, and that also preserves topological and geometric connectivity, would be better for these images. Preliminary testing showed that the 3D medial surface/axis thinning algorithm performed well at resolving leaf occlusion and leaf curvature. Therefore, images from day 11 onwards were skeletonized by the 3D medial surface/axis thinning algorithm [@lee]. This method uses an octree data structure to examine a $3\times3\times3$ neighbourhood of a pixel. The algorithm proceeds by iteratively sweeping, and removing pixels at each iteration until the image stops changing. Each iteration consists of two steps: first, a list of candidates for removal is assembled; then pixels from this list are rechecked sequentially, to better preserve the connectivity of the image [@lee]. The medial axis of an object is the set of all points that have more than one closest point on the object’s boundary. The method ultimately produces a 1-pixel-wide skeleton that preserves the connectedness of the original object.
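The branching-point and end-point rule stated earlier (a skeleton pixel with three or more neighbours is a branching point; one with a single neighbour is an end-point) can be sketched with a neighbour-count convolution. The function names are ours; note that with 8-connectivity, pixels diagonally adjacent to a junction may also be flagged and would need merging in practice:

```python
import numpy as np
from scipy.ndimage import convolve

# 8-neighbourhood kernel: counts skeleton pixels around each pixel.
K = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

def skeleton_points(skel):
    # skel: boolean 1-pixel-wide skeleton image.
    # 1 neighbour -> end-point (leaf tip); >= 3 neighbours -> branching point.
    nbrs = convolve(skel.astype(int), K, mode="constant")
    ends = np.argwhere(skel & (nbrs == 1))
    branches = np.argwhere(skel & (nbrs >= 3))
    return ends, branches
```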
Skeleton pruning {#DSE}
----------------
The process of eliminating spurs to overcome skeleton instability is known as pruning [@pruning]. A fixed-threshold-based pruning method could be used on all maize images in an attempt to remove skeleton spurs [@cai]; however, this resulted in many false negatives. Therefore, a pruning method called discrete skeleton evolution [@discrete] was applied to all of the plant skeletons. The fundamental idea of this process is to remove the skeleton end-branches that have the smallest relevance for shape reconstruction. It calculates the relevance of branches as their contribution to shape reconstruction by iteratively computing a weight for every edge between an end-point and a branching point, and any such edge with a weight less than a threshold is deleted. The weight is calculated with the formula $1 - (a_s - a_e)/a_s$, where $a_s$ is the current area of the skeleton and $a_e$ is the area of the edge (area is the number of pixels in the object); a threshold of $0.005$ was used, following [@discrete]. This is appropriate because a small weight $w_i$ indicates that the edge has a negligible influence on the skeleton reconstruction, and the skeleton can be reconstructed without this branch in nearly the same fashion as with it [@discrete].
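A schematic of the pruning loop, using the weight formula and the $0.005$ threshold quoted above; the data representation (branch pixel counts) is a simplification of ours, not the original implementation:

```python
def dse_prune(branch_areas, skeleton_area, threshold=0.005):
    # branch_areas: pixel counts of end-branches (edges between an end-point
    # and a branching point); skeleton_area: pixel count of the whole skeleton.
    # Weight from the text: w = 1 - (a_s - a_e) / a_s, i.e. the fraction of
    # the current skeleton contributed by the edge.
    branches = list(branch_areas)
    removed = []
    changed = True
    while changed:
        changed = False
        for a_e in sorted(branches):  # least relevant branch first
            w = 1.0 - (skeleton_area - a_e) / skeleton_area
            if w < threshold:
                branches.remove(a_e)
                skeleton_area -= a_e   # skeleton shrinks as branches are removed
                removed.append(a_e)
                changed = True
                break
    return branches, removed
```

Because the skeleton area is updated after each removal, the weights are re-evaluated iteratively, as in the discrete skeleton evolution procedure.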
Eliminating skeleton spurs with heuristics and statistics regarding maize development {#pruning}
-------------------------------------------------------------------------------------
By attempting to detect and remove spurs using the thresholding technique from Section \[DSE\], there is a risk that it will incorrectly identify some components as spurs and remove them. Therefore, the following heuristics are used to decide between a true leaf and a spur.
### Removing one pixel long spurs
A general threshold-based edge pruning was performed on the upper area of the plant. In the images, the root of the plants is always within $1700$ pixels of the bottom of the image. A $1$-pixel edge pruning was performed to remove any skeleton spurs that were $1$ pixel long and located more than $1700$ pixels above the bottom. This was done because upper leaves would have emerged later on and tended to be large (leaves produced early in development were more likely to be small). This threshold also helped remove spurs caused by leaf curvature. Figure \[fig\_met\_14\] shows such an example.
### Root area pruning with maize statistics {#step_2}
There was a large number of spurs created adjacent to the tub edge and soil, resulting from uneven segmentation near the root of the plant. Thus, the next few pruning steps were performed to resolve this issue. This was based on existing information regarding the collar (a spot on the stem from which leaves emerge): the third leaf collar of maize plants usually becomes visible approximately 10 to 14 days after emergence [@corn]. Hence, for each plant image up to day 10, we counted the branching points starting from the topmost (maize has an apical structure [@irish], meaning new leaves emerge only towards the top), and everything below the fourth branching point was removed. Even though the rule suggests there would be at most three branching points, four was chosen to be safe. Figure \[fig\_met\_16\] shows how this pruning step removed a number of spurs around the root of Figure \[fig\_met\_17\].
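The "keep the topmost branching points" rule can be sketched as follows (coordinate convention: row 0 is the top of the image; the function name is ours):

```python
def prune_below_fourth(branch_points, keep=4):
    # branch_points: (row, col) pixel coordinates of detected branching points.
    # Maize is apical, so new leaves emerge near the top: keep the `keep`
    # topmost branching points and drop everything lower (early-day images).
    ordered = sorted(branch_points, key=lambda p: p[0])  # smallest row = topmost
    return ordered[:keep]
```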
### Root area pruning by comparing last two consecutive branching point’s position {#step_3}
However, there were still some spurs near the root (Figure \[fig\_met\_17\]). After the above pruning steps, up to four branching points were still possible up until day 10. To remove putative spurs that remain: if the distance between the lowest two branching points is small (smaller than 10 pixels), then it is unlikely for both to be real leaves following alternate phyllotaxy. Hence, the lowest branching point was removed (Figure \[fig\_met\_18\]). This was also applied on plants up to day 10.
### Removing tub edge
Some images had spurs near the root even after applying the above pruning steps, for example, Figure \[fig\_met\_21\] (and on images after day 10). Hence, for all of the images, the $x$-distance and $y$-distance[^1] between the lowest branching point and the lowest end-point of the skeleton were calculated; the lowest end-point could be an end-point of a leaf, an end-point of an unwanted edge or spur, or possibly the root. If the $y$-distance is less than or equal to the $x$-distance, then it is possibly the root or a leaf end-point; otherwise it might be a spur, and was deleted. This is because the lowest end-point should be on the stem, and a larger $y$-distance would represent moving horizontally from the stem without branching, a likely sign that it was created by the soil or the tub edge. Figure \[fig\_met\_22\] shows how this pruning strategy removed the spur nearest the root.
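A sketch of this distance comparison, following the stated rationale that an end-point displaced farther horizontally than vertically from the lowest branching point is likely a spur (the coordinate convention follows the paper's footnote: $x$ is the vertical image axis, $y$ the horizontal one; the function name is ours):

```python
def is_root_or_leaf(branch_pt, end_pt):
    # branch_pt, end_pt: (x, y) image indices, x growing downward (height),
    # y growing across (length).
    # Rationale from the text: a lowest end-point that moved farther
    # horizontally than vertically from the lowest branching point was
    # likely created by the soil or the tub edge, i.e. a spur.
    dx = abs(branch_pt[0] - end_pt[0])  # vertical distance
    dy = abs(branch_pt[1] - end_pt[1])  # horizontal distance
    return dy <= dx
```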
### Remove root branches created due to non-smooth segmentation boundary
Lastly, while analyzing false leaves, it was noticed that uneven boundaries near the root of the segmented plant image caused false positives. Figure \[fig\_met\_23\] shows an example of such a scenario. The main challenge here was to decide between a bent dying leaf and a spur. To identify and remove spurs such as this, the boundary of the segmented plant is compared to the lowest end-point of the first detected leaf to assess whether it is in the middle of the boundary or not (Figure \[fig\_met\_25\]). However, there might be cases where the lowest edge is a bent leaf. To ensure that no true leaf is deleted, the angle between the potential leaf and the stem was calculated, and an angle threshold was used to make the decision. Figures \[fig\_met\_26\] and \[fig\_met\_27\] show how the angle between the potential leaf and the stem plays a role in deciding between a leaf and a spur. This pruning step was applied on plants at day 15 and later, where this scenario was more prominent.
### Removal of root spurs by comparing the lowest branching point with the lowest skeleton point
There were some skeletons that had a true leaf and a spur connected to the same branching point; this was common at the lowest branching point. In such a case, three segments meet at the branching point (Figure \[fig\_met\_28\]): we classify one as a continuation of the stem, one as a leaf, and one as a spur. A heuristic was used that looked at the differences in both the $x$- and $y$-coordinates between each end-point and the branching point. The stem has the smallest difference in $y$-coordinate. Between the two remaining segments, a decision is made based on the length of the segment, with short and low segments being preferred as the spur. In Figure \[fig\_met\_28\], after stem identification, the spur is associated with the lower of the two segments. In Figure \[fig\_met\_30\], the lowest end-point is that of the leaf, but the spur is chosen to be a short segment.
Growth properties, and leaf matching with the Hungarian matching algorithm {#matching}
--------------------------------------------------------------------------
Despite the fact that the pruning steps of Section \[pruning\] are largely helpful, there are some real leaves that are being improperly disregarded by them. Also, there were still some false positive skeleton spurs present in some plant skeleton images. This section describes techniques to remove some more skeleton spurs, and also detect some true leaves that were not detectable with the image processing techniques.
It is helpful to understand some statistics regarding maize development. It has been found that, until the tenth-leaf stage (meaning ten leaves with a collar are visible), the rate of leaf development is approximately one additional leaf every 2 to 3 days [@corn_stat]. Thus, for each plant, we compared the number of detected leaves on each image between days. It would be better not merely to compare the number of leaves, but to match leaves between days. However, the view selected is not the same for a plant across all days, so matching in these cases is not straightforward. Hence, as a first pass, the number of leaves between days was compared; then, if the numbers differed as described below and the views were the same, a matching algorithm was used.
Specifically, whenever there was a mismatch between the number of detected leaves and the range in the number of leaves expected on that day, the number of detected leaves of that day was compared with the number of detected leaves of the three previous, and three next days’ images. If an image of any specific day has missing leaves, or has spurs, it can possibly be identified through this comparison. For example, Figure \[fig\_met\_34\] shows a skeleton at day 10 with a missing leaf, and Figure \[fig\_met\_35\] shows the skeleton of the same plant at day 11 showing the missing leaf from the day 10 image. The original day 10 image has that missing leaf in it (Figure \[fig\_met\_39\]), but it went undetected via the skeletonization procedures. If we compare the leaf count between days 7 and 13, it is clear that day 10 has a missing leaf. Similarly, Figure \[fig\_met\_32\] shows another example of the plant skeleton at day 17 that has a spur in it. However, the number of detected leaves between days 14 and 20 is one fewer.
When a difference occurred, we applied the Hungarian matching algorithm to detected leaves from one day to the next day. This algorithm operates on undirected, weighted bipartite graphs. If the two bipartite vertex sets are $V_1$ and $V_2$ where $V_1$ is smaller in size than $V_2$, then a matching is any injective function $\theta$ from $V_1$ to $V_2$. The image of an element in $V_1$ is its match. Given any such matching $\theta$, the score of $\theta$ is the sum of the edge weights on edges connecting each vertex $v \in V_1$ with $\theta(v)$. Of all of the (exponentially many) matchings, the Hungarian algorithm can find the matching which produces the smallest possible score, in polynomial time.
In the context of this problem, each leaf detected in the day $i$ image was represented as a vertex in $V_1$, and each leaf in the day $i+1$ image was represented as a vertex in $V_2$. The edge weight between them was the sum of the Euclidean distance between the two leaf end-points and the Euclidean distance between the two leaf branching points. If two leaves from two days have a small weight, then they are likely the same leaf. In this way, we obtained the best matching (of leaves between days) with the Hungarian algorithm. Finally, a threshold was applied to the resulting matched leaves to only keep a match if the edge weight was small enough. After this, any leaf that is unmatched in some day versus adjacent days could either be a spur, or be a leaf that was occluded; which of those holds is resolved by considering the number of leaves in neighbouring days as described previously.
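The leaf-matching cost described here can be assembled as below; the `max_weight` cut-off value is hypothetical, as the text does not state the threshold used:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_leaves(day_a, day_b, max_weight=60.0):
    # day_a, day_b: lists of leaves, each an ((end_x, end_y), (branch_x, branch_y))
    # pair. Edge weight = distance between end-points + distance between
    # branching points; matches heavier than max_weight are discarded.
    cost = np.zeros((len(day_a), len(day_b)))
    for i, (ea, ba) in enumerate(day_a):
        for j, (eb, bb) in enumerate(day_b):
            cost[i, j] = (np.hypot(*np.subtract(ea, eb))
                          + np.hypot(*np.subtract(ba, bb)))
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_weight]
```

Leaves left unmatched by this procedure are the candidates for being spurs or occluded leaves, to be resolved with the neighbouring-day leaf counts.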
Evaluation methodologies {#eva}
========================
The verification of leaf detection was done visually, i.e. by manually checking the correspondence between predicted skeleton segments and the leaves in the ground-truth images. A skeleton segment is a leaf if it starts at a branching point and terminates at an end-point. The ground-truth number of leaves used to evaluate our method was calculated by taking the maximum number of leaves between the two views. As previously mentioned, leaves were only annotated on an image if they were visible, but a better method of evaluation would be to use the number of leaves even if they are not visible. This information is not available; however, there are at least as many leaves as the maximum of those annotated across the two views. We calculated the precision and recall of detected leaves to evaluate our proposed technique.
A comparative evaluation was done with the Deep Plant Phenomics (DPP) platform, which is an open-source programming interface for training models to perform regression and classification tasks [@jordan-ian]. A convolutional neural network (CNN) was created with the DPP framework and trained according to [@jordan-tuitorial]. Among general trials, the best results were obtained by a model that had two $5\times5$ convolution layers, four $3\times3$ convolution layers with stride 2, and an output layer. A $3\times3$, stride 2 max pooling layer was used after each convolution layer. The model parameters and training hyper-parameters were: batch size 10, image dimensions $256\times256$, learning rate 0.0001, number of epochs 500, 65% of the data for training, 15% for validation, and 20% for testing. The batch size denotes the number of examples considered in each iteration of training. The total number of images was 630, as images acquired prior to plant emergence were excluded. We employed data augmentations consisting of cropping, flipping, and brightness/contrast adjustment. This model only estimates the leaf count and does not detect leaf positions; however, its results can be compared to ours by interpreting our results as a leaf count. The model was evaluated by calculating the mean absolute loss and the absolute loss standard deviation, where the absolute loss is the absolute difference between the predicted and ground-truth counts.
Results
=======
Table \[tab\_1\] describes the total number of true leaves and false leaves detected in the different phases of the proposed technique when they were executed in order. Note that there was a significant reduction in false leaves (from 117 to 23) after employing the maize plant growth knowledge and statistics. When the time series leaf count comparison and the Hungarian algorithm strategies were then applied, the number of true leaves increased, and 9 fewer false leaves were detected. However, these processes added an additional two false leaves, making the final detection 1674 true leaves and 16 false leaves. Across all procedures, the recall and precision of our proposed method were $0.90$ and $0.99$, respectively (Table \[tab\_2\]). While we used the maximum of the leaves across the two views as the ground-truth (a total of $1843$ true leaves), if we instead used the leaves visible in the views selected (a total of $1818$ true leaves), the recall and precision would be $0.92$ and $0.99$, respectively.
We also calculated the mean absolute loss and the absolute loss standard deviation to compare our method with existing deep learning techniques. The best DPP leaf counter model after a few trials resulted in a mean absolute loss of $1.9$ and an absolute loss standard deviation of $1.5$. In comparison, our method had a mean absolute loss of $0.62$ and an absolute loss standard deviation of $0.76$ without the time series leaf count comparison and Hungarian matching algorithm, and of $0.54$ and $0.68$ with them. Therefore, our method achieved better results.
Discussion and Conclusions {#conclusion}
==========================
The maize dataset was released in [@unl], where the authors also performed leaf counting as a part of their component-based phenotyping studies. The view selection and segmentation methods of [@unl] were similar to ours. They evaluated their method by calculating the *average plant-level accuracy*, defining plant-level accuracy as the number of detected true leaves minus the number of false leaves, divided by the number of leaves present in the selected plant image. Their average plant-level accuracy of leaf detection with this dataset was $92\%$ [@unl], where the average plant-level accuracy is the average of the plant-level accuracies of the 13 maize plants. However, their evaluation metric was calculated with the number of leaves present in the selected plant image, and the results of their view selection were not available. Thus, it is not possible to directly compare our results with theirs (as our views were not identical to theirs).
The calculated mean absolute loss and absolute loss standard deviation of our technique and of the Deep Plant Phenomics (DPP) leaf counter model indicate that the novel technique combining image processing and knowledge regarding maize development improved leaf counting. Moreover, this proposed approach allows us to predict the positions of the leaves, whereas the deep learning leaf counting models only output the predicted total number of leaves. The accuracy of component-based plant phenotyping depends highly on the obtained plant skeletons, and determining ideal image processing techniques to recover maize topology is challenging. The proposed method therefore aims to reduce the constraints on determining the ideal image processing algorithms, and attempts to improve the results obtained from computer vision with plant-specific growth statistics and knowledge. Moreover, the Hungarian algorithm adds information by matching topologies from one day to others, which refines leaf identification. This work contributes towards component-based plant phenotyping studies of maize. Altogether, our method achieves a recall of $90.8\%$ and a precision of $99.0\%$, indicating that our technique can play an important role in component-based plant phenotyping studies not only of maize but also of similarly structured plants.
Acknowledgements
================
This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund.
[^1]: In images, the upper leftmost pixel is $(0,0)$; the $x$-axis runs vertically (height), and the $y$-axis runs horizontally (length).
---
abstract: 'We report on the status of the calculation of deep-inelastic structure functions at three loops in perturbative QCD. The method employed allows us to calculate the Mellin moments of structure functions analytically as a general function of $N$. As an illustration, we present the leading fermionic contributions to the non-singlet anomalous dimension of $F_2$ at three loops and, as a new result, to the non-singlet coefficient function of $F_2$ at three loops.'
address: |
$^a$Institut f[ü]{}r Theoretische Teilchenphysik, Universit[ä]{}t Karlsruhe\
76128 Karlsruhe, Germany\
$^b$NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam, The Netherlands\
author:
- 'S. Moch$^a$ and J.A.M. Vermaseren$^b$'
title: |
TOWARDS DEEP-INELASTIC STRUCTURE\
FUNCTIONS AT THREE LOOPS
---
Introduction
============
Today, structure functions in inclusive deep-inelastic scattering are extremely well measured quantities. As a consequence, they offer the possibility of very precise determinations of the strong coupling $\alpha_s$ and the parton distribution functions. The high statistical accuracy of present and upcoming experimental measurements demands analyses at next-to-next-to-leading order (NNLO) of perturbative QCD for the structure functions $F_2, F_3$ and $F_L$.
However, the complete NNLO corrections are not fully available yet. The two-loop coefficient functions of $F_2,F_3$ and $F_L$ have been calculated [@vanNeerven:1991nn; @Moch:1999eb]. For the three-loop anomalous dimensions only a finite number of fixed Mellin moments [@Larin:1997wd] are presently available. In addition, some information about leading fermionic contributions [@Gracey:1994nn; @Bennett:1997ch] and the small-$x$ limit [@Catani:1994sq] exists.
In the following, we briefly report on the status of the calculation of the coefficient functions and the anomalous dimensions to three loops in perturbative QCD. Furthermore, we present results for the leading fermionic contributions to the non-singlet structure function $F_2$.
Method
======
We employ the optical theorem and the operator product expansion (OPE) to calculate the deep-inelastic structure functions in Mellin space analytically [@Gonzalez-Arroyo:1979df; @Kazakov:1988jk] as general functions of $N$. For the $N$-th Mellin moment of $F_{2}$ we can write $$\begin{aligned}
\label{eq:F2mellin}
\displaystyle
F_{2}^N(Q^2)\,=\,
\int\limits_0^1 dx\, x^{N-2} F_{2}(x,Q^2) \,=\,
\sum\limits_{j=\alpha,{\rm{q, g}}}
C_{2,j}^{N}\left(\frac{Q^2}{\mu^2},\alpha_s\right)
A_{{\rm{P}},N}^j\left(\mu^2\right)\, ,\end{aligned}$$ where $C_{2,j}^{N}$ denote the coefficient functions and $A_{{\rm{P}},N}^j$ the spin averaged hadronic matrix elements of singlet operators $O^{\rm q}$, $O^{\rm g}$ and non-singlet operators $O^{\alpha}$, $\alpha = 1,2,\dots,(n_f^2-1),$ of leading twist. Both the coefficient functions and the renormalized operator matrix elements in eq.(\[eq:F2mellin\]) satisfy renormalization group equations governed by the same anomalous dimensions $\gamma_{jk}$. The anomalous dimensions determine the scale evolution of deep-inelastic structure functions.
The calculation of the coefficient functions $C_{2,j}^{N}$ and anomalous dimensions $\gamma_{jk}$ at a given order in perturbation theory amounts to the determination of the $N$-th moment of all contributing Feynman diagrams with external partons of momentum $p$ with $p^2 = 0$ and photons of momentum $q$ with $q^2 = -Q^2$. To achieve this task, we apply the following strategy [@Moch:1999eb; @Moch:2001fr]. We set up a hierarchy among all diagrams depending on the number of $p$-dependent propagators. We define basic building blocks (BBB) as diagrams in which the parton momentum $p$ flows only through a single line in the diagram, while composite building blocks (CBB) denote all diagrams with more than one $p$-dependent propagator.
Then, with the help of integration-by-parts [@'tHooft:1972fi] and scaling identities [@Moch:1999eb] we determine reduction schemes that map the CBB’s of a given topology to the BBB’s of the same topology or to the CBB’s of a simpler topology. Subsequently, we use reduction identities that express the BBB’s of a given topology in terms of simpler topologies. Working in Mellin space, the reduction equations often involve explicitly the parameter $N$ of the Mellin moment. Sometimes, one encounters difference equations in $N$ for the $N$-th moment $F(N)$ of a diagram, $$\begin{aligned}
a_0(N) F(N) + a_1(N) F(N-1) + \\
\dots + a_n(N) F(N-n) + G(N) &=& 0 \, , \nonumber
\label{diffeq}\end{aligned}$$ where $G(N)$ denotes the $N$-th Mellin moment of simpler diagrams. First-order difference equations can be solved at the cost of one sum over $\Gamma$-functions in dimensional regularization; we use $D=4-2\epsilon$. The $\Gamma$-functions can be expanded in $\epsilon$, and the sum can be evaluated to any order in $\epsilon$ in terms of harmonic sums [@Vermaseren:1998uu; @Blumlein:1998if]. Higher-order difference equations can be solved constructively. On the mathematical side, the approach to calculating Mellin moments of structure functions relies on particular mathematical concepts [@Vermaseren:2000we], such as harmonic sums [@Vermaseren:1998uu; @Blumlein:1998if] and our ability to set up and solve the difference equations as nested sums in $N$.
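The first-order case can be made concrete. Writing $F(N) = r(N)\,F(N-1) + s(N)$ with $r = -a_1/a_0$ and $s = -G/a_0$, iterating down to the starting value yields one explicit sum, the discrete analogue of the "one sum over $\Gamma$-functions" mentioned above. A minimal sketch with hypothetical rational coefficients (not taken from any actual diagram):

```python
from fractions import Fraction as Fr

def solve_by_recursion(a0, a1, G, F1, Nmax):
    """Solve a0(N) F(N) + a1(N) F(N-1) + G(N) = 0 for N = 2..Nmax, given F(1)."""
    F = {1: F1}
    for N in range(2, Nmax + 1):
        F[N] = -(a1(N) * F[N - 1] + G(N)) / a0(N)
    return F

def prod(f, lo, hi):
    p = Fr(1)
    for k in range(lo, hi + 1):
        p *= f(k)
    return p

def solve_by_sum(a0, a1, G, F1, N):
    """Closed form: the starting value carried by a product, plus one explicit sum."""
    r = lambda k: -a1(k) / a0(k)     # homogeneous ratio
    s = lambda k: -G(k) / a0(k)      # inhomogeneous term
    return F1 * prod(r, 2, N) + sum(s(j) * prod(r, j + 1, N) for j in range(2, N + 1))

# illustrative coefficients (hypothetical, not from any Feynman diagram)
a0 = lambda N: Fr(N)
a1 = lambda N: Fr(-1)
G  = lambda N: Fr(1, N)

F = solve_by_recursion(a0, a1, G, Fr(1), 10)
for N in range(2, 11):
    assert solve_by_sum(a0, a1, G, Fr(1), N) == F[N]
```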
Leading fermionic contributions
===============================
To illustrate the method, we discuss the leading fermionic contributions to the non-singlet structure function. At three loops, they are proportional to $n_f^2$, with $n_f$ being the number of massless fermions. These contributions form a gauge-invariant subset, but do not yet involve any genuine three-loop topologies. Therefore, in the sense of the reduction strategy sketched above, the $n_f^2$-terms are easier to calculate.
The result for the $n_f^2$-contribution to the non-singlet anomalous dimension $\gamma_{\rm qq}^{(2),\rm ns}$ at three loops is known from the work of Gracey [@Gracey:1994nn]. It is given by $$\begin{aligned}
\label{eq:gqq2nf}
\gamma_{\rm qq}^{(2),\rm ns} &=&
C_F n_f^2 \Biggl(
{17 \over 9}
+ {32 \over 9} {1 \over N\!+\!1}
- {88 \over 27} {1 \over (N\!+\!1)^2}
+ {8 \over 9} {1 \over (N\!+\!1)^3}
- {32 \over 9} {1 \over N}
\nonumber
\\
&&
+ {88 \over 27} {1 \over N^2}
- {8 \over 9} {1 \over N^3}
- {16 \over 27} S_{1}(N)
- {80 \over 27} S_{2}(N)
+ {16 \over 9} S_{3}(N)
\Biggr)
\, ,\end{aligned}$$ with $C_F=(N_c^2-1)/(2N_c)$, where $N_c$ is the number of colours, so that $C_F = 4/3$ for QCD.
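To make the harmonic-sum notation concrete, the nested sums $S_{m_1,m_2,\dots}(N)$ can be evaluated in exact rational arithmetic, and the expression above can then be checked at integer $N$. The sketch below is our own transcription of eq. (\[eq:gqq2nf\]) (per unit $n_f^2$, with $C_F = 4/3$), for illustration only:

```python
from fractions import Fraction as Fr

def S(N, *m):
    """Nested harmonic sum S_{m1,m2,...}(N) for positive indices."""
    if not m:
        return Fr(1)
    return sum(Fr(1, i ** m[0]) * S(i, *m[1:]) for i in range(1, N + 1))

def gamma_qq_nf2(N, CF=Fr(4, 3)):
    """nf^2 part of the 3-loop non-singlet anomalous dimension (our transcription)."""
    return CF * (Fr(17, 9)
                 + Fr(32, 9) / (N + 1) - Fr(88, 27) / (N + 1) ** 2 + Fr(8, 9) / (N + 1) ** 3
                 - Fr(32, 9) / N + Fr(88, 27) / N ** 2 - Fr(8, 9) / N ** 3
                 - Fr(16, 27) * S(N, 1) - Fr(80, 27) * S(N, 2) + Fr(16, 9) * S(N, 3))

assert S(3, 1) == Fr(11, 6)        # S_1(3) = 1 + 1/2 + 1/3
assert S(2, 1, 1) == Fr(7, 4)      # S_{1,1}(2) = S_1(1) + (1/2) S_1(2)
assert gamma_qq_nf2(2) == Fr(-896, 729)
```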
As a new result, we give here the $n_f^2$-contribution to the non-singlet coefficient function $c_{2,\rm qq}^{(3),\rm ns}$ at three loops for the flavour class where both photons couple to the external quark. Strictly speaking, the three-loop coefficient functions contribute in a perturbative expansion only at next-to-next-to-next-to-leading order (NNNLO). However, the result illustrates nicely that our method will not only provide the anomalous dimensions, which are proportional to the single pole in $\epsilon$ in dimensional regularization, but also the coefficient functions, which are determined by the finite terms, since our approach is (at least in principle) not limited to a given order in $\epsilon$.
In terms of harmonic sums up to weight four, our result for the $n_f^2$-contribution to $c_{2,\rm qq}^{(3),\rm ns}$ reads $$\begin{aligned}
\label{eq:c2qq3nf}
\lefteqn{
c_{2,\rm qq}^{(3),\rm ns} \, =}
\nonumber
\\
&&
C_F n_f^2 \Biggl(
- {9517 \over 486}
- {8 \over 9} \zeta_3
+ {36748 \over 729} {1 \over N\!+\!1}
+ {16 \over 27} {\zeta_3 \over N\!+\!1}
- {4384 \over 81} {1 \over (N\!+\!1)^2}
\nonumber
\\
&&
+ {2360 \over 81} {1 \over (N\!+\!1)^3}
- {184 \over 27} {1 \over (N\!+\!1)^4}
+ {16 \over 3} {S_{1}(N\!+\!1) \over (N\!+\!1)^3}
- {544 \over 27} {S_{1}(N\!+\!1) \over (N\!+\!1)^2}
\nonumber
\\
&&
- {32 \over 9} {S_{1,1}(N\!+\!1) \over (N\!+\!1)^2}
+ {32 \over 9} {S_{2}(N\!+\!1) \over (N\!+\!1)^2}
+ {2240 \over 81} {S_{1}(N\!+\!1) \over N\!+\!1}
+ {272 \over 27} {S_{1,1}(N\!+\!1) \over N\!+\!1}
\nonumber
\\
&&
+ {16 \over 9} {S_{1,1,1}(N\!+\!1) \over N\!+\!1}
- {16 \over 9} {S_{1,2}(N\!+\!1) \over N\!+\!1}
- {272 \over 27} {S_{2}(N\!+\!1) \over N\!+\!1}
- {16 \over 9} {S_{2,1}(N\!+\!1) \over N\!+\!1}
\nonumber
\\
&&
+ {16 \over 9} {S_{3}(N\!+\!1) \over N\!+\!1}
- {11170 \over 729} {1 \over N}
- {16 \over 27} {\zeta_3 \over N}
+ {1204 \over 81} {1 \over N^2}
- {992 \over 81} {1 \over N^3}
+ {184 \over 27} {1 \over N^4}
\nonumber
\\
&&
- {16 \over 3} {S_{1}(N) \over N^3}
+ {232 \over 27} {S_{1}(N) \over N^2}
+ {32 \over 9} {S_{1,1}(N) \over N^2}
- {32 \over 9} {S_{2}(N) \over N^2}
- {644 \over 81} {S_{1}(N) \over N}
\nonumber
\\
&&
- {104 \over 27} {S_{1,1}(N) \over N}
- {16 \over 9} {S_{1,1,1}(N) \over N}
+ {16 \over 9} {S_{1,2}(N) \over N}
+ {104 \over 27} {S_{2}(N) \over N}
\nonumber
\\
&&
+ {16 \over 9} {S_{2,1}(N) \over N}
- {16 \over 9} {S_{3}(N) \over N}
+ {8714 \over 729} S_{1}(N)
+ {32 \over 27} S_{1}(N) \zeta_3
+ {940 \over 81} S_{1,1}(N)
\nonumber
\\
&&
+ {232 \over 27} S_{1,1,1}(N)
+ {32 \over 9} S_{1,1,1,1}(N)
- {32 \over 9} S_{1,1,2}(N)
- {232 \over 27} S_{1,2}(N)
\nonumber
\\
&&
- {32 \over 9} S_{1,2,1}(N)
+ {32 \over 9} S_{1,3}(N)
- {860 \over 27} S_{2}(N)
- {536 \over 27} S_{2,1}(N)
- {64 \over 9} S_{2,1,1}(N)
\nonumber
\\
&&
+ {64 \over 9} S_{2,2}(N)
+ {2440 \over 81} S_{3}(N)
+ {32 \over 3} S_{3,1}(N)
- {368 \over 27} S_{4}(N)
\Biggr)\, .\end{aligned}$$ This result agrees with that of the fixed-Mellin-moment calculation [@Larin:1997wd] for $N=2,\dots,14$. It may be used to improve approximations [@vanNeerven:1999ca] to the full functional form of $c_{2,\rm qq}^{(3),\rm ns}$, although the $n_f^2$-terms are numerically not the dominant contribution.
Conclusions
===========
The present approach to calculating the Mellin moments of structure functions, based on the OPE, allows for the calculation of the complete coefficient functions and anomalous dimensions at three loops.
As a next step, one can consider the subleading fermionic contributions at three loops proportional to $n_f$. The determination of these terms already relies on major parts of the complete reduction scheme as it requires the calculation of several genuine three-loop topologies of the Benz type as well as the calculation of two-loop topologies with a self-energy insertion. The results for the $n_f$-contribution to the non-singlet anomalous dimension will be presented elsewhere.
[10]{}
W. L. van Neerven and E. B. Zijlstra, Phys. Lett. [**B272**]{}, 127 (1991); E. B. Zijlstra and W. L. van Neerven, Phys. Lett. [**B273**]{}, 476 (1991); ibid. [**B297**]{}, 377 (1992); Nucl. Phys. [**B383**]{}, 525 (1992). S. Moch and J. A. M. Vermaseren, Nucl. Phys. [**B573**]{}, 853 (2000). S. A. Larin, P. Nogueira, T. van Ritbergen, and J. A. M. Vermaseren, Nucl. Phys. [**B492**]{}, 338 (1997);\
A. Retey and J. A. M. Vermaseren, Nucl. Phys. [**B604**]{}, 281 (2001). J. A. Gracey, Phys. Lett. [**B322**]{}, 141 (1994). J. F. Bennett and J. A. Gracey, Nucl. Phys. [**B517**]{}, 241 (1998). S. Catani and F. Hautmann, Nucl. Phys. [**B427**]{}, 475 (1994). A. Gonzalez-Arroyo, C. Lopez, and F. J. Yndurain, Nucl. Phys. [**B153**]{}, 161 (1979). D. I. Kazakov and A. V. Kotikov, Nucl. Phys. [**B307**]{}, 721 (1988); ibid. [**B345**]{}, 299 (1990), Erratum. S. Moch, J. A. M. Vermaseren, and M. Zhou, (2001). G. ’t Hooft and M. Veltman, Nucl. Phys. [**B44**]{}, 189 (1972);\
K. G. Chetyrkin and F. V. Tkachev, Nucl. Phys. [**B192**]{}, 159 (1981). J. A. M. Vermaseren, Int. J. Mod. Phys. [**A14**]{}, 2037 (1999). J. Blümlein and S. Kurth, Phys. Rev. [**D60**]{}, 014018 (1999). J. A. M. Vermaseren and S. Moch, Nucl. Phys. Proc. Suppl. [**89**]{}, 131 (2000). W. L. van Neerven and A. Vogt, Nucl. Phys. [**B568**]{}, 263 (2000); Phys. Lett. [**B490**]{}, 111 (2000); Nucl. Phys. [**B603**]{}, 42 (2001).
---
abstract: 'Setting up an empirical model of optical sensing to exploit the circular Bragg phenomenon displayed by chiral sculptured thin films (CSTFs), we considered a CSTF with and without a central twist defect of $\pi/2$ radians. The circular Bragg phenomenon of the defect-free CSTF, and the spectral hole in the co-polarized reflectance spectrum of the CSTF with the twist defect, were both found to be acutely sensitive to the refractive index of a fluid which infiltrates the void regions of the CSTF. These findings bode well for the deployment of CSTFs as optical sensors.'
---
[**[Empirical model of optical sensing via spectral shift of circular Bragg phenomenon]{}**]{}
[**[Tom G. Mackay${}^{a,b,}$[^1] and Akhlesh Lakhtakia${}^{b,c,}$[^2]]{}**]{}\
${}^{a}$School of Mathematics and Maxwell Institute for Mathematical Sciences,\
University of Edinburgh, Edinburgh EH9 3JZ, UK\
${}^{b}$NanoMM—Nanoengineered Metamaterials Group, Department of Engineering Science and Mechanics,\
Pennsylvania State University, University Park, PA 16802-6812, USA\
${}^{c}$Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India
Introduction
============
ł[intro]{}
By means of physical vapor deposition, an array of parallel helical nanowires—known as a chiral sculptured thin film (CSTF)—may be grown on a substrate [@LMBR; @HW]. At optical wavelengths, a CSTF may be regarded as a unidirectionally nonhomogeneous continuum. CSTFs display orthorhombic symmetry locally whereas they are structurally chiral from a global perspective [@STF_Book Chap. 9]. There are two key attributes which render CSTFs attractive for a host of applications. Firstly, CSTFs exhibit the circular Bragg phenomenon (just as cholesteric liquid crystals do [@Gennes]). Thus, a structurally right/left-handed CSTF of sufficient thickness almost completely reflects right/left-circularly polarized (RCP/LCP) light which is normally incident, but normally incident LCP/RCP light is reflected very little, when the free-space wavelength lies within the Bragg regime. This property has led to the use of CSTFs as circular polarization [@Wu], spectral-hole [@Hodgkinson00], and Solc [@Ertekin] filters, among other applications [@Polo; @LDHX]. Secondly, CSTFs are porous, and their multiscale porosity can be tailored to allow only species of certain shapes and sizes to infiltrate their void regions [@Messier]. This engineered porosity, combined with the circular Bragg phenomenon, makes CSTFs attractive as platforms for light-emitting devices with precise control over the circular polarization state and the emission wavelengths [@XLLCH; @Zhang], and optical biosensors [@L01; @ML_CSTF].
Further possibilities for a CSTF emerge if a structural twist defect is introduced. For example, if the upper half of a CSTF is twisted by $\pi/2$ radians about the axis of nonhomogeneity relative to the lower half, then the co-polarized reflectance spectrum contains a spectral hole in the middle of the Bragg regime [@Yang_PRE; @LM99]. This phenomenon may be exploited for narrow-bandpass filtering [@Hod00], as well as for optical sensing of fluids which infiltrate the void regions of the CSTF [@LMSWH01; @Horn].
We devised an empirical model yielding the sensitivity of a CSTF’s optical response—as a circular Bragg filter—to the refractive index of a fluid which infiltrates the CSTF’s void regions, with a view to optical-sensing applications. We considered both a defect-free CSTF and a CSTF with a central $\pi/2$-twist defect. Our model is not limited to normal incidence but also encompasses oblique incidence, and we present computed results for slightly off-normal incidence, a realistic situation for sensing applications. Furthermore, because the material that is deposited as the CSTF is not precisely the same as the bulk material that is evaporated, we use an inverse homogenization procedure [@ML_inverse_homog] on related but uninfiltrated columnar thin films (CTFs) [@HWH_AO] to predict the spectral shifts due to infiltration of CSTFs.
An $\exp(-i\omega t)$ time-dependence is implicit, with $\omega$ denoting the angular frequency and $i = \sqrt{-1}$. The free-space wavenumber, the free-space wavelength, and the intrinsic impedance of free space are denoted by $\ko=\omega\sqrt{\epso\muo}$, $\lambdao=2\pi/\ko$, and $\etao=\sqrt{\muo/\epso}$, respectively, with $\muo$ and $\epso$ being the permeability and permittivity of free space. Vectors are in boldface, dyadics are underlined twice, column vectors are in boldface and enclosed within square brackets, and matrices are underlined twice and square-bracketed. The Cartesian unit vectors are identified as $\ux$, $\uy$, and $\uz$.
Empirical Model
===============
Uninfiltrated defect-free CSTF
------------------------------
Let us begin the explication of our empirical model with a defect-free CSTF with vacuous void regions; i.e., an uninfiltrated CSTF. The $z$ direction is taken to be the direction of nonhomogeneity. The CSTF is supposed to have been grown on a planar substrate through the deposition of an evaporated bulk material [@STF_Book]. The substrate, which lies parallel to the plane $z=0$, is supposed to have been rotated about the $z$ axis at a uniform angular speed throughout the deposition process. The rise angle of each resulting helical nanowire, relative to the $xy$ plane, is denoted by $\chi$. The refractive index of the deposited material — assumed to be an isotropic dielectric material — is written as $n_s$, which can be different from the refractive index of the bulk material that was evaporated [@MTR1976; @BMYVM; @WRL03].
Each nanowire of a CSTF can be modeled as a string of highly elongated ellipsoidal inclusions, wound end-to-end around the $z$ axis to create a helix [@Sherwin; @Lakh_Opt]. The surface of each ellipsoidal inclusion is characterized by the shape dyadic $$\un \, \un + \gamma_\tau \, \ut \, \ut + \gamma_b \, \ub \,
\ub ,$$ wherein the normal, tangential and binormal basis vectors are given as $$\left. \begin{array}{l}
\un = - \ux \, \sin \chi + \uz \, \cos \chi \vspace{4pt} \\
\ut = \ux \, \cos \chi + \uz \, \sin \chi \vspace{4pt} \\
\ub = - \uy
\end{array}
\right\}.$$ By choosing the shape parameters $\gamma_{b} \gtrsim 1$ and $\gamma_\tau \gg 1$, an aciculate shape is imposed on the inclusions. For the numerical results presented in §\[Numerica\], we fixed $\gamma_\tau =
15$ while noting that increasing $\gamma_\tau$ beyond 10 does not give rise to significant effects for slender inclusions [@Lakh_Opt]. The helical nanowires occupy only a proportion $f
\in \le 0, 1 \ri $ of the total CSTF volume; the volume fraction of the CSTF not occupied by nanowires is $1 - f$.
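As a sanity check on the basis above, $\un$, $\ut$, and $\ub$ form an orthonormal triad for any rise angle $\chi$, with $\un \times \ut = -\ub$ under this sign convention. A short sketch:

```python
import math

def basis(chi):
    """Normal, tangential, binormal unit vectors for rise angle chi (radians)."""
    n = (-math.sin(chi), 0.0, math.cos(chi))
    t = ( math.cos(chi), 0.0, math.sin(chi))
    b = ( 0.0, -1.0, 0.0)
    return n, t, b

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
cross = lambda u, v: (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

n, t, b = basis(math.radians(37.0))
# mutually orthogonal unit vectors ...
assert all(abs(dot(u, u) - 1.0) < 1e-12 for u in (n, t, b))
assert all(abs(dot(u, v)) < 1e-12 for u, v in ((n, t), (n, b), (t, b)))
# ... with n x t = -b for this choice of signs
assert abs(dot(cross(n, t), b) + 1.0) < 1e-12
```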
At length scales much greater than the nanoscale, the CSTF’s relative permittivity dyadic may be expressed as $$\=\eps_{\,1} = {\=S}_{\,z} \le h \frac{\pi z}{\Omega} \ri \.
{\=S}_{\,y} \le \chi \ri \. \=\eps^{ref}_{\,1} \.
{\=S}^T_{\,y} \le \chi \ri \. {\=S}^T_{\,z} \le h \frac{\pi
z}{\Omega} \ri, \l{eps1_dyadic}$$ where $2 \Omega$ is the structural period and the rotation dyadics $$\left.
\begin{array}{l}
{\=S}_{\,y} \le \chi \ri = \#u_y\, \#u_y + \le \#u_x\, \#u_x +
\#u_z\, \#u_z \ri \cos \chi + \le \#u_z\, \#u_x - \#u_x\, \#u_z \ri
\sin \chi \vspace{4pt} \\
{\=S}_{\,z} \le \sigma \ri =
\#u_z\, \#u_z +
\le \#u_x\, \#u_x + \#u_y\, \#u_y \ri \cos \sigma + \le \#u_y\,
\#u_x - \#u_x\, \#u_y \ri \sin \sigma
\end{array}
\right\}.$$ The handedness parameter $h = + 1$ for a structurally right-handed CSTF, and $h = - 1$ for a structurally left-handed CSTF. The reference relative permittivity dyadic $\=\eps^{ref}_{\,1}$ has the orthorhombic form $$\=\eps^{ref}_{\,1} = \eps_{a1} \,\un\,\un +\eps_{b1}\,\ut\,\ut \,
+\,\eps_{c1}\,\ub\,\ub . \l{eps1_ref_dyadic}$$
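Two properties of eq. (\[eps1\_dyadic\]) are easy to verify numerically: the rotations are similarity transforms, so $\=\eps_{\,1}(z)$ retains its eigenvalues (and hence its trace $\eps_{a1}+\eps_{b1}+\eps_{c1}$) at every $z$, and it is periodic in $z$ with the structural period $2\Omega$. A sketch with placeholder permittivity values (not measured data), using a diagonal stand-in for the reference dyadic, which leaves both properties unaffected:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def Sy(chi):
    # S_y(chi) in the (x, y, z) basis, per the rotation dyadic above
    c, s = math.cos(chi), math.sin(chi)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def Sz(sig):
    c, s = math.cos(sig), math.sin(sig)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def eps1(z, Omega, chi, h, ea, eb, ec):
    """Relative permittivity dyadic at height z; diagonal stand-in for the reference dyadic."""
    ref = [[ea, 0.0, 0.0], [0.0, eb, 0.0], [0.0, 0.0, ec]]
    R = matmul(Sz(h * math.pi * z / Omega), Sy(chi))
    return matmul(matmul(R, ref), transpose(R))

Omega, chi, h = 185.0, math.radians(37.0), +1
ea, eb, ec = 2.14, 3.67, 2.83          # placeholder values, not measured data
E1 = eps1(42.0, Omega, chi, h, ea, eb, ec)
E2 = eps1(42.0 + 2.0 * Omega, Omega, chi, h, ea, eb, ec)   # one structural period later
assert all(abs(E1[i][j] - E2[i][j]) < 1e-9 for i in range(3) for j in range(3))
assert abs(sum(E1[i][i] for i in range(3)) - (ea + eb + ec)) < 1e-9
```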
The nanowire rise angle $\chi$ can be measured from scanning-electron-microscope imagery. In principle, the relative permittivity parameters $\lec \eps_{a1}, \eps_{b1}, \eps_{c1} \ric$ of an uninfiltrated CSTF are also measurable. However, in view of the paucity of suitable experimental data on CSTFs, our empirical model relies on the measured experimental data on the related columnar thin films (CTFs). In order to deposit both CSTFs and CTFs, the vapor flux is directed at a fixed angle $\chi_v$ with respect to the substrate plane. The different morphologies of CSTFs and CTFs are due to the rotation of the substrate for the former but not for the latter. The parameters $\lec \eps_{a1}, \eps_{b1}, \eps_{c1}, \chi
\ric$ are functions of $\chi_v$.
The nanoscale model parameters $\lec n_s, f, \gamma_b \ric$ are not readily determined by experimental means. However, the process of inverse homogenization can be employed to determine these parameters from a knowledge of $\lec \eps_{a1}, \eps_{b1}, \eps_{c1} \ric$, as was done for titanium-oxide CTFs in a predecessor paper [@ML_inverse_homog].
Infiltrated defect-free CSTF
----------------------------
With optical-sensing applications in mind, next we consider the effect of filling the void regions of a defect-free CSTF with a fluid of refractive index $n_\ell$. This brings about a change in the reference relative permittivity dyadic. The infiltrated CSTF is characterized by the relative permittivity dyadic $\=\eps_{\,2}$, which has the same eigenvectors as $\=\eps_{\,1}$ but different eigenvalues. Thus, the infiltrated CSTF is characterized by eqs. (\[eps1\_dyadic\]) and (\[eps1\_ref\_dyadic\]), but with $\=\eps_{\,2}$ in lieu of $\=\eps_{\,1}$, $\=\eps^{ref}_{\,2}$ in lieu of $\=\eps^{ref}_{\,1}$ and $\lec \eps_{a2}, \eps_{b2}, \eps_{c2} \ric$ in lieu of $\lec \eps_{a1}, \eps_{b1}, \eps_{c1} \ric$. The nanowire rise angle $\chi$ remains unchanged.
In our model, the Bruggeman homogenization formalism is applied in its usual forward sense [@EAB] to determine $\lec \eps_{a2}, \eps_{b2}, \eps_{c2} \ric$ from knowledge of the nanoscale model parameters $\lec n_s, f, \gamma_b \ric$ together with $\lec n_\ell,\gamma_\tau\ric$, as described elsewhere [@Lakh_Opt].
Boundary-value problem
----------------------
ł[bvp]{}
Let us now suppose that a CSTF occupies the region $0 \leq z \leq
L$, with the half-spaces $z< 0$ and $z > L$ being vacuous. An arbitrarily polarized plane wave is incident on the CSTF from the half-space $z < 0$. Its wavevector lies in the $xz$ plane, making an angle $\theta \in \les 0, \pi/2 \ri$ relative to the $+z$ axis. As a result, there is a reflected plane wave in the half-space $z < 0$ and a transmitted plane wave in the half-space $z > L$. Thus, the total electric field phasor in the half-space $z < 0$ may be expressed as $$\begin{aligned}
\nonumber
\#E (\#r) &=& \les a_L \frac{i \#u_y - \#p_+}{\sqrt{2}} - a_R
\frac{i \#u_y + \#p_+}{\sqrt{2}} \ris \,\exp\le{i\kappa}x\ri\, \exp \le i \ko z\cos\theta \ri
\\
&&\quad- \les
r_L \frac{i \#u_y - \#p_-}{\sqrt{2}} - r_R \frac{i \#u_y +
\#p_-}{\sqrt{2}} \ris \, \exp\le{i\kappa}x\ri\, \exp \le - i \ko z \cos\theta\ri, \quad
z < 0,\end{aligned}$$ while that in the half-space $z > L$ may be expressed as $$\#E (\#r) = \les t_L \frac{i \#u_y - \#p_+}{\sqrt{2}} - t_R \frac{i
\#u_y + \#p_+}{\sqrt{2}} \ris \, \exp\le{i\kappa}x\ri\, \exp \les i \ko \le z - L \ri \cos\theta\ris,
\quad z > L,$$ wherein $\#p_\pm = \mp \#u_x \cos \theta + \#u_z \sin \theta$ and $\kappa=\ko\sin\theta$.
Our aim is to determine the unknown amplitudes $r_L$ and $r_R$ of the LCP and RCP components of the reflected plane wave, and the unknown amplitudes $t_L$ and $t_R$ of the LCP and RCP components of the transmitted plane wave, from the known amplitudes $a_L$ and $a_R$ of the LCP and RCP components of the incident plane wave. As is comprehensively described elsewhere [@STF_Book], this is achieved by solving the 4$\times$4 matrix/4-vector relation $$\l{main_eq}
[\#f^{exit}] =
[\=M(L)] \. [\#f^{entry}].$$ Here, the column 4-vectors $$[\#f^{entry}] = \frac{1}{\sqrt{2}} \le
\begin{array}{c}
\le r_L + r_R \ri + \le a_L + a_R \ri \vspace{4pt} \\
i \les - \le r_L - r_R \ri + \le a_L - a_R \ri \ris \vspace{4pt} \\
-i \les \le r_L - r_R \ri + \le a_L - a_R \ri \ris/ \etao \vspace{4pt} \\
- \les \le r_L + r_R \ri - \le a_L + a_R \ri \ris/ \etao
\end{array}
\ri, \quad
[\#f^{exit}] = \frac{1}{\sqrt{2}} \le
\begin{array}{c}
t_L + t_R \vspace{4pt} \\
i \le t_L - t_R \ri \vspace{4pt} \\
-i \le t_L - t_R \ri / \etao \vspace{4pt} \\
\le t_L + t_R \ri / \etao
\end{array}
\ri,$$ arise from the field phasors at $z=0$ and $z=L$, respectively. The optical response characteristics of the CSTF are encapsulated by the 4$\times$4 transfer matrix $[\=M(L)] $, which is conveniently expressed as [@LVM] $$\l{m_eq}
[\=M(L)] =
[\=B(h \frac{\pi L}{\Omega})]
\. [\=M'(L)] ,$$ wherein the 4$\times$4 matrix $$[\=B(\sigma)] = \le
\begin{array}{cccc}
\cos \sigma & - \sin \sigma & 0 & 0 \\ \sin
\sigma & \cos \sigma &0 & 0 \\
0 & 0& \cos \sigma & - \sin \sigma \\
0 & 0 & \sin \sigma & \cos \sigma
\end{array}
\ri .$$ The 4$\times$4 matrizant $ [\=M'(z)] $ satisfies the ordinary differential equation $$\frac{d}{dz} [\=M'(z)] = i \,
[\=P'(z)] \.
[\=M'(z)] , \l{MODE}$$ subject to the boundary condition $ [\=M'(0)] =
[\=I]$, with $[\=I]$ being the 4$\times$4 identity matrix. The 4$\times$4 matrix [@LVM] $$\nonumber
[\=P'(z)]=
\begin{bmatrix}
0 & -i h\frac{\pi}{\Omega} & 0 & \omega\muo\\
i h \frac{\pi}{\Omega} & 0 & -\omega\muo & 0\\
0 & -\omega\epso\epsc & 0 & - i h \frac{\pi}{\Omega}\\
\frac{\omega\epso\epsb}{\tau} & 0 & i h \frac{\pi}{\Omega} & 0
\end{bmatrix}
\, +$$ $$\l{kermat}
\begin{bmatrix}
-\frac{\kappa \le \epsilon_{b} - \epsilon_{a} \ri }{2\epsilon_{a}
\tau} \cos\xi \sin{2\chi} & 0 &
-\frac{\kappa^2}{\omega\epso\epsilon_{a}\tau}\sin\xi\cos\xi &
-\frac{\kappa^2}{\omega\epso\epsilon_{a}\tau}\cos^2\xi \\
& & & \\
\frac{\kappa \le \epsilon_{b}-\epsilon_{a}
\ri } {2\epsilon_{a} \tau}\sin\xi\sin{2\chi} & 0 &
\frac{\kappa^2}{\omega\epso\epsilon_{a}\tau}\sin^2\xi &
\frac{\kappa^2}{\omega\epso\epsilon_{a}\tau}\sin\xi\cos\xi \\
& & & \\
\frac{\kappa^2}{\omega\muo}\sin\xi\cos\xi &
\frac{\kappa^2}{\omega\muo}\cos^2\xi & 0 & 0 \\
& & & \\
- \frac{\kappa^2}{\omega\muo}\sin^2\xi &
-\frac{\kappa^2}{\omega\muo}\sin\xi\cos\xi &
-\frac{\kappa\le \epsilon_{b} - \epsilon_{a}
\ri}{2\epsilon_{a}\tau}\sin\xi\sin{2\chi} & -\frac{\kappa\le
\epsilon_{b} - \epsilon_{a}
\ri}{2\epsilon_{a}\tau}\cos\xi\sin{2\chi}
\end{bmatrix}
\,$$ containing $$\xi = h \pi z/\Omega ,\qquad
\tau = \cos^2\chi+ \le \epsb/\epsa \ri \sin^2\chi$$ depends on whether the CSTF is uninfiltrated or infiltrated. Equation (\[MODE\]) can be solved for $[\=M'(z)]$ by numerical means, most conveniently using a piecewise-uniform approximation [@STF_Book].
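In the piecewise-uniform approximation, the matrizant is built as an ordered product of slice-wise matrix exponentials, $[\=M'(L)] \approx \prod_j \exp\le i\,[\=P'(z_j)]\,\Delta z \ri$. A generic sketch of this scheme, tested on a small commuting toy kernel with a known closed form rather than on the full 4$\times$4 $[\=P'(z)]$:

```python
import cmath

def mat_exp(A, terms=30):
    """exp(A) for a small complex matrix via its Taylor series (valid for small ||A||)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def matrizant(P, L, slices=2000):
    """Piecewise-uniform solution of dM/dz = i P(z) M with M(0) = I."""
    n = len(P(0.0))
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    dz = L / slices
    for j in range(slices):
        zj = (j + 0.5) * dz                      # midpoint of the slice
        E = mat_exp([[1j * P(zj)[a][b] * dz for b in range(n)] for a in range(n)])
        M = [[sum(E[a][c] * M[c][b] for c in range(n)) for b in range(n)] for a in range(n)]
    return M

# commuting toy kernel P(z) = diag(z, 2z); exact M(L) = diag(exp(i L^2/2), exp(i L^2))
P = lambda z: [[z, 0.0], [0.0, 2.0 * z]]
L = 1.5
M = matrizant(P, L)
assert abs(M[0][0] - cmath.exp(1j * L * L / 2.0)) < 1e-6
assert abs(M[1][1] - cmath.exp(1j * L * L)) < 1e-6
```

For the non-commuting CSTF kernel the product no longer collapses to a single exponential, which is precisely why the slices must be kept thin.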
Once $[\=M'(z)]$ is determined, it is a straightforward matter of linear algebra to extract the reflection amplitudes $r_{L,R}$ and transmission amplitudes $t_{L,R}$ from eq. (\[main\_eq\]), for specified incident amplitudes $a_{L,R}$. Following the standard convention, we introduce the reflection coefficients $r_{LL,LR,RL,RR}$ and transmission coefficients $t_{LL,LR,RL,RR}$ per $$\le \begin{array}{c} r_L \vspace{4pt} \\
r_R \end{array} \ri = \le \begin{array}{cc} r_{LL} & r_{LR}
\vspace{4pt} \\ r_{RL} & r_{RR}
\end{array} \ri
\le \begin{array}{c} a_L \vspace{4pt} \\
a_R \end{array} \ri, \qquad \quad
\le \begin{array}{c} t_L \vspace{4pt} \\
t_R \end{array} \ri = \le \begin{array}{cc} t_{LL} & t_{LR}
\vspace{4pt} \\ t_{RL} & t_{RR} \end{array} \ri
\le \begin{array}{c} a_L \vspace{4pt} \\
a_R \end{array} \ri.$$ The square magnitude of a reflection or transmission coefficient yields the corresponding reflectance or transmittance; i.e., $R_{\alpha \beta} = \left| r_{\alpha \beta} \right|^2$ and $T_{\alpha
\beta} = \left| t_{\alpha \beta} \right|^2$, where $\alpha, \beta \in
\lec L, R \ric$.
CSTF with central $\pi/2$-twist defect
--------------------------------------
ł[twisted\_CSTF]{}
As mentioned in §\[intro\], the introduction of a central twist defect leads to a narrowband feature that can be very useful for sensing applications. Therefore, we further consider the CSTF of finite thickness introduced in §\[bvp\] but here with the upper half $z \in \les L/2, L \ris$ of the CSTF twisted about the $z$ axis by an angle of $\pi/2$ radians with respect to the lower half $z \in \les 0, L/2 \ri$.
Mathematically, the central $\pi/2$-twist defect is accommodated as follows: The relative permittivity dyadic of the centrally twisted uninfiltrated CSTF is per eq. (\[eps1\_dyadic\]) but with the rotation dyadic $
{\=S}_{\,z} \le h \frac{\pi z}{\Omega} \ri $ therein replaced by $ {\=S}_{\,z} \lec h \les \frac{\pi z}{\Omega} + \psi(z) \ris
\ric $, where $$\psi(z) = \left\{ \begin{array}{lcr} 0, && 0 \leq z < L/2
\vspace{6pt}
\\ \pi/2, && L/2 \leq z \leq L \end{array}
\right..$$ The relative permittivity dyadic of the centrally twisted and infiltrated CSTF follows from the corresponding dyadic for the centrally twisted and uninfiltrated CSTF, in exactly the same way as is the case when the CSTF is defect-free. The calculation of the reflectances and transmittances follows the same path as is described in §\[bvp\], with the exception that eq. (\[m\_eq\]) therein is replaced by $$\l{new_m_eq}
[\=M(L)]
=
[\=B \le h \frac{\pi L
}{2 \Omega}+\frac{\pi}{2} \ri]\. [\=M'(L/2)]
\. [\=B \le h \frac{\pi L
}{2 \Omega}-\frac{\pi}{2} \ri]\. [\=M'(L/2)] .$$
Numerical results
=================
ł[Numerica]{}
In order to illustrate the empirical model, we chose a CSTF of thickness $L = 40 \Omega$ where the structural half-period $\Omega =
185$ nm. The chosen relative permittivity parameters, namely $$\left.
\begin{array}{l}
\eps_{a1} = \displaystyle{\les 1.0443 + 2.7394 \le \frac{2
\chi_v}{\pi} \ri - 1.3697
\le \frac{2 \chi_v}{\pi} \ri^2 \ris^2} \vspace{6pt} \\
\eps_{b1} = \displaystyle{ \les 1.6765 + 1.5649 \le \frac{2
\chi_v}{\pi} \ri - 0.7825 \le \frac{2 \chi_v}{\pi} \ri^2 \ris^2}
\vspace{6pt} \\
\eps_{c1} = \displaystyle{ \les 1.3586 + 2.1109 \le \frac{2
\chi_v}{\pi} \ri - 1.0554 \le \frac{2 \chi_v}{\pi} \ri^2 \ris^2}
\end{array}
\right\} \l{tio1}$$ with $$\chi = \tan^{-1} \le {2.8818} \tan \chi_v \ri \l{tio2},$$ emerged from data measured for a CTF made by evaporating patinal${}^{\mbox{\textregistered}}$ titanium oxide [@HWH_AO]. These relations—which came from measurements at $\lambdao=633$ nm—were presumed to be constant over the range of wavelengths considered here. Values for the corresponding nanoscale model parameters $\lec n_s, f, \gamma_b \ric$, as computed using the inverse Bruggeman homogenization formalism [@ML_inverse_homog], are provided in Table 1 for the vapor flux angles $\chi_v = 15^\circ$, $30^\circ$, and $60^\circ$. Furthermore, we set $h=+1$. The angle of incidence $\theta$ was fixed at $10^\circ$.
Computed reflectances and transmittances are plotted versus $\lambdao$ in Fig. \[Fig1\], for the defect-free CSTF for which we set $ \chi_v =
15^\circ$. Further computations (not presented here) using other values of $\chi_v$ revealed qualitatively similar graphs of reflectances and transmittances versus $\lambdao$. The effects of three values of $n_\ell$—namely, $n_\ell = 1$, $1.3$ and $1.5$—are represented in Fig. \[Fig1\]. The circular Bragg phenomenon is most obviously appreciated as a sharp local maximum in the graphs of $R_{RR}$, with attendant features occurring in the graphs of some other reflectances and transmittances. If $\lambda^{max}_0 $ denotes the free-space wavelength corresponding to this local maximum, from Fig. \[Fig1\], we found that $\lambdao^{max} \approx 622$ nm for $n_\ell = 1$, $\lambdao^{max} \approx 712$ nm for $n_\ell = 1.3$, and $\lambdao^{max} \approx 768$ nm for $n_\ell = 1.5$.
Clearly, the circular Bragg phenomenon undergoes a substantial spectral shift as $n_\ell$ increases from unity. In order to elucidate further this matter, we focused on the spectral-shift sensitivity $d \lambda^{max}_0 / d
n_\ell$. Graphs of $d \lambda^{max}_0 / d
n_\ell$ against $\lambda^{max}_0$, computed for the range $1 < n_\ell < 1.5
$, are presented in Fig. \[Fig1a\]. In addition to results for the vapor flux angle $\chi_v =
15^\circ$, results are also plotted in Fig. \[Fig1a\] for $\chi_v =
30^\circ$ and $ \chi_v =
60^\circ$. For all vapor flux angles considered and all values of $n_\ell \in \le
1, 1.5 \ri$, the spectral-shift sensitivity $d \lambda^{max}_0 / d
n_\ell$ is positive-valued and greater than 118 nm per refractive index unit (RIU). When $\chi_v = 15^\circ$, $d \lambda^{max}_0 / d
n_\ell$ generally decreases as $ \lambda^{max}_0$ increases. A similar trend is exhibited for $\chi_v
= 30^\circ$, but $d \lambda^{max}_0 / d
n_\ell$ generally increases as $ \lambda^{max}_0$ increases for $\chi_v
=60^\circ$.
The center wavelength of the circular Bragg regime has been estimated as [@VL00] $$\l{estimate}
\lambdao^{Br} \approx \Omega \le \sqrt{ \eps_{c2}} +
\sqrt{\frac{\eps_{a2} \eps_{b2}}{\eps_{a2} \cos^2 \chi + \eps_{b2}
\sin^2 \chi}} \ri \sqrt{ \cos \theta}.$$ The graphs of $d \lambda^{Br}_0 / d
n_\ell$ versus $ \lambda^{Br}_0$, as provided in Fig. \[Fig1b\] for the vapor flux angles $\chi_v = 15^\circ$, $30^\circ$, and $60^\circ$, are remarkably similar (but not identical) to the graphs of $d \lambda^{max}_0 / d
n_\ell$ versus $ \lambda^{max}_0$ displayed in Fig. \[Fig1a\]. Thus, the center-wavelength formula can yield a convenient estimate of the spectral-shift sensitivity, without having to solve the reflection-transmission problem.
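Combining the empirical relations (\[tio1\]) and (\[tio2\]) with the estimate (\[estimate\]) is straightforward. For the uninfiltrated CSTF ($n_\ell = 1$, so that $\eps_{j2} = \eps_{j1}$) with $\chi_v = 15^\circ$, $\Omega = 185$ nm, and $\theta = 10^\circ$, the sketch below lands close to the $\lambdao^{max} \approx 622$ nm Bragg peak read off Fig. \[Fig1\]:

```python
import math

def ctf_params(chi_v):
    """Empirical permittivity parameters and rise angle, eqs. (tio1)-(tio2); chi_v in radians."""
    u = 2.0 * chi_v / math.pi
    ea = (1.0443 + 2.7394 * u - 1.3697 * u**2) ** 2
    eb = (1.6765 + 1.5649 * u - 0.7825 * u**2) ** 2
    ec = (1.3586 + 2.1109 * u - 1.0554 * u**2) ** 2
    chi = math.atan(2.8818 * math.tan(chi_v))
    return ea, eb, ec, chi

def bragg_center(ea, eb, ec, chi, Omega, theta):
    """Center-wavelength estimate, eq. (estimate)."""
    return Omega * (math.sqrt(ec)
                    + math.sqrt(ea * eb / (ea * math.cos(chi)**2 + eb * math.sin(chi)**2))
                    ) * math.sqrt(math.cos(theta))

ea, eb, ec, chi = ctf_params(math.radians(15.0))
lam = bragg_center(ea, eb, ec, chi, Omega=185.0, theta=math.radians(10.0))
assert 615.0 < lam < 627.0   # close to the ~622 nm Bragg peak for n_ell = 1
```

Estimating the shift for $n_\ell > 1$ additionally requires the Bruggeman step that maps $n_\ell$ to $\lec \eps_{a2}, \eps_{b2}, \eps_{c2} \ric$, which is not reproduced here.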
We turn now to the CSTF with a central twist defect of $\pi/2$ radians, as described in §\[twisted\_CSTF\]. Graphs of the reflectances and transmittances versus $\lambdao$ for $\chi_v =
15^\circ$ are provided in Fig. \[Fig2\]. As we remarked for the defect-free CSTF, graphs (not presented here) which are qualitatively similar to those presented in Fig. \[Fig2\] were obtained when other values of the vapor flux angle $\chi_v$ were considered. The graphs of Fig. \[Fig2\] are substantially different to those of Fig. \[Fig1\]: the local maximums in the graphs of $R_{RR}$ in Fig. \[Fig1\] have been replaced by sharp local minimums in Fig. \[Fig2\]. These local minimums—which represent an ultranarrowband spectral hole—arise at the free-space wavelengths $\lambda^{min}_0$ that are approximately the same as the corresponding local maximums $\lambda^{max}_0 $ in Fig. \[Fig1\].
The location of the spectral hole on the $\lambdao$ axis is highly sensitive to $n_\ell$. In a similar manner to before, we explore this matter by computing the spectral-shift sensitivity $d \lambda^{min}_0 / d
n_\ell$ at each value of $n_\ell$. In Fig. \[Fig2a\], $d \lambda^{min}_0 / d
n_\ell$ is plotted against $\lambda^{min}_0$, with the spectral-shift sensitivity computed for the range $1 < n_\ell < 1.5
$ and with $\chi_v = 15^\circ$, $ 30^\circ$, and $60^\circ$. The plots of $d \lambda^{min}_0 / d
n_\ell$ versus $\lambda^{min}_0$ in Fig. \[Fig2a\] are both qualitatively and quantitatively similar to those of $d \lambda^{max}_0 / d
n_\ell$ versus $\lambda^{max}_0$ in Fig. \[Fig1a\]. That is, positive-valued $d \lambda^{min}_0 / d
n_\ell$ generally decreases as $\lambda^{min}_0$ increases for $\chi_v = 15^\circ$ and $ 30^\circ$, and generally increases as $\lambda^{min}_0$ increases for $\chi_v= 60^\circ$. A similar correspondence exists with Fig. \[Fig1b\].
Closing remarks
===============
Our empirical model has demonstrated that the circular Bragg phenomenon associated with a defect-free CSTF, and the ultranarrowband spectral hole displayed by a CSTF with a central $\pi/2$-twist defect, both undergo substantial spectral shifts upon infiltration by a fluid. Although, owing to the lack of suitable experimental data, we did not consider wavelength dispersion in the dielectric properties of the material used to deposit a CSTF, the promise of CSTFs—with or without a structural twist—to act as platforms for optical sensing was clearly highlighted. Experimental validation is planned.
Acknowledgements {#acknowledgements .unnumbered}
================
TGM is supported by a Royal Academy of Engineering/Leverhulme Trust Senior Research Fellowship. AL thanks the Binder Endowment at Penn State for partial financial support of his research activities.
[99]{}
A. Lakhtakia, R. Messier, M. J. Brett, and K. Robbie, “Sculptured thin films (STFs) for optical, chemical and biological applications," *Innovations Mater. Res.*, vol. 1, pp. 165-176, 1996.
I. Hodgkinson and Q. h. Wu, “Inorganic chiral optical materials," *Adv. Mater.*, vol. 13, pp. 889-897, 2001.
A. Lakhtakia and R. Messier, *Sculptured Thin Films: Nanoengineered Morphology and Optics*, SPIE Press, Bellingham, WA, USA, 2005.
P. G. de Gennes and J. Prost, *The Physics of Liquid Crystals*, Oxford University Press, New York, NY, USA, 1993.
Q. Wu, I. J. Hodgkinson, and A. Lakhtakia, “Circular polarization filters made of chiral sculptured thin films: experimental and simulation results," *Opt. Eng.*, vol. 39, pp. 1863-1868, 2000.
I. J. Hodgkinson, Q. H. Wu, A. Lakhtakia, and M. W. McCall, “Spectral-hole filter fabricated using sculptured thin-film technology," *Opt. Commun.*, vol. 177, pp. 79-84, 2000.
E. Ertekin and A. Lakhtakia, “Sculptured thin film Solc filters for optical sensing of gas concentration," *Eur. Phys. J. Appl. Phys.*, vol. 5, pp. 45-50, 1999.
J. A. Polo Jr., “Sculptured thin films," in *Micromanufacturing and Nanotechnology*, N. P. Mahalik, Ed., Springer, Heidelberg, Germany, 2005, pp. 357-381.
A. Lakhtakia, M. C. Demirel, M. W. Horn, and J. Xu, “Six emerging directions in sculptured-thin-film research," *Adv. Solid State Phys.*, vol. 46, pp. 295-307, 2007.
R. Messier, V. C. Venugopal, and P. D. Sunal, “Origin and evolution of sculptured thin films," *J. Vac. Sci. Technol. A*, vol. 18, pp. 1538-1545, 2000.
J. Xu, A. Lakhtakia, J. Liou, A. Chen, and I. J. Hodgkinson, “Circularly polarized fluorescence from light-emitting microcavities with sculptured-thin-film chiral reflectors," *Opt. Commun.*, vol. 264, pp. 235-239, 2006.
F. Zhang, J. Xu, A. Lakhtakia, S. M. Pursel, and M. W. Horn, “Circularly polarized emission from colloidal nanocrystal quantum dots confined in microcavities formed by chiral mirrors," *Appl. Phys. Lett.*, vol. 91, 023102, 2007. \[Interchange the labels LCP and RCP in Fig. 2c of this paper.\]
A. Lakhtakia, “On bioluminescent emission from chiral sculptured thin films," *Opt. Commun.*, vol. 188, pp. 313-320, 2001.
T. G. Mackay and A. Lakhtakia, “Theory of light emission from a dipole source embedded in a chiral sculptured thin film," *Opt. Express*, vol. 15, pp. 14689-14703, 2007. Erratum: vol. 16, p. 3659, 2008.
Y.-C. Yang, C.-S. Kee, J.-E. Kim, and H. Y. Park, “Photonic defect modes of cholesteric liquid crystals," *Phys. Rev. E*, vol. 60, pp. 6852-6854, 1999.
A. Lakhtakia and M. McCall, “Sculptured thin films as ultranarrow-bandpass circular-polarization filters," *Opt. Commun.*, vol. 168, pp. 457-465, 1999.
I. J. Hodgkinson, Q. H. Wu, K. E. Thorn, A. Lakhtakia, and M. W. McCall, “Spacerless circular-polarization spectral-hole filters using chiral sculptured thin films: theory and experiment," *Opt. Commun.*, vol. 184, pp. 57-66, 2000.
A. Lakhtakia, M. W. McCall, J. A. Sherwin, Q. H. Wu, and I. J. Hodgkinson, “Sculptured-thin-film spectral holes for optical sensing of fluids," *Opt. Commun.*, vol. 194, pp. 33-46, 2001.
S. M. Pursel and M. W. Horn, “Prospects for nanowire sculptured-thin-film devices," *J. Vac. Sci. Technol. B*, vol. 25, pp. 2611-2615, 2007.
T. G. Mackay and A. Lakhtakia, “Determination of constitutive and morphological parameters of columnar thin films by inverse homogenization," http://arxiv.org/abs/0909.5375
I. Hodgkinson, Q. h. Wu, and J. Hazel, “Empirical equations for the principal refractive indices and column angle of obliquely deposited films of tantalum oxide, titanium oxide, and zirconium oxide," *Appl. Opt.*, vol. 37, pp. 2653-2659, 1998.
R. Messier, T. Takamori, and R. Roy, “Structure-composition variation in rf-sputtered films of Ge caused by process parameter changes," *J. Vac. Sci. Technol.*, vol. 13, pp. 1060-1065, 1976.
J. R. Blanco, P. J. McMarr, J. E. Yehoda, K. Vedam, and R. Messier, “Density of amorphous germanium films by spectroscopic ellipsometry," *J. Vac. Sci. Technol. A*, vol. 4, pp. 577-582, 1986.
F. Walbel, E. Ritter, and R. Linsbod, “Properties of TiO${}_{\mbox{x}}$ films prepared by electron-beam evaporation of titanium and titanium suboxides," *Appl. Opt.*, vol. 42, pp. 4590-4593, 2003.
J. A. Sherwin, A. Lakhtakia, and I. J. Hodgkinson, “On calibration of a nominal structure-property relationship model for chiral sculptured thin films by axial transmittance measurements," *Opt. Commun.*, vol. 209, pp. 369-375, 2002.
A. Lakhtakia, “Enhancement of optical activity of chiral sculptured thin films by suitable infiltration of void regions," *Optik*, vol. 112, pp. 145-148, 2001. Erratum: vol. 112, p. 544, 2001.
T. G. Mackay and A. Lakhtakia, *Electromagnetic Anisotropy and Bianisotropy: A Field Guide*, World Scientific, Singapore, 2010.
A. Lakhtakia, V. C. Venugopal, and M. W. McCall, “Spectral holes in Bragg reflection from chiral sculptured thin films: circular polarization filter," *Opt. Commun.*, vol. 177, pp. 57-68, 2000.
V. C. Venugopal and A. Lakhtakia, “On absorption by non-axially excited slabs of dielectric thin-film helicoidal bianisotropic mediums," *Eur. Phys. J. Appl. Phys.*, vol. 10, pp. 173-184, 2000.
------------ ------------ -------- --------
$\chi_v$ $\gamma_b$ $f$ $n_s$
$15^\circ$ 2.2793 0.3614 3.2510
$30^\circ$ 1.8381 0.5039 3.0517
$60^\circ$ 1.4054 0.6956 2.9105
------------ ------------ -------- --------
: Nanoscale model parameters $\gamma_b$, $f$ and $n_s$ for $\chi_v = 15^\circ$, $30^\circ$, and $60^\circ$. []{data-label="tab1"}
![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_rll_no_twist.eps "fig:"){width="2.6in"} ![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_rrl_no_twist.eps "fig:"){width="2.6in"}\
![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_rlr_no_twist.eps "fig:"){width="2.6in"} ![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_rrr_no_twist.eps "fig:"){width="2.6in"}\
![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_tll_no_twist.eps "fig:"){width="2.6in"} ![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_trl_no_twist.eps "fig:"){width="2.6in"}\
![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_tlr_no_twist.eps "fig:"){width="2.6in"} ![Reflectances and transmittances plotted against the free-space wavelength for a defect-free titanium-oxide CSTF; $L = 40
\Omega$, $ \Omega = 185$ nm, $h=+1$, $\chi_v = 15^\circ$, and $\theta = 10^\circ$. The CSTF is infiltrated with a fluid of refractive index $n_\ell = 1.0$ (blue broken-dashed curves), $1.3$ (red solid curves), and 1.5 (green dashed curves). []{data-label="Fig1"}](chiv15_trr_no_twist.eps "fig:"){width="2.6in"}
![Spectral-shift sensitivity $d \lambda^{max}_0 / d n_\ell$ plotted against $\lambda^{max}_0$ for $n_\ell \in \left[ 1, 1.5 \right]$. The vapor flux angle $\chi_v = 15^\circ$ (blue broken-dashed curve), $30^\circ$ (green dashed curve), and $60^\circ$ (red solid curve). []{data-label="Fig1a"}](dlambda_dn_no_twist.eps){width="3.3in"}
![Spectral-shift sensitivity as estimated by $d \lambda^{Br}_0 / d n_\ell$ plotted against $\lambda^{Br}_0$ for $n_\ell \in \left[ 1, 1.5 \right]$. The vapor flux angle $\chi_v = 15^\circ$ (blue broken-dashed curve), $30^\circ$ (green dashed curve), and $60^\circ$ (red solid curve). []{data-label="Fig1b"}](bragg_dlambda_dn_no_twist.eps){width="3.3in"}
![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_rll_twist.eps "fig:"){width="2.6in"} ![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_rrl_twist.eps "fig:"){width="2.6in"}\
![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_rlr_twist.eps "fig:"){width="2.6in"} ![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_rrr_twist.eps "fig:"){width="2.6in"}\
![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_tll_twist.eps "fig:"){width="2.6in"} ![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_trl_twist.eps "fig:"){width="2.6in"}\
![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_tlr_twist.eps "fig:"){width="2.6in"} ![As Fig. \[Fig1\], except that the upper half $z \in \left[ L/2, L \right]$ of the CSTF is twisted about the $z$ axis by $\pi/2$ radians with respect to the lower half $z \in \left[ 0, L/2 \right]$. []{data-label="Fig2"}](chiv15_trr_twist.eps "fig:"){width="2.6in"}
![Spectral-shift sensitivity $d \lambda^{min}_0 / d n_\ell$ plotted against $\lambda^{min}_0$ for $n_\ell \in \left[ 1, 1.5 \right]$. The vapor flux angle $\chi_v = 15^\circ$ (blue broken-dashed curve), $30^\circ$ (green dashed curve), and $60^\circ$ (red solid curve). []{data-label="Fig2a"}](dlambda_dn_twist.eps){width="3.3in"}
[^1]: Email: [email protected]
[^2]: Email: [email protected]
---
abstract: 'In this paper, we review the construction of the Dirac operator for graded affine Hecke algebras and calculate the Dirac cohomology of irreducible unitary modules for the graded Hecke algebra of $gl(n)$.'
address:
- |
Dept. of Mathematics\
Cornell University\
Ithaca, NY 14850
- |
Dept. of Mathematics\
University of Utah\
Salt Lake City, UT 84112
author:
- Dan Barbasch
- Dan Ciubotaru
title: Unitary Hecke algebra modules with nonzero Dirac cohomology
---
[^1]
Introduction {#sec:1}
============
The Dirac operator plays an important role in the representation theory of real reductive Lie groups. An account of the definition, properties and some applications can be found in [@BW]. It is well known, starting with the work of [@AS] and [@P], that discrete series occur in the kernel of the Dirac operator. Work of Enright and Wallach [@EW] generalizes these results to other types of representations. Another use is to provide, via the *Dirac inequality* [introduced by Parthasarathy]{}, necessary conditions for unitarity. One of the most striking applications is that, for regular integral infinitesimal character, the Dirac inequality gives precisely the unitary dual, and determines the unitary representations with nontrivial $({{{\mathfrak g}}},K)-$cohomology.
Given these properties, Vogan has introduced the notion of Dirac cohomology; this was studied extensively in [@HP] and subsequent work. One can argue that Dirac cohomology is a generalization of $({{{\mathfrak g}}},K)-$cohomology. While a representation has nontrivial $({{{\mathfrak g}}},K)-$cohomology only if its infinitesimal character is regular integral, the corresponding condition necessary for Dirac cohomology to be nonzero is more general; certain representations with singular and nonintegral infinitesimal character will also have nontrivial Dirac cohomology.
In this paper, we prove new results about an analogue of the Dirac operator in the case of the graded affine Hecke algebra, introduced in [@BCT]. This operator can be thought of as the analogue of the Dirac operator in the case of a $p$-adic group. One of our results determines the behaviour of the Dirac cohomology with respect to Harish-Chandra type induction. In the real case, a unitary representation has nontrivial $({{{\mathfrak g}}},K)-$cohomology if and only if it is (essentially) obtained from the trivial representation on a Levi component via the derived functor construction. For unitary representations with nontrivial Dirac cohomology, the infinitesimal character can be nonintegral and singular. We conjecture instead that unitary representations with nontrivial Dirac cohomology are all cohomologically induced from unipotent (in the sense of [@A]) representations. To investigate this conjecture, we explore the Dirac cohomology of unipotent representations for graded affine Hecke algebras. In particular, we compute part of the cohomology of spherical unipotent representations for affine Hecke algebras of all types. In the case of type $A$ we go further: we compute the cohomology of all unitary modules.
This paper was written while we were guests of the Max Planck Institute in Bonn as part of the program *Analysis on Lie groups*. We would like to thank the institute for its hospitality, and the organizers for making the program possible and for providing the environment to do this research.
Dirac cohomology for graded Hecke algebras {#sec:2}
==========================================
In this section we review the construction and properties of the Dirac operator from [@BCT] and the classification of spin projective Weyl group representations from [@C].
Root systems
------------
We fix an ${{\mathbb R}}$-root system $\Phi=(V,R,V^\vee, R^\vee)$: $V, V^\vee$ are finite dimensional ${{\mathbb R}}$-vector spaces, with a perfect bilinear pairing $(~,~): V\times V^\vee\to {{\mathbb R}}$, so that $R\subset V\setminus\{0\},$ $R^\vee\subset V^\vee\setminus\{0\}$ are finite subsets in bijection $$R\longleftrightarrow R^\vee,\ {{\alpha}}\longleftrightarrow{{\alpha}}^\vee,\ \text{such that }({{\alpha}},{{\alpha}}^\vee)=2.$$ The reflections $$s_{{\alpha}}: V\to V,\ s_{{\alpha}}(v)=v-(v,{{\alpha}}^\vee){{\alpha}}, \quad s_{{\alpha}}:V^\vee\to V^\vee,\ s_{{\alpha}}(v')=v'-({{\alpha}},v'){{\alpha}}^\vee, \quad {{\alpha}}\in R,$$ leave $R$ and $R^\vee$ invariant, respectively. Let $W$ be the subgroup of $GL(V)$ (respectively $GL(V^\vee)$) generated by $\{s_{{\alpha}}:~{{\alpha}}\in R\}$.
We will assume that the root system $\Phi$ is reduced and crystallographic. We will fix a choice of simple roots $\Pi\subset R$, and consequently, positive roots $R^+$ and positive coroots $R^{\vee,+}.$ Often, we will write ${{\alpha}}>0$ or ${{\alpha}}<0$ in place of ${{\alpha}}\in R^+$ or ${{\alpha}}\in (-R^+)$, respectively.
We fix a $W$-invariant inner product $\langle~,~\rangle$ on $V$. Denote also by $\langle~,~\rangle$ the dual inner product on $V^\vee.$ If $v$ is a vector in $V$ or $V^\vee$, we denote $|v|:=\langle v,v\rangle^{1/2}.$
The Clifford algebra
--------------------
A classical reference for the Clifford algebra is [@Ch] (see also section II.6 in [@BW]). Denote by $ C(V)$ the Clifford algebra defined by $V$ and the inner product $\langle~,~\rangle$. More precisely, $
C(V)$ is the quotient of the tensor algebra of $V$ by the ideal generated by $${{\omega}}\otimes {{\omega}}'+{{\omega}}'\otimes {{\omega}}+2\langle {{\omega}},{{\omega}}'\rangle,\quad
{{\omega}},{{\omega}}'\in V.$$ Equivalently, $ C(V)$ is the associative algebra with unit generated by $V$ with relations: $${{\omega}}{{\omega}}'+{{\omega}}'{{\omega}}=-2\langle{{\omega}},{{\omega}}'\rangle.$$ Let $\mathsf{O}(V)$ denote the group of orthogonal transformations of $V$ with respect to $\langle~,~\rangle$. This acts by algebra automorphisms on $ C(V)$, and the action of $-1\in
\mathsf{O}(V)$ induces a grading $$C(V)= C(V)_{\mathsf{even}}+ C(V)_{\mathsf{odd}}.$$ Let ${{\epsilon}}$ be the automorphism of $ C(V)$ which is $+1$ on $
C(V)_{\mathsf{even}}$ and $-1$ on $ C(V)_{\mathsf{odd}}$. Let ${}^t$ be the transpose [anti]{}automorphism of $ C(V)$ characterized by $${{\omega}}^t=-{{\omega}},\ {{\omega}}\in V,\quad (ab)^t=b^ta^t,\ a,b\in C(V).$$ The Pin group is $$\label{pin}
\mathsf{Pin}(V)=\{a\in C(V):~ {{\epsilon}}(a) V a^{-1}\subset
V,~ a^t=a^{-1}\}.$$ It sits in a short exact sequence $$\label{ses}
1\longrightarrow {{\mathbb Z}}/2{{\mathbb Z}}\longrightarrow
\mathsf{Pin}(V)\xrightarrow{\ \ p\ \ } \mathsf{O}(V)\longrightarrow 1,$$ where the projection $p$ is given by $p(a)({{\omega}})={{\epsilon}}(a){{\omega}}a^{-1}$.
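To make these definitions concrete, consider the smallest even-dimensional case, $V={{\mathbb R}}^2$ with orthonormal basis $\{{{\omega}}_1,{{\omega}}_2\}$:

```latex
% V = R^2, orthonormal basis {w_1, w_2}: the defining relations become
\omega_1^2=\omega_2^2=-1,\qquad \omega_1\omega_2=-\omega_2\omega_1,
% so C(V) is four-dimensional, with
C(V)_{\mathsf{even}}=\operatorname{span}\{1,\ \omega_1\omega_2\},\qquad
C(V)_{\mathsf{odd}}=\operatorname{span}\{\omega_1,\ \omega_2\},
% and (w_1 w_2)^2 = -w_1^2 w_2^2 = -1.  Unit vectors lie in Pin(V):
% e.g. w_1^t = -w_1 = w_1^{-1}, and
p(\omega_1)(\omega_1)=\epsilon(\omega_1)\,\omega_1\,\omega_1^{-1}=-\omega_1,
\qquad
p(\omega_1)(\omega_2)=\epsilon(\omega_1)\,\omega_2\,\omega_1^{-1}=\omega_2 .
```

Thus $p({{\omega}}_1)$ is the orthogonal reflection in the line perpendicular to ${{\omega}}_1$; this is the general mechanism by which unit vectors of $V$ cover reflections.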
If $\dim V$ is even, the Clifford algebra $C(V)$ has a unique (up to equivalence) complex simple module $(\gamma, S)$ of dimension $2^{\dim
V/2}$, endowed with a positive definite Hermitian form $\langle ~,~\rangle_{ S}$ such that $$\label{eq:unitary}
\langle\gamma(a)s,s'\rangle_{ S}=\langle s,\gamma(a^t) s'\rangle_{ S},\quad\text{for all
}a\in C(V)\text{ and } s,s'\in S.$$ When $\dim V$ is odd, there are two simple inequivalent [complex]{} modules $(\gamma_+,S^+),$ $(\gamma_-,S^-)$ of dimension $2^{[\dim
V/2]}$. [Analogous to (\[eq:unitary\]), these modules admit an invariant positive definite Hermitian form.]{} In order to simplify the formulation of the results, we will often refer to any one of $S$, $S^+,$ $S^-$, as a spin module. Via (\[pin\]), a spin module $S$ is an irreducible unitary $\mathsf{Pin}(V)$ representation.
The pin cover ${{\widetilde {W}}}$ of the Weyl group
----------------------------------------------------
The Weyl group $W$ acts by orthogonal transformations on $V$, so one can embed $W$ as a subgroup of $\mathsf{O}(V).$ We define the group ${{\widetilde {W}}}$ in $\mathsf{Pin}(V)$: $${{\widetilde {W}}}:=p^{-1}({W})\subset \mathsf{Pin}(V),\text{ where $p$
is as in (\ref{ses}).}$$
The group ${{\widetilde {W}}}$ has a Coxeter presentation similar to that of $W$. Recall that as a Coxeter group, $W$ has a presentation: $$W=\langle s_{{{\alpha}}},~{{\alpha}}\in\Pi|\ (s_{{\alpha}}s_\beta)^{m({{\alpha}},\beta)}=1, ~{{\alpha}},\beta\in\Pi\rangle,$$ for certain positive integers $m({{\alpha}},\beta).$ Theorem 3.2 in [@Mo] exhibits ${{\widetilde {W}}}$ as $${{\widetilde {W}}}=\langle z,{{\widetilde {s}}}_{{{\alpha}}},~{{\alpha}}\in\Pi|\ z^2=1,~({{\widetilde {s}}}_{{\alpha}}{{\widetilde {s}}}_\beta)^{m({{\alpha}},\beta)}=z, ~{{\alpha}},\beta\in\Pi\rangle.$$
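In the smallest case, type $A_1$ (one simple root ${{\alpha}}$, with $m({{\alpha}},{{\alpha}})=1$), this presentation can be written out completely:

```latex
% Type A_1: the single relation reads (s~_a)^2 = z, so
\widetilde{W}
 =\langle\, z,\ \widetilde{s}_{\alpha}
   \mid z^2=1,\ \widetilde{s}_{\alpha}^{\,2}=z \,\rangle
 =\{1,\ \widetilde{s}_{\alpha},\ z,\ z\widetilde{s}_{\alpha}\}
 \cong \mathbb{Z}/4\mathbb{Z},
% a nonsplit double cover of W = Z/2Z, whose two genuine characters are
\widetilde{s}_{\alpha}\longmapsto \pm i,\qquad z\longmapsto -1 .
```

In particular, already in rank one the cover is nonsplit, since $\widetilde{s}_{\alpha}$ has order $4$ while $s_{\alpha}$ has order $2$.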
We call a representation ${{\widetilde {\sigma}}}$ of ${{\widetilde {W}}}$ genuine (resp. non-genuine) if ${{\widetilde {\sigma}}}(z)=-1$ (resp. ${{\widetilde {\sigma}}}(z)=1$). The non-genuine ${{\widetilde {W}}}$-representations are the ones that factor through $W$. We say that two genuine ${{\widetilde {W}}}$-types $\sigma_1,\sigma_2$ are associate if $\sigma_1\cong\sigma_2\otimes{\mathsf{sign}}$.
Since ${{\widetilde {W}}}\subset\mathsf{Pin}(V)$, we can regard $S$ if $\dim V$ is even (resp. $S^\pm$ if $\dim V$ is odd) as unitary (genuine) ${{\widetilde {W}}}$-representations. If $R$ spans $V$, they are irreducible representations ([@Mo Theorem 3.3]). When $\dim V$ is odd, $S^+$ and $S^-$ are associate, while if $\dim V$ is even, $S$ is self-associate.
Define the Casimir element of ${{\widetilde {W}}}$: $$\label{omWtilde}
\Omega_{{{\widetilde {W}}}}=z\sum_{\substack{{{\alpha}}>0,\beta>0\\s_{{\alpha}}(\beta)<0}}
|{{\alpha}}^\vee| |\beta^\vee| ~{{\widetilde {s}}}_{{\alpha}}{{\widetilde {s}}}_\beta\in \mathbb C[{{\widetilde {W}}}]^{{{\widetilde {W}}}}.$$ The element $\Omega_{{{\widetilde {W}}}}$ acts in every ${{\widetilde {\sigma}}}\in \widehat{{{\widetilde {W}}}}$ by a scalar, which we denote ${{\widetilde {\sigma}}}(\Omega_{{{\widetilde {W}}}}).$
Before stating Theorem \[t:intro\], we need to introduce more notation. Assume that $R$ spans $V$ and let ${\mathfrak g}$ be the complex semisimple Lie algebra with root system $\Phi$ and Cartan subalgebra ${{\mathfrak h}}=V^\vee\otimes_{{\mathbb R}}{{\mathbb C}}$, and let $G$ be the simply connected Lie group with Lie algebra ${\mathfrak g}$. Extend the inner product from $V^\vee$ to ${{\mathfrak h}}.$ Let us denote by ${{\mathcal T}}(G)$ the set of $G$-conjugacy classes of Jacobson-Morozov triples $(e,h,f)$ in ${\mathfrak g}$. We set: $$\label{eq:tzero}
{{\mathcal T}}_0(G)=\{[(e,h,f)]\in {{\mathcal T}}(G): \text{ the centralizer of }\{e,h,f\} \text{
in }{\mathfrak g}\text{ is a toral subalgebra}\}.$$ For every class in ${{\mathcal T}}(G)$, we may (and will) choose a representative $(e,h,f)$ such that $h\in{{\mathfrak h}}.$ For every nilpotent element $e$, let $A(e)$ denote the A-group in $G$, and let $\widehat {A(e)}_0$ denote the set of representations of $A(e)$ of Springer type. For every $\phi\in \widehat{A(e)}_0$, let $\sigma_{(e,\phi)}$ be the associated Springer representation. Normalize the Springer correspondence so that $\sigma_{0,\text{triv}}={\mathsf{sign}}$.
\[t:intro\] There is a surjective map $$\Psi:\widehat{{{\widetilde {W}}}}_{\mathsf{gen}} \longrightarrow {{\mathcal T}}_0(G),$$ with the following properties:
1. If $\Psi({{\widetilde {\sigma}}})=[(e,h,f)]$, then we have $${{\widetilde {\sigma}}}(\Omega_{{{\widetilde {W}}}})=\langle h,h\rangle,$$ where $\Omega_{{{\widetilde {W}}}}$ is as in (\[omWtilde\]).
2. Let $(e,h,f)\in {{\mathcal T}}_0(G)$ be given. For every Springer representation $\sigma_{(e,\phi)}$, $\phi\in\widehat {A(e)}_0$, and every spin ${{\widetilde {W}}}$-module $S$, there exists ${{\widetilde {\sigma}}}\in \Psi^{-1}[(e,h,f)]$ such that ${{\widetilde {\sigma}}}$ appears with nonzero multiplicity in the tensor product $\sigma_{(e,\phi)}\otimes S$. Conversely, for every ${{\widetilde {\sigma}}}\in \Psi^{-1}[(e,h,f)]$, there exists a spin ${{\widetilde {W}}}$-module $S$ and a Springer representation $\sigma_{(e,\phi)}$, such that ${{\widetilde {\sigma}}}$ is contained in $\sigma_{(e,\phi)}\otimes S.$
Since ${\mathsf{triv}}(\Omega_{{{\widetilde {W}}}})={\mathsf{sign}}(\Omega_{{{\widetilde {W}}}})$, Theorem \[t:intro\](1) says in particular that any two associate genuine ${{\widetilde {W}}}$-types ${{\widetilde {\sigma}}}_1,{{\widetilde {\sigma}}}_2$ lie in the same fiber of $\Psi$.
The graded Hecke algebra
------------------------
Recall the real root system $\Phi=(V,R,V^\vee,R^\vee)$. The complexifications of $V,
V^\vee$ are denoted by $V_{{\mathbb C}},V_{{\mathbb C}}^\vee$. We denote by $S(V_{{\mathbb C}})$ the symmetric algebra in $V_{{\mathbb C}}.$
The graded affine Hecke algebra ${{\mathbb H}}$ (with equal parameters) is defined as follows:
1. as a ${{\mathbb C}}$-vector space, it is $S(V_{{\mathbb C}})\otimes{{\mathbb C}}[W]$;
2. $S(V_{{\mathbb C}})$ and ${{\mathbb C}}[W]$ have the usual algebra structures as subalgebras;
3. the cross relations are $$s_{{\alpha}}\cdot\xi-s_{{\alpha}}(\xi)\cdot s_{{\alpha}}=\langle{{\alpha}},\xi\rangle,$$ for every ${{\alpha}}\in \Pi$ and $\xi\in V_{{\mathbb C}}.$
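For instance, taking $\xi={{\alpha}}$ in the cross relation (so that $s_{{\alpha}}({{\alpha}})=-{{\alpha}}$) gives the rank-one commutation rule:

```latex
% xi = alpha: s_a(alpha) = -alpha, so the cross relation becomes
s_{\alpha}\,\alpha + \alpha\, s_{\alpha}
  = \langle \alpha,\alpha\rangle = |\alpha|^2,
% i.e. s_a can be moved past alpha at the cost of a constant:
s_{\alpha}\,\alpha = -\alpha\, s_{\alpha} + |\alpha|^2 .
```

Relations of this form are what distinguish ${{\mathbb H}}$ from the plain smash product $S(V_{{\mathbb C}})\rtimes{{\mathbb C}}[W]$, for which the right-hand side would be zero.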
\[d:casimir\] Let $\{{{\omega}}_i:i=1,\dots,n\}$ and $\{{{\omega}}^i: i=1,\dots,n\}$ be dual bases of $V$ with respect to $\langle~,~\rangle$. Define the Casimir element of ${{\mathbb H}}$: $\Omega=\sum_{i=1}^n\omega_i\omega^i\in {{\mathbb H}}$.
It is easy to see that the element $\Omega$ is independent of the choice of bases and central in ${{\mathbb H}}$. Moreover, if $(\pi,X)$ is an irreducible ${{\mathbb H}}$-module with central character $\chi_\nu$ for $\nu \in V_{{\mathbb C}}^\vee$, then $\Omega$ acts in $X$ by the scalar $\langle \nu,\nu\rangle.$
The algebra ${{\mathbb H}}$ has a natural conjugate linear anti-involution defined on generators as follows: $$\label{eq:tomdef}
\begin{aligned}
&w^*={w^{-1}},\quad w\in W,\\
&\omega^*=-\omega+\sum_{\beta>0} (\beta,\omega) {s_\beta},\quad
{\omega\in V}.
\end{aligned}$$
An ${{\mathbb H}}$-module $(\pi,X)$ is said to be Hermitian if there exists a Hermitian form $(~,~)_X$ on $X$ which is invariant in the sense that $(\pi(h)x,y)_X={{(x,\pi(h^*)y)}_X},$ for all $h\in{{\mathbb H}},$ $ x,y\in X$. If such a form exists which is also positive definite, then $X$ is said to be unitary.
For every $\omega\in V$, define $$\label{omtilde}
{{\widetilde {{{\omega}}}}}={{\omega}}-\frac 12 \sum_{\beta>0} (\beta,\omega) {s_\beta}
\; \in \; {{\mathbb H}}.$$ It is immediate that ${{\widetilde {\omega}}}^* = - {{\widetilde {\omega}}}$.
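The claim follows by a one-line computation from (\[eq:tomdef\]), using $s_\beta^*=s_\beta$ and the fact that the coefficients $(\beta,{{\omega}})$ are real:

```latex
\widetilde{\omega}^{\,*}
 = \omega^{*} - \tfrac{1}{2}\sum_{\beta>0}(\beta,\omega)\,s_{\beta}
 = \Big(-\omega + \sum_{\beta>0}(\beta,\omega)\,s_{\beta}\Big)
   - \tfrac{1}{2}\sum_{\beta>0}(\beta,\omega)\,s_{\beta}
 = -\widetilde{\omega}.
```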
\[d:dirac\] Let $\{{{\omega}}_i\}$, $\{{{\omega}}^i\}$ be dual bases of $V$. The Dirac element is defined as $${{\mathcal D}} = \sum_i {{\widetilde {\omega}}}_i \otimes \omega^i \in {{\mathbb H}}\otimes {C(V)}.$$ It is elementary to verify that ${{\mathcal D}}$ does not depend on the choice of dual bases.
We will usually work with a fixed spin module $(\gamma, S)$ for [$C(V)$]{} and a fixed ${{\mathbb H}}$-module $(\pi,X)$. Define the Dirac operator for $X$ (and $S$) as $D=(\pi \otimes \gamma)({{\mathcal D}})$.
Suppose $X$ is a Hermitian ${{\mathbb H}}$-module with invariant form $(~,~)_X$. Endow $X \otimes S$ with the Hermitian form $(x\otimes s, x' \otimes s')_{X \otimes S}
= (x,x')_X \langle s,s'\rangle_S$. [Analogous to results of Parthasarathy in the real case,]{} the operator $D$ is self adjoint with respect to $(~,~)_{X \otimes S}$, $$( D (x\otimes s), x'\otimes s')_{X\otimes S}=
(x\otimes s,D(x'\otimes s'))_{X\otimes S}.$$ Thus a Hermitian ${{\mathbb H}}$-module is unitary only if $$\label{eq:dcriterion}
(D^2 (x\otimes s), x\otimes s)_{X\otimes S} \geq 0, \qquad \text{ for all $x\otimes s \in X \otimes S$}.$$ We write $\Delta_{{{\widetilde {W}}}}$ for the diagonal embedding of ${{\mathbb C}}[{{\widetilde {W}}}]$ into [${{\mathbb H}}\otimes C(V)$]{} defined by extending $\Delta_{{{\widetilde {W}}}}({{\widetilde {w}}}) = {p({{\widetilde {w}}})} \otimes {{\widetilde {w}}}$ linearly.
For ${{\widetilde {w}}}\in {{\widetilde {W}}}$, one can easily see that $$\label{Winv}
\Delta_{{{\widetilde {W}}}}({{\widetilde {w}}}) {{\mathcal D}}
={\mathsf{sign}}({{\widetilde {w}}}) {{\mathcal D}} \Delta_{{{\widetilde {W}}}}({{\widetilde {w}}})$$ as elements of [${{\mathbb H}}\otimes C(V)$]{}. In particular, the kernel of the Dirac operator on $X \otimes S$ is invariant under ${{\widetilde {W}}}$.
\[t:dirac\] The square of the Dirac element equals $${{\mathcal D}}^2=-\Omega\otimes 1+\frac 14\Delta_{{{\widetilde {W}}}}(\Omega_{{{\widetilde {W}}}}),$$ in ${{\mathbb H}}\otimes C(V)$.
Dirac cohomology
----------------
To have a uniform notation, we will denote a spin module by $S^{{\epsilon}}$. If $\dim V$ is even, then $S^{{\epsilon}}$ is $S$, the unique spin module, and if $\dim V$ is odd, then ${{\epsilon}}$ could be $+$ or $-$.
\[d:dcoh\] In the setting of Definition \[d:dirac\], define $$H^D_{{\epsilon}}(X):=\ker D\big / \left(\ker D\cap {\operatorname{Im}}D\right )$$ and call it the Dirac cohomology of $X$. (The symbol ${{\epsilon}}$ denotes the dependence on the spin module $S^{{\epsilon}}$.) If $X$ is unitary, the self-adjointness of $D$ implies that $\ker(D) \cap {\operatorname{Im}}(D) = 0$, and so $H^D_{{\epsilon}}(X) = \ker (D)$.
Vogan’s conjecture takes the following form.
\[t:hpv\] Suppose $(\pi,X)$ is an ${{\mathbb H}}$ module with central character $\chi_\nu$ with $\nu \in V_{{\mathbb C}}^\vee$. Suppose that $H^D_{{\epsilon}}(X) \neq 0$ and let $({{\widetilde {\sigma}}},{{\widetilde {U}}})$ be an irreducible representation of ${{\widetilde {W}}}$ such that ${\operatorname{Hom}}_{{{\widetilde {W}}}}({{\widetilde {U}}}, H^D_{{\epsilon}}(X) )
\neq 0$. If $\Psi({{\widetilde {\sigma}}})=[(e,h,f)]\in{{\mathcal T}}_0(G)$, then $\nu=\frac 12h.$
As a corollary, we find the following formula for $H^D_{{\epsilon}}(X).$
Assume $X$ is an ${{\mathbb H}}$ module with central character $\chi_{\frac 12 h}$, for some $[(e,h,f)]\in{{\mathcal T}}_0(G)$ (otherwise $H^D_{{\epsilon}}(X)=0$). Then, as a ${{\widetilde {W}}}$-module $$\label{HDdecomp}
H^D_{{\epsilon}}(X)=\sum_{{{\widetilde {\sigma}}}\in \Psi^{-1}(e,h,f)}\sum_{\mu\in\widehat W}[{{\widetilde {\sigma}}}:\mu\otimes S^{{\epsilon}}][X|_W:\mu]~{{\widetilde {\sigma}}}.$$
Theorem \[t:hpv\] has an easy weak converse, which will be useful in applications.
\[criterion\] Assume that $(\pi,X)$ is a unitary ${{\mathbb H}}$ module with central character $\chi_\nu$, $\nu\in V_{{\mathbb C}}^\vee$ and that there exists an irreducible ${{\widetilde {W}}}$-type $({{\widetilde {\sigma}}},{{\widetilde {U}}})$ such that ${\operatorname{Hom}}_{{{\widetilde {W}}}}({{\widetilde {U}}},X\otimes S^{{\epsilon}})\neq 0$ and $\langle\nu,\nu\rangle={{\widetilde {\sigma}}}(\Omega_{{{\widetilde {W}}}}).$ Then ${\operatorname{Hom}}_{{{\widetilde {W}}}}({{\widetilde {U}}}, H^D_{{\epsilon}}(X))\neq 0,$ and in particular $H^D_{{\epsilon}}(X)\neq 0.$
Let $x\otimes s$ be an element of $X\otimes S^{{\epsilon}}$ in the isotypic component of ${{\widetilde {\sigma}}}$. Then $D^2(x\otimes
s)=\left(-\langle\nu,\nu\rangle+{{\widetilde {\sigma}}}(\Omega_{{{\widetilde {W}}}})\right)(x\otimes s)=0$. Since $X$ is assumed unitary, the operator $D$ is self-adjoint on $X\otimes S^{{\epsilon}}$ and thus $\ker D^2=\ker D.$ This implies $x\otimes s\in \ker D\ (=H_{{\epsilon}}^D(X)).$
An induction lemma {#s:2.6}
------------------
Let $(V_M,R_M,V_M^\vee,R_M^\vee)$ be a root subsystem of $(V,R,V^\vee,R^\vee).$ Let $\Pi_M\subset \Pi$ be the corresponding simple roots and $W_M\subset W$ the reflection subgroup. Let ${{\mathbb H}}_M$ denote the Hecke subalgebra of ${{\mathbb H}}$ given by this root subsystem. Denote by $V_N$ the orthogonal complement of $V_M$ in $V$ with respect to the fixed inner product $\langle~,~\rangle.$
Recall that the graded tensor product $A\hat\otimes B$ of two ${{\mathbb Z}}/2{{\mathbb Z}}$-graded algebras $A$ and $B$ is $A\otimes B$ as a vector space, but with multiplication defined by $$(a_1\otimes b_1)(a_2\otimes
b_2)=(-1)^{\text{deg}(b_1)\text{deg}(a_2)} a_1a_2\otimes b_1b_2.$$
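In particular, for degree-one elements $a\in C(V_M)$ and $b\in C(V_N)$ the sign rule gives:

```latex
(1\otimes b)(a\otimes 1) = (-1)^{1\cdot 1}\, a\otimes b
  = -\,(a\otimes 1)(1\otimes b),
% so vectors of V_M and V_N anticommute in C(V_M) \hat{\otimes} C(V_N),
% matching the Clifford relation w w' + w' w = -2<w,w'> = 0
% for orthogonal w in V_M, w' in V_N.
```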
There is an isomorphism of algebras $C(V)\cong C(V_M)\hat\otimes C(V_N).$
If an orthonormal basis of $V_M$ is $\{{{\omega}}_1,\dots,{{\omega}}_k\}$ and an orthonormal basis of $V_N$ is $\{{{\omega}}_{k+1},\dots,{{\omega}}_n\}$, then the isomorphism is given by ${{\omega}}_{i_1}\dots{{\omega}}_{i_l}\otimes{{\omega}}_{j_1}\dots{{\omega}}_{j_r}\mapsto
{{\omega}}_{i_1}\dots{{\omega}}_{i_l}{{\omega}}_{j_1}\dots{{\omega}}_{j_r}$, where $i_1,\dots,i_l\in\{1,\dots,k\}$ and $j_1,\dots,j_r\in\{k+1,\dots,n\}.$
Since $W_M$ acts trivially on $V_N$, and therefore ${{\widetilde {W}}}_M$ acts trivially on every $C(V_N)$-module, we see that as ${{\widetilde {W}}}_M$-representations: $$\label{restspin}
\begin{aligned}
&S\cong \oplus_{2^{\dim V_N/2}} S_M,\quad \text{ if }\dim V,\dim
V_M\text{ are both even};\\
&S^\pm\cong \oplus_{2^{\dim V_N/2}} S_M^\pm,\quad \text{ if }\dim V,\dim
V_M\text{ are both odd};\\
&S^\pm\cong \oplus_{2^{(\dim V_N-1)/2}} S_M,\quad \text{ if }\dim
V\text{ is odd and }\dim V_M\text{ is even};\\
&S\cong \oplus_{2^{(\dim V_N-1)/2}} (S_M^++S_M^-),\quad \text{ if }\dim
V\text{ is even and }\dim
V_M\text{ is odd}.\\
\end{aligned}$$
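The multiplicities in (\[restspin\]) are forced by dimension counting: $\dim S=2^{n/2}$ when $\dim V=n$ is even, while $\dim S^\pm=2^{(n-1)/2}$ when $n$ is odd. As a purely illustrative sanity check (not part of the argument), a short script can verify that both sides of every branch have the same dimension:

```python
# Illustrative dimension check for the four branches of the spin-module
# restriction: dim S = 2^(n/2) for n even; dim S^+ = dim S^- = 2^((n-1)/2) for n odd.

def spin_dims(n):
    """Return (number of spin modules of C(V), dimension of each) for dim V = n."""
    if n % 2 == 0:
        return 1, 2 ** (n // 2)
    return 2, 2 ** ((n - 1) // 2)

def check_branch(n, m):
    """Compare dimensions on both sides of the restriction formula
    for dim V = n, dim V_M = m, dim V_N = n - m."""
    nN = n - m
    _, dS = spin_dims(n)
    _, dSM = spin_dims(m)
    if n % 2 == 0 and m % 2 == 0:   # S   = 2^{nN/2} copies of S_M
        return dS == 2 ** (nN // 2) * dSM
    if n % 2 == 1 and m % 2 == 1:   # S^± = 2^{nN/2} copies of S_M^±
        return dS == 2 ** (nN // 2) * dSM
    if n % 2 == 1 and m % 2 == 0:   # S^± = 2^{(nN-1)/2} copies of S_M
        return dS == 2 ** ((nN - 1) // 2) * dSM
    # n even, m odd:  S = 2^{(nN-1)/2} copies of S_M^+ + S_M^-
    return dS == 2 ** ((nN - 1) // 2) * (2 * dSM)

assert all(check_branch(n, m) for n in range(1, 13) for m in range(1, n + 1))
```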
The following lemma will be our main criterion for proving that certain induced modules have nonzero Dirac cohomology. In order to reduce the number of cases, denote ${{\mathcal S}}=S$ if $\dim V$ is even, and ${{\mathcal S}}=S^++S^-$ if $\dim V$ is odd, and similarly define ${{\mathcal S}}_M.$ In particular, ${{\mathcal S}}$ is self-contragredient.
\[indlemma\] Let $\pi_M$ be an ${{\mathbb H}}_M$-module, and $\pi={{\mathbb H}}\otimes_{{{\mathbb H}}_M}\pi_M.$
1. ${\operatorname{Hom}}_{{{\widetilde {W}}}}[{{\widetilde {\sigma}}},\pi\otimes {{\mathcal S}}]=\frac{\dim {{\mathcal S}}}{\dim {{\mathcal S}}_M}{\operatorname{Hom}}_{{{\widetilde {W}}}_M}[{{\widetilde {\sigma}}}|_{{{\widetilde {W}}}_M},\pi_M\otimes {{\mathcal S}}_M].$
2. Assume that $\pi_M$ is unitary and the ${{\widetilde {W}}}_M$-type ${{\widetilde {\sigma}}}_M$ occurs in $H^D(\pi_M)$. If there exists a ${{\widetilde {W}}}$-type ${{\widetilde {\sigma}}}$ such that
1. ${\operatorname{Hom}}_{{{\widetilde {W}}}_M}[{{\widetilde {\sigma}}}_M,{{\widetilde {\sigma}}}]\neq 0$;
2. if $\Psi({{\widetilde {\sigma}}})=[(e,h,f)]$, then the central character of $\pi$ is $\chi_\pi=h$,
then ${{\widetilde {\sigma}}}$ occurs in $H^D(\pi)$.
Part (b) is an immediate consequence of (a) using Proposition \[criterion\]. To prove (a), we use Frobenius reciprocity and the restriction of ${{\mathcal S}}$ to ${{\widetilde {W}}}_M$: $$\begin{aligned}
{\operatorname{Hom}}_{{{\widetilde {W}}}}[{{\widetilde {\sigma}}},\pi\otimes {{\mathcal S}}]&={\operatorname{Hom}}_W[{{\widetilde {\sigma}}}\otimes {{\mathcal S}},\operatorname{Ind}_{W_M}^W\pi_M]={\operatorname{Hom}}_{W_M}[({{\widetilde {\sigma}}}\otimes {{\mathcal S}})|_{W_M},\pi_M]\\
&={\operatorname{Hom}}_{{{\widetilde {W}}}_M}[{{\widetilde {\sigma}}}|_{{{\widetilde {W}}}_M},\pi_M\otimes {{\mathcal S}}|_{{{\widetilde {W}}}_M}]=\frac{\dim {{\mathcal S}}}{\dim {{\mathcal S}}_M}{\operatorname{Hom}}_{{{\widetilde {W}}}_M}[{{\widetilde {\sigma}}}|_{{{\widetilde {W}}}_M},\pi_M\otimes {{\mathcal S}}_M].\\
\end{aligned}$$
Spherical modules
-----------------
An ${{\mathbb H}}$-module $X$ is called spherical if ${\operatorname{Hom}}_W[{\mathsf{triv}},X]\neq 0.$ The (spherical) principal series modules of ${{\mathbb H}}$ are defined as the induced modules $$X(\nu)={{\mathbb H}}\otimes_{S(V_{{\mathbb C}})}{{\mathbb C}}_\nu,$$ for $\nu\in V_{{\mathbb C}}^\vee.$ Since $X(\nu)\cong {{\mathbb C}}[W]$ as $W$-modules, there is a unique [irreducible]{} spherical ${{\mathbb H}}$-subquotient $L(\nu)$ of $X(\nu)$. It is well known that:
1. $L(\nu)\cong L(w\nu),$ for every $w\in W$;
2. if $\nu$ is $R^+$-dominant, then $L(\nu)$ is the unique irreducible quotient of $X(\nu)$;
3. every irreducible spherical ${{\mathbb H}}$-module is isomorphic to some $L(\nu)$ with $\nu$ $R^+$-dominant.
Recall the Lie algebra ${\mathfrak g}$ that we attached to the root system $\Phi.$ The identification ${{\mathfrak h}}=V_{{\mathbb C}}^\vee$ allows us to view $\nu$ as an element of ${{\mathfrak h}}.$ Next, consider ${\mathfrak g}_1=\{x\in{\mathfrak g}: [\nu,x]=x\}$, the $ad$ $1$-eigenspace of $\nu$ on ${\mathfrak g}.$ The stabilizer $G_0=\{g\in G: Ad(g)\nu=\nu\}$ acts on ${\mathfrak g}_1$ with finitely many orbits; let $e$ be an element of the unique open dense $G_0$-orbit. Lusztig’s geometric realization of ${{\mathbb H}}$ and classification of irreducible ${{\mathbb H}}$-modules imply in particular the following statement.
\[sphrest\] Let $\nu\in V_{{\mathbb C}}^\vee$ be given and let $e$ be a nilpotent element of ${\mathfrak g}$ attached [to $\nu$]{} by the procedure above. Then the spherical module $L(\nu)$ contains the Springer representation $\sigma_{(e,1)}$ with multiplicity one.
The second result that we need is the unitarizability of the spherical unipotent ${{\mathbb H}}$-modules.
\[unitunip\] For every Lie triple $(e,h,f)$, the spherical module $L(\frac 12 h)$ is unitary.
Now we can state the classification of spherical modules with nonzero Dirac cohomology.
We say that an ${{\mathbb H}}$-module $X$ has nonzero Dirac cohomology if for a choice of spin module $S^{{\epsilon}}$, $H^D_{{\epsilon}}(X)\neq 0$.
Let $[(e,h,f)]\in{{\mathcal T}}_0(G)$ be given and assume $G$ is simple. The results of [@C] give a concrete description in every Lie type of the map $\Psi$ from Theorem \[t:intro\]. In particular, there is [either]{} only one self-associate ${{\widetilde {W}}}$-type which we denote ${{\widetilde {\sigma}}}_{(e,1)}$, or two associate ${{\widetilde {W}}}$-types denoted ${{\widetilde {\sigma}}}_{(e,1)}^\pm$, which appear in the fiber $\Psi^{-1}(e,h,f)$ and can occur in the decomposition of the tensor product $\sigma_{(e,1)}\otimes S^{{\epsilon}}.$
\[HDsphunip\] An irreducible spherical module $L(\nu)$ has nonzero Dirac cohomology if and only if $\nu=w\cdot \frac 12h$ for some $[(e,h,f)]\in{{\mathcal T}}_0(G).$
Assume that $H^D_{{\epsilon}}(L(\nu))\neq 0.$ Then there exists a genuine ${{\widetilde {W}}}$-type ${{\widetilde {\sigma}}}$ occurring in $H^D_{{\epsilon}}(L(\nu))$, such that $\Psi({{\widetilde {\sigma}}})=[(e,h,f)]\in{{\mathcal T}}_0(G)$. By Theorem \[t:hpv\], $\nu=w\cdot \frac 12 h$.
Conversely, fix $[(e,h,f)]\in {{\mathcal T}}_0(G).$ The spherical module $L(\frac 12 h)$ contains $\sigma_{(e,1)}$ with multiplicity one by Proposition \[sphrest\], and it is unitary by Proposition \[unitunip\]. From this, Proposition \[criterion\] implies immediately that one of the ${{\widetilde {W}}}$-types in $\Psi^{-1}(e,h,f)$ occurs in $H^D_{{\epsilon}}(L(\frac 12 h)),$ for some ${{\epsilon}}$, and therefore $L(\frac 12 h)$ has nonzero Dirac cohomology.
In order to investigate the precise formula for $H^D_{{\epsilon}}(L(h/2))$, one uses (\[HDdecomp\]) and the results of Borho-MacPherson [@BMcP] about the $W$-structure of the Springer representations in $A(e)$-isotypic components of the full cohomology of a Springer fiber. In our setting, this says that, as a $W$-module: $$\label{bomac}
L(h/2)=\sigma_{(e,1)}+\sum_{e'>e}\sum_{\phi'\in \widehat A(e)_0}m_{e',\phi'}\sigma_{(e',\phi')},$$ for some integers $m_{e',\phi'}\ge 0$. Here $e'>e$ means the closure ordering of nilpotent orbits, i.e., $e\in \overline{G\cdot
e'}\setminus G\cdot e'$. We make the following conjecture.
\[conj\] Let ${{\widetilde {\sigma}}}$ be a ${{\widetilde {W}}}$-type such that $\Psi({{\widetilde {\sigma}}})=[(e',h',f')].$ Then $${\operatorname{Hom}}_{{{\widetilde {W}}}}[{{\widetilde {\sigma}}},\sigma_{(e,\phi)}\otimes S^{{\epsilon}}]\neq 0$$ only if $e'\ge e.$
If this conjecture is true, then if we tensor by $S^{{\epsilon}}$ in (\[bomac\]), every ${{\widetilde {W}}}$-type coming from a $\sigma_{(e',\phi')}\otimes S^{{\epsilon}}$, $e'>e$, would correspond under the map $\Psi$ to a triple $(e'',h'',f'')$ with $e''\ge e'>e.$ In particular, [$|h''|>|h|,$]{} so the formula for $D_{{\epsilon}}^2$ (Theorem \[t:dirac\]) implies that none of these ${{\widetilde {W}}}$-types can contribute to $H^D_{{\epsilon}}(L(h/2)).$ Thus the only nontrivial contribution to $H^D_{{\epsilon}}(L(h/2))$ comes from $\sigma_{(e,1)}\otimes S^{{\epsilon}}$, and we would have: $$\label{eqconj}
\begin{aligned}
&H^D_{{\epsilon}}(L(h/2))=[{{\widetilde {\sigma}}}_{(e,1)}:\sigma_{(e,1)}\otimes S^{{\epsilon}}]~{{\widetilde {\sigma}}}_{(e,1)},\text{ if }{{\widetilde {\sigma}}}_{(e,1)}\cong{{\widetilde {\sigma}}}_{(e,1)}\otimes{\mathsf{sign}};\\
&H^D_{{\epsilon}}(L(h/2))=[{{\widetilde {\sigma}}}_{(e,1)}^+:\sigma_{(e,1)}\otimes S^{{\epsilon}}]~{{\widetilde {\sigma}}}_{(e,1)}^++[{{\widetilde {\sigma}}}_{(e,1)}^-:\sigma_{(e,1)}\otimes S^{{\epsilon}}]~{{\widetilde {\sigma}}}_{(e,1)}^-,\text{ otherwise}.
\end{aligned}$$ In section \[sec:3\], we will show that Conjecture \[conj\] holds when ${{\mathbb H}}$ is a Hecke algebra of type $A$, and therefore in that case (\[eqconj\]) is true (see Lemma \[prelimresults\]). Further evidence for this conjecture is provided by the computation of the Dirac index for tempered ${{\mathbb H}}$-modules in [@CT Theorem 1].
Nonzero Dirac cohomology for type $A$ {#sec:3}
=====================================
In this section, we specialize to the case of the graded Hecke algebra attached to the root system $\Phi=(V,R,V^\vee,R^\vee)$ of $gl(n).$ Explicitly, $V={{\mathbb R}}^n$ with a basis $\{{{\epsilon}}_1,\dots,{{\epsilon}}_n\}$, $R=\{{{\epsilon}}_i-{{\epsilon}}_j: 1\le
i\neq j\le n\}.$ To simplify notation, we will also use the coordinates $\{{{\epsilon}}_i\}$ to describe $V^\vee\cong {{\mathbb R}}^n$ and $R^\vee.$ We choose positive roots $R^+=\{{{\epsilon}}_i-{{\epsilon}}_j: 1\le i<j\le n\}$. The simple roots are therefore $\Pi=\{{{\epsilon}}_i-{{\epsilon}}_{i+1}: 1\le i<n\}.$ The Weyl group is the symmetric group $S_n$ and we write $s_{i,j}$ for the reflection in the root ${{\epsilon}}_i-{{\epsilon}}_j.$
The graded Hecke algebra ${{\mathbb H}}_n$ for $gl(n)$ is therefore generated by $S_n$ and the [set $\{{{\epsilon}}_i: 1\le i\le n\}$]{} subject to the commutation relations: $$\begin{aligned}
&s_{i,i+1} {{\epsilon}}_k={{\epsilon}}_k s_{i,i+1},\qquad k\neq i,i+1;\\
&s_{i,i+1}{{\epsilon}}_i-{{\epsilon}}_{i+1}s_{i,i+1}=1.\end{aligned}$$
We review the classification of the unitary dual of ${{\mathbb H}}_n$ and then determine which unitary ${{\mathbb H}}_n$-modules have nonzero Dirac cohomology.
Langlands classification
------------------------
We begin by recalling the Langlands classification for ${{\mathbb H}}_n$.
The Steinberg module ${\mathsf{St}}$ is the ${{\mathbb H}}_n$-module whose restriction to ${{\mathbb C}}[S_n]$ is the ${\mathsf{sign}}$-representation, and whose only $S(V_{{\mathbb C}})$ weight is $-\rho^\vee=-\frac 12\sum_{{{\alpha}}\in R^+} {{\alpha}}^\vee.$
Let $\lambda=(n_1,n_2,\dots,n_r)$ be a composition of $n$. (This means that $n_1+n_2+\dots+n_r=n$, but there is no order assumed between the $n_i$’s, [*e.g.*]{} $(2,1)$ and $(1,2)$ are different compositions of $3$.) For every $1\le i\le r,$ regard the Hecke algebra ${{\mathbb H}}_{n_i}$ as the subalgebra of ${{\mathbb H}}$ generated by $\{{{\epsilon}}_j,{{\epsilon}}_{j+1},\dots,{{\epsilon}}_{j+n_i-1}\}$ and $\{s_{j,j+1},s_{j+1,j+2},\dots, s_{j+n_i-2,j+n_i-1}\}$, where $j=n_1+n_2+\dots+n_{i-1}+1.$ Then $${{\mathbb H}}_\lambda={{\mathbb H}}_{n_1}\times{{\mathbb H}}_{n_2}\times\dots\times {{\mathbb H}}_{n_r}$$ is a (parabolic) subalgebra of ${{\mathbb H}}_n.$ For every $r$-tuple $\underline\nu=(\nu_1,\nu_2,\dots,\nu_r)$ of complex numbers, we may consider the induced module $$\label{indmod}
I_\lambda(\underline\nu)={{\mathbb H}}_n\otimes_{{{\mathbb H}}_\lambda}\left(({\mathsf{St}}\otimes{{\mathbb C}}_{\nu_1})\boxtimes\dots\boxtimes({\mathsf{St}}\otimes{{\mathbb C}}_{\nu_r})\right).$$ If $\underline\nu$ satisfies the dominance condition $$\label{dom}
{\operatorname{Re}}(\nu_1)\ge{\operatorname{Re}}(\nu_2)\ge\dots\ge{\operatorname{Re}}(\nu_r),$$ we call $I_\lambda(\underline\nu)$ a standard module.
\[irrclass\]
1. Let $\lambda$ be a composition of $n$ and $I_\lambda(\underline\nu)$ a standard module as in (\[indmod\]) and (\[dom\]). Then $I_\lambda(\underline\nu)$ has a unique irreducible quotient $L_\lambda(\underline\nu)$.
2. Every irreducible ${{\mathbb H}}_n$-module is isomorphic to an $L_\lambda(\underline\nu)$ as in (a).
Recall that by Young’s construction, the $S_n$-types are in one to one correspondence with partitions of $n$. We write $\sigma_\lambda$ for the $S_n$-type parameterized by the partition $\lambda$ of $n$. For example, $\sigma_{(n)}={\mathsf{triv}}$ and $\sigma_{(1^n)}={\mathsf{sign}}.$ If $\lambda^t$ denotes the transpose partition of $\lambda$, then $\sigma_\lambda\otimes{\mathsf{sign}}=\sigma_{\lambda^t}.$ Finally, every composition $\lambda$ of $n$ gives rise to a partition of $n$ by reordering, and we denote the corresponding $S_n$-type by $\sigma_\lambda$ again.
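Transposition of partitions, used throughout (e.g. $\sigma_\lambda\otimes{\mathsf{sign}}=\sigma_{\lambda^t}$), is an elementary operation; the following is a brief illustrative sketch of ours, not taken from the paper:

```python
def transpose(la):
    """Conjugate of a partition, given as a weakly decreasing tuple of parts:
    the (j+1)-st part of the transpose counts the parts of la exceeding j."""
    if not la:
        return ()
    return tuple(sum(1 for part in la if part > j) for j in range(la[0]))

# triv = sigma_{(n)} and sign = sigma_{(1^n)} correspond to transposed partitions:
assert transpose((7,)) == (1,) * 7
# transposition is an involution:
assert transpose(transpose((4, 2, 1))) == (4, 2, 1)
```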
\[lwt\] In the notation of Theorem \[irrclass\], the irreducible module $L_\lambda(\underline\nu)$ contains the $S_n$-type $\sigma_{\lambda^t}$ with multiplicity one.
Speh modules
------------
The building blocks of the unitary dual are the Speh modules whose construction we review now.
The following lemma is well-known and elementary.
For every $c\in {{\mathbb C}}$, there exists a surjective algebra homomorphism $\tau_c:
{{\mathbb H}}_n\to {{\mathbb C}}[S_n]$ given by: $$\begin{aligned}
&w\mapsto w,\quad w\in S_n;\\
&{{\epsilon}}_k\mapsto s_{k,k+1}+s_{k,k+2}+\dots+s_{k,n}+c,\quad 1\le k<n;\\
&{{\epsilon}}_n\mapsto c.\end{aligned}$$
We check the commutation relations. It is clear that if $k\neq i,i+1$, then $s_{i,i+1}$ commutes with ${{\epsilon}}_k$: when $i+1<k$, $s_{i,i+1}$ commutes with every $s_{k,j}$, $k<j\le n$, and when $i>k$, it commutes with the sum $s_{k,i}+s_{k,i+1}$ and with each $s_{k,j},$ $j\neq i,i+1$.
Next, $s_{i,i+1}{{\epsilon}}_i=1+\sum_{j>i+1} w_{(i,j,i+1)}+c s_{i,i+1}$ and ${{\epsilon}}_{i+1}s_{i,i+1}=\sum_{j>i+1} w_{(i,j,i+1)}+c s_{i,i+1}$, where $w_{(i,j,i+1)}$ denotes the element of $S_n$ with cycle structure $(i,j,i+1).$ The claim follows.
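The relations can also be verified mechanically in the group algebra ${{\mathbb C}}[S_n]$ for small $n$. The sketch below is illustrative only (not from the paper); permutations are $0$-indexed tuples, so `eps(n, k, c)` is the image of ${{\epsilon}}_{k+1}$:

```python
from collections import defaultdict

def transposition(n, a, b):
    """The transposition (a b) in S_n, encoded as a 0-indexed tuple p with p[i] = image of i."""
    p = list(range(n))
    p[a], p[b] = p[b], p[a]
    return tuple(p)

def mult(x, y):
    """Product in the group algebra C[S_n]; elements are dicts {permutation: coefficient}."""
    out = defaultdict(float)
    for p, cp in x.items():
        for q, cq in y.items():
            out[tuple(p[q[i]] for i in range(len(p)))] += cp * cq
    return {p: c for p, c in out.items() if c != 0}

def eps(n, k, c):
    """tau_c image of epsilon_{k+1}: the sum of the transpositions s_{k,j}, j > k,
    plus c times the identity (all indices 0-based)."""
    e = defaultdict(float)
    for j in range(k + 1, n):
        e[transposition(n, k, j)] += 1.0
    e[tuple(range(n))] += c
    return dict(e)

n, c = 4, 0.5
one = {tuple(range(n)): 1.0}
for i in range(n - 1):
    s = {transposition(n, i, i + 1): 1.0}
    # relation s_{i,i+1} eps_i - eps_{i+1} s_{i,i+1} = 1:
    diff = dict(mult(s, eps(n, i, c)))
    for p, v in mult(eps(n, i + 1, c), s).items():
        diff[p] = diff.get(p, 0.0) - v
    assert {p: v for p, v in diff.items() if v != 0} == one
    # relation s_{i,i+1} eps_k = eps_k s_{i,i+1} for k != i, i+1:
    for k in range(n):
        if k not in (i, i + 1):
            assert mult(s, eps(n, k, c)) == mult(eps(n, k, c), s)
```

The same loop passes for other small $n$ and arbitrary $c$, as the lemma asserts.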
For every partition $\lambda$ of $n$ and $c\in {{\mathbb C}}$, define the ${{\mathbb H}}_n$-module $\tau^*_c(\lambda)$ obtained by pulling back $\sigma_\lambda$ to ${{\mathbb H}}_n$ via $\tau_c.$
Viewing $\lambda$ as a left justified Young diagram, define the $c$-content of the $(i,j)$ box of $\lambda$ to be $c+(j-i)$, and the $c$-content of $\lambda$ to be the set of $c$-contents of boxes. This is best explained by an example. If $\lambda$ is the partition $(3,3,1)$ of $n=7$, the $0$-content is the Young tableau $$\begin{array}{ccc}
0 & 1 & 2 \\
-1 & 0 & 1 \\
-2 & &
\end{array}$$ For the $c$-content, add $c$ to the entry in every box.
\[content\] The central character of $\tau^*_c(\lambda)$ is the ($S_n$-orbit of the) $c$-content of the partition $\lambda$.
This follows from the known values of the simultaneous eigenvalues of the Jucys-Murphy elements $s_{k,k+1}+s_{k,k+2}+\dots+s_{k,n}$ used to define $\tau_c.$ See for example [@OV Theorem 5.8].
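For illustration only (this computation is not in the paper), the $c$-content multiset can be generated directly from the partition; it reproduces the $0$-content of $(3,3,1)$ above:

```python
def contents(la, c=0):
    """Sorted multiset of c-contents c + (j - i) over the boxes (i, j) of la,
    with rows i and columns j indexed from 1."""
    return sorted(c + (j - i) for i, part in enumerate(la, start=1)
                  for j in range(1, part + 1))

assert contents((3, 3, 1)) == [-2, -1, 0, 0, 1, 1, 2]
```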
If $\lambda$ is a box partition, i.e., $\lambda=(\underbrace{m,m,\dots,m}_{d})$, for some $m,d$ such that $n=md$, and $c=0$ when $m+d$ is even or $c=\frac 12$ when $m+d$ is odd, call the module $\tau^*_c(\lambda)$ a Speh module, and denote it by $a(m,d).$
\[spehclass\] In the notation of Theorem \[irrclass\], the Speh module $a(m,d)$ is isomorphic to $L_{\lambda^t}(\frac{m-1}2,\frac{m-3}2,\dots,-\frac{m-1}2),$ where $\lambda^t=(\underbrace{d,d,\dots,d}_{m})$.
This is immediate from Theorem \[irrclass\], Theorem \[lwt\] and Lemma \[content\].
The unitary dual
----------------
The classification of irreducible ${{\mathbb H}}_n$-modules which admit a nondegenerate invariant hermitian form is a particular case of the classical result of [@KZ], as formulated in the Hecke algebra setting by [@BM].
If $\lambda=(n_1,\dots,n_r)$ is a composition of $n$, let $R_\lambda\subset R$ denote the root subsystem of the Levi subalgebra $gl(n_1)\oplus\dots\oplus gl(n_r)\subset gl(n).$ If $w\in
S_n$ has the property that $wR_\lambda^+=R_\lambda^+,$ then $w$ gives rise to an algebra automorphism of ${{\mathbb H}}_\lambda$, and therefore $w$ acts on the set of irreducible ${{\mathbb H}}_\lambda$-modules.
\[herm\] Let $\lambda=(n_1,\dots,n_r)$ be a composition of $n$, and $\underline\nu=(\nu_1,\dots,\nu_r)$ be a dominant $r$-tuple of complex numbers in the sense of (\[dom\]). In the notation of Theorem \[irrclass\], $L_\lambda(\underline\nu)$ is hermitian if and only if there exists $w\in S_n$ such that $w
R_\lambda^+=R_\lambda^+$ and $$\label{e:herm}
w(({\mathsf{St}}\otimes{{\mathbb C}}_{\nu_1})\boxtimes\dots\boxtimes({\mathsf{St}}\otimes{{\mathbb C}}_{\nu_r}))=({\mathsf{St}}\otimes{{\mathbb C}}_{-\overline\nu_1})\boxtimes\dots\boxtimes({\mathsf{St}}\otimes{{\mathbb C}}_{-\overline\nu_r}),$$ as ${{\mathbb H}}_\lambda$-modules.
Every Speh module $a(m,d)$ is a unitary ${{\mathbb H}}_n$-module.
Let $w_0$ denote the longest Weyl group element in $S_n$ and $w_0(\lambda)$ the longest Weyl group element in $S_{n_1}\times\dots\times S_{n_r}.$ Using Lemma \[spehclass\], we see now that every Speh module $a(m,d)$ is hermitian since the Weyl group element $w_0 w_0(\lambda^t)$ satisfies condition (\[e:herm\]) in this case.
Since in addition $a(m,d)$ is irreducible as an $S_n$-module, the invariant hermitian form is necessarily definite, so $a(m,d)$ is in fact unitary.
The classification of the unitary dual of ${{\mathbb H}}_n$ is also well-known (see [@T] for the classification of the unitary dual for $GL(n,{{\mathbb Q}}_p)$).
The building blocks are the Speh modules defined before. First, every Speh module $a(m,d)$ can be tensored with a unitary character ${{\mathbb C}}_{y}$, $y\in\sqrt{-1}{{\mathbb R}}$, by which the central element ${{\epsilon}}_1+\dots+{{\epsilon}}_n$ of ${{\mathbb H}}_n$ acts. We denote the resulting (unitary) irreducible module by $a_y(m,d).$
Next, we consider induced complementary series representations of the form $$\label{deform}
\pi(a_y(m,d),\nu)={{\mathbb H}}_{2k}\otimes_{{{\mathbb H}}_k\times{{\mathbb H}}_k}\left((a_y(m,d)\otimes
{{\mathbb C}}_\nu)\boxtimes (a_y(m,d)\otimes{{\mathbb C}}_{-\nu})\right),\quad 0<\nu<\frac 12;$$ in this notation, it is implicit that $k=md$. An easy deformation argument shows that all $\pi(a_y(m,d),\nu)$ are irreducible unitary ${{\mathbb H}}_{2k}$-modules.
\[unitdual\]
1. Let $\lambda=(n_1,\dots,n_r)$ be a composition of $n$. If each of $\pi_1,\dots,\pi_r$ is either a Speh module of the form $a_y(m,d)$ or an induced complementary series module of the form $\pi(a_y(m,d),\nu)$ as in (\[deform\]), then the induced module $$\label{tadicform}
{{\mathbb H}}\otimes_{{{\mathbb H}}_\lambda}(\pi_1\boxtimes\dots\boxtimes\pi_r)$$ is irreducible and unitary. Moreover, two such modules are isomorphic if and only if one is obtained from the other one by permuting the factors.
2. Every unitary ${{\mathbb H}}_n$-module is of the form (\[tadicform\]).
Nilpotent orbits in $sl(n)$
---------------------------
The classification of nilpotent orbits for $sl(n)$ is well-known. Let $P(n)$ denote the set of all (decreasing) partitions of $n$ and let $DP(n)$ be the set of partitions with distinct parts. The Jordan canonical form gives a bijection between the set of nilpotent orbits of $sl(n)$ and $P(n)$. If $(e_\lambda,h_\lambda,f_\lambda)$ is a Lie triple, where the nilpotent element $e_\lambda$ is in the Jordan form given by the partition $\lambda=(n_1,n_2,\dots,n_r),$ $n_1\ge n_2\ge\dots\ge
n_r>0$, then, using the identification ${{\mathfrak h}}={{\mathbb C}}^n$, the middle element $h_\lambda$ can be chosen to have coordinates $$\label{hlam}
h_\lambda=\left(\frac{n_1-1}2,\dots,-\frac{n_1-1}2;\dots;\frac{n_r-1}2,\dots,-\frac{n_r-1}2\right).$$ If we write $\lambda$ as $\lambda=(\underbrace{n_1',\dots,n_1'}_{k_1},\underbrace{n_2',\dots,n_2'}_{k_2},\dots,\underbrace{n_l',\dots,n_l'}_{k_l}),$ with $n_1'>n_2'>\dots>n_l'>0$, then the centralizer in $gl(n)$ of the triple $(e_\lambda,h_\lambda,f_\lambda)$ is $gl(k_1)\oplus
gl(k_2)\oplus\dots\oplus gl(k_l).$ In particular, the centralizer in $sl(n)$ is a toral subalgebra if and only if $\lambda\in DP(n)$. Thus, we have a natural bijection ${{\mathcal T}}_0(SL(n))\leftrightarrow DP(n)$, where ${{\mathcal T}}_0$ is defined in (\[eq:tzero\]). For $\lambda\in P(n)$, viewed as a left justified Young diagram, define $$\label{hooks}
\text{hook}(\lambda)$$ to be the partition obtained by taking the hooks of $\lambda.$ For example, if $\lambda=(3,3,1),$ then $\text{hook}(\lambda)=(5,2).$ It is clear that $\text{hook}(\lambda)\in DP(n).$
We will need the following reformulation for the central character of a Speh module.
\[charspeh\] The central character of a Speh module $a(m,d)$ is the ($S_n$-orbit of) $h_{\lambda'}$ (see (\[hlam\])), where $\lambda'$ is the partition $$\lambda'=\text{hook}(\underbrace{m,m,\dots,m}_d)=(m+d-1,m+d-3,\dots, |m-d|+1).$$
This is immediate from Lemma \[content\] and (\[hlam\]).
Irreducible ${{\widetilde {S}}}_n$-representations
--------------------------------------------------
Denote the length (number of parts) of a partition $\lambda$ by $|\lambda|$. We say that $\lambda$ is even (resp. odd) if $n-|\lambda|$ is even (resp. odd). The first part of Theorem \[t:intro\] for ${{\widetilde {S}}}_n$ is a classical result of Schur.
The irreducible ${{\widetilde {S}}}_n$-representations are parameterized by partitions in $DP(n)$ as follows:
1. for every even $\lambda\in DP(n)$, there exists a unique ${{\widetilde {\sigma}}}_\lambda\in \widehat{{{\widetilde {S}}}_n}$;
2. for every odd $\lambda\in DP(n)$, there exist two associate $
{{\widetilde {\sigma}}}_\lambda^+,{{\widetilde {\sigma}}}_\lambda^-\in \widehat{{{\widetilde {S}}}_n}$.
The dimension of ${{\widetilde {\sigma}}}_\lambda$ or ${{\widetilde {\sigma}}}_\lambda^\pm$, where $\lambda=(\lambda_1,\dots,\lambda_m)\in DP(n)$, is $$2^{[\frac{n-m}2]}\frac{n!}{\lambda_1!\dots\lambda_m!}\prod_{1\le
i<j\le m}\frac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}.$$
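Schur's dimension formula is straightforward to evaluate. The following illustrative script (ours, not part of the paper) also checks, for small $n$, the completeness relation for the genuine part of $\widehat{{{\widetilde {S}}}_n}$: summing $\dim^2$ over $DP(n)$, counting one representation for each even $\lambda$ and a pair for each odd $\lambda$, gives $n!$.

```python
from fractions import Fraction
from math import factorial

def spin_dim(la):
    """Schur's formula for the dimension of tilde-sigma_la
    (resp. of each tilde-sigma_la^±), for la in DP(n)."""
    n, m = sum(la), len(la)
    d = Fraction(2 ** ((n - m) // 2) * factorial(n))
    for part in la:
        d /= factorial(part)
    for i in range(m):
        for j in range(i + 1, m):
            d *= Fraction(la[i] - la[j], la[i] + la[j])
    assert d.denominator == 1
    return int(d)

def distinct_partitions(n, max_part=None):
    """All partitions of n into distinct parts, as decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        return [()]
    return [(p,) + rest
            for p in range(min(n, max_part), 0, -1)
            for rest in distinct_partitions(n - p, p - 1)]

# the basic spin representation(s) la = (n) have dimension 2^{[(n-1)/2]}:
assert spin_dim((5,)) == 4 and spin_dim((4,)) == 2
# completeness: sum of multiplicity * dim^2 over DP(n) equals n!:
for n in range(2, 9):
    total = sum((1 if (n - len(la)) % 2 == 0 else 2) * spin_dim(la) ** 2
                for la in distinct_partitions(n))
    assert total == factorial(n)
```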
In order to simplify the formulas below, we let ${{\widetilde {\sigma}}}_\lambda^{{\epsilon}}$ denote any one of ${{\widetilde {\sigma}}}_\lambda$, if $\lambda$ is an even partition in $DP(n)$, or ${{\widetilde {\sigma}}}_\lambda^\pm$, if $\lambda$ is an odd partition in $DP(n)$.
The decomposition of the tensor product of an $S_n$-type $\sigma_\mu$ with a spin representation ${{\widetilde {\sigma}}}_{(n)}$ is known.
\[glammu\] If $\lambda\neq (n)$, we have: $$\label{tensdecomp}
\dim{\operatorname{Hom}}_{{{\widetilde {S}}}_n}[{{\widetilde {\sigma}}}_\lambda,\sigma_\mu\otimes {{\widetilde {\sigma}}}_{(n)}]=\frac
1{\epsilon_\lambda\epsilon_{(n)}} 2^{\frac{|\lambda|-1}2} g_{\lambda,\mu},$$ where ${{\epsilon}}_\lambda=1$ (resp. ${{\epsilon}}_\lambda=\sqrt 2$) if $\lambda$ is even (resp. odd), and the integer $g_{\lambda,\mu}$ is the $(\lambda,\mu)$ entry in the inverse matrix $K(-1)^{-1}$, where $K(t)$ is the matrix of Kostka-Foulkes polynomials. In particular:
1. $g_{\lambda,\lambda}=1$;
2. $g_{\lambda,\mu}=0$, unless $\lambda\ge\mu$ in the ordering of partitions.
\[tensorhook\] The integers $g_{\lambda,\mu}$ also have an explicit combinatorial description in terms of “shifted tableaux” of unshifted shape $\mu$ and content $\lambda$ satisfying certain admissibility conditions (see [@St Theorem 9.3]). From this description, one may see for example that if $\lambda=\text{hook}(\mu)$, then $g_{\lambda,\mu}=1$ in (\[tensdecomp\]).
Nonzero cohomology
------------------
We are now in position to determine the unitary modules of ${{\mathbb H}}_n$ with nonzero Dirac cohomology.
We remark that since $gl(n)$ is not semisimple, the spin modules $S^{{\epsilon}}$ of $C(V)$ ($V\cong {{\mathbb C}}^n$) are not necessarily irreducible ${{\widetilde {S}}}_n$-representations. More precisely, using (\[restspin\]), we see that $S^\pm|_{{{\widetilde {S}}}_n}={{\widetilde {\sigma}}}_{(n)},$ when $n$ is odd, and $S|_{{{\widetilde {S}}}_n} ={{\widetilde {\sigma}}}_{(n)}^++{{\widetilde {\sigma}}}_{(n)}^-$, when $n$ is even.
\[ccdirac\] Assume $X$ is an irreducible ${{\mathbb H}}_n$-module such that $H^D(X)\neq 0.$ Then the central character of $X$ is in the set $\{h_{\lambda}/2:
\lambda\in DP(n)\},$ where $h_\lambda$ is as in (\[hlam\]).
This is just a reformulation of Theorem \[t:hpv\] in this particular case.
As a consequence of (\[tensdecomp\]), we obtain the following precise results for Dirac cohomology.
\[prelimresults\]
1. A spherical module $L(\nu)$ has nonzero Dirac cohomology if and only if $\nu\in \{h_{\lambda}/2:
\lambda\in DP(n)\},$ where $h_\lambda$ is as in (\[hlam\]), and in this case $H^D_{{\epsilon}}(L(h_{(n)}/2))=S^{{\epsilon}}$, and if $\lambda\neq (n)$: $$\begin{aligned}
H^D_{{\epsilon}}(L(h_\lambda/2))&=2^{[(|\lambda|-1)/2]}{{\widetilde {\sigma}}}_\lambda, &\text{ if $n$ is odd and $\lambda$ is even};\\
&=2^{[(|\lambda|-1)/2]}({{\widetilde {\sigma}}}_\lambda^++{{\widetilde {\sigma}}}_\lambda^-), &\text{ if $n$ is odd and $\lambda$ is odd};\\
&=2^{[(|\lambda|)/2-1]}({{\widetilde {\sigma}}}_\lambda^{{\epsilon}}+{{\widetilde {\sigma}}}_\lambda^{{\epsilon}}\otimes{\mathsf{sign}}), &\text{ if $n$ is even}.\\\end{aligned}$$
2. Every Speh module $a(m,d)$ has nonzero Dirac cohomology. More precisely, $H^D_{{\epsilon}}(a(m,d))=2^{(d-1)/2}~({{\widetilde {\sigma}}}_{(m+d-1,m+d-3,\dots,|m-d|+1)}^++{{\widetilde {\sigma}}}_{(m+d-1,m+d-3,\dots,|m-d|+1)}^-),$ if $d$ is odd and $m$ is even, $H^D_{{\epsilon}}(a(m,d))=2^{[(d-1)/2]}~{{\widetilde {\sigma}}}_{(m+d-1,m+d-3,\dots,|m-d|+1)}^{{\epsilon}},$ if $d$ is odd and $m$ is odd, or $H^D_{{\epsilon}}(a(m,d))=2^{[(d+1)/2]}~{{\widetilde {\sigma}}}_{(m+d-1,m+d-3,\dots,|m-d|+1)}^{{\epsilon}},$ otherwise.
3. Every complementary series induced module $\pi(a_y(m,d),\nu)$ as in (\[deform\]) has zero Dirac cohomology.
\(a) This is immediate by (\[bomac\]) and the upper unitriangular property of the numbers $g_{\lambda,\mu}$ in Theorem \[glammu\].
\(b) By Lemma \[charspeh\], the central character of $a(m,d)$ is $h_{\lambda'}$ where $\lambda'=\text{hook}(\lambda)\in DP(n).$ By Example \[tensorhook\], the genuine ${{\widetilde {S}}}_n$-type ${{\widetilde {\sigma}}}_{\lambda'}$ occurs with nonzero multiplicity in $\sigma_{(m,m,\dots,m)}\otimes S.$ By construction, $a(m,d)$ is isomorphic with $\sigma_{(m,m,\dots,m)}$ as $S_n$-representations. This means that the hypotheses of Proposition \[criterion\] are satisfied, hence ${{\widetilde {\sigma}}}_{\lambda'}$ occurs in $H^D(a(m,d))$.
\(c) This is immediate from Lemma \[ccdirac\], since $a_y(m,d)$, $y\neq 0$ and $\pi(a_y(m,d),\nu)$, $0<\nu<\frac 12$ do not have the allowable central characters.
\[main\] An irreducible unitary ${{\mathbb H}}_n$-module has nonzero Dirac cohomology if and only if it is isomorphic with an induced module $$\label{possible}
\begin{aligned}
&X={{\mathbb H}}_n\otimes_{{{\mathbb H}}_{ev}\otimes
{{\mathbb H}}_{odd}}(\pi_{ev}\boxtimes\pi_{odd}),\quad \text{where}\\
&{{\mathbb H}}_{ev}={{\mathbb H}}_{k_1}\times {{\mathbb H}}_{k_2}\times\dots\times {{\mathbb H}}_{k_\ell},\
{{\mathbb H}}_{odd}={{\mathbb H}}_{k'_1}\times {{\mathbb H}}_{k'_2}\times\dots\times
{{\mathbb H}}_{k'_t},\quad \text{and}\\
&\pi_{ev}=a(m_1,d_1)\boxtimes a(m_2,d_2)\boxtimes \dots\boxtimes
a(m_\ell,d_\ell),\ \pi_{odd}=a(m_1',d_1')\boxtimes a(m_2',d_2')\boxtimes \dots\boxtimes
a(m_t',d_t'),\\
&m_i+d_i\equiv 0 (\text{mod }2),\ m_j'+d_j'\equiv 1 (\text{mod }2),
\end{aligned}$$ $k_1+k_2+\dots+k_\ell+k_1'+k_2'+\dots+k_t'=n$, where $a(m_i,d_i), a(m_j',d_j')$ are Speh modules for ${{\mathbb H}}_{k_i},{{\mathbb H}}_{k'_j}$ and such that the following conditions are satisfied: $$\label{gaps}
\begin{aligned}
&m_1+d_1-1\ge |m_1-d_1|+1>m_2+d_2-1\ge
|m_2-d_2|+1>\dots>m_\ell+d_\ell-1;\\
&m_1'+d_1'-1\ge |m_1'-d_1'|+1>m_2'+d_2'-1\ge
|m_2'-d_2'|+1>\dots>m_t'+d_t'-1.\\
\end{aligned}$$
From Theorem \[unitdual\], a unitary irreducible module $X$ is induced from a combination of Speh modules and complementary series modules. It is immediate that in order for $X$ to have one of the central characters from Lemma \[ccdirac\], a first restriction is that only Speh modules can appear in the induction, so $X$ is of the form (\[possible\]). Notice then that the central character of $X$ is obtained by concatenating the central characters of $a(m_i,d_i).$ Therefore the central character of $X$ is $S_n$-conjugate to $h_\lambda$, where $\lambda$ is the composition $\lambda=\lambda^1\sqcup\dots\sqcup\lambda^\ell\sqcup\mu^1\sqcup\dots\sqcup\mu^t$, where $\lambda^i=(m_i+d_i-1,m_i+d_i-3,\dots, |m_i-d_i|+1)$, $1\le i\le \ell$ and $\mu^j=(m_j'+d_j'-1,m_j'+d_j'-3,\dots, |m_j'-d_j'|+1)$, $1\le j\le t$. The entries in the first type of strings are all even, while the entries in the second type of strings are all odd. Since we need $\lambda$ to have no repetitions, condition (\[gaps\]) follows.
For the converse, assume $X$ is as in (\[possible\]) and (\[gaps\]). Then the central character of $X$ is $h_\lambda$, where $\lambda$ is as above. By Proposition \[criterion\], it remains to check that $X\otimes S$ contains the ${{\widetilde {S}}}_n$-type ${{\widetilde {\sigma}}}_\lambda$. From Lemma \[indlemma\], we see that ${\operatorname{Hom}}_{{{\widetilde {S}}}_n}[{{\widetilde {\sigma}}}_\lambda,X\otimes S]=\frac{\dim{{\mathcal S}}}{\dim {{\mathcal S}}_M}{\operatorname{Hom}}_{{{\widetilde {W}}}_{M}}[{{\widetilde {\sigma}}}|_{{{\widetilde {W}}}_M},(\pi_{ev}\boxtimes\pi_{odd})|_{W_M}\otimes {{\mathcal S}}_M],$ where ${{\widetilde {W}}}_M={{\widetilde {S}}}_{k_1}\cdot \dotsc\cdot {{\widetilde {S}}}_{k_\ell}\cdot {{\widetilde {S}}}_{k_1'}\cdot\dotsc\cdot {{\widetilde {S}}}_{k'_t}$, and ${{\mathcal S}}_M$ is the corresponding spin module. (Here $\cdot$ denotes the graded version of the direct product coming from the graded tensor product of Clifford algebras as in Section \[s:2.6\].) From Lemma \[prelimresults\], we know that the ${{\widetilde {S}}}_{k_i}$-representation ${{\widetilde {\sigma}}}_{\lambda^i}$ occurs in $a(m_i,d_i)|_{S_{k_i}}$ tensored with the spin ${{\widetilde {S}}}_{k_i}$-module and similarly the ${{\widetilde {S}}}_{k'_j}$-representation ${{\widetilde {\sigma}}}_{\mu^j}$ occurs in $a(m'_j,d'_j)|_{S_{k'_j}}$ tensored with the spin ${{\widetilde {S}}}_{k'_j}$-module. Therefore the tensor product representation ${{\widetilde {\sigma}}}_{\lambda,M}:={{\widetilde {\sigma}}}_{\lambda^1}\boxtimes\dots\boxtimes{{\widetilde {\sigma}}}_{\lambda^\ell}\boxtimes
{{\widetilde {\sigma}}}_{\mu^1}\boxtimes\dots\boxtimes{{\widetilde {\sigma}}}_{\mu^t}$ occurs in $(\pi_{ev}\boxtimes\pi_{odd})|_{W_M}\otimes {{\mathcal S}}_M$. Finally, since the composition $\lambda$ is just the concatenation of the $(\lambda^i)$’s and the $(\mu^j)$’s, one sees that ${{\widetilde {\sigma}}}_{\lambda,M}$ occurs with nonzero multiplicity in ${{\widetilde {\sigma}}}_\lambda|_{{{\widetilde {W}}}_M}$.
[20]{} J. Arthur, *Unipotent automorphic representations: conjectures. Orbites unipotentes et représentations, II*, Astérisque No. 171-172 (1989), 13–71.
M. Atiyah, W. Schmid, *A geometric construction of the discrete series for semisimple Lie groups*, Invent. Math. 42 (1977), 1–62.
D. Barbasch, D. Ciubotaru, P. Trapa, *The Dirac operator for graded affine Hecke algebras*, to appear in Acta Math.
D. Barbasch, A. Moy, *Unitary spherical spectrum for p-adic classical groups*, Acta Appl. Math. 44 (1996), no. 1-2, 3–37.
W. Borho, R. MacPherson, *Partial resolutions of nilpotent varieties*, Analysis and topology on singular spaces, II, III (Luminy, 1981), 23–74, Astérisque, 101-102, Soc. Math. France, Paris, 1983.
J. Bernstein, A. Zelevinsky, *Induced representations of reductive p-adic groups. I*, Ann. Sci. École Norm. Sup. (4) 10 (1977), no. 4, 441–472.
A. Borel, N. Wallach, *Continuous cohomology, discrete subgroups, and representations of reductive groups,* Princeton University Press, Princeton, New Jersey, 1980.
C. Chevalley, *The algebraic theory of spinors*, Columbia University Press, New York, 1954. viii+131 pp.
D. Ciubotaru, *Spin representations of Weyl groups and Springer’s correspondence*, to appear in J. Reine Angew. Math.
D. Ciubotaru, P. Trapa, *Characters of Springer representations on elliptic conjugacy classes*, to appear in Duke Math.
T. Enright, N. Wallach, *Embeddings of unitary highest weight representations and generalized Dirac operators*, Math. Ann. 307 (1997), no. 4, 627–646.
J.-S. Huang, P. Pandzic, *Dirac cohomology, unitary representations and a proof of a conjecture of Vogan*, J. Amer. Math. Soc. 15 (2002), no. 1, 185–202.
A. Knapp, G. Zuckerman, *Classification theorems for representations of semisimple Lie groups*, Non-commutative harmonic analysis (Actes Colloq., Marseille-Luminy, 1976), 138–159. Lecture Notes in Math., Vol. 587, Springer, Berlin, 1977.
G. Lusztig, *Affine Hecke algebras and their graded version*, J. Amer. Math. Soc. 2 (1989), 599–635.
I.G. MacDonald, *Symmetric functions and Hall polynomials*, Oxford Mathematical Monographs, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp.
A. Morris, *Projective representations of reflection groups. II*, Proc. London Math. Soc. (3) 40 (1980), no. 3, 553–576.
A. Okounkov, A. Vershik, *A new approach to representation theory of symmetric groups*, Selecta Math.2 (4) (1996), 581–605.
R. Parthasarathy, *Dirac operator and the discrete series*, Ann. of Math. (2) 96 (1972), 1–30.
J. Rogawski, *On modules over the Hecke algebra of a p-adic group*, Invent. Math. 79 (1985), no. 3, 443–465.
J. Stembridge, *Shifted tableaux and the projective representations of symmetric groups*, Adv. Math. 74 (1989), no. 1, 87–134.
M. Tadić, *Classification of unitary representations in irreducible representations of general linear group (non-Archimedean case)*, Ann. Sci. École Norm. Sup. (4) 19 (1986), no. 3, 335–382.
[^1]: The first author was partially supported by NSF grants DMS-0967386, DMS-0901104 and an NSA-AMS grant. The second author was partially supported by NSF DMS-0968065 and NSA-AMS 081022
---
abstract: 'We prove the equivalence of many-photon Green functions in statistical quantum field Duffin-Kemmer-Petiau (DKP) and Klein-Gordon-Fock (KGF) theories, using the functional path-integral formalism for the partition functional in statistical quantum (finite-temperature) field theory. We also calculate the polarization operators in these theories in the one-loop approximation, and demonstrate their coincidence.'
author:
- |
V.Ya.Fainberg[^1], B.M. Pimentel[^2] and J. S. Valverde\
[* *]{}\
title: 'Equivalence of Many-Photon Green Functions in DKP and KGF Statistical Quantum Field Theories.'
---
Introduction
============
The method of Green functions (GF) in quantum statistics has a long history, which begins with Matsubara’s work in 1953 [@Matzubara]. The method of the generating (partition) functional was first applied to the calculation of temperature-renormalized GF in [@Fradkin], [@Proc] by Fradkin[^3]. Later, the method of functional path integrals in statistics was developed in Bernard’s work [@Bernard].
In recent years, the equivalence of DKP and KGF theories was proved in [@PF], [@FP], [@FP1] for the on-mass-shell S-matrix elements of charged scalar particles interacting with the quantized EM and YM fields, as well as for GF with external photons *off the mass shell*.
An interesting physical question arises in this connection: can one prove the equivalence of many-photon GF in statistical quantum (finite temperature) DKP and KGF theories?
The main goal of this paper is to give a positive answer to this question.
In section **2** we give the general proof of the equivalence by the path-integral method in statistical quantum theory. This result can also be understood from the physical point of view: the photon does not acquire a mass, and consequently no chemical potential $\mu $ either; i.e., the photon preserves its nature in a thermal medium. As an illustration, in section **3** we calculate the polarization operators of both theories in one-loop approximation, and prove that these operators do coincide.
Section **4** contains the conclusions.
Coincidence of Many-Photon GF in DKP and KGF Theories at Finite Temperature
===========================================================================
To obtain the partition functional $Z(J,\bar{J},J_{\mu })$ in statistical theory one must make transition to Euclidean space and restrict integration on $x_{4}$: $0\leq x_{4}\leq \beta $; here $\beta =1/T$ and $J,\bar{J}%
,J_{\mu }$ are external currents. As it follows from general considerations, the partition functional in DKP theory of charged spin-zero particles interacting with the quantized EM field $A_{\mu }$ (in $\alpha $-gauge) has the following form[^4]: $$\begin{aligned}
&&Z_{DKP}=Z_{0}\int_{\beta }\,DA_{\mu }\,D\psi \,D\bar{\psi}\,\exp \left\{
-\int_{0}^{\beta }dx_{4}\int_{-\infty }^{\infty }d\mathbf{x}\left[ \frac{1}{4%
}F_{\mu \nu }^{2}+\frac{1}{2\alpha }(\partial _{\mu }A_{\mu })^{2}\right.
\right. \nonumber \\
&&\qquad {}\left. \left. +\bar{\psi}(x)(\beta _{\mu }D_{\mu }+m)\psi
(x)+J_{\mu }(x)A_{\mu }(x)+\bar{J}(x)\psi (x)+\bar{\psi}(x)J(x)\right]
\right\} ,\end{aligned}$$ where $Z_{0}=Z(0,0,0)$[^5]; $D_{\mu
}=\partial _{\mu }^{\ast }-ieA_{\mu },$ $\partial _{4}^{\ast }=\partial
_{4}-\mu $; $\mu $ is the chemical potential[^6]. In Euclidean space $\bar{\psi}(x)=\psi ^{\ast }(x)$; all the fields satisfy periodical conditions: $$\bar{\psi}(0,\mathbf{x})=\bar{\psi}(\beta ,\mathbf{x}),\quad {\psi }(0,%
\mathbf{x})={\psi }(\beta ,\mathbf{x}),\quad A_{\mu }(0,\mathbf{x})=A_{\mu
}(\beta ,\mathbf{x}).$$ For example, in Eq. (1): $$\int_{\beta }DA_{\mu }(x)=\int \prod_{0\leq x_{4}\leq \beta }\prod_{-\infty
\leq x_{i}\leq \infty }\prod dA_{\mu }(x_{4},\mathbf{x}).$$ We choose the $\beta _{\mu
} $-matrices in the form: $${}{}\beta _{4}=
\begin{array}{|lllll|}
\cdot & 1 & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot
\end{array}
\,,\beta _{1}=
\begin{array}{|lllll|}
\cdot & \cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot
\end{array}
\,,\beta _{2}=
\begin{array}{|lllll|}
\cdot & \cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot
\end{array}
\,,\beta _{3}=
\begin{array}{|lllll|}
\cdot & \cdot & \cdot & \cdot & 1 \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot
\end{array}$$ After the integration over $\psi $ and $\bar{\psi}$ in Eq. (1) we get: $$\begin{aligned}
&&Z_{DKP}(\bar{J},J,J_{\mu })=Z_{0}\int_{\beta }DA_{\mu }(x)\exp \left\{
-\int_{\beta }d^{4}x\left[ \frac{1}{4}F_{\mu \nu }^{2}+\frac{1}{2\alpha }%
(\partial _{\mu }A_{\mu })^{2}+J_{\mu }A_{\mu }\right. \right. \nonumber \\
&&\qquad \qquad \left. \left. {}+\mbox{Tr}\ln S(x,x,A)\right] ^{\beta
}-\int_{\beta }d^{4}xd^{4}y\bar{J}(x)S(x,y,A)J(y)\right\} .\end{aligned}$$ Here $$S(x,y,A)=\left( \beta _{\mu }D_{\mu }+m\right) ^{-1}\delta ^{4}(x-y)$$ is the GF of a DKP particle in external field $A_{\mu }(x)$; the term $%
\mbox{Tr}\ln S(x,x,A)$ gives rise to all vacuum perturbations diagrams. This term can be transformed into the following component form: $$\begin{aligned}
&&\det S(x,y,A)=\int_{\beta }D\psi D\bar{\psi}\exp \left\{ -\int_{\beta
}d^{4}x\bar{\psi}\left( \beta _{\mu }D_{\mu }+m\right) \psi \right\} =
\nonumber \\
&&\kern-10pt{}=\int_{\beta }\prod_{\mu =1}^{4}D\phi _{\mu }D\phi _{\mu
}^{\ast }D\phi D\phi \exp \biggl\{-\int_{\beta }d^{4}x\bigl(\phi ^{\ast
}D_{\mu }\phi _{\mu }+\phi _{\mu }^{\ast }D_{\mu }\phi +m(\phi \phi ^{\ast
}+\phi _{\mu }\phi _{\mu }^{\ast }\bigr)\biggr\}. \nonumber \\
&&\end{aligned}$$ Now let us integrate over $\phi _{\mu }$ and $\phi _{\mu }^{\ast }$. We get: $$\begin{aligned}
&&\det S(x,y,A)=\det G(x,y,A)=\exp \mbox{Tr}\ln G(x,x,A)= \nonumber \\
&&\qquad \qquad \frac{1}{m}\int_{\beta }D\phi D\phi ^{\ast }\exp \left\{ -%
\frac{1}{m}\int_{\beta }d^{4}x\phi ^{\ast }\left( -D_{\mu }^{2}+m^{2}\right)
\phi \right\} ,\end{aligned}$$ where $$G(x,y,A)=\left( -D_{\mu }^{2}+m^{2}\right) ^{-1}\delta ^{4}(x-y)$$ is the GF of the KGF equation in the case of external field $A_{\mu }(x)$. Thus, we conclude from Eqs. (7–9) that all many-photon GF (not only matrix elements of S-matrix for *real* photons) coincide in DKP and KGF statistical theories[^7]. This concludes the proof of equivalence for many-photon GF in KGF and DKP statistical theories.
Polarization Operator in One-Loop Approximation
===============================================
The polarization operator in KGF statistical theory for charged spin-zero particles in one-loop approximation has the form[^8] $$\Pi _{\mu \nu }^{K}(k)=-\frac{e^{2}}{(2\pi )^{3}\beta }\sum_{p_{4}}\int d%
\mathbf{p}\left( \frac{(2p+k)_{\mu }(2p+k)_{\nu }}{%
(p^{2}+m^{2})((p+k)^{2}+m^{2})}-\frac{2\delta _{\mu \nu }}{p^{2}+m^{2}}%
\right) ,$$ where $$p^{2}=p_{4}^{2}+\mathbf{p}^{2};\quad p_{4}=\frac{2\pi n}{\beta };\quad
-\infty <n<+\infty .$$ The term proportional to $\delta _{\mu \nu }$ in Eq. (10) is important in the proof of transversality of $\Pi _{\mu \nu }$ ($k_{\mu }\Pi _{\mu \nu
}(k)=0$). However, this term does not contribute to $\Pi _{\mu \nu }$ after the renormalization. In DKP theory, the one-particle GF in momentum space is: $$\begin{aligned}
&&G(\hat{p})=-\frac{1}{m}\left( \frac{i\hat{p}(i\hat{p}+m)}{p^{2}+m^{2}}%
+1\right) , \nonumber \\
&&\hat{p}=\beta _{\mu }p_{\mu }.\end{aligned}$$ It is easy to check that $$(i\hat{p}-m)G(\hat{p})=1.$$ Using Eqs. (12)–(13) we obtain the polarization operator in DKP theory (in $%
e^{2}$-approximation): $$\begin{aligned}
&&\Pi _{\mu \nu }^{D}(k)=\frac{e^{2}}{m^{2}(2\pi )^{3}\beta }\mbox{Tr}%
\sum_{p_{4}}\int d\mathbf{p}\,\beta _{\mu }G(\hat{p}+\hat{k})\beta _{\nu }G(%
\hat{p}) \\
&&\kern-17pt{}=-\frac{e^{2}}{(2\pi )^{3}\beta }\sum_{p_{4}}\int d\mathbf{p}%
\left( \frac{(2p+k)_{\mu }(2p+k)_{\nu }}{(p^{2}+m^{2})((p+k)^{2}+m^{2})}-%
\frac{\delta _{\mu \nu }}{p^{2}+m^{2}}-\frac{\delta _{\mu \nu }}{%
(p+k)^{2}+m^{2}}+\frac{\delta _{\mu \nu }}{m^{2}}\right) . \nonumber\end{aligned}$$ The last term $\sim \delta _{\mu \nu }$ in DKP theory breaks the gauge invariance[^9], but it disappears after the renormalization. It is easy to show that after the substitution $(p+k)\leftrightarrow p$ the regularization term $\delta _{\mu \nu }((p+k)^{2}+m^{2})^{-1}$ becomes $\delta _{\mu \nu }(p^{2}+m^{2})^{-1}$, so that the renormalized $\Pi _{\mu \nu }^{D}$ coincides with the renormalized KGF expression, Eq. (10). This coincidence of $\Pi _{\mu \nu }^{K}$ and $\Pi _{\mu \nu }^{D}$ in one-loop approximation confirms the general proof given in Section 2, see Eqs. (8)–(9)[^10].
The $\Pi _{\mu \nu }(k)$ tensor has the form [@Fainberg], [@FPV]: $$\Pi _{\mu \nu }=(k_{\mu }k_{\nu }-k^{2}\delta _{\mu \nu })\Pi (k^{2}).$$ In quantum statistics, $\Pi _{\mu \nu }$ depends on the two vectors: $k_{\mu
}$ and $u_{\mu }$, the latter being the four-velocity of the medium. Thus, in the general case (see \[3\], p. 75) $$\begin{aligned}
&&\Pi _{\mu \nu }=\left( \delta _{\mu \nu }-\frac{k_{\mu }k_{\nu }}{k^{2}}%
\right) A_{1}+\left( u_{\mu }u_{\nu }-\frac{k_{\mu }u_{\nu }(ku)}{k^{2}}-%
\frac{k_{\nu }u_{\mu }(ku)}{k^{2}}+\frac{k_{\mu }k_{\nu }(ku)^{2}}{k^{4}}%
\right) A_{2} \nonumber \\
&&\qquad \qquad {}\equiv \Phi _{\mu \nu }^{1}A_{1}+\Phi _{\mu \nu }^{2}A_{2}.
\label{ec16}\end{aligned}$$ Introducing the notation (for any approximation) $$\begin{aligned}
&&a_{1}\equiv \Pi _{\mu \mu }=3A_{1}+\lambda A_{2} \\
&&a_{2}\equiv u_{\mu }\Pi _{\mu \nu }u_{\nu }=\lambda (A_{1}+\lambda
A_{2}),\qquad \lambda =\left( 1-\frac{(ku)^{2}}{k^{2}}\right) ,\end{aligned}$$ we get: $$A_{1}=\frac{1}{2}\left( a_{1}-\frac{1}{\lambda }a_{2}\right) ,\qquad A_{2}=%
\frac{1}{2\lambda }\left( -a_{1}+\frac{3}{\lambda }a_{2}\right) .$$ If the system is at rest[^11], $$\lambda =\left( 1-\frac{k_{4}^{2}}{k^{2}}\right) ,$$ and $$a_{2}=\Pi _{44}=\left( 1-\frac{k_{4}^{2}}{k^{2}}\right) A_{1}+\left( 1-\frac{%
k_{4}^{2}}{k^{2}}\right) ^{2}A_{2}.$$ It is convenient to represent $a_{1},a_{2}$ in the form: $$a_{i}=a_{i}^{R}+a_{i}^{\beta }.$$ Here $a_{i}^{R}$ are the parts which do not depend on $\beta $ and which must be renormalizable; $a_{i}^{\beta }$ depend on $\beta $. It is important that when the temperature is zero $$\lim_{\beta \rightarrow \infty }a_{i}^{\beta }=0.$$ Now we can write the Eqs. (\[ec16\]) in the following form: $$\Pi _{\mu \nu }=\frac{1}{2}\Phi _{\mu \nu }^{1}\left( a_{1}^{R}-{\frac{1}{%
\lambda }}a_{2}^{R}+a_{1}^{\beta }-{\frac{1}{\lambda }}a_{2}^{\beta }\right)
+\frac{1}{2\lambda }\Phi _{\mu \nu }^{2}\left( -a_{1}^{R}+{\frac{3}{\lambda }%
}a_{2}^{R}-a_{1}^{\beta }+{\frac{3}{\lambda }}a_{2}^{\beta }\right) .$$ The term $\sim \Phi _{\mu \nu }^{2}$ must vanish in the limit $\beta
\rightarrow \infty $. Therefore in this limit we obtain $\Pi _{\mu \nu }$ of the Euclidean quantum field theory. Since $a_{1}^{R}$ and $a_{2}^{R}$ do not depend on $\beta $, we conclude that (after the renormalization) $$a_{2}^{R}={\frac{\lambda }{3}}a_{1}^{R}.$$ Thus $$\lim_{\beta \rightarrow \infty }\Pi _{\mu \nu }={\frac{1}{3}}\Phi _{\mu \nu
}^{1}a_{1}^{R}\qquad \mbox{or}\qquad \Pi _{\mu \mu }=a_{1}^{R}.$$ We calculate $a_{1}$ and $a_{2}$ using the general formula for summation over $p_{4}$ in Eq. (14). We ignore the terms $\sim \delta _{\mu \nu }$ in Eqs. (10) and (14) since these terms disappear after regularization and renormalization. $$\begin{aligned}
&&a_{1}=\Pi _{\mu \mu }=-\frac{e^{2}}{(2\pi )^{3}\beta }\int d\mathbf{p}%
\sum_{p_{4}}\frac{(2p+k)^{2}}{(p^{2}+m^{2})((p+k)^{2}+m^{2})} \\
&&a_{2}=\Pi _{44}=-\frac{e^{2}}{(2\pi )^{3}\beta }\int d\mathbf{p}%
\sum_{p_{4}}\frac{(2p_{4}+k_{4})^{2}}{(p^{2}+m^{2})((p+k)^{2}+m^{2})}\,,\end{aligned}$$ where $p_{4}={2\pi n/\beta }$.
The general formula for summation over $p_{4}$ is[^12] $$\kern-20pt{\frac{1}{\beta }}\sum_{n}f({\frac{2\pi n}{\beta }},K)={\frac{1}{%
2\pi }}\int_{-\infty }^{\infty }d\omega f(\omega ,K)+{\frac{1}{2\pi }}%
\int_{-\infty +i\epsilon }^{\infty +i\epsilon }d\omega \frac{f(\omega
,K)+f(-\omega ,K)}{e^{-i\beta \omega }-1}\,.$$ Eq. (27) can be rewritten in the following form: $$\begin{aligned}
&&a_{1}=-\frac{e^{2}}{(2\pi )^{3}\beta }\int d\mathbf{p}\sum_{p_{4}}\frac{%
4p^{2}+4pk+2k^{2}+4m^{2}-(4m^{2}+k^{2})}{(p^{2}+m^{2})((p+k)^{2}+m^{2})}
\nonumber \\
&&{}=-\frac{e^{2}}{(2\pi )^{3}\beta }\int d\mathbf{p}\sum_{p_{4}}\left\{ -%
\frac{4m^{2}+k^{2}}{(p^{2}+m^{2})((p+k)^{2}+m^{2})}+\frac{2}{p^{2}+m^{2}}+%
\frac{2}{(p+k)^{2}+m^{2}}\right\} \nonumber \\
&&\qquad \qquad {}\Rightarrow +\frac{e^{2}}{(2\pi )^{3}\beta }\int d\mathbf{p%
}\sum_{p_{4}}\frac{4m^{2}+k^{2}}{(p^{2}+m^{2})((p+k)^{2}+m^{2})},\end{aligned}$$ where we omit the last two terms which vanish after renormalization; $%
p^{2}=p_{4}^{2}+\mathbf{p}^{2}$.
Introducing the Feynman parameter $x$ into Eq. (30), we obtain: $$a_{1}(k^{2})=-\frac{e^{2}}{(2\pi )^{3}\beta }(4m^{2}+k^{2})\int d\mathbf{p}%
\frac{\partial }{\partial m^{2}}\int_{0}^{1}dx\sum_{p_{4}}\frac{1}{\left[
(p^{2}+m^{2})+\frac{k^{2}}{4}(1-x^{2})\right] }.$$
To get the contribution to $a_{1}(k^{2})$ which does not depend on $\beta $ we must use only the first term of Eq. (29): $$a_{1}^{R}(k^{2})=-\frac{e^{2}}{(2\pi )^{3}}(4m^{2}+k^{2})\int d\mathbf{p}%
\frac{\partial }{\partial m^{2}}\int_{0}^{1}dx\frac{1}{2\pi }\int_{-\infty
}^{\infty }\frac{d\omega }{\left[ \omega ^{2}+\mathbf{p}^{2}+m^{2}+\frac{%
k^{2}}{4}(1-x^{2})\right] }.$$
Closing the integration contour at infinity in the upper half-plane, we find: $$a_{1}^{R}(k^{2})=-\frac{e^{2}}{(2\pi )^{3}}(4m^{2}+k^{2})\int^{l}d\mathbf{p}%
\frac{\partial }{\partial m^{2}}\int_{0}^{1}dx\frac{1}{2\left[ \mathbf{p}%
^{2}+m^{2}+\frac{k^{2}}{4}(1-x^{2})\right] ^{1/2}},$$ where $l$ is the momentum cut-off.
After the integration over $x$ and renormalization $$a_{1}^{R}(k^{2})\rightarrow a_{1}^{R}(k^{2})-a_{1}^{R}(0)-k^{2}{\frac{%
\partial }{\partial k^{2}}}a_{1}^{R}(0),$$ we obtain: $$a_{1}^{R}(k^{2})=-\frac{e^{2}k^{4}}{16\pi ^{2}}\int_{4m^{2}}^{\infty }\frac{%
dz^{2}\left( 1-\frac{4m^{2}}{z^{2}}\right) ^{3/2}}{z^{2}(z^{2}+k^{2})}.$$ In the limit $\beta \rightarrow \infty $ we get the Euclidean expression for $\Pi _{\mu \nu }$ (see Eq. (26)): $$\lim_{\beta \rightarrow \infty }\Pi _{\mu \nu }=\left( -\frac{k_{\mu }k_{\nu
}}{k^{2}}+\delta _{\mu \nu }\right) \left( \frac{e^{2}}{48\pi ^{2}}\right)
k^{4}\int_{4m^{2}}^{\infty }\frac{dz\left( 1-\frac{4m^{2}}{z^{2}}\right)
^{3/2}}{z^{2}(z^{2}+k^{2})}.$$ This expression for $\Pi _{\mu \nu }$ also follows from Eq. (11) in [@FPV], where the photon GF has been calculated in DKP theory using dispersion approach.
One can easily find $a_{2}^{R}$ from the Eqs. (25),(35): $$a_{2}^{R}(k^{2})=\frac{\lambda }{3}a_{1}^{R}=+\frac{e^{2}k^{2}}{48\pi ^{2}}%
(k_{4}^{2}-k^{2})\int_{4m^{2}}^{\infty }\frac{dz\left( 1-\frac{4m^{2}}{z^{2}}%
\right) ^{3/2}}{z^{2}(z^{2}+k^{2})}.$$ One can write the expression[^13] for $a_{1}^{\beta }$ and $%
a_{2}^{\beta }$ ($\mu \neq 0$): $$\begin{aligned}
&&a_{1}^{\beta }=\frac{e^{2}}{16\pi ^{2}}(4m^{2}+k^{2})\int_{0}^{\infty }%
\frac{p\,dp}{E|\mathbf{k}|}\left( e^{\beta (E-\mu )}-1\right) ^{-1}\ln \frac{%
(k^{2}+2p\mathbf{k})^{2}+4E^{2}k_{4}^{2}}{(k^{2}-2p\mathbf{k}%
)^{2}+4E^{2}k_{4}^{2}} \nonumber \\
&& \\
&&a_{2}^{\beta }=\frac{e^{2}}{16\pi ^{2}}\int \frac{p^{2}\,dp}{Ep|\mathbf{k}|%
}\left( e^{\beta (E-\mu )}-1\right) ^{-1}\biggl\{(E^{2}-k_{4}^{2})\ln \frac{%
(k^{2}+2p\mathbf{k})^{2}+4E^{2}k_{4}^{2}}{(k^{2}-2p\mathbf{k}%
)^{2}+4E^{2}k_{4}^{2}} \nonumber \\
&&{}+2iEk_{4}\ln \frac{(k^{2}+2iEk_{4})^{2}-4p^{2}\mathbf{k}^{2}}{%
(k^{2}-2iEk_{4})^{2}-4p^{2}\mathbf{k}^{2}}\biggr\},\end{aligned}$$ where $$E=(p^{2}+m^{2})^{1/2}.\eqno{(39a)}$$
Some details of calculations can be found in the Appendix.
Conclusions
===========
We have proved the equivalence of photon GF in DKP and KGF statistical theories (Section **2**), and carried out calculations of the polarization operator in one-loop approximation to illustrate the equivalence (Section **3**). It would be interesting to generalize the proof of equivalence to the GF of many non-Abelian gluons in statistical quantum field DKP and KGF theories (see [@FP1]).
The generalization of the proof of equivalence for photon GF in DKP and KGF statistical theories to the case of charged vector fields can also be made; however, this proof will have a formal character due to the non-renormalizability of the theory.
Acknowledgments {#acknowledgments .unnumbered}
===============
One of us (V.Ya.F) thanks Prof. I.V.Tyutin and Prof. A.E.Shabad for useful discussions. B.M.P. and J.S. Valverde are grateful for R. Casana’s comments. This work was supported by FAPESP (V.Ya.F., grant 01/12585-6; B.M.P., grant 02/00222-9; J.S.V. full support grant 00/03812-6), RFBR/Russia (V.Ya.F., grant 02-02-16946 and 02-01-00556), LSS-1578.2003.2 and CNPq/Brazil (B.M.P.).
Appendix {#appendix .unnumbered}
========
1\. Let us consider the derivation of Eq. (35) in more detail. We start from Eq. (33), which can be rewritten in the form:
$$\begin{aligned}
&&a_{1}^{R}(k^{2})=-\frac{e^{2}}{(2\pi )^{3}}4\pi (4m^{2}+k^{2})\frac{%
\partial }{\partial m^{2}}\int_{0}^{1}dx\int_{0}^{l}p^{2}\,dp\left(
p^{2}+m^{2}+\frac{k^{2}}{4}(1-x^{2})\right) ^{-1/2} \\
&&\kern-18pt{}=-\frac{e^{2}}{(2\pi )^{2}}\sqrt{\frac{4}{k^{2}}}%
\int_{0}^{l}p^{2}\,dp(4m^{2}+k^{2})\frac{\partial }{\partial m^{2}}%
\int_{0}^{1}dx(\alpha +1-x^{2})^{-1/2};\mbox{ where }\alpha =\frac{%
4(p^{2}+m^{2})}{k^{2}}, \\
&&\qquad {}=-\frac{e^{2}}{(2\pi )^{2}}\sqrt{\frac{4}{k^{2}}}%
\int_{0}^{l}p^{2}\,dp(4m^{2}+k^{2})\left( \frac{4}{k^{2}}\right) {\frac{%
\partial }{\partial \alpha }}\int_{0}^{\frac{1}{\sqrt{1+\alpha }}}\frac{dy}{%
(1-y^{2})^{1/2}} \\
&&\quad {}=-\left( \frac{e^{2}}{8\pi ^{2}}\right) \int_{0}^{l}\frac{p^{2}\,dp%
}{E}\frac{4m^{2}+k^{2}}{k^{2}+4m^{2}+p^{2}}\Rightarrow
\mbox{\rm after
renormalization, see Eq.(34)} \\
&&\qquad \qquad {}=-\frac{e^{2}}{16\pi ^{2}}(k^{2})^{2}\int_{4m^{2}}^{\infty
}\frac{dz^{2}\left( 1-\frac{4m^{2}}{z^{2}}\right) ^{3/2}}{z^{2}(z^{2}+k^{2})}%
.\end{aligned}$$
[99]{} J.Matzubara, Progr. Theor. Phys., **9** (1953) 550
E.S.Fradkin, Doklady Akad. Nauk, **98** (1954) 47; ibid **100** (1955) 897
Proceedings of P.N.Lebedev Phys. Institute, Vol. 29, 1965.
C.W.Bernard, Phys. Rev., **D9** (1974) 3312.
J. I. Kapusta, *Finite Temperature Field Theory,* Cambridge University Press (1989).
M. Le Bellac, *Thermal Field Theory*, Cambridge University Press (2000).
R.Casana, V.Ya.Fainberg, B.M.Pimentel, J.S.Valverde; to be published in Phys. Lett. **A.**
V.Ya.Fainberg and B.M.Pimentel, Theor. Math. Phys., **124** (2000) 1234.
V.Ya.Fainberg and B.M.Pimentel, Braz. J. Phys. **30** (2000) 275.
V.Ya.Fainberg and B.M.Pimentel, Phys. Lett., **A271** (2000) 16.
V.Ya. Fainberg, B.M. Pimentel and J.S. Valverde; Proceedings of the XX Brazilian National Meeting of Particles and Fields, São Lourenço (1999),
e-Proc. http://www.sbf1.if.usp.br/eventos/enfpc/xx/procs/res127/
V.Ya.Fainberg, B.M.Pimentel and J.S.Valverde, Dispersion method in DKP theory, Proceedings of the International Meeting *Quantization Gauge Theories and Strings* dedicated to the Memory of E. S. Fradkin, Moscow, (2000), Vol II, p79 (edited by A. Semikhatov, M. Vasilied and V. Zaikin, Scientific World 2001).
H.J. Rothe, *Lattice Gauge Theories: An Introduction*, World Scientific, Singapore (1996).
[^1]: P.N. Lebedev Physical Institute, Moscow, Russia.
[^2]: Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, SP, Brazil.
[^3]: The detailed list of publications may be found in Proceedings of P.N.Lebedev Physical Institute, vol. , 1965, in Fradkin’s dissertation [@Proc], and in Kapusta’s [@Kapusta] and Le Bellac’s [@Le; @Bellac] books. See also references in the paper [@Casana].
[^4]: See the paper [@Bernard] for the discussion of gauge-dependence.
[^5]: In the case of charge $e=0$: $$Z_{0}=\prod_{\mathbf{p},\mathbf{k}}\left( 1-\exp (-\beta (E-\mu ))\right)
^{-1}\left( 1-\exp (-\beta \omega )\right) ^{-1},$$ where $E=\sqrt{\mathbf{p}^{2}+m^{2}},$ $\omega =|\mathbf{k}|$.
[^6]: In the Bose–Einstein case the chemical potential is always negative.
[^7]: Strictly speaking, the scalar fields $\phi (x)$ in DKP and $\varphi (x)$ in KGF theory are related by the following equation: $$\varphi (x)=\frac{1}{\sqrt{m}}\phi (x)$$
[^8]: The last term in Eq. (10) appears due to the term $e^{2}\int A_{\mu
}^{2}(x)\phi ^{*}(x)\phi (x)d^{4}x$ in the Lagrangian of the KGF theory.
[^9]: See Bernard’s work [@Bernard].
[^10]: One may note that $\Pi _{\mu \nu }^{D}$ given by Eq. (8.23) in [@Proc] does not coincide with our Eq. (14), breaking the equivalence.
[^11]: One can show that Eq. (\[ec16\]), strictly speaking, is satisfied only if the system is at *rest* ($\mathbf{u}=0,u_{4}=1$), see [@Proc], Chapter , sect. 7.
[^12]: See [@Proc], page 123, supplement 3 and [@Rothe], page 299.
[^13]: Some details are given in the Appendix.
---
abstract: 'An approximate stochastic model for the topological dynamics of the periodic triangular Lorentz gas is constructed. The model, together with an extremum principle, is used to find a closed form approximation to the diffusion coefficient as a function of the lattice spacing. This approximation is superior to the popular Machta and Zwanzig result and agrees well with a range of numerical estimates.'
address:
- 'School of Physics, University of New South Wales, Sydney Australia 2052'
- 'School of Mathematics and Statistics, University of New South Wales, Sydney Australia 2052'
author:
- 'C. Angstmann'
- 'G. P. Morriss'
bibliography:
- 'LGD.bib'
title: An analytic approximation to the Diffusion Coefficient for the periodic Lorentz Gas
---
Deterministic diffusion ,Lorentz gas
Introduction
============
The diffusion coefficient is perhaps the simplest example of a transport coefficient, as it describes the transport of mass in a system. The Lorentz gas, originally proposed as a model for the movement of electrons in a crystal lattice, comprises a single particle moving in a lattice of fixed scatterers; in the absence of a field the particle diffuses through the lattice.
A variety of methods have been employed to calculate the diffusion coefficient of the Lorentz gas. Machta and Zwanzig [@Machta:1983qy] used a Markov hopping process to generate an analytical diffusion coefficient approximation. A similar approach has been used more recently for the three-dimensional Lorentz gas [@GNS11]. As the model used by Machta and Zwanzig is stochastic it will be examined in more depth in section \[mz\]. Morriss and Rondoni [@Morriss:1994fj] used the periodic orbit expansion method to calculate the diffusion coefficient but also performed detailed calculations using the Green-Kubo relations and the mean squared displacement. Baranyai, Evans, and Cohen [@Baranyai:1993uq] calculated the diffusion coefficient based on the Green-Kubo relation although only for a limited number of densities. Gaspard and Baras [@Gaspard:1995vn] calculated the diffusion coefficient using an escape rate formalism which has been extended to other systems [@HG01]. Although these results have been known for some time there remains much interest in diffusion in billiard systems [@flower] and the periodic Lorentz gas [@Ch97; @D00; @MS08; @GS09].
Lorentz Gas Parameters
======================
We consider a triangular lattice of hard disk scatterers with a minimal spacing of $w$ between the surfaces of adjacent disks, as shown in Figure \[fig\_states\]. The wandering particle moves in a straight line in the area outside the scatterers until it has a specular collision with a scatterer. As long as the spacing is small enough, $w<\frac{4}{\sqrt{3}}-2$, the horizon is finite and there are no paths of infinite length, so a collision must occur. The path of the wandering particle can be described by a symbolic dynamics [@CGS92] constructed by assigning a symbol to the next (relative) scatterer. If the scatterer is a nearest neighbour then we assign that event as a [*short*]{} flight. If the scatterer is a next nearest neighbour then that path is a [*long*]{} flight (see Figure \[fig\_states\]). Any displacement of the wandering particle in the Lorentz gas with finite horizon is composed of a combination of short and long flights. Although this approach has the flavour of a periodic orbit expansion, at no stage do we use periodic orbits, and the similarity is just in the symbolic dynamics used. In the stochastic model we develop, the symbolic dynamics becomes the state space.
![\[fig\_states\]A section of the periodic triangular Lorentz gas including a typical path for the wandering particle beginning from the central scatterer. This path becomes the basis for the deterministic model that we introduce later. The two symbol state space used to label each segment of trajectory is a relative symbol defined by shifting the trajectory so that the segment originates from the central scatterer, and then choosing the symbol by the next scatterer it hits. Thus a flight from the central scatterer to a nearest neighbor is labelled [*s*]{} for short and a flight to a second nearest neighbor is labelled [*l*]{} for long. ](state.pdf){width="50.00000%"}
\[mz\]The Machta and Zwanzig Result
===================================
Machta and Zwanzig [@Machta:1983qy] construct their simple analytical estimate for the diffusion coefficient by replacing the trajectory of a particle with a random walk between triangular regions of the lattice called [*traps*]{}. A diagram of the [*trapping*]{} region is given in Figure \[fig\_mz\]. The probability of a transition from this region is calculated by considering the volume of phase space that will leave the trap in a time $\Delta t$. This leads to an expression for the mean occupation time of a trap as: $$\tau=\frac{\pi}{6w}\left(\frac{\sqrt{3}(w+2)^{2}-2\pi}{2}\right).$$ The diffusion coefficient on a two-dimensional isotropic lattice can be expressed as: $$D=\frac{l^{2}}{4\tau}$$ where $l=(2+w)/\sqrt{3}$ is the distance between the traps. This gives an approximation for the diffusion coefficient as: $$\label{eq_mz}
D_{\mathrm{MZ}}=\frac{w(w+2)^{2}}{\pi(\sqrt{3}(w+2)^{2}-2\pi)}.$$
![\[fig\_mz\]The Machta-Zwanzig trap indicated by the shaded region between three scatterers in the triangular lattice. In Figure \[fig\_states\] there is a trap between every group of three scatterers.](MZ_trap.pdf){width="50.00000%"}
This derivation relies on the assumption that the process of transitions between traps is Markov. The more collisions that occur in the trap, the more information about which hole the particle entered by is lost. The state space that Machta and Zwanzig use differs from the state space that will be employed here. There are similarities in the approaches but the method that is outlined here does not require the same Markov assumption. The Markov assumption is only justified at very small $w$ whereas the approximation outlined below should be applicable over a wider range of values.
Klages and Dellago [@Klages:2000lr] and Klages and Korabel [@KK02] have developed successive systematic refinements based on explicit correlations of the non-Markov events to improve the accuracy of the Machta and Zwanzig result.
A Deterministic Model
=====================
The diffusion coefficient is defined in terms of the linear growth of the mean-square displacement with time, and can be found using the Einstein relation: $$\label{eq_diff_1}
D=\lim_{t\rightarrow\infty}\frac{\langle \Delta r^{2}(t)\rangle}{4t}.$$ Here $\Delta {\bf r}(t)$ is the displacement of the particle as a function of time. For the Lorentz gas the dynamics consists of repetitions of a free-flight at velocity $v=1$, followed by a collision with a scatterer, giving the diffusion coefficient as a function of the spacing between the scatterers $w$. If no infinite length flights are possible, the Lorentz gas is said to have a [*finite horizon*]{} and the free-flights are of two types; flights between nearest neighbour scatterers (short flights) and flights between second nearest neighbours (long flights). Any physical trajectory can be written as a sequence of short and long flights, where each flight has a length $r_{i}$ and a unit vector direction ${\boldsymbol {\omega}}_{i}$.
To evaluate the diffusion constant from the Einstein relation we need both the length of the trajectory and the time taken to travel along it. The total time after $N$ flights can be written as the sum of the times for each flight $\delta t_{i}$ as $$t_{N}=\sum_{i=1}^{N}\delta t_{i} = \sum_{i=1}^{N} \frac {r_{i}} {v} = \frac {N_{s} \langle r_{s}\rangle + N_{l} \langle r_{l}\rangle} {v}$$ where $N_{s}$ is the number of short flights, $N_{l}$ is the number of long flights, $\langle r_{s}\rangle$ is the average length of a short flight and $\langle r_{l}\rangle$ is the average length of a long flight. The square displacement $\Delta r^{2} (t_{N})$ at time $t_{N}$ is given by $$\label{msd}
\Delta r^{2} (t_{N})=\big( \sum_{i=1}^{N} r_{i} {\boldsymbol {\omega}}_{i}\big)^{2}.$$ Notice that despite the fact that in the Lorentz gas the flights occur in a particular order, the value of both the square displacement and the time do not depend on that ordering. Thus we can rearrange the order of the terms in the sum in equation \[msd\] to collect together the short and long flights separately as $$\label{delr}
\begin{split}
\Delta r^{2} (t) &= \left( \sum_{i=1}^{N_{s}} r_{i} \boldsymbol {\omega}_{i} + \sum_{i=1}^{N_{l}} r_{i} \boldsymbol {\omega}_{i} \right)^{2}\\
&=\left( \sum_{i=1}^{N_{s}} r_{i} {\boldsymbol {\omega}}_{i} \right)^{2}+ \left( \sum_{j=1}^{N_{l}} r_{j} {\boldsymbol {\omega}}_{j} \right)^{2}
+ 2 \left( \sum_{i=1}^{N_{s}}r_{i}{\boldsymbol {\omega}}_{i} \right) \cdot \left( \sum_{j=1}^{N_{l}}r_{j}{\boldsymbol {\omega}}_{j} \right) \\
&= \sum_{i=1}^{N_{s}} r_{i}^{2} + \sum_{i \neq j}^{N_{s}} r_{i} r_{j} {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j}
+ \sum_{j=1}^{N_{l}} r_{j}^{2} + \sum_{i \neq j}^{N_{l}} r_{i} r_{j} {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j}
+ 2 \sum_{i=1}^{N_{s}} \sum_{j=1}^{N_{l}} r_{i} r_{j} {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j} .\\
\end{split}$$
Averaging the square displacement over all initial conditions, assuming that $r_{i}$ and ${\boldsymbol {\omega}}_{i}$ are uncorrelated and that the average $\langle {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j}\rangle = 0$ for all $i \neq j$, we obtain
$$\label{delrav}
\begin{split}
\langle \Delta r^{2} (t) \rangle = N_{s} \langle r_{s}^{2} \rangle + N_{l} \langle r_{l}^{2} \rangle .
\end{split}$$
The assumption that $\langle {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j}\rangle = 0$ is only likely to miss correlations between subsequent events, especially when both events are short flights. The dominant term in repeated short flights will be when the two flights are in nearly opposite directions, and then $\langle {\boldsymbol {\omega}}_{i} \cdot {\boldsymbol {\omega}}_{j}\rangle < 0$. Repeated long flights will also produce negative correlations but these occurrences are much rarer.
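The vanishing of the cross terms under averaging can be checked with a small Monte-Carlo sketch. The parameter values below are illustrative (fixed flight lengths, not the actual Lorentz-gas averages); with isotropic, uncorrelated directions the simulation reproduces $\langle \Delta r^{2}\rangle = N_{s}\langle r_{s}^{2}\rangle + N_{l}\langle r_{l}^{2}\rangle$.

```python
import math, random

random.seed(1)

# Illustrative two-length flight distribution (hypothetical values,
# not the actual Lorentz-gas averages): fixed lengths for simplicity.
P_l, r_s, r_l = 0.1, 0.5, 2.0
N, trials = 100, 20000

msd = 0.0
for _ in range(trials):
    x = y = 0.0
    for _ in range(N):
        r = r_l if random.random() < P_l else r_s
        theta = random.uniform(0.0, 2.0 * math.pi)  # isotropic direction
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    msd += x * x + y * y
msd /= trials

# With uncorrelated directions the cross terms average to zero:
expected = N * ((1.0 - P_l) * r_s**2 + P_l * r_l**2)
```

In the deterministic gas the residual (mostly negative) direction correlations discussed above would pull the simulated value slightly below this estimate.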
The probability of a short flight is $P_{s} = N_{s}/N$ and the probability of a long flight is $P_{l} = N_{l}/N$ so that in the limit $N\rightarrow\infty$ the diffusion coefficient becomes $$\label{eq_diff_1a}
D= \frac{P_{s} \langle r^{2}_{s}\rangle + P_{l} \langle r^{2}_{l}\rangle}{4(P_{s} \langle r_{s}\rangle + P_{l} \langle r_{l}\rangle)}$$
To evaluate this expression for the diffusion coefficient we need values for the averages $\langle r_{s}\rangle$, $\langle r_{l}\rangle$, $\langle r^{2}_{s}\rangle$ and $\langle r^{2}_{l}\rangle$, and the values of the probabilities $P_{s}$ and $P_{l}=1-P_{s}$, all of which are functions of the spacing $w$. The physical constraints on the system set lower bounds on $r_{s}$ and $r_{l}$ (and hence on their averages), as they cannot be lower than the minimum separation between the relevant scatterers. Thus $r_{s}>w$ and $r_{l}>\sqrt{3}(w+2)-2$. The values of the averages could be found numerically or by explicit integration over the billiard measure, as could the probabilities $P_{s}$ and $P_{l}$, but it is simpler to calculate the diffusion coefficient numerically.
Equation \[eq\_diff\_1a\] gives an upper bound on the diffusion coefficient, as the correlations that are ignored are mostly negative and appear in the numerator. To gain some quantitative understanding of the accuracy it is instructive to consider the numerical values obtained for various quantities at a single value of the spacing, $w=0.2$. Here the deterministic model (Equation \[eq\_diff\_1a\]) gives $D=0.208$ rather than the correct value $D=0.17$. This is obtained using the average values $\langle r_{s}\rangle = 0.446$, $\langle r_{l}\rangle=1.88$, $\langle r^{2}_{s}\rangle=0.2576$ and $\langle r^{2}_{l}\rangle=3.56$.
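For orientation, Equation \[eq\_diff\_1a\] can be evaluated directly with the averages quoted above. The long-flight probability used below is an assumption taken from the power-law fit given in the next section (it reproduces the quoted $D=0.208$).

```python
import math

w = 0.2
# Averages quoted in the text at w = 0.2
r_s_avg, r_l_avg = 0.446, 1.88
r_s2_avg, r_l2_avg = 0.2576, 3.56

# Long-flight probability (assumed; taken from the later power-law fit)
alpha = 2.0 * (math.sqrt(3.0) - 1.0)
beta = 1.0 / math.sqrt(3.0)
P_l = beta * w**alpha
P_s = 1.0 - P_l

# Equation (eq_diff_1a): deterministic-model diffusion coefficient
D = (P_s * r_s2_avg + P_l * r_l2_avg) / (4.0 * (P_s * r_s_avg + P_l * r_l_avg))
```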
Stochastic Model
================
Rather than proceeding with the deterministic model of the previous section we will use its structure, in particular Equation (\[eq\_diff\_1a\]), to construct a stochastic model for the deterministic Lorentz gas. The probability space for this model will be formed from the phase space of the Lorentz gas together with its natural measure. Rather than considering the whole state space we will take the Poincaré section and develop a discrete model composed of two states: long flights and short flights. The state space labels transitions between iterations on the Poincaré surface rather than parameterising the position on the surface. The stochastic model constructed in this fashion will not be Markovian, as the correlations in the deterministic trajectory persist; however, this is not an issue, as only the long-run, or stationary, probabilities are required. As a consequence, all transient behaviour is lost.
Our model for the diffusion coefficient is given by $$\label{eq_diff_1s}
D= \frac{P_{s} \hat r^{2}_{s} + P_{l} \hat r^{2}_{l}}{4(P_{s} \hat r_{s} + P_{l} \hat r_{l})}$$ where $P_{s}$ and $P_{l} = 1 - P_{s}$ are the probabilities of short and long flights. Here $\hat r_{s}$ and $\hat r_{l}$ are considered to be parameters which may depend on the spacing $w$. There is now no mapping of this stochastic model onto a random walk on a lattice, as the determinism of the previous model has been lost.
As we still have three parameters in the stochastic model, we look for a way to reduce this number. To do this we look for an extremum of $D$ as a function of the parameters $\hat r_{s}$ and $\hat r_{l}$. If we minimise $D$ with respect to both $\hat r_{l}$ and $\hat r_{s}$ we find that the required value of $\hat r_{l}$ is less than the physical minimum for $r_{l}$, that is $\sqrt{3}(w+2)-2$. So we set $\hat r_{l}$ at this lower bound and minimise $D$ with respect to $\hat r_{s}$ alone.
The calculation of the extremum is straightforward: differentiating equation \[eq\_diff\_1s\] with respect to $\hat r_{s}$, we find that $$\frac{\partial D}{\partial \hat r_{s}} = \frac{\hat r_{s}^{2} P_{s}^{2}+P_{s} P_{l} \hat r_{l} (2 \hat r_{s} - \hat r_{l})}{4(\hat r_{s}P_{s}+\hat r_{l}P_{l})^{2}}$$ which we set to zero. The positive solution of the quadratic in $\hat r_{s}$ gives a minimum at $$\hat r_{s} = r^{*}_{s}=\frac{\hat r_{l}(\sqrt{P_{l}}-P_{l})}{P_{s}}$$ Notice that $r^{*}_{s}$ now depends on $w$ through the dependence of the probabilities $P_{s}$ and $P_{l}$ and the fixed value of $\hat r_{l}$. The stochastic model now depends upon only a single parameter, $P_{l}=1-P_{s}$. After some algebraic manipulation the diffusion coefficient at the minimum is given by $$\label{eq_dp}
D=\frac{\hat r_{l}(\sqrt{P_{l}}-P_{l})}{2 P_{s}}=\frac{r^{*}_{s}}{2} .$$
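The minimisation can be verified numerically. The parameter values below are illustrative (chosen to correspond to $w=0.2$); the check confirms both that $r^{*}_{s}$ is a minimum of Equation \[eq\_diff\_1s\] and that $D$ at the minimum equals $r^{*}_{s}/2$.

```python
import math

def D(r_s, r_l, P_l):
    """Stochastic-model diffusion coefficient, Equation (eq_diff_1s)."""
    P_s = 1.0 - P_l
    return (P_s * r_s**2 + P_l * r_l**2) / (4.0 * (P_s * r_s + P_l * r_l))

# Illustrative parameter values, corresponding to w = 0.2
P_l = 0.0547
r_l = math.sqrt(3.0) * (0.2 + 2.0) - 2.0   # lower bound sqrt(3)(w+2)-2
P_s = 1.0 - P_l

# Closed-form minimiser and minimum value from the text
r_star = r_l * (math.sqrt(P_l) - P_l) / P_s
D_min = D(r_star, r_l, P_l)
```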
Probabilities
=============
The probabilities of short and long flights are integrals, weighted with the uniform measure, over the regions of initial conditions leading to short and long flights, and hence are in principle exactly computable. Rather than undertake this computation directly, the probability is calculated from a numerical simulation, which is equivalent to a Monte-Carlo evaluation of the integral. We calculate the probability from the frequency of occurrence of short and long flights over a range of spacings $w$ for trajectories of at least $5 \times 10^5$ collisions. The probability of a long flight $P_{l}$ as a function of $w$ appears continuous and smooth and is fitted well by a simple power law in $w$: $$\label{eq_plf}
P_{l}=\beta w^{\alpha}$$ where the coefficient choices $\alpha=2 ( \sqrt{3}-1)$ and $\beta =\frac{1}{\sqrt{3}}$ give a good fit to the data. We plot the simulation probabilities and the power law fit in Figure \[prob\_lf\].
![The probability of a long flight as a function of scatterer spacing $w$. Each point is a molecular dynamics calculation of at least $5\times 10^5$ collisions and the solid line is the power law fit $P_{l}=\beta w^{\alpha}$. Notice that this fit is a very good approximation throughout the whole range of spacings. []{data-label="prob_lf"}](Pl.pdf){width="100.00000%"}
Numerical results for the diffusion coefficient are obtained using the fitted expression for the probability $P_{l}$ and the lower bound for $\hat r_{l}$, thus the final result for the diffusion coefficient is $$\label{D_coeff}
D=\frac { \left(\sqrt{3} (w+2) - 2 \right) \left( \sqrt{\beta} w^{\alpha/2}-\beta w^{\alpha} \right) } {2 \left( 1 - \beta w^{\alpha} \right) }$$ A comparison of these results with simulation results is given in Figure \[dif\_min\]. This equation fits the simulation data remarkably well, falling outside the error bars only at $w=0.3$ and for very small values of $w$. A deviation may be expected at large $w$ as we are using the lower bound for $r_{l}$.
At $w=0.2$ the stochastic model (Equation \[D\_coeff\]) gives a value of $D=0.171$, using $r^{*}_{s}=0.343$, which is less than the value of $\langle r_{s}\rangle$ and should therefore lead to a smaller value for $D$. At first glance, the stochastic model appears to replace $\langle r^{2}_{s}\rangle$ by $\langle r_{s}\rangle^{2}$, with the same replacement for the long flights. The differences in these terms are $\langle r^{2}_{s}\rangle - \langle r_{s}\rangle^{2} =0.058$ and $\langle r^{2}_{l}\rangle - \langle r_{l}\rangle^{2} =0.03$, which account for only part of the difference between the diffusion coefficients of the deterministic and stochastic models.
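The values quoted here follow directly from Equation \[D\_coeff\]; a minimal sketch evaluating the closed form at $w=0.2$:

```python
import math

alpha = 2.0 * (math.sqrt(3.0) - 1.0)
beta = 1.0 / math.sqrt(3.0)

def diffusion(w):
    """Closed-form diffusion coefficient, Equation (D_coeff)."""
    P_l = beta * w**alpha                      # power-law fit (eq_plf)
    r_l = math.sqrt(3.0) * (w + 2.0) - 2.0     # lower bound for long flights
    return r_l * (math.sqrt(P_l) - P_l) / (2.0 * (1.0 - P_l))

D = diffusion(0.2)
r_star = 2.0 * D        # Equation (eq_dp): D = r*_s / 2
```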
![A comparison of values of the diffusion coefficient obtained from numerical simulation with two analytic results, that of Machta and Zwanzig (Equation \[eq\_mz\] the dashed line), and the result obtained here in Equation \[D\_coeff\] (the solid line). The diamonds are Green Kubo results from [@Machta:1983qy], the squares are escape rate calculations from Gaspard and Baras [@Gaspard:1995vn], and the filled circles are Green Kubo calculations from Baranyai, Evans, and Cohen [@Baranyai:1993uq]. The triangles are the Green Kubo results, and the upside down triangles are periodic orbit calculations from Morriss and Rondoni [@Morriss:1994fj].[]{data-label="dif_min"}](DC1.pdf){width="100.00000%"}
The fact that the minimum value of the diffusion coefficient gives the best value for $D$ over the whole range of $w$ suggests that there is some overall minimum principle at work, although there is no strong justification for this.
Small $w$ limit
===============
The limiting behaviour of the diffusion coefficient is incorrect as $w \rightarrow 0$, where linear behaviour is expected [@Bunimovich:1985yq]. For small $w$ the power law fit for the probability, Equation \[eq\_plf\], may be less accurate. The derivation of the diffusion coefficient is independent of the form of the probability. For the derivation to hold and the diffusion coefficient to be linear as $w\rightarrow0$, the probability must be of the form $$P_{l}\approx a w^{2}$$ where $a$ is the slope approaching $w=0$. Taking the following crossover form for the probability we can find a better fit over the entire range of $w$: $$P_{l}= \beta (e^{-\gamma w} w^{2}+(1-e^{-\gamma w})w^{\alpha})$$ where $\alpha$ and $\beta$ are as given before. The parameter $\gamma$ is fitted to the simulation data and found to be $\gamma = 167.37$. With this probability the diffusion coefficient has the correct limiting behaviour for small $w$.
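The crossover form can be checked numerically. The sketch below verifies that it tends to $\beta w^{2}$ (and hence gives a locally linear $D$) for very small $w$, while reverting to the pure power law at larger spacings:

```python
import math

alpha = 2.0 * (math.sqrt(3.0) - 1.0)
beta = 1.0 / math.sqrt(3.0)
gamma = 167.37   # fitted value quoted in the text

def P_l(w):
    """Crossover form: ~ beta*w**2 for small w, ~ beta*w**alpha for larger w."""
    return beta * (math.exp(-gamma * w) * w**2
                   + (1.0 - math.exp(-gamma * w)) * w**alpha)

def D(w):
    """Diffusion coefficient with r_l at its lower bound, as in (eq_dp)."""
    p = P_l(w)
    r_l = math.sqrt(3.0) * (w + 2.0) - 2.0
    return r_l * (math.sqrt(p) - p) / (2.0 * (1.0 - p))

# Local log-log slope of D(w) near w = 0; linear behaviour means slope -> 1
w = 1e-8
slope = math.log(D(2.0 * w) / D(w)) / math.log(2.0)
```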
Conclusion
==========
The simple stochastic model presented here works remarkably well over the full range of physical $w$ values. Based on very little information, the diffusion coefficient for the Lorentz gas in Equation \[D\_coeff\] is in good agreement with previous numerical results. The main advantages of the present method are its computational simplicity and surprising accuracy. A dynamically motivated stochastic model, combined with a minimisation procedure and a simple power law approximation to the probabilities, gives an accurate closed-form approximation to the diffusion coefficient. It can be corrected at very small $w$ to have the correct limiting behaviour.
|
---
abstract: 'The thermal expansion coefficients of $\mathrm{Na}_{N}$ clusters with $8 \le N \le 40$ and $\mathrm{Al}_{7}$, $\mathrm{Al}_{13}^-$ and $\mathrm{Al}_{14}^-$ are obtained from [*ab initio* ]{} Born-Oppenheimer LDA molecular dynamics. Thermal expansion of small metal clusters is considerably larger than that in the bulk and size-dependent. We demonstrate that the average static electric dipole polarizability of Na clusters depends linearly on the mean interatomic distance and only to a minor extent on the detailed ionic configuration when the overall shape of the electron density is enforced by electronic shell effects. The polarizability is thus a sensitive indicator for thermal expansion. We show that taking this effect into account brings theoretical and experimental polarizabilities into quantitative agreement.'
address: |
$^1$Institute for Theoretical Physics, University of Regensburg, D-93040 Regensburg, Germany\
$^2$Department of Physics, University of Jyväskylä, P.O. Box 35, FIN-40351 Jyväskylä, Finland
author:
- 'S. Kümmel$^1$, J. Akola$^2$, and M. Manninen$^2$'
title: Thermal expansion in small metal clusters and its impact on the electric polarizability
---
Since electronic shell effects were put into evidence in small metallic systems [@knightp; @ekardt; @beck; @manninen], metal clusters have continuously attracted great interest both experimentally and theoretically [@moullet2; @guan; @rubio; @ullrich; @chelikowsky; @rayane]. Besides technological prospects, one of the driving forces for this research has been the fundamental question of how matter develops from the atom to systems of increasing size, and how properties change in the course of this growing process. In some cases it has been possible to extract detailed information from experiments done at low temperatures [@expcoldopen] and the related theories [@koutecky96]. In many cases, however, a deeper understanding is complicated by the finite temperature which is present in most experiments due to the cluster production process, see e.g. the discussion in [@brockhaus]. Whereas a lot of theoretical information about finite temperature effects in nonmetallic systems has been gained in the last years [@thermal], only little is known about them in metallic clusters. Here, sodium is a particularly interesting reference system because of its textbook metallic properties and the fact that it has been extensively studied within the jellium model, see e.g. [@revmod] for an overview. Aluminum, on the other hand, is of considerable technological interest. Some advances in studying temperature effects in metal clusters including the ionic degrees of freedom were made using phenomenological molecular dynamics [@bulgac], a tight-binding Hamiltonian [@poteau], the Thomas-Fermi approximation [@tf] or the Car-Parrinello method [@roethlis]. Recently, it has also become possible to study sodium clusters of considerable size [@rytkoenen] using [*ab initio*]{} Born-Oppenheimer, local spin density molecular dynamics (BO-LSD-MD) [@ldamd].
In this work we report on the size dependence of a thermal property which is well known for bulk systems, namely the linear thermal expansion coefficient $$\label{defbeta}
\beta=\frac{1}{l}\frac{\partial l}{\partial T}.$$ For crystalline sodium at room temperature, it takes [@am] the value $71 \times 10^{-6} K^{-1}$, for Al $23.6 \times 10^{-6}
K^{-1}$. To date, however, it has not been known how small systems are affected by thermal expansion. At first sight, it is not even obvious how thermal expansion can be defined in small clusters. Whereas in the bulk it is no problem to define the length $l$ appearing in Eq. (\[defbeta\]), e.g. the lattice constant, it is less straightforward to choose a meaningful $l$ in the case where many different ionic geometries must be compared to one another. For small metal clusters, the latter situation arises because of the many different isomers which appear at elevated temperatures.
We have calculated the thermal expansion coefficients for $\mathrm{Na}_{8}$, $\mathrm{Na}_{10}$, $\mathrm{Na}_{12}$, $\mathrm{Na}_{14}$, $\mathrm{Na}_{20}$ and $\mathrm{Na}_{40}$ in BO-LSD-MD simulations. Results concerning isomerization processes in these simulations have been presented in [@aisspic], and the BO-LSD-MD method is described in detail in Ref. [@ldamd]. A meaningful length to be used in Eq. (\[defbeta\]) if it is applied to finite systems with similar overall deformation is the mean interatomic distance $$l_{\mathrm miad} = \frac{1}{N(N-1)}
\sum_{i,j=1}^{N} \left| {\bf R}_i - {\bf R}_j \right| ,$$ where $ {\bf R}_i$ are the positions of the $N$ atoms in the cluster. Obviously, $l_{\mathrm miad}$ measures the average “extension” of a cluster's ionic structure, and we calculated it for all configurations obtained in a BO-LSD-MD run. Two different methods were used to calculate $\beta$. First, we discuss the heating runs, in which the clusters were thermalized to a starting temperature and then heated linearly with a heating rate of 5 K/ps and a time step of 5.2 fs. $l_{\mathrm miad}$ was recorded after each time step. In this way, for $\mathrm{Na}_{8}$ the temperature range from about 50 K to 670 K was covered, corresponding to 24140 configurations, for $\mathrm{Na}_{10}$ from ca. 150 K to 390 K (9260 configurations), for $\mathrm{Na}_{14}$ from ca. 50 K to 490 K (17020 configurations), for $\mathrm{Na}_{20}$ from ca. 170 K to 380 K (8000 configurations), and for $\mathrm{Na}_{40}$ from ca. 200 K to 400 K (7770 configurations).
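The mean interatomic distance is straightforward to compute for any configuration. The sketch below uses a hypothetical set of coordinates (the vertices of a unit cube, not an actual Na geometry) and also illustrates a two-temperature estimate of $\beta$ of the kind used later for $\mathrm{Na}_{12}$:

```python
import itertools, math

def l_miad(positions):
    """Mean interatomic distance: average of |R_i - R_j| over all pairs."""
    n = len(positions)
    pair_sum = sum(math.dist(a, b)
                   for a, b in itertools.combinations(positions, 2))
    return pair_sum / (n * (n - 1) / 2)

# Hypothetical configuration: vertices of a unit cube (not a real cluster)
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
l1 = l_miad(cube)

# Two-temperature estimate of beta: a uniformly expanded copy standing in
# for the average structure at T2 = T1 + 100 K
scale = 1.01
cube_hot = [(scale * x, scale * y, scale * z) for x, y, z in cube]
l2 = l_miad(cube_hot)
beta = (l2 - l1) / (l1 * 100.0)   # Eq. (defbeta) as a finite difference
```

Since $l_{\mathrm miad}$ scales linearly with a uniform dilation, a 1% expansion over 100 K yields $\beta = 10^{-4}\,\mathrm{K}^{-1}$ in this toy case.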
Fig. \[miadvt8u10\] shows how $l_{\mathrm miad}$ changes with temperature for $\mathrm{Na}_{8}$ and $\mathrm{Na}_{10}$. Both curves show large fluctuations, as is to be expected for such small systems. However, one clearly sees a linear rise as the general trend. We therefore made linear fits to the data for each cluster in two ways. The first column in the left half of Table \[coeff\] gives the linear thermal expansion coefficients which we obtained from fitting the data in the temperature interval between 200 K and 350 K, i.e. around room temperature, where bulk sodium is usually studied. In order to allow for an estimate of the statistical quality of the fits in view of the fluctuations, the second and third columns in the left half of Table \[coeff\] list the ratios of the fit parameters, i.e. the axis intercept $a$ and the slope $b$, to their standard deviations. It becomes clear from these results that thermal expansion in the small clusters is considerably larger than that in the bulk. This can be understood as an effect of the increased surface to volume ratio in the finite systems. However, the expansion coefficient also strongly depends on the cluster size. This can even be seen directly from the different slopes in Fig. \[miadvt8u10\]. As we will show below, this size dependence has far-reaching consequences for the interpretation of experimental data, which is usually measured on hot clusters, as e.g. the static electric polarizability.
In addition to the values given in Table \[coeff\], we calculated the expansion coefficient of $\mathrm{Na}_{12}$ with a different method. In two separate runs, the cluster was thermalized to temperatures of about 200 K and 350 K, and then BO-LSD-MD was performed for 5 ps at each temperature, i.e. without heating. From the average $l_{\mathrm miad}$ found in the two simulations, $\beta_{\mathrm{Na}_{12}}=2.5 \, \beta_{\mathrm bulk}$ was calculated. Thus, also the second method leads to a $\beta$ that is larger than that of the bulk, i.e. it confirms the results of the heating runs.
The average thermal expansion coefficient for the full temperature range covered in each simulation is obtained from a fit to the complete set of data, shown as a dashed line in Fig. \[miadvt8u10\] for $\mathrm{Na}_{8}$ and $\mathrm{Na}_{10}$. This average is of interest because it covers several hundred K for each cluster in the range of temperatures which are to be expected for clusters coming from the usual supersonic expansion sources [@durgourd]. The right half of Table \[coeff\] lists these average expansion coefficients and their statistical deviations in the same way as before. As is to be expected, the values differ from the previous ones for the small clusters, because the expansion coefficient is influenced by which isomers are or become accessible at a particular temperature, i.e. especially at low temperatures it is temperature dependent. In Fig. \[miadvt8u10\] one sees, for example, by comparison with the dashed average line that for temperatures between 50 K and 100 K the thermal expansion is smaller than that seen at higher temperatures. However, once the cluster has reached a temperature where it easily changes from one isomer to another, the thermal expansion coefficient becomes nearly independent of the temperature. In the case of $\mathrm{Na}_{8}$, e.g., $\beta$ changes only by about 5 % in the interval between 300 K and 670 K.
Detailed previous investigations [@rytkoenen; @aisspic] have shown that small clusters do not show a distinct melting transition. However, the largest cluster studied here, $\mathrm{Na}_{40}$, shows a phase transition above 300 K [@rytkoenen]. At the melting point, the octupole and hexadecupole deformation of the electronic density sharply increase. If $l_{\mathrm
miad}$ is a relevant indicator for structural changes, then melting should also be detectable from it. Indeed we find a noticeable increase in $l_{\mathrm miad}$ at 300 K, and similar fluctuation patterns as in the multipole moments. In our simulation, we could only determine the expansion coefficient for the solid phase, and it is given in the right half of table \[coeff\].
$\beta / \beta_{\mathrm bulk}$ $\sigma (a)/a$ $\sigma (b)/b$ $\beta / \beta_{\mathrm bulk}$ $\sigma (a)/a$ $\sigma (b)/b$
-------------------- -------------------------------- ---------------- ---------------- -------------------------------- ---------------- ----------------
$\mathrm{Na}_{8} $ 2.4 0.001 0.04 1.7 $<$ 0.001 0.01
$\mathrm{Na}_{10}$ 3.6 0.002 0.03 2.8 0.001 0.02
$\mathrm{Na}_{14}$ 1.2 0.002 0.07 1.7 $<$ 0.001 0.01
$\mathrm{Na}_{20}$ 1.9 0.001 0.03 1.9 0.001 0.01
$\mathrm{Na}_{40}$ - - - 1.2 0.001 0.04
: Left half, first column: Linear thermal expansion coefficient of small Na clusters in the temperature interval between 200 and 350 K, given in terms of the bulk value $71 \times 10^{-6} K^{-1}$. Columns two and three give the ratio of the axis intercept $a$ and the slope $b$ to their standard deviations as obtained from the fits. Right half: Expansion coefficient averaged over 50-670 K for $\mathrm{Na}_{8}$, 150-390 K for $\mathrm{Na}_{10}$, 50-490 K for $\mathrm{Na}_{14}$, 150-460 K for $\mathrm{Na}_{20}$, and 200-300 K for $\mathrm{Na}_{40}$. See text for discussion.[]{data-label="coeff"}
As seen in Fig. \[miadvt8u10\], $\mathrm{Na}_{8}$ shows thermal expansion already at 50 K. This raises the question at which temperature the expansion actually starts, i.e. where anharmonic effects in the ionic oscillations will start to become important. In this context we note that one can compare the $l_{\mathrm miad}$ at T=0 K found by extrapolation from the heating data to the $l_{\mathrm
miad}$ which is actually found for the ground state structure at T=0 K. We have done this for $\mathrm{Na}_{8}$, $\mathrm{Na}_{10}$ and $\mathrm{Na}_{14}$, where the ground state structures are well established. In all cases, the differences between the two values were less than 1%. This indicates that the anharmonic effects for Na clusters are important down to very low temperatures. Furthermore, the anharmonicities should also be observable in the heat capacities [@rytkoenen], where they will lead to deviations from Dulong-Petit’s law. We have checked this and indeed found deviations between 8 % ($\mathrm{Na}_{20}$) and 19 % ($\mathrm{Na}_{8}$) from the Dulong-Petit value.
As an example of the considerable influence of thermal expansion on measurable physical properties we discuss the average static electric dipole polarizability $\alpha$, which is defined as one third of the trace of the polarizability tensor. It was one of the first observables from which the existence of electronic shell effects in metal clusters was deduced [@knightp], and it has been measured for clusters of various sizes and materials [@rayane]. For Na clusters with up to eight atoms, the polarizability was also calculated in different approaches [@moullet2; @guan; @rubio; @chelikowsky; @rayane]. These calculations qualitatively reproduce the experimentally observed trends, but they all underestimate the measured value. We show that this discrepancy is to a large part due to the fact that the calculations were done for T=0, whereas the measurement is done on clusters having temperatures of about 400 to 600 K [@durgourd].
For various different isomers obtained in our heating runs for $\mathrm{Na}_{8}$ and $\mathrm{Na}_{10}$, we have calculated the polarizability from the derivative of the induced dipole moment with respect to the electric field (finite field method). Since highly unsymmetric isomers from the high temperature part of the simulations were taken into account, the full tensor was computed by numerically applying the dipole field in the different directions in separate calculations. We have checked that the field strength used, $5 \times 10^{-5} e/a_0^2$, is large enough to give a numerically stable signal and small enough to be in the regime of linear response. In Fig. \[alvmiad8u10\] we have plotted the polarizabilities obtained in this way versus $l_{\mathrm miad}$, and show three instances of ionic geometries for each cluster that demonstrate how different the structures actually are. Nevertheless, within a few percent the polarizabilities lie on a straight line. This shows that the average polarizability depends mainly and strongly on the mean interatomic distance, and only to a minor extent on details of the ionic configurations. Of course, the situation might be more complicated for clusters where the overall shape, i.e. the lowest terms in the multipole expansion of the valence electron density, is not stabilized by electronic shell effects. For the present clusters, however, the deformation induced by the electronic shell effects persists even at elevated temperatures. That $\alpha$ is less sensitive to the detailed ionic configuration than, e.g., the photoabsorption spectrum is understandable because it is an average quantity.
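The finite field procedure can be sketched generically. In the sketch below `dipole_moment` is a hypothetical stand-in for a full self-consistent electronic-structure calculation (a linear toy response with made-up polarizability values); the central-difference structure of the evaluation is the point being illustrated.

```python
# Hypothetical linear toy response mu_i = sum_j alpha0_ij * E_j, standing in
# for a self-consistent calculation of the induced dipole moment.
ALPHA0 = [[100.0,   5.0,  0.0],
          [  5.0, 120.0,  0.0],
          [  0.0,   0.0, 90.0]]

def dipole_moment(E):
    return [sum(ALPHA0[i][j] * E[j] for j in range(3)) for i in range(3)]

def polarizability_tensor(field=5e-5):
    """Finite-field tensor: alpha_ij = d mu_i / d E_j by central differences."""
    alpha = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        E_plus = [0.0] * 3
        E_minus = [0.0] * 3
        E_plus[j], E_minus[j] = field, -field
        mu_p, mu_m = dipole_moment(E_plus), dipole_moment(E_minus)
        for i in range(3):
            alpha[i][j] = (mu_p[i] - mu_m[i]) / (2.0 * field)
    return alpha

alpha = polarizability_tensor()
# Average static polarizability: one third of the trace of the tensor
alpha_avg = sum(alpha[i][i] for i in range(3)) / 3.0
```

Applying the field separately along each axis, as here, recovers the full tensor even for unsymmetric geometries.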
The dependence of the polarizability on the mean interatomic distance has the consequence that $\alpha$ also strongly depends on the temperature. From Fig. \[alvmiad8u10\] one deduces that an average bond length increase of 1 $a_0$ in $\mathrm{Na}_{8}$ and $\mathrm{Na}_{10}$ leads to an increase in the polarizability of about 25 $\AA^3$. Thus, neglecting thermal expansion in T=0 calculations leads to polarizabilities which are smaller than the ones measured on clusters coming from supersonic expansion sources [@knightp; @rayane]. Of course, underestimations of the cluster bond lengths that are due to other reasons will also directly appear in the polarizability. With the Troullier-Martins pseudopotential, e.g., the BO-LSD-MD underestimates the dimer bond length by 4.5%, and it is to be expected that the situation is similar for the bond lengths of larger clusters. Taking this into account, one can proceed to calculate the polarizability for clusters with a temperature corresponding to the experimental one of about 500 K [@durgourd]. In the experiments the clusters spend about $10^{-4}$ s in the deflecting field from which the polarizability is deduced, i.e. the experimental timescale is orders of magnitude larger than the timescale of the fluctuations in the mean interatomic distance (see Fig. \[miadvt8u10\]). Thus, the fluctuations will be averaged over and can be neglected. From the average expansion coefficients we obtain a bond length increase of 0.48 $a_0$ for $\mathrm{Na}_{8}$ and 0.87 $a_0$ for $\mathrm{Na}_{10}$ at 500 K, which in turn leads to an increase in the polarizability of 12 $\AA^3$ and 23 $\AA^3$, respectively. The resulting polarizabilities of 130 $\AA^3$ for $\mathrm{Na}_{8}$ and 172 $\AA^3$ for $\mathrm{Na}_{10}$ compare favourably with the experimental values 134$\pm$16$\AA^3$ and 190$\pm$20$\AA^3$ [@knightp; @rayane]. For all other cluster sizes, the two experiments [@knightp; @rayane] give different values for the polarizability.
From the present work it becomes clear that differences in the experimental temperatures might be the reason for the discrepancies. Therefore, an accurate measurement of the clusters’ temperatures is necessary before further quantitative comparisons can be made. However, a detailed comparison to both experiments showed that the theoretical T=0 polarizability of all isomers underestimates both experimental results [@tobepu]. Thus, the increase in $\alpha$ that is brought about by thermal expansion will lead to better agreement between theory and experiment for all cluster sizes.
Thermal expansion is also observed in aluminum clusters. For $\mathrm{Al}_{7}$ we performed 5 ps of BO-LSD-MD at each of the fixed temperatures 100 K, 300 K, 500 K and 600 K, for $\mathrm{Al}_{13}^-$ at 260 K, 570 K and 930 K, and for $\mathrm{Al}_{14}^-$ at 200 K, 570 K and 900 K, in analogy to the procedure for $\mathrm{Na}_{12}$. From the average $l_{\mathrm miad}$ at each temperature, we calculated the expansion coefficients $\beta_{\mathrm Al_7}=1.3 \, \beta_{\mathrm bulk}$, $\beta_{\mathrm Al_{13}^-}=1.4 \, \beta_{\mathrm bulk}$, $\beta_{\mathrm Al_{14}^-}=1.4 \, \beta_{\mathrm bulk}$. It should be noted that with $\mathrm Al_{13}^-$ we have chosen an electronically as well as geometrically magic cluster [@akola], i.e. a particularly rigid one, and the fact that it also shows a larger expansion coefficient than the bulk is further evidence for the conclusion that the increased expansion coefficient is indeed a finite size effect. A noteworthy difference between Al and Na is seen in the temperature at which the expansion sets in. Whereas for Na this temperature is below 50 K, we observe that $\mathrm Al_{13}^-$ and $\mathrm Al_{14}^-$ show no expansion below 300 K.
In summary, we have calculated thermal expansion coefficients for small metal clusters and demonstrated that thermal expansion in these systems is larger than that in the bulk. For the case of sodium, the expansion coefficient does not depend monotonically on the cluster size. We showed that the average static electric dipole polarizability of clusters whose overall shape is fixed by electronic shell effects depends linearly on the mean interatomic distance. Thus, thermal expansion increases the static electric polarizability, and we demonstrated that taking this effect into account brings the theoretical values into close agreement with the experimental ones.
We thank M. Brack and A. Rytkönen for clarifying discussions. J.A. acknowledges support by the Väisälä Foundation, S.K. by the Deutsche Forschungsgemeinschaft, and all authors by the Academy of Finland.
W. D. Knight [*et al.*]{}, Phys. Rev. B [**31**]{}, 2539 (1985).
W. Ekardt, Phys. Rev. Lett. [**52**]{}, 1925 (1984).
D. E. Beck, Phys. Rev. B [**30**]{}, 6935 (1984).
M. Manninen, Phys. Rev. B [**34**]{}, 6886 (1986).
I. Moullet [*et al.*]{}, Phys. Rev. B [**42**]{}, 11589 (1990).
J. Guan [*et al.*]{}, Phys. Rev. B [**52**]{}, 2184 (1995).
A. Rubio [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 247 (1996).
C. A. Ullrich, P.-G. Reinhard, and E. Suraud, J. Phys. B [**31**]{}, 1871 (1998).
I. Vasiliev, S. Öğüt, and J. R. Chelikowsky, Phys. Rev. Lett. [**82**]{}, 1919 (1999).
D. Rayane [*et al.*]{}, Eur. Phys. J. D [**9**]{}, 243 (1999).
C. Ellert [*et al.*]{}, Phys. Rev. Lett. 75, 1731 (1995).
V. Bonačic-Koutecký [*et al.*]{}, J. Chem. Phys. [**104**]{}, 1427 (1996).
P. Brockhaus [*et al.*]{}, Phys. Rev. A [**59**]{}, 495 (1999).
J. Jellinek, T. Beck, and R. S. Berry, J. Chem. Phys. [**84**]{}, 2783 (1986); J. D. Honeycutt and H. C. Andersen, J. Phys. Chem. [**91**]{}, 4950 (1987); J. P. Rose and R. S. Berry, J. Chem. Phys. [**98**]{}, 3246 (1993); C. L. Cleveland, U. Landman, and W. D. Luedtke, J. Phys. Chem. [**98**]{}, 6272 (1994).
M. Brack, Rev. Mod. Phys. [**65**]{}, 677 (1993).
N. Ju and A. Bulgac, Phys. Rev. B [**48**]{}, 2721 (1993); F. Calvo and F. Spiegelmann, Phys. Rev. Lett. [**82**]{}, 2270 (1999).
R. Poteau, F. Spiegelmann, and P. Labastie, Z. Phys. D [**30**]{}, 57 (1994).
P. Blaise, S. Blundell, and C. Guet, Phys. Rev. B [**55**]{}, 15856 (1997); A. Aguado [*et al.*]{}, J. Chem. Phys. [**111**]{}, 6026 (1999).
U. Röthlisberger and W. Andreoni, J. Chem. Phys. [**94**]{}, 8129 (1991).
A. Rytkönen, H. Häkkinen, and M. Manninen, Phys. Rev. Lett. [**80**]{}, 3940 (1998).
R. Barnett and U. Landmann, Phys. Rev. B [**48**]{}, 2081 (1993).
N. W. Ashcroft and N. D. Mermin, [*Solid State Physics*]{}, (Saunders College Publishing, Fort Worth, 1976).
A. Rytkönen, H. Häkkinen, and M. Manninen, Eur. Phys. J. D [**9**]{}, 451 (1999).
P. Dugourd [*et al.*]{}, Chem. Phys [**218**]{}, 163 (1997).
S. Kümmel [*et al.*]{}, to appear in Eur. Phys. J. D.
J. Akola [*et al*]{}, Phys. Rev. B [**60**]{}, R11297 (1999).
|
---
abstract: 'We construct a relativistic and curved space version of action-angle variables for a particle trapped in a gravity and electromagnetic background with time-like isometry. As an example, we consider a particle in AdS background. Furthermore, we obtain the semiclassical quantisation of its energy levels.'
author:
- |
Jaehun Lee[^1], and Corneliu Sochichiu[^2]\
[*GIST College, Gwangju Institute of Science and Technology*]{}\
[*123 Cheomdan-gwagiro, Buk-gu, Gwangju*]{}\
[*Republic of KOREA*]{}
bibliography:
- 'AAvariable.bib'
title: 'Action-angle variables in curved space-time'
---
Introduction {#sec:Intro}
============
Action-angle variables give a parameterisation of a classical system with finite motion in which one variable (action) remains constant, while its canonical conjugate (angle) is evolving linearly in time.
At the early stage of quantum mechanics the discrete nature of energy was explained by the property of the action variable to take only discrete values, an approach known today as Bohr-Sommerfeld quantisation [@sommerfeld1921atombau]. Consequently, action-angle variables were studied extensively and became a standard subject of major classical mechanics textbooks (see e.g. [@goldstein2014classical]). We now know that although in some cases the Bohr-Sommerfeld prescription gives exact energy eigenvalues, it is generally not a proper way to describe a quantum system. Even so, Bohr-Sommerfeld quantisation remains a valid approximation when quantum effects are not very strong. Hence, action-angle variables are a powerful and intuitive tool for semiclassical analysis.
Beyond the application to semiclassical analysis the set of action-angle variables is an interesting construction by itself because it explicitly reveals the structure of a classically integrable system with finite motion. As the approach was mainly developed in the era of non-relativistic physics, application to relativistic systems was largely overlooked.
Here we address the question of whether a relativistic (re)formulation of the action-angle variables is possible.
Although there are a number of applications of action-angle variables to relativistic particles in gravity backgrounds (see [@Galajinsky:2013osa; @Galajinsky:2013mla; @Saghatelian:2014uba] for recent works), the method is applied in situations where the original relativistic system is equivalently reformulated in a non-relativistic manner. So, in spite of the very basic nature of this question, we did not find a satisfactory answer in the literature.
Here we consider a relativistic particle moving in a curved $ (1+1) $ - dimensional space-time background with a time-like Killing vector. In addition, the particle is charged and interacts with a background electromagnetic field with Killing vector invariant strength. We explicitly construct (in quadrature) the action-angle variables and give the (implicit) formula for the semiclassical Bohr-Sommerfeld quantisation of the energy levels of a particle trapped by the background.
The organization of the paper is as follows. In the next section, we introduce our approach starting from the discussion of geometric criteria of applicability of our formalism and further deriving the quadrature formulas for action and angle variables. There we also give the semi-classical quantisation prescription. Then we consider the example of two-dimensional Anti-de Sitter (AdS) space (or radial part of a higher-dimensional AdS space) and obtain quantised semiclassical energy levels for a particle trapped in such a space. Finally, we discuss our results.
Relativistic particle in a background with time-like Killing vector {#sec:sec}
===================================================================
The charged particle in a curved space-time is described by the following action, $$\mathcal{S}=\int \{-m\sqrt{g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}}-e\dot{x}^{\mu}A_{\mu}\}{\mathrm{d}}\tau\;,$$ where $g_{\mu\nu}$ is the $ 1+1 $-dimensional metric and $ A_{ \mu} $ is the vector potential of the two-dimensional Abelian gauge field. Greek indices $ \mu, \nu $, etc. run through the range $ (0,1) $.
In classical mechanics the action-angle variable method is applied to one-dimensional particles moving in a static binding potential. The notion of ‘binding’ will be clarified later; the geometric analogue of a static potential is the existence of an isometry given by a time-like Killing vector field. In our situation we also require the Killing vector to commute with the gauge field strength.
The time-like isometry together with diffeomorphism invariance allows us to choose a coordinate system in which the metric is time-independent and diagonal, i.e. the only non-trivial components of the metric in $ (t,x) $-coordinates are $ g_{00}(x) $ and $ -g_{11}(x) $.[^3]
As for the gauge potential $ A_{ \mu} $, we can impose the gauge condition, $ A_{1}=0 $. In our coordinate system the remaining component $ A_{0} \equiv \phi(x) $ will be time-independent as long as the field strength is time-independent.[^4] The latter is precisely the Killing invariance of the field strength tensor required above.
Fixing the world-line time reparametrisation by choosing, $$\tau = t(\equiv x^{0}),$$ we bring the action to the form, $$\label{Action:stand}
\mathcal{S}=\int \{- m\sqrt{g_{00}-g_{11}\dot{x}^{2}}-e\phi(x)\}{\mathrm{d}}t .$$ The action is the starting point of our analysis.
Before starting the analysis, let us observe that with a space-like coordinate transformation $ x'=x'(x) $, where $$x'(x)= \int \sqrt{ \frac{g_{11}}{ \sqrt{ g_{00}}}} {\mathrm{d}}x,$$ we can bring the metric to the form $ g_{00}=g_{11}^{2} $. In such a coordinate system, the non-relativistic limit of the action is particularly natural and easy, $$\label{nonrel-act}
\mathcal{S}_{\text{non-rel}}=
\int \left( \frac{1}{2}m \dot{ x'}^{2}- m\sqrt{g_{00}}-e \phi \right) {\mathrm{d}}t + \dots,$$ where $ g_{00}= g_{00}(x(x')) $ and $ \phi= \phi (x(x')) $. Hence, in this parameterisation our system is approaching a standard non-relativistic massive particle with the potential, $$\label{nonrel-pot}
V(x')= m\sqrt{ g_{00}}+e \phi,$$ as long as $ \dot{ x}'{}^{2}\ll \sqrt{g_{00}} $. Dots in eq. (\[nonrel-act\]) denote terms which are small in this limit.
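As a cross-check of the expansion above, one can let a computer algebra system expand the Lagrangian in the primed coordinates, where $ g_{00}=g_{11}^{2} $; writing $ a=\sqrt{g_{00}} $, so that $ g_{11}=a $ there. The following sketch (assuming SymPy is available; the symbol $a$ and the script itself are illustrative only) recovers the quadratic kinetic term together with the potential $ m\sqrt{g_{00}}+e\phi $:

```python
import sympy as sp

m, v, e, phi, a = sp.symbols('m v e phi a', positive=True)

# In the primed coordinates g00 = g11^2; write a = sqrt(g00), so g11 = a.
# Lagrangian at small velocity v = dx'/dt:
L = -m * sp.sqrt(a**2 - a * v**2) - e * phi

expansion = sp.series(L, v, 0, 4).removeO()
expected = sp.Rational(1, 2) * m * v**2 - m * a - e * phi
assert sp.simplify(expansion - expected) == 0
```

The assertion confirms that, to quadratic order in the velocity, the action reduces to that of a non-relativistic particle in the potential $ V=m\sqrt{g_{00}}+e\phi $.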
Action-angle variables
======================
Let us start with the Legendre transform of the action to the Hamiltonian description. The canonical momentum is given by, $$\label{Leg:momentum}
p \equiv \frac{ \partial \mathcal{L}}{ \partial \dot{x}}=
\frac{m g_{11} \dot{ x}}{ \sqrt{g_{00}- g_{11}\dot{ x}^{2}}}.$$
The expression is easily inverted for the velocity, $$\dot{ x}= \sqrt{ \frac{g_{00}}{g_{11}}}
\frac{p}{ \sqrt{p^{2}+m^{2}g_{11} }}.$$
The Hamiltonian is then given by, $$\label{Hamiltonian}
H \equiv p \dot{x}-\mathcal{L}= \sqrt{ \frac{g_{00}}{g_{11}}}
\sqrt{ p^{2}+ m^{2} g_{11}}
+ e \phi.$$
Now let us apply the Hamilton-Jacobi method to solve this system. Recall that the idea of the method is to make a canonical transformation to new variables $ J $ and $ \theta $, such that the new coordinate $ \theta $ is cyclic and the new canonical momentum $ J $ is conserved. The characteristic function $ W(x, E) $ of the transformation is found from the condition, $$\label{HJeq}
H(x,\partial W/ \partial x)-E =0,$$ where $E $ is a constant depending on initial conditions and determining the value of the Hamiltonian. The formal solution to eq. (\[HJeq\]) is given by, $$\label{Char-func}
W=
\int_{x_{0}}^{x}
\sqrt{ \frac{g_{11}}{g_{00}}(E - e \phi)^{2}-m^{2} g_{11}}\,{\mathrm{d}}x,$$ where $ x_{0} $ is the initial value of the position.
In the case of compact motion, we can define the *action variable* by extending the integral defining the characteristic function to one periodicity cycle of motion, $$\label{action-var}
J \equiv\oint p {\mathrm{d}}x=\oint \sqrt{ \frac{g_{11}}{g_{00}}( E- e \phi)^{2}-m^{2} g_{11}}\,{\mathrm{d}}x.$$
The integration path in (\[action-var\]) is bounded by *classical turning points*, at which the integrand vanishes, i.e., $$\label{eq:tp}
E-e \phi(x)= \pm m\sqrt{g_{00}}.$$ The existence of turning points determines the bound motion. As the particle at the turning points is in the non-relativistic regime, the turning point condition, curiously, is exactly the same as for a non-relativistic system with the potential (\[nonrel-pot\]), apart from the possibility of sign variation.
The turning points $ x_{i} $ and $ x_{i+1} $ bound a classical region determined by the condition, $$m\sqrt{g_{00}} \leq |E- e \phi|.$$
Notice that for non-vanishing mass the two branches, corresponding to either choice of sign in (\[eq:tp\]), are well separated. This implies that both ends of the same classical region should have turning points corresponding to the same choice of sign, which means that a particle can never “turn back in time” within a classical region. This is a manifestation of particle/antiparticle conservation in relativistic mechanics. In the case of zero mass, however, the situation could be different.[^5]
In a generic reference frame with the Killing vector given by components $ \xi^{ \mu} $, where $ \xi^{2} =\xi^{ \mu} \xi_{ \mu}>0 $ the general covariant form of the action variable is given by, $$\label{action-var-gi}
J(E)=
\oint_{C_{ E,\xi}} \sqrt{(E-e \xi \cdot A)^{2}/ \xi^{2}-m^{2}} {\mathrm{d}}\lambda,$$ where $ {\mathrm{d}}\lambda $ is the invariant integration measure over the ‘cycle’ $ C_{E, \xi} $ given by $ x^{ \mu}( \tau) $ such that, $$\xi_{ \mu} \dot{ x}^{ \mu}=0,$$ and the condition, $$E-e \xi^{ \mu} A_{ \mu} \geq m.$$ The expression is manifestly reparametrisation invariant, while in the special coordinate frame in which the Killing vector is $ \xi^{ \mu}=(1,0) $ we recover eq. (\[action-var\]).[^6] Notice that $ A_{ \mu} $ is still gauge fixed.
The classical equations of motion imply that the action variable $ J $ takes a constant value along the classical path. Let us solve eq. (\[action-var-gi\]) for $ E $. Then the conjugate angle variable $ \theta $ satisfies the equation, $$\dot{ \theta}= \frac{ \partial E (J)}{ \partial J} \equiv \omega (J).$$ As $ \omega(J) $ is a constant of motion too, the solution for $ \theta(t) $ is given by, $$\theta(t)= \omega t+ \theta_{0},$$ where $ \theta_{0} $ is the (new) initial condition, which together with the value of $ J $ (or the energy $ E $) gives a complete set of initial conditions.
Now we are ready to consider the semiclassical quantisation of the system. According to the Bohr-Sommerfeld quantisation prescription, the action variable should take discrete values given by, $$J(E_{n})= 2\pi (n+ \gamma)\hbar,$$ where $ n=0,1,2\dots $, and $ 0\leq \gamma <1 $ is a constant determining the vacuum value of the action variable. This gives an implicit formula for the energy levels $ E_{n} $.
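Since $ J(E) $ is rarely invertible in closed form, the implicit condition $ J(E_{n})=2\pi(n+\gamma)\hbar $ usually has to be solved numerically. A minimal sketch (assuming SciPy, in units with $ \hbar=1 $, and with a user-supplied bracketing interval) is the following; the harmonic oscillator, where $ J(E)=2\pi E/\omega $ and the semiclassical levels are exact, serves as a known test case:

```python
import numpy as np
from scipy.optimize import brentq

hbar = 1.0

def bohr_sommerfeld_levels(J, n_max, gamma=0.5, E_lo=1e-9, E_hi=1e3):
    """Solve J(E_n) = 2*pi*(n + gamma)*hbar for n = 0, ..., n_max."""
    return [brentq(lambda E: J(E) - 2 * np.pi * (n + gamma) * hbar, E_lo, E_hi)
            for n in range(n_max + 1)]

# harmonic-oscillator check: J(E) = 2*pi*E/omega gives E_n = (n + 1/2)*hbar*omega
omega = 2.0
levels = bohr_sommerfeld_levels(lambda E: 2 * np.pi * E / omega, 4)
```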
Example: AdS space
------------------
There are many good examples on which to try our approach, among which Anti-de Sitter (AdS) space is perhaps distinguished, due to the importance of this geometry.
The metric of the $ D+1 $-dimensional AdS is given by, $$\label{AdS-metr}
{\mathrm{d}}s^{2}=\left(1+\frac{r^{2}}{R^{2}}\right){\mathrm{d}}t^{2}-\left(1+\frac{r^{2}}{R^{2}}\right)^{-1}{\mathrm{d}}r^{2}-r^{2}{\mathrm{d}}\Omega_{D-1}^{2},$$ where $ {\mathrm{d}}\Omega_{D-1} $ is the differential of the $ (D-1) $-solid angle (the hyper-volume of a unit $ (D-1) $-sphere).
In the case of $ (1+1) $-dimensional space or a pure radial motion, the angular part can be discarded. In this case we deal with a two-dimensional space with the metric, $${\mathrm{d}}s^{2}=\left(1+\frac{r^{2}}{R^{2}}\right){\mathrm{d}}t^{2}-\left(1+\frac{r^{2}}{R^{2}}\right)^{-1}{\mathrm{d}}r^{2},$$ which is of the “standard” static diagonal form considered in the previous section.
Applying directly the definition (\[action-var\]), we find the action variable, $$\label{JAdS}
J=
\oint \sqrt{ \frac{E^{2}}{(1+r^{2}/R^{2})^{2}}- \frac{m^{2}}{1+ r^{2}/R^{2}}} {\mathrm{d}}r.$$ The integration is over one cycle of motion bounded by turning points $ r_{0} $ determined by the equation, $$E^{2}-m^{2}\left( 1+ \frac{r^{2}}{R^{2}}\right)=0 \Rightarrow
r_{0} = R \sqrt{(E/m)^{2}-1},$$ i.e. a particle in the AdS space is trapped in the region $ r \leq R \sqrt{(E/m)^{2}-1} $. Obviously, its energy cannot be less than its mass.
Evaluation of the integral in eq. (\[JAdS\]) yields, $$J=2\pi R(|E|-m).$$ Solving this for the energy we get, $$|E|=m+J/2\pi R.$$ This gives the angular speed $ \omega_{\text{AdS}}=1/R $ and a complete classical description of the model.
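The closed form $ J=2\pi R(|E|-m) $ is easily cross-checked by numerical quadrature of eq. (\[JAdS\]); recall that a full cycle runs from $ -r_{0} $ to $ r_{0} $ and back, hence the factor of two in the sketch below (assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

def J_ads(E, m=1.0, R=1.0):
    """Action variable for radial motion in AdS2, by numerical quadrature."""
    r0 = R * np.sqrt((E / m) ** 2 - 1.0)           # classical turning point
    def p_abs(r):                                  # |p(r)| along the trajectory
        u = 1.0 + (r / R) ** 2
        return np.sqrt(max(E ** 2 / u ** 2 - m ** 2 / u, 0.0))
    half, _ = quad(p_abs, -r0, r0)
    return 2.0 * half                              # full cycle: there and back

# compare with the closed form 2*pi*R*(|E| - m), e.g. at E = 2, m = R = 1
```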
The semiclassical energy levels are given by, $$|E_{n}|=m+ \frac{ \hbar}{R} (n+ \gamma),$$ which is compatible with the result of solving the Schrödinger equation in AdS [@Fitzpatrick:2014vua] (see also [@Kapl]). Let us note that positive energies ($ E_{n}>m $) correspond to energy levels of the particle, while negative energies ($ E_{n}<-m $) stand for the antiparticle.
Remarkably, the particle in the AdS space background provides an example, alongside the harmonic oscillator, for which semiclassical energy levels are exact.
Discussion
==========
In this work, we introduced action-angle variables for a relativistic particle moving in a gravity and electromagnetic field background. The approach is readily available as long as the background is static, i.e. there is a time-like Killing vector field preserving both the metric and the electromagnetic field strength. The existence of the time isometry is the relativistic counterpart of the conservative character of a non-relativistic system.
As an example, we applied the approach to the radial motion of a particle in the anti-de Sitter background, for which we were able to find semiclassically quantised energy levels. Let us note that when $ E\gg m $, the classical trajectory of the particle is deeply relativistic. Therefore the non-relativistic approximation cannot be used here, while the system can still be semiclassical.
Although the method is explicitly constructed for $ 1+1 $-dimensional spaces, it can be generalised to higher dimensions as long as the geometry allows separation of the dynamical variables.
### Acknowledgements {#acknowledgements .unnumbered}
This work is done within the undergraduate research program for the bachelor’s degree thesis [@Jaehun-thes].
[^1]: e-mail:`[email protected]`
[^2]: e-mail: `[email protected]`
[^3]: Notice that we separate the sign from the definition of $ g_{11} $, i.e. $ g_{11}(x)>0 $.
[^4]: The proof is left to the reader.
[^5]: In this case one should consider non-charged particles.
[^6]: In these coordinates the covariant Killing vector is $ \xi_{ \mu}= (g_{00},0) $.
|
---
abstract: 'This article proposes a family of link functions for the multinomial response model. The link family includes the multicategorical logistic link as one of its members. Conditions for the local orthogonality of the link and the regression parameters are given. It is shown that local orthogonality of the parameters in a neighbourhood makes the link family location and scale invariant. Confidence regions for jointly estimating the percentiles based on the parametric family of link functions are also determined. A numerical example based on a combination drug study is used to illustrate the proposed parametric link family and the confidence regions for joint percentile estimation.'
author:
- |
I. Das, S. Mukhopadhyay[^1]\
Department of Mathematics, Indian Institute of Technology Bombay, Mumbai, India
bibliography:
- 'reidview.bib'
title: On generalized multinomial models and joint percentile estimation
---
Keywords: confidence regions, multicategorical logistic link, parameter orthogonality, standardization
Introduction
============
In this article we address two issues related to multinomial response models, (i) a family of link functions and (ii) percentile estimation under a parametric link family. In the first few sections we propose a family of link functions for multinomial nominal response models. When working with multinomial data sets the common practice is to fit the multicategory logistic link function [@agresti_2002 pp. 267-274]. However, [@1992_czado] show that if the link function is incorrectly assumed then it leads to biased estimates, thus increasing the mean squared error of prediction. Using a data set based on a combination drug therapy experiment we show that parameter estimation is improved by using the proposed link family instead of the commonly used multivariate logistic link. The parametric link family proposed includes the multivariate logistic link as one of its members. In the later part of the article we discuss three methods for finding confidence regions for the percentiles of a multinomial response model. The confidence regions determined are based on the estimated values of the link parameters. In univariate generalized linear models (GLMs), especially for binary data, families of link functions have been discussed by many researchers. Some of the one- and two-parameter link families for binary models are proposed in [@1975_prentice; @1976_prentice; @1980_pregibon; @1982_johnson; @1981_aranda; @1988_stukel; @1992_czado; @1992_czadob; @1993_czado; @1997_czado; @1999_lang]. However, unlike for binary regression models, research papers on link families for multinomial responses are rarely found in the literature. The two parametric link families proposed by [@1985_genter] and [@1999_lang] are applicable only to multinomial data sets with ordered categories. To date we have been unable to find any work which addresses a family of link functions for multinomial data sets with nominal responses.
The situation is similar for percentile estimation methods in multinomial response models. Though a large number of research papers (namely, [@1979_hamilton; @1986_carter; @1986_williams; @2001_huang; @2006_biedermann; @2011_li]) have been written on percentile estimation and the effect of link misspecification on percentile estimation for binary data, almost no work has been done in the case of multinomial data. There are, however, many experimental situations where multinomial responses may be observed for each setting of a group of control variables. As a typical example we may consider a drug testing experiment, where both the efficacious and toxic responses of the drug(s) are measured on the subjects. This results in two responses, efficacy and toxicity of the drug, both of which are binary in nature. Since the responses come from the same subject they are assumed to be correlated, and can be modeled using a multinomial distribution [@mukhopadhyaykhuri_2008b]. In this situation it may be of interest to the experimenter to jointly estimate the $100p$ percentile of the efficacy and toxic responses. In this article we discuss a numerical example based on the pain relieving and toxic effects of two analgesic drugs and determine confidence regions for the $100p$ percentiles of both responses.
While parametric link families are able to improve the maximum likelihood fit when compared to canonical links, any correlation between the link and the regression parameters leads to an increase in the variances of the parameter estimates [@1997_czado]. However, it can be shown that if the parameters are orthogonal to each other then the variance inflation reduces to zero for large sample sizes [@1987_cox]. Conditions for local orthogonality in a neighbourhood were proposed by [@1997_czado] for univariate GLMs. In this article we extend these conditions so that we can apply them to a multiresponse situation. It is also shown that the local orthogonality of the parameters implies location and scale invariance of the family of link functions.
The remainder of the article is organized as follows: In Section \[mvglm\] we describe the family of link functions for the multinomial model. Detailed conditions of local orthogonality between the link and the regression parameters are given in Section \[ortho\]. In Section \[cr\] we discuss three interval methods for percentile estimation in a multinomial model. The proposed link family and confidence regions are illustrated with a numerical example based on a drug testing experiment in Section \[example\]. Concluding remarks are given in Section \[conclusion\].
A family of link functions for multinomial data {#mvglm}
===============================================
In this section the multinomial response model with a parametric link function is introduced. We use a scaled version of the multinomial distribution. The following three components are used to describe it:
- Distributional component: A random sample of size $n$, $\bold{y}_1,\ldots,\bold{y}_n$, is selected from a multinomial distribution with parameters $({{\boldsymbol{\pi}}}_i,n_i);\,{{\boldsymbol{\pi}}}_i = (\pi_{i1},\ldots,\pi_{iq}),\, i = 1,\ldots,n$. The density function of $\bar{\textbf{y}}_i={\textbf{y}}_i/n_i$ also called the scaled multinomial distribution [@fahrmeirtutz_2001 p 76] is, $$s(\bar{\textbf{y}}_i|\boldsymbol\theta_i,\phi,\omega)=\exp\left\{\frac{[\bar{\textbf{y}}'_i\boldsymbol
\theta_i-b(\boldsymbol\theta_i)]}{\phi}\omega_i+c(\textbf{y}_i,\phi,\omega_i)\right\},\label{smd}$$ where $\boldsymbol\theta_i=\left[\log(\frac{\pi_{i1}}{1-\sum_{j=1}^q\pi_{ij}}),
\ldots,\log(\frac{\pi_{iq}}{1-\sum_{j=1}^q\pi_{ij}})\right]'$, $b(\boldsymbol\theta_i)=-\log({1-\sum_{j=1}^q\pi_{ij}})$, $c({\textbf{y}}_i,\phi,\omega_i)=\log\left(
\frac{n_i!}{y_{i1}!\ldots y_{iq}!(n_i-y_{i1}-\ldots-y_{iq})!}\right)$, $\omega_i=n_i$ and $\phi=1$. The total number of observations is $N=\sum_{i=1}^n n_i$.
- Linear predictor: A $q$ dimensional linear predictor, ${{\boldsymbol{\eta}}}({{\mathbf{x}}})={{\mathbf{Z}}}({{\mathbf{x}}}){{\boldsymbol{\beta}}}$, where ${{\mathbf{Z}}}({{\mathbf{x}}})=\bigoplus_{j=1}^{q}\textbf{f}_j(\textbf{x})$, $\textbf{f}_j(\textbf{x})$ is a known vector function of $\textbf{x}$, $\boldsymbol\beta=[\boldsymbol\beta_1',\ldots, \boldsymbol\beta_q']'$ is the $p \times 1$ vector of unknown parameters with the $j$th component, $\boldsymbol\beta_j$, of length $p_j$ and $p=\sum_{j=1}^q p_j$.
- Parametric link function: ${{\boldsymbol{\mu}}}={{\boldsymbol{\pi}}}={{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})$, where ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot)=[h_1({{\boldsymbol{\alpha}}},\cdot),\ldots,h_q({{\boldsymbol{\alpha}}},\cdot)]'$, ${{\boldsymbol{\alpha}}}_{r\times 1}=[{{\boldsymbol{\alpha}}}_1',\ldots,{{\boldsymbol{\alpha}}}_q']'$, ${{\boldsymbol{\alpha}}}_j$ is of length $r_j$ and $\sum_{j=1}^{q}r_j=r$.
Proposed form of parametric link function
-----------------------------------------
Several researchers [@1988_stukel; @1989_czado] propose the following generalization for binary response models with a logistic link function $$\mu({{\mathbf{x}}})=E(y|{{\mathbf{x}}})=h({{\boldsymbol{\alpha}}},\eta)=\frac{\exp\{G({{\boldsymbol{\alpha}}},\eta)\}}{[1+\exp\{G({{\boldsymbol{\alpha}}},\eta)\}]},$$ where $G({{\boldsymbol{\alpha}}},\cdot)$ is a generating family with the unknown link parameter ${{\boldsymbol{\alpha}}}$. For example using the generating family by [@1989_czado] we get $$G({{\boldsymbol{\alpha}}},\eta)
=\left\{\begin{matrix}\frac{(1+\eta)^{\alpha_{1}}-1}{\alpha_{1}} & \text{if} & \eta\geq 0\\
-\frac{(1-\eta)^{\alpha_{2}}-1}{\alpha_{2}} & \text{if} & \eta<0,
\end{matrix}\right.\label{czado}$$ where ${{\boldsymbol{\alpha}}}=[\alpha_1,\alpha_2]'$. Usually when modeling the mean in a multinomial response model the multivariate version of the logit model [@agresti_2002 pp. 267-274] is used, $$\begin{aligned}
\pi_{j}=h_j({{\boldsymbol{\eta}}})=\frac{\exp(\eta_{j})}{1+\sum_{l=1}^q\exp(\eta_{l})},
\text{ for $j=1,\ldots,q$}.\end{aligned}$$ An alternative form of the above model is given by using the link function ${{\mathbf{g}}}$ where ${{\mathbf{g}}}={{\mathbf{h}}}^{-1}$, $$\begin{aligned}
\eta_j=g_j({{\boldsymbol{\mu}}})=\log\frac{\mu_{j}}{1-\sum_{j=1}^q\mu_{j}}.\end{aligned}$$ Analogous to the binary case we propose the following generalization of the multicategorical logit model, $$\begin{aligned}
\pi_{j}=h_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})=\frac{\exp\{G_j({{\boldsymbol{\alpha}}}_j,\eta_j)\}}
{1+\sum_{l=1}^q\exp\{G_l({{\boldsymbol{\alpha}}}_l,\eta_l)\}},\text{ for $j=1,\ldots,q$},\label{Gh}\end{aligned}$$ where ${{\mathbf{G}}}=[G_1,G_2,\ldots,G_q]'$, $G_j({{\boldsymbol{\alpha}}}_j,\cdot)$ is a generating family for binary response models as described above, and $h_j({{\boldsymbol{\alpha}}},\cdot)$ is the $j$th component of ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot)={{\mathbf{g}}}^{-1}({{\boldsymbol{\alpha}}},\cdot)$. The family $\Lambda=\{{{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot):{{\boldsymbol{\alpha}}}\in{{\boldsymbol{\Omega}}}\}$ includes the multivariate logistic link function if there exists an ${{\boldsymbol{\alpha}}}_0\in{{\boldsymbol{\Omega}}}$ such that ${{\mathbf{G}}}({{\boldsymbol{\alpha}}}_0,\cdot)$ is the identity function.
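To make the construction concrete, a small illustrative sketch (ours, not tied to any software used in the analysis below) of the generating family (\[czado\]) and of the generalized link (\[Gh\]) may help; with all link parameters set to $1$ the construction reduces to the ordinary multivariate logit:

```python
import numpy as np

def G(a1, a2, eta):
    """Czado-type generating family; G(1, 1, eta) = eta (the logit case)."""
    if eta >= 0:
        return ((1.0 + eta) ** a1 - 1.0) / a1
    return -((1.0 - eta) ** a2 - 1.0) / a2

def h(alphas, etas):
    """pi_j = exp{G_j(alpha_j, eta_j)} / (1 + sum_l exp{G_l(alpha_l, eta_l)})."""
    g = np.array([G(a1, a2, e) for (a1, a2), e in zip(alphas, etas)])
    expg = np.exp(g)
    return expg / (1.0 + expg.sum())

# with all link parameters equal to 1 we recover the multivariate logit
etas = np.array([0.3, -0.7])
pi_vals = h([(1.0, 1.0), (1.0, 1.0)], etas)
```

Note that the limiting cases $\alpha_1\to 0$ or $\alpha_2\to 0$ (which give logarithmic forms) are omitted here for brevity.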
Parameter estimation
--------------------
Summing up the previous sections we can write the multinomial model with a parametric link function as, $${{\boldsymbol{\pi}}}({{\mathbf{x}}})={{\mathbf{h}}}[{{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}({{\mathbf{x}}})],$$ where ${{\boldsymbol{\eta}}}({{\mathbf{x}}})=[{{\boldsymbol{\eta}}}_1({{\mathbf{x}}}),\ldots,{{\boldsymbol{\eta}}}_q({{\mathbf{x}}})]' ={{\mathbf{Z}}}({{\mathbf{x}}}){{\boldsymbol{\beta}}},\,{{\mathbf{x}}}\in R^{k}$, $\boldsymbol\beta$ is a $p \times 1$ vector of unknown parameters and ${{\boldsymbol{\alpha}}}$ is $r\times 1$ vector of unknown link parameters. Also, $${{\boldsymbol{\eta}}}_{j}={{\boldsymbol{\eta}}}_j({{\mathbf{x}}})={{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\beta}}}_j={{\mathbf{g}}}_j[{{\boldsymbol{\alpha}}},{{\boldsymbol{\pi}}}({{\mathbf{x}}})],\text{ for $j=1,\ldots,q$},$$ where ${{\mathbf{g}}}=[{{\mathbf{g}}}_1,\ldots,{{\mathbf{g}}}_q]'$ is the inverse of ${{\mathbf{h}}}=[{{\mathbf{h}}}_1,\ldots,{{\mathbf{h}}}_q]'$. We use the notation ${{\boldsymbol{\delta}}}$ to denote the joint vector of the regression and link parameters, thus ${{\boldsymbol{\delta}}}=[{{\boldsymbol{\beta}}}',{{\boldsymbol{\alpha}}}']'$ is a vector of length $(p+r)$. The parameter vector ${{\boldsymbol{\delta}}}$ is estimated using the maximum likelihood estimation (MLE) method. A brief description of the procedure is given as follows:
Using the scaled version of the multinomial distribution as described in equation (\[smd\]) the log-likelihood function for the sample ${{\mathbf{y}}}_1,\ldots,{{\mathbf{y}}}_n$ is, $$\begin{aligned}
l({{\boldsymbol{\delta}}}) &=& \sum_{i=1}^n l_i({{\boldsymbol{\delta}}})\nonumber\\
&=& \sum_{i=1}^n[{\bar{{{\mathbf{y}}}}}_i'{{\boldsymbol{\theta}}}_i-b({{\boldsymbol{\theta}}}_i)]n_i+constant.\label{ll}
\end{aligned}$$ Thus the score function is [@fahrmeirtutz_2001 p 436], $$\begin{aligned}
\frac{\partial l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}} &=& \frac{\partial}{\partial{{\boldsymbol{\delta}}}}\sum_{i=1}^n[{\bar{{{\mathbf{y}}}}}_i'{{\boldsymbol{\theta}}}_i-b({{\boldsymbol{\theta}}}_i)]n_i\nonumber\\
&=& \sum_{i=1}^n\frac{\partial{{\boldsymbol{\mu}}}_i}{\partial{{\boldsymbol{\delta}}}}[Var({\bar{{{\mathbf{y}}}}}_i)]^{-1}({\bar{{{\mathbf{y}}}}}_i-{{\boldsymbol{\mu}}}_i),
\label{dlddeltav}
\end{aligned}$$ and [@fahrmeirtutz_2001 p 436] $$\begin{aligned}
-\frac{\partial^2 l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}\partial{{\boldsymbol{\delta}}}'} &=& \sum_{i=1}^n\frac{\partial{{\boldsymbol{\mu}}}_i}{\partial{{\boldsymbol{\delta}}}}[Var({\bar{{{\mathbf{y}}}}}_i)]^{-1}\frac{\partial{{\boldsymbol{\mu}}}_i}{\partial{{\boldsymbol{\delta}}}'}-
\sum_{i=1}^n\sum_{j=1}^q\frac{\partial^2\theta_{ij}}{\partial{{\boldsymbol{\delta}}}\partial{{\boldsymbol{\delta}}}'}(\bar{y}_{ij}-\mu_{ij})n_i\nonumber\\
&=& {{\mathbf{H}}}_n,\ (say).\label{dl2ddeltav}
\end{aligned}$$ From equation (\[dl2ddeltav\]) we get the Fisher information matrix to be $$\begin{aligned}
{{\mathbf{J}}}_n &=& -E\left[\frac{\partial^2 l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}\partial{{\boldsymbol{\delta}}}'}\right] = \sum_{i=1}^n\frac{\partial{{\boldsymbol{\mu}}}_i}{\partial{{\boldsymbol{\delta}}}}[Var({\bar{{{\mathbf{y}}}}}_i)]^{-1}\frac{\partial{{\boldsymbol{\mu}}}_i}{\partial{{\boldsymbol{\delta}}}'}.\label{jv}
\end{aligned}$$
For maximizing the log-likelihood the Fisher scoring iteration method is used, which yields, $${{\boldsymbol{\delta}}}^{(m+1)}={{\boldsymbol{\delta}}}^{(m)}+{{\mathbf{J}}}_n^{-1}\frac{\partial l[{{\boldsymbol{\delta}}}^{(m)}]}{\partial{{\boldsymbol{\delta}}}},$$ where $m$ indicates the $m$th iteration. It is also possible to use the method given in [@1988_stukel] for finding an approximate MLE of ${{\boldsymbol{\delta}}}$. In this method, the MLE of ${{\boldsymbol{\beta}}}$ for fixed ${{\boldsymbol{\alpha}}}$ is obtained first and denoted by $\hat{{{\boldsymbol{\beta}}}}({{\boldsymbol{\alpha}}})$. An approximate MLE of ${{\boldsymbol{\delta}}}$ is then given by $\hat{{{\boldsymbol{\delta}}}}=[\hat{{{\boldsymbol{\beta}}}}'(\hat{{{\boldsymbol{\alpha}}}}),\hat{{{\boldsymbol{\alpha}}}}']'$, where $\hat{{{\boldsymbol{\alpha}}}}$ maximizes the log-likelihood function over a set of ${{\boldsymbol{\alpha}}}$ values.
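The Fisher scoring iteration can be sketched in code for the special case of the multivariate logit link (i.e. ${{\mathbf{G}}}$ equal to the identity and ${{\boldsymbol{\alpha}}}$ held fixed), for which the score reduces to $\sum_i{{\mathbf{Z}}}_i'({{\mathbf{y}}}_i-n_i{{\boldsymbol{\pi}}}_i)$ and the information to $\sum_i n_i{{\mathbf{Z}}}_i'[\mathrm{diag}({{\boldsymbol{\pi}}}_i)-{{\boldsymbol{\pi}}}_i{{\boldsymbol{\pi}}}_i']{{\mathbf{Z}}}_i$. The simulation below is purely illustrative (synthetic data, $q=2$ response categories plus a baseline, one scalar covariate):

```python
import numpy as np

rng = np.random.default_rng(0)

# eta_j(x) = b_j0 + b_j1 * x for j = 1, 2; Z(x) is block-diagonal in f(x) = [1, x]
beta_true = np.array([0.5, -1.0, -0.3, 0.8])
xs = rng.uniform(-2.0, 2.0, size=100)
n_i = 50                                   # multinomial size at each design point

def Z(x):
    f = np.array([1.0, x])
    Zx = np.zeros((2, 4))
    Zx[0, :2] = f
    Zx[1, 2:] = f
    return Zx

def probs(beta, x):                        # multivariate logit link
    eta = Z(x) @ beta
    e = np.exp(eta)
    return e / (1.0 + e.sum())

Y = []                                     # simulate counts (first q categories)
for x in xs:
    p = probs(beta_true, x)
    Y.append(rng.multinomial(n_i, np.append(p, 1.0 - p.sum()))[:2])
Y = np.array(Y, dtype=float)

beta = np.zeros(4)
for _ in range(50):                        # Fisher scoring: delta <- delta + J^{-1} s
    s = np.zeros(4)
    J = np.zeros((4, 4))
    for x, y in zip(xs, Y):
        p = probs(beta, x)
        Zx = Z(x)
        W = np.diag(p) - np.outer(p, p)    # n_i * Var(ybar_i)
        s += Zx.T @ (y - n_i * p)
        J += n_i * Zx.T @ W @ Zx
    beta = beta + np.linalg.solve(J, s)
    if np.linalg.norm(s) < 1e-8:
        break
```

The fitted `beta` recovers `beta_true` up to sampling error; extending the sketch to a parametric link only changes the derivative $\partial{{\boldsymbol{\mu}}}_i/\partial{{\boldsymbol{\delta}}}$.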
Asymptotic distribution of $\hat{{{\boldsymbol{\delta}}}}$ {#asym}
----------------------------------------------------------
Suppose ${\hat{{{\boldsymbol{\delta}}}}}=[{\hat{{\boldsymbol{\beta}}}}',{\hat{{{\boldsymbol{\alpha}}}}}']'$ denotes the MLE of ${{\boldsymbol{\delta}}}$. Using equation (\[dlddeltav\]) and the central limit theorem we know $\frac{\partial l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}}$ asymptotically follows a normal distribution with mean $\mathbf{0}$ and variance ${{\mathbf{J}}}_n$. By first order Taylor series expansion, $$\begin{aligned}
\mathbf{0}=\frac{\partial l({\hat{{{\boldsymbol{\delta}}}}})}{\partial{{\boldsymbol{\delta}}}}&=&\frac{\partial l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}}+
\left[\frac{\partial^2 l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}\partial{{\boldsymbol{\delta}}}'}\right]({\hat{{{\boldsymbol{\delta}}}}}-{{\boldsymbol{\delta}}}).
\end{aligned}$$ This implies [@fahrmeirtutz_2001 p 439], $$\begin{aligned}
\sqrt{N}({\hat{{{\boldsymbol{\delta}}}}}-{{\boldsymbol{\delta}}})&=&
\sqrt{N}{{\mathbf{H}}}^{-1}_n\frac{\partial l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}}\nonumber=\sqrt{N}{{\mathbf{J}}}_n^{-1}\frac{\partial l({{\boldsymbol{\delta}}})}{\partial{{\boldsymbol{\delta}}}}+O_p(N^{-1/2}).\label{tsd} \end{aligned}$$ Thus, we get that ${\hat{{{\boldsymbol{\delta}}}}}$ has an asymptotic normal distribution with mean ${{\boldsymbol{\delta}}}$ and variance ${{\mathbf{J}}}_n^{-1}$.
Orthogonalization of link and regression parameter vectors {#ortho}
==========================================================
In this section we discuss certain conditions under which the link parameters are asymptotically approximately orthogonal to the regression parameters in a neighbourhood. In our numerical examples we show that approximate orthogonality of the parameters reduces the variance inflation of ${\hat{{\boldsymbol{\beta}}}}$ while increasing the numerical stability of the computations. The family of link functions for which the regression parameters are approximately orthogonal to the link parameters in a neighbourhood is also location and scale invariant. @1989_li noted the importance of a family of link functions being location and scale invariant. In their paper they observed that for an unspecified link function the intercept parameter was not identified, while the slope parameter was identified only up to a multiplicative constant. Thus any variation in the location and scale was absorbed by the link function.\
**Proposition 1:** The regression parameter vector ${{\boldsymbol{\beta}}}$ and link parameter vector ${{\boldsymbol{\alpha}}}$ are approximately orthogonal in a neighbourhood around ${{\boldsymbol{\eta}}}_0$, if the family of link functions ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot)$ satisfies the following conditions, (i) there exists ${{\boldsymbol{\eta}}}_0$ and ${{\boldsymbol{\pi}}}_0$ such that $${{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}_0)={{\boldsymbol{\pi}}}_0,\ \forall\ {{\boldsymbol{\alpha}}}\in\Omega, \label{cond1}$$ and (ii) there exists a ${{\mathbf{s}}}_0$ such that $$\frac{\partial {{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}\left|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}_0}\right.={{\mathbf{s}}}_0,\
\forall\ {{\boldsymbol{\alpha}}}\in\Omega.\label{cond2}$$
**Proof:** By a first-order Taylor series expansion of ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})$ around ${{\boldsymbol{\eta}}}_0$ and equations (\[cond1\]) and (\[cond2\]), $$\begin{aligned}
{{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}) &\approx& {{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}_0)+\frac{\partial {{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}
\left|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}_0}\right.({{\boldsymbol{\eta}}}-{{\boldsymbol{\eta}}}_0)\label{te}\\&=&
{{\boldsymbol{\pi}}}_0+{{\mathbf{s}}}_0({{\boldsymbol{\eta}}}-{{\boldsymbol{\eta}}}_0).\label{ute}\end{aligned}$$ Equation (\[ute\]) shows that the family of link functions ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot)$ is approximately independent of ${{\boldsymbol{\alpha}}}$ in a neighbourhood of ${{\boldsymbol{\eta}}}_0$ where approximation (\[te\]) holds asymptotically. Hence, if the conditions (\[cond1\]) and (\[cond2\]) are satisfied, then the regression parameters ${{\boldsymbol{\beta}}}$ and the link parameters ${{\boldsymbol{\alpha}}}$ are approximately orthogonal in a neighbourhood of ${{\boldsymbol{\eta}}}_0$ asymptotically.\
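To make conditions (\[cond1\]) and (\[cond2\]) concrete, the following sketch (our illustration, not part of the original derivation) checks them numerically for a scalar Czado-type generating family that is $(0,1)$-standardized at $\eta_0=0$: members with different link parameters agree at and near $\eta_0$ but diverge away from it.

```python
def czado_g(a1, a2, eta):
    """Czado-type generating family, (mu0=0, s0=1)-standardized at eta0 = 0."""
    if eta >= 0:
        return ((1.0 + eta)**a1 - 1.0) / a1
    return -((1.0 - eta)**a2 - 1.0) / a2

# Condition (i): G(alpha, 0) = 0 for every alpha.
assert all(abs(czado_g(a, a, 0.0)) < 1e-12 for a in (0.5, 1.0, 2.0))

# Condition (ii): dG/deta at eta0 = 0 equals 1 for every alpha (central difference).
h = 1e-6
for a in (0.5, 1.0, 2.0):
    d = (czado_g(a, a, h) - czado_g(a, a, -h)) / (2 * h)
    assert abs(d - 1.0) < 1e-4

# Near eta0 the members are almost independent of alpha ...
near = [czado_g(a, a, 0.01) for a in (0.5, 2.0)]
assert abs(near[0] - near[1]) < 1e-3
# ... but far from eta0 they are not.
far = [czado_g(a, a, 2.0) for a in (0.5, 2.0)]
assert abs(far[0] - far[1]) > 1.0
```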
Extending the definition of a location and scale invariant family given by [@1997_czado] to the multiple response case we state: a family ${{\boldsymbol{\Lambda}}}$ is said to be [*location and scale invariant*]{} if for every ${{\mathbf{h}}}\in {{\boldsymbol{\Lambda}}}$, the function ${{\mathbf{h}}}^*({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})={{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\mathbf{a}}}+{{\mathbf{b}}}{{\boldsymbol{\eta}}})\notin{{\boldsymbol{\Lambda}}}$ for all ${{\mathbf{a}}}\neq \mathbf{0}_a$ and ${{\mathbf{b}}}\neq
\mathbf{0}_b$ or ${{\mathbf{I}}}_b$, where ${{\mathbf{a}}}=[a_1,\ldots,a_q]'$, ${{\mathbf{b}}}=diag(b_1,\ldots,b_q)$, $\mathbf{0}_a$ is a matrix of the same order as ${{\mathbf{a}}}$ with all elements zero, $\mathbf{0}_b$ is a matrix of the same order as ${{\mathbf{b}}}$ with all elements zero and ${{\mathbf{I}}}_b$ is an identity matrix of the same order as ${{\mathbf{b}}}$.\
**Proposition 2:** If every member of a family ${{\boldsymbol{\Lambda}}}=\{{{\mathbf{h}}}({{\boldsymbol{\alpha}}},\cdot):{{\boldsymbol{\alpha}}}\in{{\boldsymbol{\Omega}}}\}$ satisfies conditions (\[cond1\]) and (\[cond2\]) for fixed ${{\boldsymbol{\pi}}}_0$ and ${{\mathbf{s}}}_0$, then ${{\boldsymbol{\Lambda}}}$ is location and scale invariant.\
**Proof:** Suppose every member ${{\mathbf{h}}}$ of the family ${{\boldsymbol{\Lambda}}}$ satisfies conditions (\[cond1\]) and (\[cond2\]) for fixed ${{\boldsymbol{\pi}}}_0$ and ${{\mathbf{s}}}_0$. Define ${{\mathbf{h}}}^*({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})={{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\mathbf{a}}}+{{\mathbf{b}}}{{\boldsymbol{\eta}}})$, then at ${{\boldsymbol{\eta}}}^*={{\mathbf{b}}}^{-1}({{\boldsymbol{\eta}}}_0-{{\mathbf{a}}})$, $${{\mathbf{h}}}^*({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}^*)={{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}_0)={{\boldsymbol{\pi}}}_0\ \forall\ {{\boldsymbol{\alpha}}}\ \in{{\boldsymbol{\Omega}}},$$ where ${{\mathbf{a}}}\neq \mathbf{0}_a$ and ${{\mathbf{b}}}\neq
\mathbf{0}_b$ or ${{\mathbf{I}}}_b$. Thus, equation (\[cond1\]) is satisfied by ${{\mathbf{h}}}^*$ at ${{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}^*$. Also, $$\begin{aligned}
\frac{\partial{{\mathbf{h}}}^*({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}^*}
&=& \frac{\partial{{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\mathbf{a}}}+{{\mathbf{b}}}{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}^*}=
{{\mathbf{b}}}\frac{\partial{{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}_0}= {{\mathbf{b}}}{{\mathbf{s}}}_0\neq{{\mathbf{s}}}_0,\end{aligned}$$ for ${{\mathbf{b}}}\neq {{\mathbf{I}}}_b$, implying ${{\mathbf{h}}}^*\notin{{\boldsymbol{\Lambda}}}$ for ${{\mathbf{a}}}\neq \mathbf{0}_a$ and ${{\mathbf{b}}}\neq
\mathbf{0}_b$ or ${{\mathbf{I}}}_b$. Hence the family ${{\boldsymbol{\Lambda}}}$ is location and scale invariant.
Construction of $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized link families at ${{\boldsymbol{\eta}}}_0$
-------------------------------------------------------------------------------------------------------------------
A family ${{\boldsymbol{\Lambda}}}$ satisfying conditions (\[cond1\]) and (\[cond2\]) is called $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized at ${{\boldsymbol{\eta}}}_0$ [@1997_czado].\
**Proposition 3:** Suppose ${{\mathbf{G}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})=[G_1({{\boldsymbol{\alpha}}}_1,\eta_1),\ldots,G_q({{\boldsymbol{\alpha}}}_q,\eta_q)]'$, where ${{\boldsymbol{\alpha}}}=[{{\boldsymbol{\alpha}}}_1,\ldots,{{\boldsymbol{\alpha}}}_q]'$ and ${{\boldsymbol{\eta}}}=[\eta_1,\ldots,\eta_q]'$, such that each $\{G_j({{\boldsymbol{\alpha}}}_j,\eta_j):\ {{\boldsymbol{\alpha}}}_j\in\Omega_j\}$ is a generating family for binary response models that is $(\mu_{0j}, s_{0j})$-standardized at $\eta_{0j}$, for $j=1,2,\ldots,q$. Then the family ${{\boldsymbol{\Lambda}}}_{{{\mathbf{g}}}}=\{{{\mathbf{G}}}({{\boldsymbol{\alpha}}},\cdot):
{{\boldsymbol{\alpha}}}\in{{\boldsymbol{\Omega}}}=\Omega_1\times\ldots\times\Omega_q\}$ is $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized at ${{\boldsymbol{\eta}}}_0$, where ${{\boldsymbol{\eta}}}_0=[\eta_{01},\ldots,\eta_{0q}]'$, ${{\boldsymbol{\pi}}}_0=[\mu_{01},\ldots,\mu_{0q}]'$, and ${{\mathbf{s}}}_0=diag\{s_{01},\ldots,s_{0q}\}$.\
**Proof:** Since, $G_j({{\boldsymbol{\alpha}}}_j,\eta_j)$ is $(\mu_{0j}, s_{0j})$-standardized at $\eta_{0j}$, $G_j({{\boldsymbol{\alpha}}}_j,\eta_{0j})=\mu_{0j},\ \forall\ {{\boldsymbol{\alpha}}}_j\in\Omega_j, $ and $\frac{\partial G_j({{\boldsymbol{\alpha}}},\eta)}{\partial\eta}|_{\eta=\eta_{0j}}= s_{0j},\ \forall\ {{\boldsymbol{\alpha}}}_j\in\Omega_j$. Thus, ${{\mathbf{G}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}_0) = [\mu_{01},\mu_{02},\ldots,\mu_{0q}]'={{\boldsymbol{\pi}}}_0,\ \forall\ {{\boldsymbol{\alpha}}}\in{{\boldsymbol{\Omega}}}$, and $\frac{\partial {{\mathbf{G}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})}{\partial{{\boldsymbol{\eta}}}}|_{{{\boldsymbol{\eta}}}={{\boldsymbol{\eta}}}_0}= diag\{s_{01},s_{02},\ldots,s_{0q}\}={{\mathbf{s}}}_0,\ \forall\
{{\boldsymbol{\alpha}}}\in{{\boldsymbol{\Omega}}}$. Hence, the family ${{\boldsymbol{\Lambda}}}_{{{\mathbf{g}}}}$ is $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized at ${{\boldsymbol{\eta}}}_0$.\
Using a $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized generating family at ${{\boldsymbol{\eta}}}_0$ requires the estimation of three additional quantities, ${{\boldsymbol{\pi}}}_0$, ${{\mathbf{s}}}_0$ and ${{\boldsymbol{\eta}}}_0$. To avoid estimating extra parameters, the generating family can be standardized by choosing ${{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0$, ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$ and ${{\mathbf{s}}}_0={{\mathbf{I}}}$. This selection allows a meaningful interpretation of ${{\boldsymbol{\pi}}}_0$ [@1997_czado] when centered covariates are used. The generating family $({{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0,{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0=[\beta_{10},\ldots,\beta_{q0}]'$ is denoted by ${{\mathbf{G}}}_c({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})$, where the $j$th component of ${{\mathbf{G}}}_c$ is $$G_{cj}({{\boldsymbol{\alpha}}}_j,\eta_j)=\beta_{j0}+G({{\boldsymbol{\alpha}}}_j,\eta_{cj}),\ \eta_{cj}=\eta_j-\beta_{j0}.$$ Here, $G({{\boldsymbol{\alpha}}}_j,\cdot)$ is a generating family for binary response models that is $(\mu_0=0,\ s_0=1)$-standardized at $\eta_0=0$, and $\beta_{j0}$ is the intercept parameter for the $j$th response.
Using condition (\[Gh\]), the family of link functions ${{\mathbf{h}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})$ for the multinomial response model is $({{\boldsymbol{\pi}}}_0,{{\mathbf{s}}}_0)$-standardized at ${{\boldsymbol{\eta}}}_0$ when $\pi_{0j}=\frac{\exp(\beta_{j0})}{1+\sum_{l=1}^q\exp(\beta_{l0})}$ and ${{\mathbf{s}}}_0={{\mathbf{c}}}{{\mathbf{I}}}$, where ${{\mathbf{c}}}$ is a constant matrix with $(j,k)$th element $$c_{jk}=\left\{\begin{matrix} \pi_{0j}(1-\pi_{0j}) & \text{if} & j=k\\ -\pi_{0j}\pi_{0k} & \text{if} & j\neq k.\end{matrix}\right.$$ As an example, taking the generating family suggested by [@1989_czado] for binary response models as our $G_j$, the $({{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0,{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized (at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$) generating family for multinomial responses is $$G_{cj}({{\boldsymbol{\alpha}}}_j,\eta_j)=\beta_{j0}+\left\{\begin{matrix}\frac{(1+\eta_{cj})^{\alpha_{j1}}-1}{\alpha_{j1}} & \text{if} & \eta_{cj}\geq 0\\
-\frac{(1-\eta_{cj})^{\alpha_{j2}}-1}{\alpha_{j2}} & \text{if} & \eta_{cj}<0,
\end{matrix}\right.\label{etabeta0}$$ where $\eta_{cj}=\eta_{j}-\beta_{j0}$, for $j=1,\ldots,q$.
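The standardization of the family in (\[etabeta0\]) can be checked numerically. The sketch below (our illustration, with an arbitrary value standing in for the intercept $\beta_{j0}$) verifies that every member passes through the point $(\beta_{j0},\beta_{j0})$ with unit slope, and that $\alpha_{j1}=\alpha_{j2}=1$ recovers the identity transformation, i.e. the multicategorical logit model.

```python
def g_std(a1, a2, eta, beta0):
    """(pi0=beta0, s0=1)-standardized member at eta0 = beta0, as in (etabeta0)."""
    e = eta - beta0          # centered linear predictor eta_c
    if e >= 0:
        return beta0 + ((1.0 + e)**a1 - 1.0) / a1
    return beta0 - ((1.0 - e)**a2 - 1.0) / a2

beta0 = -1.3                 # hypothetical intercept, for illustration only

# Condition (i): G_cj(alpha_j, beta_j0) = beta_j0 for every alpha_j.
assert all(abs(g_std(a, a, beta0, beta0) - beta0) < 1e-12 for a in (0.3, 1.0, 3.0))

# Condition (ii): unit slope at eta0 = beta0 (central difference).
h = 1e-6
for a in (0.3, 1.0, 3.0):
    d = (g_std(a, a, beta0 + h, beta0) - g_std(a, a, beta0 - h, beta0)) / (2 * h)
    assert abs(d - 1.0) < 1e-4

# alpha = 1 gives the identity, i.e. the multicategorical logit link.
assert abs(g_std(1.0, 1.0, 0.7, beta0) - 0.7) < 1e-12
```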
In our numerical example (given in Section \[example\]) we observe that the variance inflation ratios are reduced when a generating family $({{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0,{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$ is used instead of one $({{\boldsymbol{\pi}}}_0=\mathbf{0},{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized at ${{\boldsymbol{\eta}}}_0=\mathbf{0}$. We also observe that the Newton-Raphson iteration does not converge when the generating family standardized at ${{\boldsymbol{\eta}}}_0=\mathbf{0}$ is selected, so the grid selection method of [@1988_stukel] has to be implemented, whereas for the generating family standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$ the Newton-Raphson algorithm converges. Parameter estimation requires less computational time when the Newton-Raphson method converges than when the grid search method must be used. Hence, in our example the numerical stability is increased and the computational time is reduced by using a generating family $({{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0,{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$.
Interval estimation of the percentiles {#cr}
======================================
Suppose we define ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ as the settings of the control variables at which ${{\boldsymbol{\pi}}}_0={{\mathbf{h}}}[{{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}({{\mathbf{x}}})]$, $${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})=\{{{\mathbf{x}}}\in R^k:{{\boldsymbol{\pi}}}_0={{\mathbf{h}}}[{{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}({{\mathbf{x}}})]\}.$$ Then, ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ can be called the ${{\boldsymbol{\pi}}}_0$th percentile of the multinomial distribution. In this section we propose three different methods for determining confidence regions for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$.
Method 1: an asymptotic conservative confidence region based on ML estimates {#crm1}
----------------------------------------------------------------------------
From Section \[asym\] we know that $\sqrt{N}({\hat{{{\boldsymbol{\delta}}}}}-{{\boldsymbol{\delta}}})\sim MVN(\mathbf{0},{{\boldsymbol{\Sigma}}}({{\boldsymbol{\delta}}})=N{{\mathbf{J}}}_n^{-1})$ asymptotically. This implies that $\sqrt{N}({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})$ follows an asymptotic multivariate normal distribution with mean $\mathbf{0}_{p_j}$ and variance ${{\boldsymbol{\Sigma}}}_j$, where ${{\boldsymbol{\Sigma}}}_j$ is the submatrix of ${{\boldsymbol{\Sigma}}}({{\boldsymbol{\delta}}})$ corresponding to ${\hat{{\boldsymbol{\beta}}}}_j$. Using the normality of $\sqrt{N}({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_j)$ we get that $N({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})'{{\boldsymbol{\Sigma}}}^{-1}_j({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})\sim\chi^{2}_{p_j}$ asymptotically. Thus, $Pr[N({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})'{{\boldsymbol{\Sigma}}}^{-1}_j({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})\leq\chi^2_{p_j,(1-\tau)}]=(1-\tau)$, where $\chi^2_{p_j,(1-\tau)}$ is the $(1-\tau)$th quantile of the $\chi^{2}_{p_j}$ distribution. Using the Cauchy-Schwarz inequality, $$\begin{aligned}
\underset{{{\mathbf{x}}}\in R^{k}}{sup}\frac{N[{{\mathbf{f}}}_j({{\mathbf{x}}})({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})]^2}{{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\Sigma}}}_j{{\mathbf{f}}}_j'({{\mathbf{x}}})}
&\leq& \underset{{{\mathbf{z}}}\in R^{p_j}}{sup}\frac{N[{{\mathbf{z}}}'({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})]^2}{{{\mathbf{z}}}'{{\boldsymbol{\Sigma}}}_j {{\mathbf{z}}}}\nonumber\\
&=& N({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})'{{\boldsymbol{\Sigma}}}^{-1}_j({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j}),\nonumber\\
&&
\end{aligned}$$ thus, $$\frac{N[{{\mathbf{f}}}_j({{\mathbf{x}}})({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})]^2}{{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\Sigma}}}_j{{\mathbf{f}}}_j'({{\mathbf{x}}})}\leq N({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})'{{\boldsymbol{\Sigma}}}^{-1}_j({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})
\text{ for all ${{\mathbf{x}}}\in R^k$}.\label{ci}$$ Suppose we define two events $A_j$ and $B_j$ as, $A_j=\left[\frac{N[{{\mathbf{f}}}_j({{\mathbf{x}}})({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})]^2}
{{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\Sigma}}}_j{{\mathbf{f}}}_j'({{\mathbf{x}}})}\leq\right.$ $\left.\chi^2_{p_j,(1-\tau)},\ \forall\ {{\mathbf{x}}}\in R^k\right]$ and $B_j=\left[N({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})'{{\boldsymbol{\Sigma}}}^{-1}_j({\hat{{\boldsymbol{\beta}}}}_j-{{\boldsymbol{\beta}}}_{j})\leq\chi^2_{p_j,(1-\tau)}\right]$. From equation (\[ci\]), we know that $B_j\subset A_j$, thus, $$Pr[\eta_j({{\mathbf{x}}})\in {{\mathbf{C}}}_j({{\mathbf{x}}}),\ \forall\ {{\mathbf{x}}}\in R^k]\geq (1-\tau),\text{ for all $j=1,2,\ldots,q$},$$ where ${{\mathbf{C}}}_j({{\mathbf{x}}})=\{\xi\in R:L_j({{\mathbf{x}}})\leq \xi\leq U_j({{\mathbf{x}}})\}$, $$\begin{aligned}
L_j({{\mathbf{x}}}) &=& {{\mathbf{f}}}_j({{\mathbf{x}}}){\hat{{\boldsymbol{\beta}}}}_j
-\sqrt{N^{-1}{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\Sigma}}}_j{{\mathbf{f}}}'_j({{\mathbf{x}}})\chi^2_{p_j,(1-\tau)}},\nonumber\\
U_j({{\mathbf{x}}})&=&{{\mathbf{f}}}_j({{\mathbf{x}}}){\hat{{\boldsymbol{\beta}}}}_j
+\sqrt{N^{-1}{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\Sigma}}}_j{{\mathbf{f}}}'_j({{\mathbf{x}}})\chi^2_{p_j,(1-\tau)}}.\label{ljuj}
\end{aligned}$$ Then, using Boole’s inequality, $$\begin{aligned}
&&Pr[\eta_j({{\mathbf{x}}})\in {{\mathbf{C}}}_j({{\mathbf{x}}}),\ \forall\ {{\mathbf{x}}}\in R^k,\text{ for all $j=1,2,\ldots,q$}]\geq (1-q\tau),\end{aligned}$$ which implies, $$\begin{aligned}
Pr[{{\boldsymbol{\eta}}}({{\mathbf{x}}})\in {{\mathbf{C}}}({{\mathbf{x}}}),\ \forall\ {{\mathbf{x}}}\in R^k]\geq (1-q\tau),\label{cifetav}
\end{aligned}$$ where ${{\mathbf{C}}}({{\mathbf{x}}})=\times_{j=1}^q{{\mathbf{C}}}_j({{\mathbf{x}}})$. If we now denote $P_{L,j}({{\mathbf{x}}})=\min_{{{\boldsymbol{\xi}}}\in{{\mathbf{C}}}({{\mathbf{x}}})}h_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\xi}}})$ and $P_{U,j}({{\mathbf{x}}})=\max_{{{\boldsymbol{\xi}}}\in{{\mathbf{C}}}({{\mathbf{x}}})}h_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\xi}}})$, for $j=1,2,\ldots,q$, then using the result given in [@1973_rao p 240], $$\begin{aligned}
&&Pr[P_{L,j}({{\mathbf{x}}})\leq h_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}})\leq P_{U,j}({{\mathbf{x}}}),\ \forall\ {{\mathbf{x}}}\in R^k \text{ and }\forall\
j=1,\ldots,q]\nonumber\geq (1-q\tau).\end{aligned}$$ This implies that $$\begin{aligned}
Pr[{{\mathbf{P}}}_L({{\mathbf{x}}})\leq {{\mathbf{h}}}\{{{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}({{\mathbf{x}}})\}\leq
{{\mathbf{P}}}_U({{\mathbf{x}}}), \forall\ {{\mathbf{x}}}\in R^k]
\geq (1-q\tau),\label{cifpiv}
\end{aligned}$$ where ${{\mathbf{P}}}_L$ and ${{\mathbf{P}}}_U$ are $q$-dimensional vectors with their $j$th elements equal to $P_{L,j}$ and $P_{U,j}$, respectively. Since the link parameter ${{\boldsymbol{\alpha}}}$ is unknown, we use ${\hat{{{\boldsymbol{\alpha}}}}}$ (the MLE of ${{\boldsymbol{\alpha}}}$) for computing $P_{L,j}$ and $P_{U,j}$, for $j=1,2,\ldots,q$. Estimating ${{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}}({{\boldsymbol{\delta}}})$ by ${{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}}(\hat{{{\boldsymbol{\delta}}}})$, we obtain an approximate $100(1-\tau')\%$ ($\tau'=q\tau$) conservative confidence region for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ as $$\begin{aligned}
\{{{\boldsymbol{\zeta}}}\in R^k: {{\mathbf{P}}}_L({{\mathbf{x}}}) &\leq& {{\mathbf{h}}}[{\hat{{{\boldsymbol{\alpha}}}}},{\hat{{{\boldsymbol{\eta}}}}}({{\boldsymbol{\zeta}}})]
\leq {{\mathbf{P}}}_U({{\mathbf{x}}})\text{ for all ${{\mathbf{x}}}\in{{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}}(\hat{{{\boldsymbol{\delta}}}})$}\}.\label{crco}\end{aligned}$$
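Once ${\hat{{\boldsymbol{\beta}}}}_j$ and ${{\boldsymbol{\Sigma}}}_j$ are available, the bounds $L_j({{\mathbf{x}}})$ and $U_j({{\mathbf{x}}})$ in equation (\[ljuj\]) are a one-line computation. The sketch below uses hypothetical numbers, not those of the example in Section \[example\]:

```python
import numpy as np

def scheffe_band(f_x, beta_hat, Sigma_j, N, chi2_crit):
    """Conservative band for eta_j(x) = f_j(x)' beta_j, as in the L_j/U_j formulas.
    f_x: regressor vector f_j(x); Sigma_j: asymptotic covariance of sqrt(N)(beta_hat - beta_j)."""
    centre = f_x @ beta_hat
    half = np.sqrt((f_x @ Sigma_j @ f_x) * chi2_crit / N)
    return centre - half, centre + half

# hypothetical numbers: p_j = 3 regression parameters, chi2_{3, 0.975} ~ 9.35
beta_hat = np.array([-1.2, 0.8, -0.4])
Sigma = np.diag([2.0, 1.5, 1.0])
L, U = scheffe_band(np.array([1.0, 0.5, 0.5]), beta_hat, Sigma, N=108, chi2_crit=9.35)
assert L < U
assert abs((L + U) / 2 - (-1.2 + 0.4 - 0.2)) < 1e-9  # band is centred at f_j(x)' beta_hat
```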
Method 2: confidence region using the likelihood ratio test {#lrt}
-----------------------------------------------------------
We derive the confidence region of ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ using the likelihood ratio (LR) test corresponding to the hypotheses, $H_0: {{\mathbf{x}}}\in{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ versus $H_1:{{\mathbf{x}}}\notin{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$. Under the null hypothesis we have, $$\begin{aligned}
{{\boldsymbol{\eta}}}({{\mathbf{x}}}) &=& {{\mathbf{g}}}({{\boldsymbol{\alpha}}},{{\boldsymbol{\pi}}}_0),\end{aligned}$$ which implies $$\begin{aligned}
{{\mathbf{f}}}_j({{\mathbf{x}}}){{\boldsymbol{\beta}}}_{j}&=&{{\mathbf{g}}}_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\pi}}}_0)
\ \text{for}\ j=1,2,\ldots,q.
\label{lreq1}\end{aligned}$$ Suppose $D({{\mathbf{x}}})$ is the deviance [@fahrmeirtutz_2001 p 108] under the null hypothesis while $D({\hat{{{\mathbf{x}}}}})$ is the deviance of the fitted model. Then the LR statistic $L({{\mathbf{x}}})=D({{\mathbf{x}}})-D({\hat{{{\mathbf{x}}}}})$ has an asymptotic $\chi^2$ distribution with $q$ degrees of freedom. The $100(1-\tau)\%$ confidence region for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ using the LR statistic is given by $$\{{{\mathbf{x}}}\in R^k: L({{\mathbf{x}}})\leq \chi^2_{q,1-\tau}\}.\label{crlr}$$
Method 3: confidence region using the score test {#crsc}
-------------------------------------------------
Suppose, ${{\boldsymbol{\beta}}}_{0}=[\beta_{10},\ldots,\beta_{q0}]'$ and ${{\mathbf{u}}}_0=\left[\frac{\partial l}{\partial{{\boldsymbol{\beta}}}_0}\right]_{{\hat{{{\boldsymbol{\delta}}}}}_0}$. Let ${\hat{{{\boldsymbol{\Sigma}}}}}_0$ be the estimated variance of ${{\mathbf{u}}}_0$ at ${{\boldsymbol{\delta}}}={\hat{{{\boldsymbol{\delta}}}}}_0$, where $l({{\boldsymbol{\delta}}})$ is the log-likelihood function and ${\hat{{{\boldsymbol{\delta}}}}}_0$ is the MLE of ${{\boldsymbol{\delta}}}$ under $H_0$ in Section \[lrt\]. Then $s({{\mathbf{x}}})={{\mathbf{u}}}'_0{\hat{{{\boldsymbol{\Sigma}}}}}_0^{-1}{{\mathbf{u}}}_0$ has an asymptotic $\chi^2$ distribution with $q$ degrees of freedom [@fahrmeirtutz_2001 p 48].
Using the score test, the $100(1-\tau)\%$ confidence region for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ is given by $$\{{{\mathbf{x}}}\in R^k: s({{\mathbf{x}}})\leq \chi^2_{q,1-\tau}\}.\label{crs}$$
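Both the LR region (\[crlr\]) and the score region (\[crs\]) reduce to thresholding a test statistic at the $\chi^2_{q,1-\tau}$ quantile. A minimal sketch, with illustrative statistic values:

```python
from scipy.stats import chi2

def in_region(stat, q=2, tau=0.05):
    """Accept x into the LR (or score) confidence region iff the test
    statistic L(x) or s(x) does not exceed chi2_{q, 1-tau}."""
    return stat <= chi2.ppf(1 - tau, df=q)

# chi2_{2, 0.95} = 5.99, the critical value used later in the example section
assert abs(chi2.ppf(0.95, df=2) - 5.99) < 0.01
assert in_region(4.2)        # illustrative statistic inside the region
assert not in_region(7.5)    # illustrative statistic outside the region
```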
Example
=======
We consider a data set based on a combination drug experiment reported by [@gennings_1994 pp. 429-451]. The main goal of the experiment is to study and model the relationship between the dose levels of two drugs, morphine sulfate and $\Delta^9$-tetrahydrocannabinol ($\Delta^9$-THC), and the pain relief and toxic responses of male mice. Eighteen groups of male mice (six animals per group) were randomly assigned to receive the treatment combinations and three responses were recorded: $E$, the number of mice in each group exhibiting only pain relief (no toxic effect); $T$, the number of mice in each group experiencing toxic effects (irrespective of pain relief); and $W$, the number of mice experiencing neither pain relief nor any toxic effect. We may thus consider $E$ and $T$ as the efficacy and toxicity responses of the two analgesic drugs. The dose levels of the two drugs formed a $3\times 6$ factorial design, where a treatment combination consisted of a single injection using one of three levels of morphine sulfate (2, 4, 6 mg/kg) combined with one of six levels of $\Delta^9$-THC (0.5, 1.0, 2.5, 5.0, 10.0, 15.0 mg/kg). The centered dose levels of morphine sulfate and $\Delta^9$-THC are denoted by $x_1$ and $x_2$. The $3\times 6$ factorial design $D$ and the three responses are given in Table \[responses\]. Since the responses were obtained from the same mouse they may be correlated. The binary nature of the responses allowed us to model them using a two-category multinomial model, with the response $W$ taken to be the dummy category. For more details on modeling correlated binary responses using the multinomial distribution see [@mukhopadhyaykhuri_2008b].
Fitting a generalized multinomial model
---------------------------------------
We start by fitting a multinomial regression model with the multicategorical logit link function to the data. The model is given by $$\begin{aligned}
\eta_{E}(\textbf{x}) &=& \beta_{10}+\beta_{11}x_1+\beta_{12}x_2,\nonumber\\
\eta_{T}(\textbf{x}) &=& \beta_{20}+\beta_{21}x_1+\beta_{22}x_2.\label{aeta}\end{aligned}$$ The maximum likelihood estimates (MLEs) of ${{\boldsymbol{\beta}}}$ and their standard errors are reported in Table \[mleofbeta\]. The scaled deviance for the above fitted model is $29.6048$ on 12 degrees of freedom (p-value 0.0032), which shows evidence of lack of fit. We thus consider the proposed parametric link functions for the multinomial model and examine whether the fit can be improved. We use two choices for ${{\boldsymbol{\eta}}}_0$: the fixed choice ${{\boldsymbol{\eta}}}_0=\mathbf{0}$, and later ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$.
The multinomial model with a parametric link function considering ${{\boldsymbol{\eta}}}_0$ to be fixed at $\mathbf{0}$ is given by $$\begin{aligned}
\pi_{ij}=h_j({{\boldsymbol{\alpha}}},{{\boldsymbol{\eta}}}_i)=\frac{\exp\{G_j({{\boldsymbol{\alpha}}}_j,\eta_{ij})\}}
{1+\sum_{l=1}^2\exp\{G_l({{\boldsymbol{\alpha}}}_l,\eta_{il})\}},\text{ for $j=E,T$},\label{Ghfe}\end{aligned}$$ where [@1989_czado] $$G_j({{\boldsymbol{\alpha}}}_j,\eta_{ij})
=\left\{\begin{matrix}\frac{(1+\eta_{ij})^{\alpha_{j1}}-1}{\alpha_{j1}} & \text{if} & \eta_{ij}\geq 0\\
-\frac{(1-\eta_{ij})^{\alpha_{j2}}-1}{\alpha_{j2}} & \text{if} & \eta_{ij}<0,
\end{matrix}\right.\label{czadogffe}$$ where ${{\boldsymbol{\alpha}}}_j=[\alpha_{j1},\alpha_{j2}]'$ for $j=E,T$. The above link function becomes equivalent to the multicategorical logistic link function when ${{\boldsymbol{\alpha}}}=[1,1,1,1]'$. Using the score test by [@fahrmeirtutz_2001 p 48], we test the hypotheses, $$\begin{aligned}
H_0 &:& \alpha_{jk}=1\ \text{ versus }
H_1 : \alpha_{jk}\neq 1,\ \text{for } j=1,2\text{ and } k=1,2.\nonumber\end{aligned}$$ From the results of the score tests we observe that the null hypotheses are rejected for $\alpha_{11}$ and $\alpha_{12}$. This implies that both tails of the link for the first response need to be modified. Stepwise selection of each link parameter, based on the Akaike information criterion (AIC), was used together with the score tests. For computing the MLE of the parameters ${{\boldsymbol{\delta}}}=({{\boldsymbol{\beta}}}',{{\boldsymbol{\alpha}}}')'$ we use the method detailed in [@1988_stukel], since the Fisher scoring iterative method does not converge for ${{\boldsymbol{\eta}}}_0=\mathbf{0}$. The parameter estimates, standard errors and variance inflation ratios are given in Table \[mlegl\]. The computations are done twice: once treating ${{\boldsymbol{\alpha}}}$ as fixed in the information matrix, and once estimating it from the data set.
We also consider the parametric link function with ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$ (refer to equation (\[etabeta0\])). Using score tests we again note that the link parameters $\alpha_{11}$ and $\alpha_{12}$ need to be included in the model. The Fisher scoring iteration method for obtaining MLE of ${{\boldsymbol{\delta}}}$ converges and the results are given in Table \[mlegl\].
From Table \[mlegl\] we note that the deviance using the parametric link function is 23.8866 on 10 degrees of freedom for the generating family standardized at ${{\boldsymbol{\eta}}}_0=\mathbf{0}$, and 22.9148 on 10 degrees of freedom for the generating family standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$. Since two extra parameters are estimated when using the parametric link function, the difference between the deviances of the logistic-link and parametric-link fits has an asymptotic $\chi^2$ distribution with 2 degrees of freedom [@fahrmeirtutz_2001 p 49]. The differences between the deviances using the multivariate logistic link function and the parametric link functions with generating families (\[czadogffe\]) and (\[etabeta0\]) are 5.7182 (p-value 0.0573) and 6.69 (p-value 0.0353), respectively. This shows that, using the parametric family of link functions with generating family (\[etabeta0\]), we are able to significantly improve the fit over the multicategory logistic link function. In Table \[mlegl\] we also report the parameter estimates and two estimated standard errors for each regression parameter, for both generating families: the first standard error treats the link parameters as fixed at their estimated values, while the second accounts for their estimation from the data. The variance inflation ratio is the ratio of the standard error when ${{\boldsymbol{\alpha}}}$ is estimated to the standard error when ${{\boldsymbol{\alpha}}}$ is fixed. From Table \[mlegl\] we note that the variance inflation ratios corresponding to the parametric link function with ${{\boldsymbol{\eta}}}_0=\mathbf{0}$ are higher than those corresponding to ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$. This implies that we achieve greater numerical stability when using the parametric link function with ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$.
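The quoted p-values can be verified in closed form: for 2 degrees of freedom the $\chi^2$ survival function is $\exp(-x/2)$. A quick check:

```python
from math import exp

# For df = 2 the chi-square survival function is exp(-x/2), so the quoted
# p-values for the deviance differences can be checked in closed form.
p1 = exp(-5.7182 / 2)   # logistic link vs generating family (czadogffe)
p2 = exp(-6.69 / 2)     # logistic link vs generating family (etabeta0)
assert abs(p1 - 0.0573) < 5e-4
assert abs(p2 - 0.0353) < 5e-4
```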
Percentile estimation
---------------------
In this section we apply the three methods of interval estimation to find confidence regions for ${{\boldsymbol{\pi}}}_0=[0.75,0.2]'$. That is, we are interested in jointly estimating the $ED_{75}$ and $LD_{20}$ percentiles (where ED = effective dose and LD = lethal dose). For computing confidence intervals we use the MLE of ${{\boldsymbol{\delta}}}$ for the generating family $({{\boldsymbol{\pi}}}_0={{\boldsymbol{\beta}}}_0,{{\mathbf{s}}}_0={{\mathbf{I}}})$-standardized at ${{\boldsymbol{\eta}}}_0={{\boldsymbol{\beta}}}_0$, as it provides better numerical stability and smaller variance inflation ratios. The estimated ${{\boldsymbol{\pi}}}_0$th percentile is given by $$\begin{aligned}
{{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({\hat{{{\boldsymbol{\delta}}}}}) &=& \{{{\mathbf{x}}}\in R^2: {{\boldsymbol{\pi}}}_0={{\mathbf{h}}}({\hat{{{\boldsymbol{\alpha}}}}},{\hat{{{\boldsymbol{\eta}}}}}({{\mathbf{x}}}))\}\nonumber\\
&=& \left\{ [-0.6715, 0.1365]'\right\}= \left\{ {{\mathbf{x}}}_0\right\},\ say,\end{aligned}$$ which is a singleton set. In our example we have two categories and three regression parameters in each category, thus $q=2$ and $p_j=3$. For obtaining $95\%$ confidence regions for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$, we choose $\tau'=q\tau=0.05$, which gives $\tau=0.025$ and $\chi^2_{p_j,(1-\tau)}=9.35,\,j=E,T$.
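The critical values used here follow from the Bonferroni split $\tau=\tau'/q$; the quoted quantile can be checked directly:

```python
from scipy.stats import chi2

q, p_j = 2, 3            # two categories, three regression parameters each
tau_prime = 0.05         # overall level for the conservative region
tau = tau_prime / q      # Bonferroni split across the q categories
crit = chi2.ppf(1 - tau, df=p_j)
assert abs(tau - 0.025) < 1e-12
assert abs(crit - 9.35) < 0.01   # chi2_{3, 0.975}, the value used in the text
```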
Since ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({\hat{{{\boldsymbol{\delta}}}}})=\left\{ {{\mathbf{x}}}_0\right\}$, the approximate $95\%$ conservative confidence region for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ using method 1 (Section \[crm1\]) is given by $$\begin{aligned}
\{{{\mathbf{x}}}=[x_1,x_2]'\in R^2: {{\mathbf{P}}}_L({{\mathbf{x}}}_0) &\leq& {{\mathbf{h}}}[{\hat{{{\boldsymbol{\alpha}}}}},{\hat{{{\boldsymbol{\eta}}}}}({{\mathbf{x}}})]
\leq {{\mathbf{P}}}_U({{\mathbf{x}}}_0)\}.\end{aligned}$$ For computing the above region we first need to find the intervals for $\eta_E$ and $\eta_T$. Using equation (\[ljuj\]), we have the intervals $[-4.6794,-1.7894]$ and $[1.0720,1.7006]$ for $\eta_E$ and $\eta_T$, respectively. For calculating the intervals we use $N=108$ (the total number of observations), with ${\hat{{\boldsymbol{\beta}}}}_j$ and $\hat{{{\boldsymbol{\Sigma}}}}_j$ from Tables \[mlegl\] and \[varb\], respectively. To get ${{\mathbf{C}}}({{\mathbf{x}}}_0)$ we take the Cartesian product of the intervals for $\eta_E$ and $\eta_T$. The next step is to compute ${{\mathbf{P}}}_L({{\mathbf{x}}}_0)=[P_{L,1}({{\mathbf{x}}}_0),P_{L,2}({{\mathbf{x}}}_0)]'$ and ${{\mathbf{P}}}_U({{\mathbf{x}}}_0)=[P_{U,1}({{\mathbf{x}}}_0),P_{U,2}({{\mathbf{x}}}_0)]'$ where $P_{L,j}({{\mathbf{x}}}_0)=\min_{{{\boldsymbol{\xi}}}\in{{\mathbf{C}}}({{\mathbf{x}}}_0)}h_j({\hat{{{\boldsymbol{\alpha}}}}},{{\boldsymbol{\xi}}})$ and $P_{U,j}({{\mathbf{x}}}_0)=\max_{{{\boldsymbol{\xi}}}\in{{\mathbf{C}}}({{\mathbf{x}}}_0)}h_j({\hat{{{\boldsymbol{\alpha}}}}},{{\boldsymbol{\xi}}})$, for $j=E,T$. For computing the minimum and maximum of the function $h_j({\hat{{{\boldsymbol{\alpha}}}}},{{\boldsymbol{\xi}}})$ over ${{\mathbf{C}}}({{\mathbf{x}}}_0)$ we use a MATLAB program called MCS [@1999_Huyer], and we get ${{\mathbf{P}}}_L({{\mathbf{x}}}_0)=[0.6319,0.1182]'$ and ${{\mathbf{P}}}_U({{\mathbf{x}}}_0)=[0.8414, 0.3113]'$. Hence, the $95\%$ conservative confidence region for ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ by method 1 is given by $$\begin{aligned}
\{{{\mathbf{x}}}=[x_1,x_2]'\in R^2: [0.6319,0.1182]'\leq {{\mathbf{h}}}[{\hat{{{\boldsymbol{\alpha}}}}},{\hat{{{\boldsymbol{\eta}}}}}({{\mathbf{x}}})]\leq [0.8414, 0.3113]'\}\label{rbm1}\end{aligned}$$
\
From equation (\[crlr\]), the $95\%$ confidence region of ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ using the LR test (Section \[lrt\]) is given by $$\{{{\mathbf{x}}}=[x_1,x_2]'\in R^2: L({{\mathbf{x}}})\leq 5.99\},$$ since for $\tau=0.05$, $\chi^2_{2,1-\tau}= 5.99$. The $95\%$ confidence region of ${{\mathbf{S}}}_{{{\boldsymbol{\pi}}}_0}({{\boldsymbol{\delta}}})$ using the score test (see equation (\[crs\])) is, $$\{{{\mathbf{x}}}=[x_1,x_2]'\in R^2: s({{\mathbf{x}}})\leq 5.99\},$$ where $s({{\mathbf{x}}})$ is defined as in Section \[crsc\].
The confidence regions for the percentiles using the three methods are graphically shown in Figure \[figcr\]. For plotting the confidence regions we choose $21$ values of $x_1$ from the interval $[-1,1]$ at steps of 0.1. For each chosen $x_1$, 1000 values of $x_2$ are chosen randomly from $[-0.7132,1.2868]$. Let $S_{x_1}$ be the set of the selected $x_2$ values. To determine the confidence region by method 1, for each value of $x_1$ we compute $$\begin{aligned}
L_{x_2}(x_1)&=& \min\{x_2\in S_{x_1}: [x_1,x_2]' \text{ is in region (\ref{rbm1})} \}\nonumber\\
&& \text{ and }\nonumber\\
U_{x_2}(x_1)&=& \max\{x_2\in S_{x_1}: [x_1,x_2]' \text{ is in region (\ref{rbm1})}\}.\end{aligned}$$ Then, by plotting $L_{x_2}(x_1)$ and $U_{x_2}(x_1)$ against $x_1$ we obtain the lower and upper bounds of the region for method 1. We use the same methodology to plot the confidence regions obtained from the LR and score tests. From Figure \[figcr\], we observe that the confidence region found using the LR test is narrower than the other two, while the score test gives the widest region.
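The grid scan used for plotting can be sketched as follows, with a toy membership rule (a disc) standing in for region (\[rbm1\]) and for the LR and score regions:

```python
import numpy as np

def region_bounds(in_region, x1_grid, x2_lo, x2_hi, n2=1000, seed=0):
    """For each x1, scan random x2 values and record the min/max accepted,
    i.e. L_{x2}(x1) and U_{x2}(x1)."""
    rng = np.random.default_rng(seed)
    bounds = {}
    for x1 in x1_grid:
        x2s = rng.uniform(x2_lo, x2_hi, n2)      # the random set S_{x1}
        accepted = [x2 for x2 in x2s if in_region(x1, x2)]
        if accepted:                             # only x1 slices that meet the region
            bounds[float(x1)] = (min(accepted), max(accepted))
    return bounds

# toy membership rule: a disc of radius 0.5 centred at the origin
b = region_bounds(lambda x1, x2: x1**2 + x2**2 <= 0.25,
                  x1_grid=np.linspace(-1, 1, 21), x2_lo=-0.7132, x2_hi=1.2868)
assert 0.0 in b                      # the x1 = 0 slice intersects the disc
assert all(lo <= hi for lo, hi in b.values())
```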
Concluding Remarks
==================
In this article, we have introduced a family of link functions which is location and scale invariant and provides local orthogonality between regression and link parameters for multinomial response models. Using a numerical example, we showed that the parametric link function provides a better fit than the multivariate logistic link function. We also discussed three different methods for constructing $100(1-\tau)\%$ confidence regions for the ${{\boldsymbol{\pi}}}$th percentile.
The percentile estimation methods for multinomial models discussed in this article can be used in clinical trials that determine dose levels with desired probabilities of both toxicity and efficacy, namely Phase I/II trials [@1994_gooley; @1998_thall; @2001_hughes]. By applying the above interval estimation methods, experimenters will be able to find confidence regions of dose levels with tolerable toxicity and the desired efficacy.
There has been a recent rise of interest among researchers in finding designs for logistic regression models that are robust to link misspecification. @2006_biedermann and @russell_2006 propose robust designs by considering a finite set of plausible link functions, while [@adewale_2010] use the family of link functions of [@1981_aranda] in their approach. In the future we plan to use the family of link functions proposed in this article to determine designs for multinomial models that are robust to an incorrectly assumed link function.
[^1]: [Corresponding author. Email]{}: [email protected] : 912225767495
|
---
abstract: 'It is shown that the families of generalized matrix ensembles recently considered which give rise to an orthogonal invariant stable Lévy ensemble can be generated by the simple procedure of dividing Gaussian matrices by a random variable. The nonergodicity of this kind of disordered ensembles is investigated. It is shown that the same procedure applied to random graphs gives rise to a family that interpolates between the Erdös-Renyi and the scale free models.'
author:
- 'O. Bohigas$^{1}$, J. X. de Carvalho$^{2,3}$ and M. P. Pato$^{1,2}$'
title: Disordered ensembles of random matrices
---
The classes of random matrix ensembles introduced by Wigner in the 50s have found great success, partly after being connected with quantum manifestations of chaos in physical systems[@Boh82]. In turn, this success generated great activity, and extensions and generalizations of those ensembles have followed. In obtaining the Gaussian ensembles, Wigner adapted the Wishart ensembles well known to statisticians. Some of the extensions of the Gaussian ensembles can also be considered as applications of known processes in statistics. For instance, models to describe symmetry breaking have been constructed by adding two random matrices, one block diagonal and the other its complement[@Guhr]. Here we consider a random process in which a new random quantity is generated by taking not the sum but the ratio or the product of two other independent ones.
In a previous paper[@Bertuola], an alternative to Shannon information entropy, namely Tsallis-Renyi information[@Tsallis], was used to introduce a new family of generalized matrix ensembles (see also [@Raul]). One of the main features of this ensemble is the power-law characteristic of its statistical properties. In particular, it was shown that individual matrix elements behave like the elements of the so-called Lévy matrices[@Cizeau] (after the publication of Ref. [@Bertuola], Klauder and Muttalib obtained an even more general family[@Klauder] along similar lines).
One of the purposes of this note is to show that all these families can be obtained, in fact, by the following simple procedure. Let $ H_G (\alpha) $ be a random matrix of dimension $N$ and variance $1/2\alpha ^2$ and let its probability distribution be
$$P_{G} (H ;\alpha )=\left(\frac{\beta\alpha}{\pi}\right)^{f/2}
\exp\left(-\alpha\beta \mbox{tr} H^{2}\right) . \label{12}$$
The matrices of the Gaussian ensemble are specified by $\alpha.$ In (\[12\]), $f$ is the number of independent matrix elements $f=N+\beta N(N-1)/2$ and $\beta$ is the Dyson index $\beta=1,2,4$ for GOE, GUE and GSE (here and in what follows the subindex $G$ indicates Gaussian). The distribution is normalized with respect to the measure $dH=\prod_{1}^{N}dH_{ii}\prod_{j>i}\prod_{k=1}^{\beta}\sqrt{2}dH^{k}_{ij}.$
Take now a positive random variable $\xi$ with a normalized probability density $w (\xi )$, average $ \bar{\xi}$ and variance $\sigma_{\xi}^2 $, and introduce a new matrix ensemble by the following relation (products of random variables have been considered in the context of covariance matrices[@Biroli])
$$H(\alpha, \xi )= \frac{ H_G (\alpha)}{\sqrt{\xi/\bar \xi }} . \label{1}$$
In this way, an external source of randomness is superimposed on the fluctuations of the Gaussian matrix $H_G (\alpha).$ A random process in which there is a competition between two types of random variables is typical of disordered systems or, in the case of Ising models, spin glasses[@Mezard]. As the two types of randomness are independent, one can be kept frozen, quenched in technical terms, while the fluctuations of the other continue to operate. Here the disorder is represented by $\xi$, which is the quenched variable, as opposed to the randomness of the Gaussian matrices. We may refer to (\[1\]) as a disordered ensemble.
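A minimal numerical sketch of Eq. (\[1\]) for the orthogonal case ($\beta=1$); the gamma sampler below is only an illustrative choice of positive $w(\xi)$ (the one adopted later in Eq. (\[18\])):

```python
import numpy as np

def goe_matrix(n, alpha, rng):
    """Real symmetric matrix with density ~ exp(-alpha * tr H^2)
    (beta = 1): diagonal variance 1/(2 alpha), off-diagonal 1/(4 alpha)."""
    h = np.triu(rng.normal(scale=np.sqrt(1.0 / (4.0 * alpha)),
                           size=(n, n)), 1)
    h = h + h.T
    np.fill_diagonal(h, rng.normal(scale=np.sqrt(1.0 / (2.0 * alpha)),
                                   size=n))
    return h

def disordered_matrix(n, alpha, xi_sampler, xi_mean, rng):
    """Eq. (1): H = H_G(alpha) / sqrt(xi / xi_mean), with xi quenched
    (a single draw of the disorder per matrix)."""
    xi = xi_sampler(rng)
    return goe_matrix(n, alpha, rng) / np.sqrt(xi / xi_mean)

rng = np.random.default_rng(0)
xi_bar = 0.5   # example: gamma-distributed disorder with mean xi_bar
H = disordered_matrix(200, 1.0, lambda r: r.gamma(xi_bar), xi_bar, rng)
```

Repeating the last line produces an ensemble of matrices whose overall scale fluctuates from realization to realization, which is the source of the nonergodicity discussed below.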
From (\[1\]), we deduce that the joint distribution of a set of $n\le f$ matrix elements is given by
$$p(h_1 , h_2 ,..., h_n ;\alpha )=
(\frac{\beta\alpha}{\pi\bar{\xi}})^{n/2}
\int d\xi w (\xi ) \xi^{n/2}\exp\left(-\frac{\beta\alpha\xi}
{\bar{\xi}}\sum_{i=1}^{n} h_i^2 \right) \label{6}$$
where $h_i =H_{ij}$ for the diagonal and $h_i =\sqrt{2}H_{ij}$ for the off-diagonal elements. Eq. (\[6\]) shows that matrix elements are correlated. As a particular case, for $n=f,$ (\[6\]) leads to the ensemble distribution
$$P (H ;\alpha )=
\int d\xi w ( \xi)
\left(\frac{\beta\alpha\xi}{\pi\bar{\xi}}\right)^{f/2}
\exp\left(-\frac{\beta\alpha\xi}{\bar{\xi}}\mbox{tr} H^2 \right) \label{9}$$
where the term after $w(\xi)$ is just (\[12\]) with $\alpha$ replaced by $\alpha \xi/\bar{\xi}. $ Expressions like (\[9\]) are being considered as instances of superstatistics[@Abul].
The relation (\[1\]) makes it straightforward to perform numerical simulations in terms of Gaussian matrices. However, it may also be useful to directly generate matrices of the ensemble (taking into account the correlations among their elements). This can be done through the identity
$$p(h_{1},...,h_{f})=p(h_{1}) \prod_{n=2} ^{f}
\frac{p(h_{1},...h_{n})}{p(h_{1},...h_{n-1})}, \label{514}$$
where each fraction gives the conditional probability for the $n$th element once the $n-1$ previous ones are given. This equation provides a way to sequentially generate all the matrix elements. At each step, a new element, say the $n$th, is sorted using Eq. (\[6\]) that implies
$$h_n= \frac{ h_G (\alpha)}{\sqrt{\xi_n /\bar \xi }}, \label{11}$$
where $h_G $ is a Gaussian variable and $\xi_n$ is another random variable sorted from the distribution
$$w_n (\xi)= w (\xi ) \xi^{(n-1)/2}\exp\left(-\frac{\beta\alpha\xi}
{\bar{\xi}}\sum_{i=1}^{n-1} h_i^2 \right)/
\int d\xi w (\xi ) \xi^{(n-1)/2}\exp\left(-\frac{\beta\alpha\xi}
{\bar{\xi}}\sum_{i=1}^{n-1} h_i^2 \right), \label{5}$$
which is univariate since all the previous $n-1$ elements have already been determined.
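As a concrete sketch of this sequential procedure: for the gamma choice of $w(\xi)$ adopted later in Eq. (\[18\]), each $w_n$ in Eq. (\[5\]) is itself a gamma density, with shape $\bar{\xi}+(n-1)/2$ and rate $1+\beta\alpha\sum_{i=1}^{n-1}h_i^2/\bar{\xi}$, so the elements can be drawn exactly, with no rejection step:

```python
import numpy as np

def sequential_elements(f, alpha, xi_bar, beta=1, seed=0):
    """Generate f correlated matrix elements via Eqs. (11) and (5),
    assuming the gamma form of w(xi), Eq. (18).  For that choice each
    w_n is again gamma, with shape xi_bar + (n-1)/2 and rate
    1 + beta*alpha*sum(h_i^2)/xi_bar, so xi_n can be drawn exactly."""
    rng = np.random.default_rng(seed)
    h = []
    s = 0.0                       # running sum of the h_i^2 already fixed
    for n in range(1, f + 1):
        shape = xi_bar + (n - 1) / 2.0
        rate = 1.0 + beta * alpha * s / xi_bar
        xi_n = rng.gamma(shape, 1.0 / rate)   # numpy's scale = 1/rate
        h_g = rng.normal(scale=np.sqrt(1.0 / (2.0 * beta * alpha)))
        h_n = h_g / np.sqrt(xi_n / xi_bar)    # Eq. (11)
        h.append(h_n)
        s += h_n * h_n
    return np.array(h)

elems = sequential_elements(f=10, alpha=1.0, xi_bar=0.5)
```

For a generic $w(\xi)$ the draw of $\xi_n$ would instead require a rejection or inversion step applied to Eq. (\[5\]).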
By generating matrices fixing, in the process, a set of values $\xi_1 , \xi_2 ,...,\xi_f $ we are, in the language of the disordered systems, quenching the disorder. The differences among matrices generated with different sets of $\xi$ depend on the width of the distribution $w(\xi)$ and one can expect that for wide $w(\xi)$ the large spread among the matrices will give rise to a nonergodic behavior.
Turning now to eigenvalues and eigenvectors, we observe that we have an ensemble invariant under unitary transformations in which, as occurs with the Gaussian ensembles, the joint distribution of eigenvalues and eigenvectors factorizes. The eigenvectors behave as those of the Gaussian ensembles and we can integrate them out to obtain for the eigenvalues the joint distribution
$$P\left( E_{1},...E_{N};\alpha \right ) =
\int d\xi w \left( \xi \right)(\alpha\xi/\bar{\xi}) ^{\frac{N}{2}}
P_{G}\left( x_{1},...x_{N};\frac{\beta}{2}\right) ,
\label{15}$$
where $x_i =\sqrt{\alpha\xi/\bar{\xi} }E_i$ and
$$P_{G} (x_{1},...x_{N}; \frac{\beta}{2} )=
K_{N}^{-1}\exp\left(-\frac{\beta}{2}\sum_{k=1}^{N} x_{k}^{2}\right)
\prod_{j>i} \left| x_{j}-x_{i}\right|^{\beta} ,
\label{26}$$
with $K_{N}$ being a normalization constant.
From (\[15\]), measures of the generalized family can be calculated by weighting the corresponding measures of the Gaussian ensembles with the $w(\xi )$ distribution. Integrating for instance (\[15\]) over all eigenvalues but one and multiplying by $N,$ the eigenvalue density is expressed in terms of the Wigner’s semi-circle law[@Meht] as
$$\rho \left( E;\alpha \right) =\frac{\sqrt{2\alpha}} {\pi}
\int d\xi w( \xi) (\xi/\bar{\xi})^{\frac{1}{2}}\sqrt{2N-2\alpha
\xi E^{2}/\bar{\xi} }, \label{126}$$
where the condition $\alpha \xi E^{2}/\bar{\xi}<N$ on $\xi$ has to be satisfied.
As previously stated, the introduction of the disorder represented by the variable $\xi,$ breaks in principle the ergodicity of the Gaussian ensembles. Let $N(L)= \int_{ E -L/2 }^{ E +L/2} dE^{\prime} \rho(E^{\prime})$ be the average number of eigenvalues in the interval $[ E -L/2, E + L/2]$ for an ensemble with eigenvalue density $\rho(E).$ The variance $ \Sigma^2 (L) $ of the number of eigenvalues in that interval can be expressed in terms of the two-point correlation function $R(E_1 ,E_2)$ by
$$\Sigma^2 (L) = \int_{ E -L/2 }^{ E +L/2} dE_1
\int_{ E -L/2}^{ E +L/2}
dE_2 R(E_1 ,E_2) +N(L) - N^2 (L).$$
Ergodicity implies[@Pandey] the vanishing of
$$\mbox{Var} \rho = [\rho( E)]^2 \Sigma^2 (L) /L^2 \label{16}$$
when $L\rightarrow \infty.$ For the disordered ensemble we have
$$\Sigma^2 \left( L \right) =\int d\xi w
\left(\xi \right) \left[
\Sigma^2_G (L) - N_G (L)+ N_G ^2 (L) \right]+
N(L) - N^2 (L),\label{37}$$
with $N_G (L)$ calculated with the Gaussian density. In (\[37\]), nonergodicity will result if the quadratic terms do not cancel. Indeed, in this case, a parabolic contribution for large $L$ survives and the variance of the density fluctuations given by Eq. (\[16\]) does not asymptotically vanish.
Consider now a particular choice of the distribution $w(\xi).$ Note that the factor multiplying the Gaussian matrices in Eq. (\[1\]) acts on the variance of the Gaussian ensembles. In order to investigate ensembles showing heavy-tailed densities it is convenient to choose $w(\xi)$ to be the gamma distribution
$$w(\xi)=
\exp(-\xi) \xi^{\bar{\xi} -1} /\Gamma(\bar{\xi}) \label{18}$$
that becomes a $\chi^2$ distribution for integer $2\bar{\xi}$. From (\[18\]) $\sigma_{\xi}=\sqrt{\bar{\xi}},$ showing that $\bar{\xi}$ controls the behavior of the distribution $w(\xi).$ It becomes more localized when $\bar{\xi}$ increases and we should then expect to recover the Gaussian ensembles. However, for smaller values of $\bar{\xi},$ departures from the Gaussian case will be observed. Indeed, by substituting (\[18\]) in (\[9\]) we find
$$P(H;\alpha ,\bar{\xi} )=\left(\frac{\beta\alpha}{\pi\bar{\xi}}
\right)^{\frac{f}{2}}\frac
{\Gamma \left( \frac{1}{q-1}\right)}{\Gamma \left( \bar{\xi} \right) }
\left(1+\frac{\beta\alpha}{\bar{\xi}}
\mbox{tr} H^{2}\right) ^{\frac{1}{1-q}} \label{22}$$
for the ensemble density distribution, where
$$\frac{1}{q-1}=\bar{\xi}+\frac{f}{2} , \text{ with } q>1.$$
Eq. (\[22\]) is just Eq. (4) of [@Bertuola]. In [@Bertuola] it was derived using a generalized maximum entropy principle[@Tsallis] with $q$ being identified with the Tsallis entropic parameter.
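A sampling check of the properties of Eq. (\[18\]) quoted above (mean $\bar{\xi}$ and $\sigma_\xi=\sqrt{\bar{\xi}}$):

```python
import numpy as np

# Check on Eq. (18): a unit-scale gamma density with shape parameter
# xi_bar has mean xi_bar and standard deviation sqrt(xi_bar).
rng = np.random.default_rng(1)
xi_bar = 4.0
xi = rng.gamma(xi_bar, 1.0, size=200_000)
print(xi.mean(), xi.std())   # close to 4.0 and 2.0
```

Increasing `xi_bar` makes the relative spread $\sigma_\xi/\bar{\xi}=1/\sqrt{\bar{\xi}}$ shrink, which is why the Gaussian ensembles are recovered in that limit.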
Substituting (\[18\]) in (\[6\]) for $n=1$ we obtain[@Burda]
$$p(h;\alpha,\bar{\xi})=\left(\frac{\beta\alpha}{\pi\bar{\xi}}
\right)^{\frac{1}{2}}\frac{\Gamma \left(\bar{\xi} +1/2 \right) }
{\Gamma \left( \bar{\xi}\right)}
\left(1+\frac{\beta\alpha}{\bar{\xi}}
h^{2}\right) ^{-\bar{\xi}-1/2} \label{280}$$
for the density distribution of a given matrix element. Since for large $\left|h\right|,$ $p_{\beta}(h;\alpha,\bar{\xi}) \sim 1/\left|h\right|^{2\bar{\xi}+1},$ (\[280\]) exhibits the power-law character of the distribution. It is important to remark that, apart from the lack of independence, the marginal distribution of the matrix elements has the same kind of distribution, namely one with an asymptotic power-law behavior, as the i.i.d. ones of the ensemble of Lévy matrices[@Cizeau].
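The Cauchy case can be checked by sampling. For $\bar{\xi}=1/2$ and $\beta\alpha=1/2$ (so that $h_G$ is a unit normal), $2\xi$ is $\chi^2$-distributed with one degree of freedom, and $h=h_G/\sqrt{2\xi}$ is a Student-$t$ variable with one degree of freedom, i.e. standard Cauchy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
h_g = rng.normal(size=n)            # unit normal: beta*alpha = 1/2
xi = rng.gamma(0.5, 1.0, size=n)    # Eq. (18) with xi_bar = 1/2
h = h_g / np.sqrt(2.0 * xi)         # Eq. (11); 2*xi ~ chi^2, 1 dof

# Compare the empirical CDF at x = 1 with the standard Cauchy value
# F(1) = 1/2 + arctan(1)/pi = 3/4.
print(np.mean(h <= 1.0))   # close to 0.75
```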
In Fig. 1 the eigenvalue density for three realizations of the ensemble generated using the above random process with $\bar{\xi}=1/2$ is histogrammed and compared with the semi-circle law. We recall that for $\bar{\xi} =1/2$ the matrix elements are Cauchy, $\frac{1}{\pi}\frac{1}{1+x^2},$ distributed (see Eq. (\[280\])). It is seen that the individual matrices of large sizes are Gaussian ensemble matrices, as they should be. As a comparison, Fig. 2 shows the eigenvalue density of just one Lévy matrix of large size whose matrix elements also follow the Cauchy distribution. We can see that although individual matrix elements of the two ensembles are identically distributed, their eigenvalue densities behave in completely different ways. While individual Lévy matrices of large sizes do not depart from the ensemble average, matrices generated according to (\[11\]) show large fluctuations.
Of course, the result shown in Fig. 1 indicates strong nonergodicity. This is confirmed by the ensemble number variances shown in Fig. 3. The parabolic behavior seems to persist even for large values of the parameter $\bar{\xi}, $ showing that the ensemble is nonergodic. Consequently, averages performed running along one spectrum do not coincide with averages over the ensemble of matrices.
Other systems in which nonergodicity may play an important role are networks and their associated graphs. We now show how the present approach can be applied in random graph theory[@Albert]. A graph is an array of points (nodes) connected by edges. It is completely defined by its adjacency matrix $A$ whose elements $A_{ij}$ have value $1$($0$) if the pair $(ij)$ of nodes is connected (disconnected). The diagonal elements are taken equal to zero, i.e. $A_{ii}=0.$ Adjacency matrices of graphs in which the connections are randomly set are real symmetric random matrices. The classical random graph model proposed by Erdös-Renyi (ER) is simply defined by giving a fixed probability $p$ that a given pair of nodes is connected, independently of the others[@Renyi].
We start by showing that the ER model can be considered as the equivalent in random graph theory to the Wigner model of Gaussian matrices. In fact, the joint matrix element distribution of its adjacency matrix $A$ can be written as
$$P_{ER}(A,\alpha)=\left[1+\exp(-\alpha)\right]^{-f}
\exp\left(-\frac{\alpha}{2} \mbox{tr} A^{2}\right) \label{17}$$
where $f=\frac{N(N-1)}{2}$ with $N,$ the size of matrix, being equal to the number of nodes. Eq. (\[17\]) is just the defining equation (\[12\]) of the GOE ($\beta=1$) ensemble with the constraint that the matrix elements can only take the values $0$ and $1$ imposed by the measure
$$dH=\prod_{1}^{N}dH_{ii}\delta(H_{ii})\prod_{j>i}
\sqrt{2}dH_{ij}\left[\delta(H_{ij})+\delta(1-H_{ij})\right]. \label{516}$$
From (\[17\]) it follows that the marginal distribution of a given matrix element, say $A_{ij }$, is
$$P_{ER}(A_{ij},\alpha)=\frac{\exp\left(-\alpha A_{ij}\right)}{1+\exp(-\alpha)}
=\left\{
\begin{array}{rl}
\frac{\exp(-\alpha)}{1+\exp(-\alpha)} ,& \text{if } A_{ij} = 1 \\
\frac{1}{1+\exp(-\alpha)}, & \text{if } A_{ij} = 0 ,
\end{array}
\right.$$
which means that the probability $p$ that defines the ER model is connected to the parameter $\alpha$ by the relation
$$\alpha=\ln(\frac{1}{p}-1).$$
Since the probability $p$ is defined in the interval $[0,1],$ the domain of variation of $\alpha$ is $]-\infty,\infty[.$ This suggests that the statistical properties of the ER model must show a symmetry with respect to the point $\alpha=0$ (or $p=1/2$).
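A two-line check of this relation and of the symmetry about $p=1/2$: $\alpha$ is the logit of $1-p$, so $p\mapsto 1-p$ flips the sign of $\alpha$.

```python
import math

# alpha = ln(1/p - 1)  <=>  p = 1/(1 + exp(alpha))
alpha_of_p = lambda p: math.log(1.0 / p - 1.0)
p_of_alpha = lambda a: 1.0 / (1.0 + math.exp(a))

assert abs(p_of_alpha(alpha_of_p(0.3)) - 0.3) < 1e-12
assert abs(alpha_of_p(0.5)) < 1e-12                    # p = 1/2 <-> alpha = 0
assert abs(alpha_of_p(0.3) + alpha_of_p(0.7)) < 1e-12  # p -> 1-p flips alpha
```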
It is important to remark that although Eq. (\[17\]) has the same structure as Eq. (\[12\]), there are striking differences between the two models. Despite the presence of the trace in (\[17\]), the discrete nature of the matrix elements imposed by the measure, Eq. (\[516\]), destroys the rotational invariance and prevents the factorization of the joint distribution of eigenvalues and eigenvectors. The parameter $\alpha$ is just a scaling parameter in the Gaussian case. In contrast, the properties of the ER model depend strongly on the value of the probability $p,$ and here $\alpha$ plays an essential role. Notice also that, contrary to the Gaussian cases, the adjacency matrices form an ensemble with a finite number of matrices. It is convenient, in the study of graphs, to introduce the scaling $p \sim N^{-z}$ ($z>0$). For instance, connectivity properties of the graph are characterized by $z.$
An analytical expression of the spectral density for arbitrary values of the probability $p$ and matrix size $N$ is an unsolved problem [@Leticia]. However, when $p$ is fixed and $N$ is very large, the density can be deduced in the following way. Since $A$ is a symmetric non-negative matrix, its maximum principal eigenvalue $E_1$ is close to the nonzero eigenvalue of the constant matrix $<A>$ whose elements are equal to the average of the $A$-elements, i.e. $<A>_{ij}=p.$ As the only nonzero eigenvalue of a constant matrix is equal to the product of its size by the element, we conclude that $E_1 =pN.$ Because of this linear dependence on $N,$ for fixed $p$ the largest eigenvalue grows faster than the others as the matrix size increases. In this case, for very large matrices the other eigenvalues asymptotically have the same eigenvalue density as the eigenvalues of the matrix $A-<A>.$ This density can be obtained from the moments of the trace of the powers of the matrix, and one finds that it obeys the Wigner semi-circle law[@Albert]
$$\rho_{ER}(E,\alpha)=\left\{
\begin{array}{rl}
\frac{1}{2\pi \sigma^2}\sqrt{4N\sigma^2-E^{2}}, &\mbox{if }
|E|<\sqrt{4N\sigma^2}\\
0, &\mbox{if } |E|>\sqrt{4N\sigma^2}
\end{array}
\right.$$
where $\sigma^2$ is the variance of the matrix elements given by
$$\sigma^2=p(1-p)=\frac{1}{4\cosh^2(\alpha/2)}.$$
The above argument fails if $p\sim 1/N $ ($z\sim 1$) in which case deviations from the semi-circle appear[@Leticia; @Farkas].
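Both spectral features above — the principal eigenvalue near $pN$ and a bulk confined to $|E|\le 2\sqrt{N\sigma^2}$ — are easy to observe numerically for fixed $p$ and large $N$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 1000, 0.2
A = np.triu((rng.random((N, N)) < p).astype(float), 1)
A = A + A.T                                # 0/1 adjacency, zero diagonal
evals = np.sort(np.linalg.eigvalsh(A))

radius = 2.0 * np.sqrt(N * p * (1.0 - p))  # semi-circle edge, sigma^2 = p(1-p)
print(evals[-1] / (p * N))                 # principal eigenvalue: close to 1
print(np.mean(np.abs(evals[:-1]) <= 1.05 * radius))  # bulk fraction: ~1
```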
We now introduce a disordered model of random graphs by defining an adjacency matrix with a distribution
$$P(A;\alpha)=\int d \xi
w(\xi)\frac{\exp\left(-\frac{\alpha\xi}{2} \mbox{tr} A^{2}\right)}
{\left[1+\exp(-\alpha\xi)\right]^{f}}. \label{515}$$
Therefore this generalized model is a superposition of Erdös-Renyi random graphs with distribution $P(A,\alpha \xi)$ weighted with $w(\xi )$, exactly as in (\[9\]) for the disordered Gaussian ensembles. Again the width of the distribution $w(\xi )$ is a controlling parameter and, as remarked before, the parameter $\alpha$ also plays an essential role. In particular, for $\alpha =0$ the ensemble is just the ER model with $p=1/2.$
From Eq. (\[515\]) we can derive the probability distribution for a set of matrix elements and use Eq. (\[514\]) to define a random process entirely equivalent to the one used to generate matrices of the disordered Gaussian ensemble. As before, a set of probabilities $p_n$ with $n=1,2,3...,f$ is sequentially generated and, from them, each new matrix element is obtained taking into account those already determined. This means that Eq. (\[515\]) defines a model of a disordered correlated graph in which new attachments depend on the ones already existing.
As in the case of the Gaussian ensembles, statistics of the averaged graph (our model) are averages over the ER statistics. For instance, the eigenvalue density is
$$\rho(E;\alpha)=\frac{2}{\pi}\int^{\xi_m}_{0}d\xi
w(\xi)\cosh(\frac{\alpha\xi}{2})
\sqrt{N-\cosh^2(\frac{\alpha\xi}{2})E^{2}} \label{518}$$
where
$$\xi_m=\frac{2}{\alpha}\cosh^{-1} (\frac{\sqrt{N}}{E}).$$
We now make for $w(\xi)$ the same choice as before, namely Eq. (\[18\]). As before we expect for large values of $\bar{\xi}$ small fluctuations around ER, whereas for small values they will become large and will govern the asymptotics.
In Fig. 4 we display the density of eigenvalues of the adjacency matrices. When going from $z$ close to $1$ to $z$ close to $0,$ the density goes from a highly peaked density with heavy tails towards a Wigner semi-circle, showing a crossover reminiscent of the transition from a scale-free to an ER graph.
In summary, we have discussed a new method to introduce matrix ensembles which preserve unitary invariance while presenting distributions with heavy tails. The price to pay to preserve unitary invariance is i) to abandon the statistical independence of the matrix elements and ii) to abandon the ergodic property (equivalence of spectral and ensemble averages). There are cases, however, in which only ensemble averages make sense. Consider, for instance, the behavior of individual eigenvalues. Recently, extreme eigenvalues have been a matter of great interest due to the discovery that the distributions they follow, the so-called Tracy-Widom distributions[@TW] in the case of the Gaussian ensembles, show universality and have wide applications[@TW1]. The same authors have found growing systems in which an external source drives the extreme values towards a behavior in which there is a competition between their distribution and a Gaussian[@Widom]. In a paper in preparation, we show that the disordered ensemble can be a useful model for this kind of system.
Let us finally mention that the method discussed here (Eq. (\[1\]) with the choice Eq. (\[18\]) for the probability density function $w(\xi)$) was intended to rederive and to give new insight on models previously studied. By making other choices for $w(\xi)$ new models preserving orthogonal invariance may be introduced (see also [@Klauder]).
We thank L. Pastur and W. F. Wreszinski for fruitful discussions. This work is supported in part by the Brazilian agencies CNPq and FAPESP.
[99]{}
O. Bohigas, M. J. Giannoni, and C. Schmit, Phys. Rev. Lett. [**52**]{}, 1 (1984); M. Sieber and K. Richter, Physica Scripta T [**90**]{}, 128 (2001); S. Müller, S. Heusler, P. Braun, F. Haake and A. Altland, Phys. Rev. E [**72**]{}, 046207 (2005); S. Heusler, S. Müller, A. Altland, P. Braun, F. Haake, Phys. Rev. Lett. [**98**]{}, 044103 (2007).
T. Guhr and, H.A. Weidenmüller. Ann. Phys. (NY), [**199**]{}, 412 (1990); M. S. Hussein and M. P. Pato, Phys. Rev. Lett. [**70**]{}, 1089 (1993).
A. C. Bertuola, O. Bohigas, and M. P. Pato, Phys. Rev. E [**70**]{}, 065102(R) (2004).
C. Tsallis, R. S. Mendes and A. R. Plastino, Physica A [**261**]{}, 534 (1998).
F. Toscano, R.O. Vallejos, and C. Tsallis, Phys. Rev. E. [**69**]{}, 066131 (2004); A. Y. Abul-Magd, Phys. Rev. E. [**71**]{}, 066207 (2005).
P. Cizeau and J. P. Bouchaud, Phys. Rev. E [**50**]{}, 1810 (1994); Z. Burda, R. A. Janik, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed, Phys. Rev. E [**65**]{}, 021106 (2002); N. S. Witte and P. J. Forrester, Nonlinearity [**13**]{}, 1965 (2000).
K.A. Muttalib and J.R. Klauder, Phys. Rev. E. [**71**]{}, 055101(R) (2005).
G. Biroli, J. P. Bouchaud and M. Potters, arXiv:0710.0802v1 \[cond-mat.stat-mech\].
M. Mézard, G. Parisi and M. Virasoro, [*Spin glasses theory and beyond*]{} (World Scientific, Singapore, 1987).
C. Beck and E. G. D. Cohen, Physica A [**322**]{}, 267 (2003); A. Y. Abul-Magd, Phys. Rev. E [**72**]{}, 066114 (2005).
M. L. Mehta, [*Random Matrices*]{} (Elsevier Academic Press, 3rd Ed., 2004).
A. Pandey, Ann. Phys. [**119**]{}, 119 (1979).
Z. Burda, A. T. Görlich,and B. Waclaw, Phys. Rev. E [**74**]{}, 041129 (2006).
R. Albert and A.-L. Barabási, Rev. Mod. Phys. [**74**]{}, 47 (2002).
P. Erdös and A. Renyi, Publ. Math. Debrecen [**6**]{}, 290 (1959).
G. Semerjian and L. F. Cugliandolo, J. Phys. A:Math. Gen. [**35**]{}, 4837 (2002).
I. J. Farkas, I. Derényi, A.-L. Barábasi, and T. Vicsek, Phys. Rev. E [**64**]{}, 026704 (2001).
C. A. Tracy and H. Widom, Commun. Math. Phys. [**159**]{}, 151 (1994) and [**177**]{}, 727 (1996).
C. A. Tracy and H. Widom, Proceedings of the ICM, Beijing 2002, vol. 1, 587–596.
J. Gravner, C. A. Tracy and H. Widom, Ann. of Prob. [**30**]{}, 1340 (2002); Commun. Math. Phys. [**229**]{}, 433 (2002); K. Johansson, Prob. Theo. and Rel. Fields [**138**]{}, 75 (2007).
[**Figure Captions**]{}
Fig. 1 The eigenvalue density of three matrices of size $N=300$ generated using Eqs. (\[11\]) and (\[18\]) with $\bar{\xi}=1/2$ compared with Wigner’s semi-circle law.
Fig. 2 The eigenvalue density of one Lévy matrix of size $N=600$ whose elements are Cauchy distributed compared to a Cauchy distribution.
Fig. 3 Full lines: the number variances calculated with Eq. (\[37\]) for the values $\bar{\xi} =5,10,20,50$ and $200$ as indicated in the figure; dashed lines: the linear Poisson number variance and the GOE number variance.
Fig. 4 The eigenvalue density of the disordered random graph model calculated with Eqs. (\[518\]) and (\[18\]) with $\bar{\xi} =1/2$ and for values $ 0.2,$ $ 0.3$ and $0.8$ of the scaling parameter $z.$
|
---
abstract: |
We present results from an analysis of using 232 million decays collected with the detector at the 2 asymmetric-energy $B$ Factory at SLAC. We measure the longitudinal polarization fraction $\ptrue = 0.978
\pm 0.014 {\ensuremath{\mathrm{(stat)}}\xspace}\,^{+0.021}_{-0.029} {\ensuremath{\mathrm{(syst)}}\xspace}$ and the -violating parameters ${\slong} = -0.33 \pm 0.24 {\ensuremath{\mathrm{(stat)}}\xspace}^{+0.08}_{-0.14} {\ensuremath{\mathrm{(syst)}}\xspace}$ and $\clong = -0.03\pm 0.18 {\ensuremath{\mathrm{(stat)}}\xspace}\pm 0.09 {\ensuremath{\mathrm{(syst)}}\xspace}$. Using an isospin analysis of $B\rightarrow \rho\rho$ decays we determine the unitarity triangle parameter $\alpha$. The solution compatible with the Standard Model is $\alpha =
(100 \pm 13)^\circ$.
author:
- 'B. Aubert'
- 'R. Barate'
- 'D. Boutigny'
- 'F. Couderc'
- 'Y. Karyotakis'
- 'J. P. Lees'
- 'V. Poireau'
- 'V. Tisserand'
- 'A. Zghiche'
- 'E. Grauges'
- 'A. Palano'
- 'M. Pappagallo'
- 'A. Pompili'
- 'J. C. Chen'
- 'N. D. Qi'
- 'G. Rong'
- 'P. Wang'
- 'Y. S. Zhu'
- 'G. Eigen'
- 'I. Ofte'
- 'B. Stugu'
- 'G. S. Abrams'
- 'A. W. Borgland'
- 'A. B. Breon'
- 'D. N. Brown'
- 'J. Button-Shafer'
- 'R. N. Cahn'
- 'E. Charles'
- 'C. T. Day'
- 'M. S. Gill'
- 'A. V. Gritsan'
- 'Y. Groysman'
- 'R. G. Jacobsen'
- 'R. W. Kadel'
- 'J. Kadyk'
- 'L. T. Kerth'
- 'Yu. G. Kolomensky'
- 'G. Kukartsev'
- 'G. Lynch'
- 'L. M. Mir'
- 'P. J. Oddone'
- 'T. J. Orimoto'
- 'M. Pripstein'
- 'N. A. Roe'
- 'M. T. Ronan'
- 'W. A. Wenzel'
- 'M. Barrett'
- 'K. E. Ford'
- 'T. J. Harrison'
- 'A. J. Hart'
- 'C. M. Hawkes'
- 'S. E. Morgan'
- 'A. T. Watson'
- 'M. Fritsch'
- 'K. Goetzen'
- 'T. Held'
- 'H. Koch'
- 'B. Lewandowski'
- 'M. Pelizaeus'
- 'K. Peters'
- 'T. Schroeder'
- 'M. Steinke'
- 'J. T. Boyd'
- 'J. P. Burke'
- 'N. Chevalier'
- 'W. N. Cottingham'
- 'M. P. Kelly'
- 'T. Cuhadar-Donszelmann'
- 'C. Hearty'
- 'N. S. Knecht'
- 'T. S. Mattison'
- 'J. A. McKenna'
- 'D. Thiessen'
- 'A. Khan'
- 'P. Kyberd'
- 'L. Teodorescu'
- 'A. E. Blinov'
- 'V. E. Blinov'
- 'A. D. Bukin'
- 'V. P. Druzhinin'
- 'V. B. Golubev'
- 'V. N. Ivanchenko'
- 'E. A. Kravchenko'
- 'A. P. Onuchin'
- 'S. I. Serednyakov'
- 'Yu. I. Skovpen'
- 'E. P. Solodov'
- 'A. N. Yushkov'
- 'D. Best'
- 'M. Bondioli'
- 'M. Bruinsma'
- 'M. Chao'
- 'I. Eschrich'
- 'D. Kirkby'
- 'A. J. Lankford'
- 'M. Mandelkern'
- 'R. K. Mommsen'
- 'W. Roethel'
- 'D. P. Stoker'
- 'C. Buchanan'
- 'B. L. Hartfiel'
- 'A. J. R. Weinstein'
- 'S. D. Foulkes'
- 'J. W. Gary'
- 'O. Long'
- 'B. C. Shen'
- 'K. Wang'
- 'L. Zhang'
- 'D. del Re'
- 'H. K. Hadavand'
- 'E. J. Hill'
- 'D. B. MacFarlane'
- 'H. P. Paar'
- 'S. Rahatlou'
- 'V. Sharma'
- 'J. W. Berryhill'
- 'C. Campagnari'
- 'A. Cunha'
- 'B. Dahmes'
- 'T. M. Hong'
- 'A. Lu'
- 'M. A. Mazur'
- 'J. D. Richman'
- 'W. Verkerke'
- 'T. W. Beck'
- 'A. M. Eisner'
- 'C. J. Flacco'
- 'C. A. Heusch'
- 'J. Kroseberg'
- 'W. S. Lockman'
- 'G. Nesom'
- 'T. Schalk'
- 'B. A. Schumm'
- 'A. Seiden'
- 'P. Spradlin'
- 'D. C. Williams'
- 'M. G. Wilson'
- 'J. Albert'
- 'E. Chen'
- 'G. P. Dubois-Felsmann'
- 'A. Dvoretskii'
- 'D. G. Hitlin'
- 'I. Narsky'
- 'T. Piatenko'
- 'F. C. Porter'
- 'A. Ryd'
- 'A. Samuel'
- 'S. Yang'
- 'R. Andreassen'
- 'S. Jayatilleke'
- 'G. Mancinelli'
- 'B. T. Meadows'
- 'M. D. Sokoloff'
- 'F. Blanc'
- 'P. Bloom'
- 'S. Chen'
- 'W. T. Ford'
- 'U. Nauenberg'
- 'A. Olivas'
- 'P. Rankin'
- 'W. O. Ruddick'
- 'J. G. Smith'
- 'K. A. Ulmer'
- 'J. Zhang'
- 'A. Chen'
- 'E. A. Eckhart'
- 'J. L. Harton'
- 'A. Soffer'
- 'W. H. Toki'
- 'R. J. Wilson'
- 'Q. Zeng'
- 'B. Spaan'
- 'D. Altenburg'
- 'T. Brandt'
- 'J. Brose'
- 'M. Dickopp'
- 'E. Feltresi'
- 'A. Hauke'
- 'V. Klose'
- 'H. M. Lacker'
- 'E. Maly'
- 'R. Nogowski'
- 'S. Otto'
- 'A. Petzold'
- 'G. Schott'
- 'J. Schubert'
- 'K. R. Schubert'
- 'R. Schwierz'
- 'J. E. Sundermann'
- 'D. Bernard'
- 'G. R. Bonneaud'
- 'P. Grenier'
- 'S. Schrenk'
- 'Ch. Thiebaux'
- 'G. Vasileiadis'
- 'M. Verderi'
- 'D. J. Bard'
- 'P. J. Clark'
- 'W. Gradl'
- 'F. Muheim'
- 'S. Playfer'
- 'Y. Xie'
- 'M. Andreotti'
- 'V. Azzolini'
- 'D. Bettoni'
- 'C. Bozzi'
- 'R. Calabrese'
- 'G. Cibinetto'
- 'E. Luppi'
- 'M. Negrini'
- 'L. Piemontese'
- 'A. Sarti'
- 'F. Anulli'
- 'R. Baldini-Ferroli'
- 'A. Calcaterra'
- 'R. de Sangro'
- 'G. Finocchiaro'
- 'P. Patteri'
- 'I. M. Peruzzi'
- 'M. Piccolo'
- 'A. Zallo'
- 'A. Buzzo'
- 'R. Capra'
- 'R. Contri'
- 'M. Lo Vetere'
- 'M. Macri'
- 'M. R. Monge'
- 'S. Passaggio'
- 'C. Patrignani'
- 'E. Robutti'
- 'A. Santroni'
- 'S. Tosi'
- 'S. Bailey'
- 'G. Brandenburg'
- 'K. S. Chaisanguanthum'
- 'M. Morii'
- 'E. Won'
- 'R. S. Dubitzky'
- 'U. Langenegger'
- 'J. Marks'
- 'S. Schenk'
- 'U. Uwer'
- 'W. Bhimji'
- 'D. A. Bowerman'
- 'P. D. Dauncey'
- 'U. Egede'
- 'J. R. Gaillard'
- 'G. W. Morton'
- 'J. A. Nash'
- 'M. B. Nikolich'
- 'G. P. Taylor'
- 'M. J. Charles'
- 'G. J. Grenier'
- 'U. Mallik'
- 'A. K. Mohapatra'
- 'J. Cochran'
- 'H. B. Crawley'
- 'V. Eyges'
- 'W. T. Meyer'
- 'S. Prell'
- 'E. I. Rosenberg'
- 'A. E. Rubin'
- 'J. Yi'
- 'N. Arnaud'
- 'M. Davier'
- 'X. Giroux'
- 'G. Grosdidier'
- 'A. Höcker'
- 'F. Le Diberder'
- 'V. Lepeltier'
- 'A. M. Lutz'
- 'T. C. Petersen'
- 'M. Pierini'
- 'S. Plaszczynski'
- 'S. Rodier'
- 'P. Roudeau'
- 'M. H. Schune'
- 'A. Stocchi'
- 'G. Wormser'
- 'C. H. Cheng'
- 'D. J. Lange'
- 'M. C. Simani'
- 'D. M. Wright'
- 'A. J. Bevan'
- 'C. A. Chavez'
- 'J. P. Coleman'
- 'I. J. Forster'
- 'J. R. Fry'
- 'E. Gabathuler'
- 'R. Gamet'
- 'K. A. George'
- 'D. E. Hutchcroft'
- 'R. J. Parry'
- 'D. J. Payne'
- 'C. Touramanis'
- 'C. M. Cormack'
- 'F. Di Lodovico'
- 'C. L. Brown'
- 'G. Cowan'
- 'R. L. Flack'
- 'H. U. Flaecher'
- 'M. G. Green'
- 'P. S. Jackson'
- 'T. R. McMahon'
- 'S. Ricciardi'
- 'F. Salvatore'
- 'D. Brown'
- 'C. L. Davis'
- 'J. Allison'
- 'N. R. Barlow'
- 'R. J. Barlow'
- 'M. C. Hodgkinson'
- 'G. D. Lafferty'
- 'M. T. Naisbit'
- 'J. C. Williams'
- 'C. Chen'
- 'A. Farbin'
- 'W. D. Hulsbergen'
- 'A. Jawahery'
- 'D. Kovalskyi'
- 'C. K. Lae'
- 'V. Lillard'
- 'D. A. Roberts'
- 'G. Blaylock'
- 'C. Dallapiccola'
- 'S. S. Hertzbach'
- 'R. Kofler'
- 'V. B. Koptchev'
- 'T. B. Moore'
- 'S. Saremi'
- 'H. Staengle'
- 'S. Willocq'
- 'R. Cowan'
- 'K. Koeneke'
- 'G. Sciolla'
- 'S. J. Sekula'
- 'F. Taylor'
- 'R. K. Yamamoto'
- 'H. Kim'
- 'P. M. Patel'
- 'S. H. Robertson'
- 'A. Lazzaro'
- 'V. Lombardo'
- 'F. Palombo'
- 'J. M. Bauer'
- 'L. Cremaldi'
- 'V. Eschenburg'
- 'R. Godang'
- 'R. Kroeger'
- 'J. Reidy'
- 'D. A. Sanders'
- 'D. J. Summers'
- 'H. W. Zhao'
- 'S. Brunet'
- 'D. Côté'
- 'P. Taras'
- 'B. Viaud'
- 'H. Nicholson'
- 'N. Cavallo'
- 'G. De Nardo'
- 'F. Fabozzi'
- 'C. Gatto'
- 'L. Lista'
- 'D. Monorchio'
- 'P. Paolucci'
- 'D. Piccolo'
- 'C. Sciacca'
- 'M. Baak'
- 'H. Bulten'
- 'G. Raven'
- 'H. L. Snoek'
- 'L. Wilden'
- 'C. P. Jessop'
- 'J. M. LoSecco'
- 'T. Allmendinger'
- 'G. Benelli'
- 'K. K. Gan'
- 'K. Honscheid'
- 'D. Hufnagel'
- 'P. D. Jackson'
- 'H. Kagan'
- 'R. Kass'
- 'T. Pulliam'
- 'A. M. Rahimi'
- 'R. Ter-Antonyan'
- 'Q. K. Wong'
- 'J. Brau'
- 'R. Frey'
- 'O. Igonkina'
- 'M. Lu'
- 'C. T. Potter'
- 'N. B. Sinev'
- 'D. Strom'
- 'E. Torrence'
- 'F. Colecchia'
- 'A. Dorigo'
- 'F. Galeazzi'
- 'M. Margoni'
- 'M. Morandin'
- 'M. Posocco'
- 'M. Rotondo'
- 'F. Simonetto'
- 'R. Stroili'
- 'C. Voci'
- 'M. Benayoun'
- 'H. Briand'
- 'J. Chauveau'
- 'P. David'
- 'L. Del Buono'
- 'Ch. de la Vaissière'
- 'O. Hamon'
- 'M. J. J. John'
- 'Ph. Leruste'
- 'J. Malclès'
- 'J. Ocariz'
- 'L. Roos'
- 'G. Therin'
- 'P. K. Behera'
- 'L. Gladney'
- 'Q. H. Guo'
- 'J. Panetta'
- 'M. Biasini'
- 'R. Covarelli'
- 'M. Pioppi'
- 'C. Angelini'
- 'G. Batignani'
- 'S. Bettarini'
- 'F. Bucci'
- 'G. Calderini'
- 'M. Carpinelli'
- 'F. Forti'
- 'M. A. Giorgi'
- 'A. Lusiani'
- 'G. Marchiori'
- 'M. Morganti'
- 'N. Neri'
- 'E. Paoloni'
- 'M. Rama'
- 'G. Rizzo'
- 'G. Simi'
- 'J. Walsh'
- 'M. Haire'
- 'D. Judd'
- 'K. Paick'
- 'D. E. Wagoner'
- 'J. Biesiada'
- 'N. Danielson'
- 'P. Elmer'
- 'Y. P. Lau'
- 'C. Lu'
- 'J. Olsen'
- 'A. J. S. Smith'
- 'A. V. Telnov'
- 'F. Bellini'
- 'G. Cavoto'
- 'A. D’Orazio'
- 'E. Di Marco'
- 'R. Faccini'
- 'F. Ferrarotto'
- 'F. Ferroni'
- 'M. Gaspero'
- 'L. Li Gioi'
- 'M. A. Mazzoni'
- 'S. Morganti'
- 'G. Piredda'
- 'F. Polci'
- 'F. Safai Tehrani'
- 'C. Voena'
- 'S. Christ'
- 'H. Schröder'
- 'G. Wagner'
- 'R. Waldi'
- 'T. Adye'
- 'N. De Groot'
- 'B. Franek'
- 'G. P. Gopal'
- 'E. O. Olaiya'
- 'F. F. Wilson'
- 'R. Aleksan'
- 'S. Emery'
- 'A. Gaidot'
- 'S. F. Ganzhur'
- 'P.-F. Giraud'
- 'G. Graziani'
- 'G. Hamel de Monchenault'
- 'W. Kozanecki'
- 'M. Legendre'
- 'G. W. London'
- 'B. Mayer'
- 'G. Vasseur'
- 'Ch. Yèche'
- 'M. Zito'
- 'M. V. Purohit'
- 'A. W. Weidemann'
- 'J. R. Wilson'
- 'F. X. Yumiceva'
- 'T. Abe'
- 'M. T. Allen'
- 'D. Aston'
- 'R. Bartoldus'
- 'N. Berger'
- 'A. M. Boyarski'
- 'O. L. Buchmueller'
- 'R. Claus'
- 'M. R. Convery'
- 'M. Cristinziani'
- 'J. C. Dingfelder'
- 'D. Dong'
- 'J. Dorfan'
- 'D. Dujmic'
- 'W. Dunwoodie'
- 'S. Fan'
- 'R. C. Field'
- 'T. Glanzman'
- 'S. J. Gowdy'
- 'T. Hadig'
- 'V. Halyo'
- 'C. Hast'
- 'T. Hryn’ova'
- 'W. R. Innes'
- 'S. Kazuhito'
- 'M. H. Kelsey'
- 'P. Kim'
- 'M. L. Kocian'
- 'D. W. G. S. Leith'
- 'J. Libby'
- 'S. Luitz'
- 'V. Luth'
- 'H. L. Lynch'
- 'H. Marsiske'
- 'R. Messner'
- 'D. R. Muller'
- 'C. P. O’Grady'
- 'V. E. Ozcan'
- 'A. Perazzo'
- 'M. Perl'
- 'B. N. Ratcliff'
- 'A. Roodman'
- 'A. A. Salnikov'
- 'R. H. Schindler'
- 'J. Schwiening'
- 'A. Snyder'
- 'A. Soha'
- 'J. Stelzer'
- 'J. Strube'
- 'D. Su'
- 'M. K. Sullivan'
- 'J. M. Thompson'
- 'J. Va’vra'
- 'S. R. Wagner'
- 'M. Weaver'
- 'W. J. Wisniewski'
- 'M. Wittgen'
- 'D. H. Wright'
- 'A. K. Yarritu'
- 'C. C. Young'
- 'P. R. Burchat'
- 'A. J. Edwards'
- 'S. A. Majewski'
- 'B. A. Petersen'
- 'C. Roat'
- 'M. Ahmed'
- 'S. Ahmed'
- 'M. S. Alam'
- 'J. A. Ernst'
- 'M. A. Saeed'
- 'M. Saleem'
- 'F. R. Wappler'
- 'W. Bugg'
- 'M. Krishnamurthy'
- 'S. M. Spanier'
- 'R. Eckmann'
- 'J. L. Ritchie'
- 'A. Satpathy'
- 'R. F. Schwitters'
- 'J. M. Izen'
- 'I. Kitayama'
- 'X. C. Lou'
- 'S. Ye'
- 'F. Bianchi'
- 'M. Bona'
- 'F. Gallo'
- 'D. Gamba'
- 'M. Bomben'
- 'L. Bosisio'
- 'C. Cartaro'
- 'F. Cossutti'
- 'G. Della Ricca'
- 'S. Dittongo'
- 'S. Grancagnolo'
- 'L. Lanceri'
- 'P. Poropat'
- 'L. Vitale'
- 'G. Vuagnin'
- 'F. Martinez-Vidal'
- 'R. S. Panvini'
- 'Sw. Banerjee'
- 'B. Bhuyan'
- 'C. M. Brown'
- 'D. Fortin'
- 'K. Hamano'
- 'R. Kowalewski'
- 'J. M. Roney'
- 'R. J. Sobie'
- 'J. J. Back'
- 'P. F. Harrison'
- 'T. E. Latham'
- 'G. B. Mohanty'
- 'H. R. Band'
- 'X. Chen'
- 'B. Cheng'
- 'S. Dasu'
- 'M. Datta'
- 'A. M. Eichenbaum'
- 'K. T. Flood'
- 'M. Graham'
- 'J. J. Hollar'
- 'J. R. Johnson'
- 'P. E. Kutter'
- 'H. Li'
- 'R. Liu'
- 'B. Mellado'
- 'A. Mihalyi'
- 'Y. Pan'
- 'R. Prepost'
- 'P. Tan'
- 'J. H. von Wimmersperg-Toeller'
- 'J. Wu'
- 'S. L. Wu'
- 'Z. Yu'
- 'M. G. Greene'
- 'H. Neal'
title: ' [**Improved Measurement of the CKM Angle [$\alpha$]{} Using Decays.** ]{}'
---
hep-ex/[0503049]{}\
[^1]
[^2]
In the Standard Model, -violating effects in the -meson system arise from a single phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix [@CKM]. Interference between direct decay and decay after $\Bz\Bzb$ mixing in results in a time-dependent decay-rate asymmetry that is sensitive to the angle $\alpha \equiv
\arg\left[-V_{td}^{}V_{tb}^{*}/V_{ud}^{}V_{ub}^{*}\right]$ in the unitarity triangle of the CKM matrix . This decay proceeds mainly through a $\b \to \u\ubar \d$ tree diagram. The presence of penguin loop contributions introduces additional phases that shift the experimentally measurable parameter $\alpha_{\mathrm{eff}}$ away from the value of $\alpha$. However, measurements of the $\Bp \to \rho^+\rho^0$ branching fraction and the upper limit for $\Bz
\to \rho^0 \rho^0$ [@recentrhorho; @PRLrho0rho0] show that the penguin contribution in $\B \to \rho \rho$ is small with respect to the leading tree diagram, and $\delta\alpha_{\rho\rho} = \alpha_{\mathrm{eff}} - \alpha$ is constrained at $\pm 11^\circ$ at $1\sigma$ [@PRLrho0rho0]. This Letter presents an update of the time-dependent analysis of and measurement of the CKM angle $\alpha$ reported in [@ref:us].
The analysis of $B$ decays to $\rho^+\rho^-$ is complicated by the presence of a mode with longitudinal polarization and two with transverse polarizations. The longitudinal mode is even, while the transverse modes contain -even and -odd states. Empirically, the decay is observed to be dominated by the longitudinal polarization [@ref:us], with a fraction $\ptrue$ defined by the fraction of the helicity zero state in the decay. The angular distribution is $$\begin{aligned}
&&\frac{d^2\Gamma}{\Gamma d\cos\theta_1 d\cos\theta_2}= \label{eqn:one} \\ \nonumber
&& \frac{9}{4}\left[f_L \cos^2\theta_1 \cos^2\theta_2 + \frac{1}{4}(1-\ptrue) \sin^2\theta_1 \sin^2\theta_2 \right]\end{aligned}$$ where $\theta_{i=1,2}$ is the angle between the momentum and the direction opposite the $B^0$ in the $\rho$ rest frame, and we have integrated over the angle between the $\rho$ decay planes.
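As a sanity check, the distribution in Eq. (\[eqn:one\]) integrates to unity over $(\cos\theta_1,\cos\theta_2)\in[-1,1]^2$ for any $\ptrue$, since the longitudinal and transverse terms are separately normalized. A short Python sketch of this check (illustrative only; the function and variable names are ours, not analysis code):

```python
def rho_rho_angular_pdf(cos1, cos2, f_l):
    """Angular distribution of Eq. (1): longitudinal (cos^2 cos^2) plus
    transverse (sin^2 sin^2 / 4) terms, weighted by f_l and 1 - f_l."""
    s1sq = 1.0 - cos1 * cos1
    s2sq = 1.0 - cos2 * cos2
    return 2.25 * (f_l * cos1 ** 2 * cos2 ** 2
                   + 0.25 * (1.0 - f_l) * s1sq * s2sq)

def integral_over_unit_square(f_l, n=400):
    """Midpoint Riemann sum over [-1, 1] x [-1, 1]; should return ~1."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        c1 = -1.0 + (i + 0.5) * h
        for j in range(n):
            c2 = -1.0 + (j + 0.5) * h
            total += rho_rho_angular_pdf(c1, c2, f_l) * h * h
    return total
```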
The analysis reported here is improved over our earlier publication [@ref:us] by a change in selection requirements resulting in an increased signal efficiency; introduction of a signal time dependence that accounts for possible misreconstruction; and use of a more detailed background model. This measurement uses 232 million decays collected with the [@ref:babar] detector at the PEP-II asymmetric-energy $B$ Factory at SLAC.
We reconstruct candidates ($B_{\rm rec}$) from combinations of two charged tracks and two candidates. We require that both tracks have particle identification information inconsistent with the electron, kaon, and proton hypotheses. The candidates are formed from pairs of photons each of which has a measured energy greater than $50~{\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace}$. The reconstructed mass must satisfy $0.10 < m_{\gamma\gamma} < 0.16~{\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace}$. The mass of the $\rho$ candidates must satisfy $0.5 < \mv < 1.0~{\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace}$. When multiple candidates can be formed, we select the one that minimizes the sum of $( m_{\gamma\gamma} - m_{\piz} )^2$ where $m_{\piz}$ is the true mass. If more than one candidate has the same mesons, we select one at random.
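The candidate selection just described can be sketched as follows; the dictionary layout and field names are assumptions made for illustration, and the random tie-breaking used in the analysis is replaced by a deterministic choice here:

```python
PI0_MASS = 0.1349766  # nominal pi0 mass in GeV/c^2

def passes_selection(cand):
    """Apply the quoted mass windows (GeV/c^2) to a B candidate."""
    pi0_ok = all(0.10 < m < 0.16 for m in cand["pi0_masses"])
    rho_ok = all(0.5 < m < 1.0 for m in cand["rho_masses"])
    return pi0_ok and rho_ok

def best_candidate(candidates):
    """Among multiple candidates, pick the one minimizing the sum of
    (m_gamma_gamma - m_pi0)^2 over its two pi0 candidates."""
    return min(candidates,
               key=lambda c: sum((m - PI0_MASS) ** 2
                                 for m in c["pi0_masses"]))
```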
Combinatorial backgrounds dominate near $|\coshel|=1$, and backgrounds from $B$ decays tend to concentrate at negative values of $\coshel$. We reduce these backgrounds with the requirement $-0.90 < \coshel < 0.98$.
Continuum $\epem \to \qqbar$ ($q = u,d,s,c$) events are the dominant background. This background is reduced by requiring that $|\cos\theta_{TR}|<0.8$, where $\theta_{TR}$ is the angle between the $B$ thrust axis and that of the rest of the event (ROE). The thrust axis of the $B$ candidate is the direction that maximizes the sum of the longitudinal momenta of the particles in the candidate. To distinguish signal from continuum we use a neural network to combine ten discriminating variables: the event shape variables that are used in the Fisher discriminant in Ref. [@pipiBabar]; the cosine of the angle between the direction of the $B$ and the collision axis ($z$) in the center-of-mass (CM) frame; the cosine of the angle between the $B$ thrust axis and the $z$ axis; the decay angle of each (defined in analogy to the $\rho$ decay angle, $\theta_i$); and the sum of transverse momenta in the ROE relative to the $z$ axis.
Signal events are identified kinematically using two variables: the difference $\DeltaE$ between the CM energy of the candidate and $\sqrt{s}/2$, and the beam-energy-substituted mass $\mes = \sqrt{(s/2 + {\mathbf {p}}_i\cdot {\mathbf {p}}_B)^2/E_i^2- {\mathbf {p}}_B^2}$, where $\sqrt{s}$ is the total CM energy. The momentum ${\mathbf {p}_B}$ and four-momentum of the initial state $(E_i, {\mathbf {p}_i})$ are defined in the laboratory frame. We accept candidates that satisfy $5.23 < \mes <5.29~{\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace}$ and $-0.12<\DeltaE<0.15~{\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace}$. The asymmetric $\DeltaE$ selection reduces background from higher-multiplicity decays.
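The beam-energy-substituted mass can be coded directly from the definition above; for $\mathbf{p}_i = 0$ and $E_i = \sqrt{s}$ it reduces to the familiar CM-frame form $\sqrt{s/4 - \mathbf{p}_B^2}$. A sketch, with units of GeV assumed:

```python
import math

def m_es(sqrt_s, p_i, e_i, p_b):
    """Beam-energy-substituted mass from lab-frame quantities.

    sqrt_s : total CM energy
    p_i, e_i : three-momentum and energy of the initial e+e- state (lab)
    p_b : three-momentum of the B candidate (lab)
    """
    s = sqrt_s * sqrt_s
    pi_dot_pb = sum(a * b for a, b in zip(p_i, p_b))
    pb_sq = sum(a * a for a in p_b)
    return math.sqrt(((s / 2.0 + pi_dot_pb) / e_i) ** 2 - pb_sq)
```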
To study the time-dependent asymmetry one needs to measure the proper-time difference, $\deltat$, between the two decays in the event, and to determine the flavor of the other meson ($B_{\rm tag}$). We calculate $\deltat$ from the measured separation between the $B_{\rm rec}$ and $B_{\rm tag}$ decay vertices [@prdsin2b]. We determine the $B_{\rm rec}$ vertex from the two charged-pion tracks in its decay. The $B_{\rm tag}$ decay vertex is obtained by fitting the other tracks in the event, with constraints from the $B_{\rm rec}$ momentum and the beam-spot location. The RMS resolution on $\deltat$ is $1.1 \, \ps$. We only use events that satisfy $|\deltat|<20 \, \ps$ and for which the error on $\deltat$ is less than $2.5 \, \ps$. The flavor of the $B_{\rm tag}$ meson is determined with a multivariate technique [@pipiBabar] that has a total effective tagging efficiency of $(29.9\pm 0.5)$%.
Signal candidates may pass the selection requirement even if one or more of the pions assigned to the $\rho^+\rho^-$ state belongs to the other $B$ in the event. These self-cross-feed (SCF) candidates constitute 50% (26%) of the accepted signal for $\fL=1$ ($\fL=0$). The majority of SCF events have both charged pions from the $\rho^+\rho^-$ final state, and unbiased $\deltat$ information (correct-track SCF). There is a SCF component (14% of the signal) where at least one track in $B_{\rm rec}$ is from the rest of the event. These wrong-track events have biased $\deltat$ information, and are treated separately for the result. The probability density function (PDF) describing wrong-track events is used only in determining the signal yield and polarization. A systematic error is assigned to the results from this type of signal event.
We obtain a sample of 68703 events that enter a maximum-likelihood fit. These events are dominated by backgrounds: roughly $92$% from and $7$% from events. The remaining 1% of events is signal. We distinguish the following candidate types: (i) correctly reconstructed signal; (ii) SCF signal, split into correct and wrong track parts; (iii) charm $\Bpm$ background ($b\to c$); (iv) charm $\Bz$ background ($b\to c$); (v) charmless $B$ backgrounds; and (vi) continuum background. The dominant charmless backgrounds are decays to $\rho\pi$, $(a_1 \pi)^\pm$, $(a_1 \pi)^0$, and longitudinally polarized $a_1\rho$ final states. For these decays we use the inclusive branching fractions (in units of $10^{-6}$), $34 \pm 4$ [@rhopi], $42\pm 42$, $42\pm 6$ [@aonepi] and $100\pm 100$, respectively. The corresponding expected number of events in the sample are $82 \pm 13$, $87\pm 87$, $65\pm 9$, and $202\pm 202$. We also account for contributions from higher kaon resonances ($112 \pm 112$ events) and $\rho^+\rho^0$ ($82\pm19$ events). In addition we expect $2551 \pm 510$ ($1316 \pm 263$) charged (neutral) decays to final states containing charm mesons. The -background decays are included as separate components in the fit.
Each candidate is described with the eight $B_{\rm rec}$ kinematic variables: , , the and values of the two $\rho$ mesons, , and . For each fit component, we construct a PDF that is the product of PDFs for these variables, neglecting correlations. This introduces a fit bias that is corrected with the use of Monte Carlo (MC) simulation. The continuum-background yield and its PDF parameters for , , , and are floated in the fit to data. The continuum distribution is described by a Breit-Wigner and polynomial shape, and is derived from and data sidebands. For all other fit components the PDFs are extracted from high-statistics MC samples. The distributions for the background are described by a non-parametric (NP) PDF derived from the MC samples, as the detector acceptance and selection modify the known vector-meson decay distribution. The true signal distribution is given by Eq. \[eqn:one\] multiplied by an acceptance function determined from signal MC samples, whereas SCF signal is modeled using NP PDFs.
The signal decay-rate distribution for both polarizations $f_+ (f_-)$ for $B_{\rm tag}$= () is given by $$\begin{aligned}
f_{\pm}(\deltat) = \frac{e^{-\left|\deltat\right|/\tau}}{4\tau} [1
\pm S\sin(\deltamd\deltat) \mp \C\cos(\deltamd\deltat)]\,, \nonumber\end{aligned}$$ where $\tau$ is the mean lifetime, is the mixing frequency, and $S$ = or and $C$ = or are the -asymmetry parameters for the longitudinally and transversely polarized signal. The parameters $S$ and $C$ describe -mixing induced and direct violation, respectively. $S$ and $C$ for the longitudinally polarized wrong-track signal are fixed to zero. The PDF takes into account incorrect tags and is convolved with the resolution function described below. Since is approximately $1$, the fit has no sensitivity to either or . We set these parameters to zero and vary them in the evaluation of systematic uncertainties.
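The rate formula can be coded directly; the tag convention ($+1$ for a $B^0$ tag, $-1$ for a $\bar B^0$ tag) and the parameter names below are our own conventions for illustration:

```python
import math

def decay_rate(dt, tag, tau, dmd, s_cp, c_cp):
    """f_{+/-}(Delta t): tag = +1 (-1) selects the upper (lower) signs."""
    pref = math.exp(-abs(dt) / tau) / (4.0 * tau)
    return pref * (1.0 + tag * (s_cp * math.sin(dmd * dt)
                                - c_cp * math.cos(dmd * dt)))
```

Summed over the two tags, the oscillating terms cancel and a pure normalized exponential remains, while the decay-rate asymmetry formed from the two rates is exactly $S\sin(\deltamd\deltat) - C\cos(\deltamd\deltat)$.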
The signal resolution function consists of three Gaussians ($\sim$$90\%$ core, $\sim$9$\%$ tail, $\sim$1$\%$ outliers), and takes into account the per-event error on $\deltat$ from the vertex fit. The resolution is parameterized using a large sample of fully reconstructed hadronic decays [@prdsin2b]. For wrong-track SCF we replace the -meson lifetime by an effective lifetime obtained from MC simulation to account for the difference in the resolution. The nominal distribution for the backgrounds is a NP representation of the MC samples; in the study of systematic errors we replace this model with the one used for signal. The resolution for continuum background is described by the sum of three Gaussian distributions whose parameters are determined from data.
We perform an unbinned extended maximum likelihood fit. The results of the fit are $617 \pm 52$ signal events, after correction of a $68$ event fit bias, with $\fL = 0.978 \pm 0.014$, $\slong = -0.33 \pm 0.24$ and $\clong = -0.03 \pm 0.18$. The measured signal yield, polarization, and parameters are in agreement with our earlier publication [@ref:us], with significantly improved precision. Figure \[fig:plots\] shows distributions of , , and for the highest purity tagged events with a loose requirement on . The plot of contains 14% of the signal and 1.5% of the background. For the other plots there is an added constraint that $\mes > 5.27 {\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace}$; these requirements retain 11.5% of the signal and 0.4% of the background. Figure \[fig:dtplots\] shows the distribution for and tagged events. The time-dependent decay-rate asymmetry $[ N (\deltat) - \overline{N}(\deltat) ] / [ N (\deltat) + \overline{N}(\deltat) ]$ is also shown, where $N$ $(\overline{N})$ is the decay-rate for () tagged events.
We have studied possible sources of systematic uncertainties on , and . The dominant uncertainties for come from floating the background yields ($\,^{+0.00}_{-0.02}$), non-resonant events (0.015) and fit bias (0.01). The dominant systematic uncertainty on the results comes from the uncertainty in the -background branching ratios. This results in a shift on (), as large as $\,^{+0.00}_{-0.12}$ ($\,^{+0.008}_{-0.003}$). Additional uncertainties on the results come from possible violation in the background, calculated as in Ref. [@ref:us]. We allow for a asymmetry up to 20% in decays to final states with charm, resulting in an uncertainty of 0.027 (0.045) on (). Allowing for possible violation in the transverse polarization results in an uncertainty of 0.02 ($\,^{+0.002}_{-0.016}$) on (). We estimate the systematic error on our results from neglecting the interference between and other $4\pi$ final states: $\B \to a_1\pi$, $\rho \pi\pi^0$ and $\B\to\pi\pi\piz\piz$. Strong phases and content of the interfering states are varied between zero and maximum using uniform prior distributions, and the RMS deviation of the parameters from nominal is taken as the systematic error; this is found to be 0.02 on and . Other contributions that are large include knowledge of the vertex detector alignment 0.034 (0.005) on (), and possible violation in the doubly-Cabibbo-suppressed decays on the tag side of the event [@ref:dcsd]. We allow violation in the wrong-track SCF to vary between $-1$ and $+1$, which results in changes of 0.007 (0.012) in (). The nominal fit does not account for non-resonant background. If we add a non-resonant component of $\B\to\rho \pi\pi^0$ events to the likelihood, we fit $83 \pm 59$ non-resonant events and observe only a $(6 \pm 4)$% drop in signal yield. This effect is included in our total systematic uncertainty. Possible contributions from $\sigma(400) \pi^0\pi^0$ decays are neglected due to the small reconstruction efficiency ($0.4\%$). Our results are
$$\begin{aligned}
\ptrue &=& 0.978 \pm 0.014 {\ensuremath{\mathrm{(stat)}}\xspace}\,^{+0.021}_{-0.029} {\ensuremath{\mathrm{(syst)}}\xspace}, \nonumber\\
{\slong} &=& -0.33 \pm 0.24 {\ensuremath{\mathrm{(stat)}}\xspace}^{+0.08}_{-0.14} {\ensuremath{\mathrm{(syst)}}\xspace}, \nonumber\\
\clong &=& -0.03 \pm 0.18 {\ensuremath{\mathrm{(stat)}}\xspace}\pm 0.09 {\ensuremath{\mathrm{(syst)}}\xspace}, \nonumber\end{aligned}$$
where the correlation between and is $-0.042$.
We constrain the CKM angle $\alpha$ from an isospin analysis [@grossmanquinn] of $B \to \rho\rho$. The inputs to the isospin analysis are the amplitudes of the -even longitudinal polarization of the $\rho\rho$ final state, as well as the measured values of and for . We use the measurements of , and presented here; the branching fraction of $\B^0\to\rhop\rhom$ from [@ref:us], which uses information from [@ref:rhorhoprd]; the combined branching fraction and for $\B\to\rhop\rhoz$ from Ref. [@recentrhorho]; the central value corresponding to the upper limit of ${ \cal B}(\B\to\rhoz\rhoz)$ from Ref. [@PRLrho0rho0]. We ignore electroweak penguins and possible $I=1$ amplitudes [@falk].
To interpret our results in terms of a constraint on $\alpha$ from the isospin relations, we construct a $\chi^2$ that includes the measured quantities expressed as the lengths of the sides of the isospin triangles and we determine the minimum $\chi^2_0$. As the isospin triangles do not close with the current central values of the branching ratios, we have adopted a toy MC technique to compute the confidence level (CL) on $\alpha$; our method is similar to the approach proposed in Ref. [@FC98]. For each value of $\alpha$, scanned between $0$ and $180^\circ$, we determine the difference $\Delta \chi^2_{{\rm DATA}}(\alpha)$ between the minimum of $\chi^2(\alpha)$ and $\chi^2_0$. We then generate MC experiments around the central values obtained from the fit to data with the given value of $\alpha$ and we apply the same procedure. The fraction of these experiments in which $\Delta \chi^2_{{\rm MC}}(\alpha)$ is smaller than $\Delta \chi^2_{{\rm DATA}}(\alpha)$ is interpreted as the CL on $\alpha$. Figure \[fig:alpha\] shows $1-{\rm CL}$ for $\alpha$ obtained from this method. Selecting the solution closest to the CKM combined fit average [@ref:ckmbestfit; @ref:utfit] we find $\alpha =
100^\circ\pm13^\circ$, where the error is dominated by $\delta\alpha_{\rho\rho}$ which is $\pm 11^\circ$ at $1\sigma$. The 90% CL allowed interval for $\alpha$ is between $79^\circ$ and $123^\circ$.
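The CL construction described above can be illustrated with a deliberately simplified one-observable model, replacing the isospin-triangle $\chi^2$ by a single Gaussian measurement of $\sin 2\alpha$; the model, the $1^\circ$ scan grid, and all numbers below are assumptions for illustration only:

```python
import math
import random

def chi2(x_meas, sigma, alpha_deg):
    """Toy chi^2: one observable x = sin(2 alpha), Gaussian error sigma."""
    pred = math.sin(2.0 * math.radians(alpha_deg))
    return ((x_meas - pred) / sigma) ** 2

def confidence_level(alpha_deg, x_meas, sigma, n_toys=500, seed=1):
    """Fraction of toys with Delta chi^2_MC < Delta chi^2_DATA at alpha."""
    grid = range(0, 181)  # 1-degree scan of alpha
    chi2_0 = min(chi2(x_meas, sigma, a) for a in grid)
    d_data = chi2(x_meas, sigma, alpha_deg) - chi2_0
    pred = math.sin(2.0 * math.radians(alpha_deg))
    rng = random.Random(seed)
    n_below = 0
    for _ in range(n_toys):
        x_toy = rng.gauss(pred, sigma)  # toy thrown assuming alpha is true
        d_toy = chi2(x_toy, sigma, alpha_deg) - min(
            chi2(x_toy, sigma, a) for a in grid)
        if d_toy < d_data:
            n_below += 1
    return n_below / n_toys
```

Values of $\alpha$ far from the one preferred by the toy measurement give a CL near 1, i.e. $1-{\rm CL}$ near 0, as in the exclusion regions of Fig. \[fig:alpha\].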
![ CL on $\alpha$ obtained from the isospin analysis with the statistical method described in [@ref:ckmbestfit]. The dashed lines correspond to the 68% (top) and 90% (bottom) CL intervals. []{data-label="fig:alpha"}](FCAlphaCLPRL.eps){width="48.00000%"}
In summary we have improved the measurement of the -violating parameters and in using a data-sample 2.6 times larger than that in Ref. [@ref:us]. We do not observe mixing-induced or direct violation. We derive a model-independent measurement of the CKM angle $\alpha$, which is the most precise to date.
We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support . The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE and NSF (USA), NSERC (Canada), IHEP (China), CEA and CNRS-IN2P3 (France), BMBF and DFG (Germany), INFN (Italy), FOM (The Netherlands), NFR (Norway), MIST (Russia), and PPARC (United Kingdom). Individuals have received support from CONACyT (Mexico), A. P. Sloan Foundation, Research Corporation, and Alexander von Humboldt Foundation.
[99]{}
N. Cabibbo, [[Phys. Rev. Lett.]{} [**10**]{}]{}, 531 (1963); M. Kobayashi and T. Maskawa, [[Prog. Theor. Phys. [**49**]{}]{}]{}, 652 (1973).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**91**]{}]{}, 171802 (2003); Belle Collaboration, J. Zhang [*et al.*]{}, [[Phys. Rev. Lett.]{} [**91**]{}]{}, 221801 (2003).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**94**]{}]{}, 131801 (2005).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**93**]{}]{}, 231801 (2004).
Collaboration, B. Aubert [*et al.*]{}, [[Nucl. Instr. Methods Phys. Res., Sect. A]{} [**479**]{}]{}, 1 (2002).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**89**]{}]{}, 281802 (2002).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev.]{} D [**66**]{}]{}, 032003 (2002).
Belle Collaboration, A. Gordon [*et al.*]{}, [[Phys. Lett.]{} B [**542**]{}]{}, 183 (2002); Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**91**]{}]{}, 201802 (2003); Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev. Lett.]{} [**93**]{}]{}, 051802 (2004); Belle Collaboration, J. Zhang [*et al.*]{}, [[Phys. Rev. Lett.]{} [**94**]{}]{}, 031801 (2005).
Collaboration, B. Aubert [*et al.*]{}, hep-ex/0408021 (SLAC-PUB-10597).
O. Long [*et al.*]{}, [[Phys. Rev.]{} D [**68**]{}]{}, 034010 (2003).
M. Gronau, D. London, [[Phys. Rev. Lett.]{} [**65**]{}]{}, 3381 (1990).
Collaboration, B. Aubert [*et al.*]{}, [[Phys. Rev.]{} D [**69**]{}]{}, 031102 (2004).
A. Falk [*et al.*]{}, [[Phys. Rev.]{} D [**69**]{}]{}, 011502 (2004).
G. Feldman and R. Cousins, [[Phys. Rev.]{} D [**57**]{}]{}, 3873 (1998).
J. Charles [*et al.*]{} (CKMfitter Group), [[Eur. Phys. Jour.]{} C [**41**]{}]{}, 1 (2005).
M. Bona [*et al.*]{}, hep-ph/0501199 (submitted to JHEP).
[^1]: Deceased
[^2]: Deceased
|
---
abstract: 'We investigate in detail the flavor structure of the minimal 331 model and its implications for several flavor changing neutral current (FCNC) processes. In this model, where the weak $SU(2)_L$ gauge group of the Standard Model is extended to a $SU(3)_L$, the by far dominant new contributions come from an additional neutral $Z''$ gauge boson, that can transmit FCNCs at tree-level. At the same time, electroweak precision observables receive new contributions only at the loop level and do not constrain the model very strongly. In our analysis, we take into account new CP violating effects that have been neglected in earlier analyses, and account for a general flavor structure without reference to a certain parameterization of the new mixing matrix. We begin by studying the bounds obtained from quantities such as $\Delta M_K$, $\epsilon_K$, $\Delta M_{d/s}$ as well as $\sin 2 \beta|_{J/\psi K_S}$, and go on to explore the implications for several clean rare decay channels, namely the decays $\kpn$, $\klpn$, $B_{d/s}\to \mu^+\mu^-$ and $K_L \to \pi^0 l^+ l^-$. We find sizeable effects in all these decays, but the most interesting quantity turns out to be the $B_s^0 - \bar B_s^0$ mixing phase $\beta_s$, as measured in the mixing induced CP asymmetry of $B_s^0 \to J/\psi \phi$, which can be large. In general, we find effects in purely hadronic channels to be larger than in (semi-)leptonic ones, due to a suppression of the $Z''$-lepton couplings.'
---
TUM-HEP-658/07\
UAB-FT/624\
hep-ph/yymmnnn
[**Christoph Promberger${}^a$, Sebastian Schatt${}^a$ and Felix Schwab${}^{b}$**]{}
${}^a$ [*Physik Department, Technische Universität München, D-85748 Garching, Germany*]{}\
${}^b$ [*Departament de Física Teòrica, IFAE, UAB, E-08193 Bellaterra, Barcelona, Spain*]{}
Introduction {#sec:intro}
============
The Standard Model of Particle Physics (SM) describes at present most of the observed phenomena in nature, with the exception of a consistent inclusion of gravitational effects. Still, there are several open questions remaining in this model, concerning, among others, the matter of electroweak symmetry breaking, as well as the explicit particle content of the model, where there are three generations for both quarks and leptons. This latter question can be answered in the context of the 331 models [@Frampton:1992wt; @Pisano:1991ee], where anomaly cancellation and the asymptotic freedom of QCD require the number of generations to be precisely three. In order to do so, the $SU(2)_L$ doublet of the weak interactions is extended to a triplet with additional heavy quarks, and, additionally, the third generation transforms as an anti-triplet under the $SU(3)_L$.
In the breaking process of this new, enlarged gauge group to the SM and, subsequently, its electromagnetic $U(1)_{em}$, additional gauge bosons are encountered, among these a neutral $Z'$ boson, which is naturally heavier than the SM gauge bosons, since its mass arises from the larger VEV that breaks the $SU(3)_L$ at a high scale. Similarly, there are heavy charged and doubly charged gauge bosons, as well as additional heavy, exotically charged (in the minimal 331 model) quarks that constitute the third member of the $SU(3)_L$ triplet. In the leptonic sector these third triplet members are just given by the charge conjugated counterpart of the charged SM lepton.
While the charged gauge bosons can contribute to low energy processes involving quarks only at loop level, since they always couple to one of the heavy quarks, the neutral $Z'$ can transmit flavor changing neutral currents (FCNC) at tree level. Therefore, these processes can place rather stringent bounds on the mass of this heavy gauge boson, and there have been several analyses of certain FCNC observables in the literature [@Liu:1993gy; @GomezDumm:1994tz; @Rodriguez:2004mw]. In addition, the FCNC processes involving down type quarks are also affected by the unitary quark mixing matrix used to diagonalize the down type Yukawa coupling, while those involving up type quarks appear with the corresponding up type mixing matrix. Thus, there is the possibility of new CP violating phases, which have, however, been neglected in all previous analyses of this type (see, on the other hand, [@Langacker:2000ju], where the most general type of $Z'$ coupling is analyzed in a model independent manner).
Also, it has been repeatedly pointed out in the literature [@Liu:1993gy; @GomezDumm:1994tz; @Rodriguez:2004mw] that the most stringent FCNC constraints arise from parameters involving flavor mixing, in particular the mass differences in the neutral $K$ and $B$ meson systems, and the new measurement of $\Delta M_s$, the mass difference in the $B^0_s$ system, is expected to have a significant impact here. In view of these two points we find it interesting to reanalyze in a complete manner the most important FCNC observables within the minimal 331 model, where we also include an analysis of several rare decay processes, which have not been analyzed before. We would also like to point out that FCNCs, which can provide lower bounds on the $Z'$ mass, are complementary to the corresponding [*upper*]{} bounds stemming from the fact that the model produces a Landau pole above a certain scale. However, these lower bounds are always obscured by some lack of knowledge of the mixing matrix elements. Therefore, we will pursue in our analysis a route that is somewhat complementary to a standard FCNC analysis: We will not attempt to place lower bounds on the $Z'$ mass, but rather set its mass at several fixed values and will try to gain some information on the structure of the appearing quark mixing matrix. In addition, we will investigate the implications of the bounds obtained from well-measured observables such as $\Delta M_K$, $\varepsilon_K$, $\Delta M_{d/s}$ and $\sin 2 \beta$ for several clean rare decays, where we can give upper bounds on the corresponding branching fractions depending on the $Z'$ mass. Let us finally point out that the study of FCNC processes in these models is particularly interesting, as these processes occur at tree level, while the usual electroweak precision (EWP) observables, which strongly constrain most models beyond the SM, receive new contributions only at the loop level; this actually makes the bounds from FCNC processes more stringent than those from EWP measurements.
The most recent study of electroweak precision observables can be found in [@Long:1999yv].
Our paper is organized as follows: In Section \[sec:model\], we introduce the minimal 331 model, thereby also setting our conventions. In addition, we give the FCNC vertices and a convenient parameterization of the corresponding quark mixing matrix in order to reduce the number of parameters appearing. Next, in Section \[sec:obs\] we give the additional $Z'$ contributions to several observables, which we evaluate numerically in Sect. \[sec:numerics\]. Among these observables, there are the mass differences in the neutral meson systems, as noted above, as well as several CP violating quantities, from which some information on the phase structure of the model can be obtained. During this numerical analysis, we compare our work several times to a recent, similar analysis of the Little Higgs model with T-parity (LHT) performed in [@Blanke:2006eb], since both models share the feature of introducing new CP violating phases while keeping the operator basis the same as in the SM. Finally, Section \[sec:conclusions\] contains our conclusions.
The Minimal 331 Model {#sec:model}
=====================
Let us begin by introducing the particle content of the minimal 331 model. Many details of this model have been first worked out in [@Ng:1992st], to which we refer the reader for some more information. The model consists of a gauge group $SU(3)_C \times SU(3)_L \times U(1)$, which is broken down in two steps: $$SU(3)_C \times SU(3)_L \times U(1)_X \stackrel{v_{\sigma}}{\Rightarrow} SU(3)_C \times SU(2)_L \times U(1)_Y
\stackrel{v_{\eta},v_{\rho}}{\Rightarrow} U(1)_{em}$$ Here, in contrast to the SM, two Higgs multiplets are required for the breaking of $SU(2)\times U(1)$ in order to give masses to all quarks[^1]. The additional VEV $v_{\sigma}$ is much larger than the two others. The charge assignment for the Higgs multiplets is as follows: $$\sigma = \frac{1}{\sqrt{2}}\pmatrix{\sigma_1^{++} \cr \sigma_2^+ \cr v_{\sigma}+\xi_{\sigma}+i \zeta_{\sigma}} :(1,3,1) ,\quad
\
\rho = \frac{1}{\sqrt{2}}\pmatrix{\rho_1^+ \cr v_{\rho}+\xi_{\rho} +i \zeta_{\rho} \cr \rho_2^-} : (1,3,0),$$ $$\eta = \frac{1}{\sqrt{2}}\pmatrix{v_{\eta}+\xi_{\eta}+i~\zeta_{\eta} \cr \eta_1^- \cr \eta_2^{--} \cr} : (1,3,-1)$$ where the $\xi_i$ and $\zeta_i$ denote the real (scalar) and imaginary (pseudoscalar) fluctuations around the appropriate VEVs. In analogy, the fermion content of the minimal model is given by $$\begin{aligned}
\psi_{1,2,3} = \pmatrix {e\cr -\nu_e\cr e^c\cr} \: , \pmatrix {\mu \cr -\nu_\mu
\cr \mu^c \cr} \: , \pmatrix {\tau \cr -\nu_\tau \cr \tau^c \cr}
\qquad &\mathbin:& \qquad (1, \: 3^\ast, \: 0) \ , \\
Q_{1,2} = \pmatrix {u\cr d\cr D\cr} \: , \pmatrix {c\cr s\cr S\cr}
\qquad &\mathbin:& \qquad (3, \: 3, \: -\textstyle\frac{1}{3} ) \ , \\
Q_3 = \pmatrix {b\cr -t\cr T\cr} \qquad &\mathbin:& \qquad (3, \: 3^\ast, \:
\textstyle\frac{2}{3} ) \ , \\
d_R, \: s_R, \: b_R \qquad &\mathbin:& \qquad \ -\textstyle\frac{1}{3} \ , \\
u_R, \: c_R, \: t_R \qquad &\mathbin:& \qquad \textstyle\frac{2}{3} \ , \\
D_R, \: S_R \qquad &\mathbin:& \qquad \ -\textstyle\frac{4}{3} \ , \\
T_R \qquad &\mathbin:& \qquad \textstyle \frac{5}{3} \,\end{aligned}$$ where the numbers in brackets correspond to the $SU(3)_C$, $SU(3)_L$ and $U(1)_X$ quantum numbers. For the right-handed fields, we give only the $U(1)$ number. From these, the electric charge can be obtained by $$Q=T_3+\sqrt 3 T_8 + X$$ in our normalization of the charge $X$. In order to cancel anomalies, one generation of quarks has to transform as a $3^*$ under the $SU(3)_L$, and we choose this to be the third generation, but the explicit distinction only makes a difference once a specific structure of the mixing matrix is assumed. The factor $\sqrt 3$ in the charge formula, commonly denoted $\beta$, can in principle be replaced by any other number, and this choice distinguishes the different 331 models. Setting $\beta=-1/\sqrt{3}$, for example, requires a different fermion structure, and with it the introduction of right-handed neutrinos [@Foot:1994ym]. This 331 model with right-handed neutrinos has also received considerable attention [@Montero:1992jk; @Long:1999ij; @Gutierrez:2004sb], while analyses of models with general or at least various different values of $\beta$ have been performed in [@Diaz:2004fs]. In addition, the leptonic sector has been slightly modified in some models in order to generate neutrino masses [@Okamoto:1999cf; @Kitabayashi:2000nq; @Tully:2000kk; @Montero:2001ts; @Cortez:2005cp], and supersymmetric versions of the model have been constructed [@Duong:1993zn; @Montero:2000ng; @Montero:2004uy].
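As a quick consistency check of these assignments (ours, not part of the original derivation), the charge formula $Q=T_3+\sqrt 3 T_8 + X$ can be evaluated directly, using $T_3=\mathrm{diag}(1/2,-1/2,0)$ and $T_8=\mathrm{diag}(1,1,-2)/(2\sqrt 3)$ for a triplet and the negatives of these for an antitriplet:

```python
import math

SQRT3 = math.sqrt(3.0)

def charges(x, conjugate=False):
    """Electric charges Q = T3 + sqrt(3)*T8 + X of an SU(3)_L (anti)triplet."""
    t3 = [0.5, -0.5, 0.0]
    t8 = [1 / (2 * SQRT3), 1 / (2 * SQRT3), -1 / SQRT3]
    if conjugate:  # diagonal generators of the 3* are the negatives
        t3 = [-v for v in t3]
        t8 = [-v for v in t8]
    # +0.0 normalizes a possible -0.0 left over from rounding
    return [round(a + SQRT3 * b + x, 6) + 0.0 for a, b in zip(t3, t8)]

print(charges(-1/3))                  # (u, d, D):     [0.666667, -0.333333, -1.333333]
print(charges(2/3, conjugate=True))   # (b, -t, T):    [-0.333333, 0.666667, 1.666667]
print(charges(0.0, conjugate=True))   # (e, -nu, e^c): [-1.0, 0.0, 1.0]
```

All charges come out as listed in the multiplet assignments above, including the exotic $-4/3$ and $5/3$ quarks.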
Let us next briefly summarize the gauge boson content of the model. The physical sector consists of three neutral gauge bosons, $A,Z$ and $Z'$, which arise as mass eigenstates from the diagonalization of the gauge boson mass matrix and are composed of the gauge eigenstates as $$\begin{aligned}
\label{Zmix} Z &=& + \cos \theta_W \, W_3 - \sin \theta_W \left( \sqrt{3} \tan \theta_W \,
W_8 + \sqrt{1\!-\!3 \tan^2 \theta_W} \, B \right) \ , \\
\label{Zpmix} Z' &=& - \sqrt{1\!-\!3 \tan^2\theta_W} \, W_8 + \sqrt{3} \tan
\theta_W \, B \ , \\
\label{Amix} A &=& + \sin \theta_W \, W_3 + \cos \theta_W \left( \sqrt{3} \tan \theta_W
\, W_8 + \sqrt{1\!-\!3 \tan^2\theta_W} \, B \right)\end{aligned}$$ In these formulae, the ratio between the $U(1)_X$ coupling $g_X$ and the $SU(3)_L$ coupling $g$ has already been expressed through the Weinberg angle $\theta_W$: $$\label{WW}
\frac{g_X^2}{g^2} = \frac{6 \, \sin^2 \theta_W}{1 \!-\!4 \: \sin^2 \theta_W} \,.$$ In addition, there are the SM-like $W^{\pm}$ bosons, as well as another singly charged $Y^{\pm}$ boson, which transmits transitions from the second to third element of the triplets and a doubly charged bilepton $Y^{++}$, which transmits transitions from the first to the third element. We will mainly be concerned with the neutral sector in the following, and the corresponding masses are $$\begin{aligned}
M^2_Z &=& \frac{1}{4} \: \frac{g^2}{\cos^2 \theta_W} \: (v_{\rho}^2+v_{\eta}^2) \, \\
M^2_{Z'} &=& \frac{1}{3} \: g^2 \left( \frac{\cos^2 \theta_W}{1-4 \sin^2 \theta_W} v_{\sigma}^2 {} \right. \nonumber\\
& & {}\left. \quad +\frac{1-4 \sin^2 \theta_W}{4 \cos^2 \theta_W } v_{\rho}^2 + \frac{(1+2 \sin^2 \theta_W)^2}{4 \cos^2 \theta_W (1-4 \sin^2 \theta_W)} v_{\eta}^2 \right) \, \\
M^2_A &=& 0\end{aligned}$$ which indeed leaves one massless photon, a $Z$ of the order of the weak scale, as well as a heavier $Z'$. In principle, there can also be mixing between the $Z$ and the $Z'$, but it is constrained to be small, see, e.g., [@Liu:1993fw]. Finally, the scalar sector of this model has been analyzed in [@Anh:2000bs; @Diaz:2003dk], with the result that there is one light neutral Higgs, corresponding to the SM Higgs, three additional neutral heavy Higgs fields, as well as two singly charged and one doubly charged Higgs. In principle, these Higgs fields should also transmit FCNCs, but these are suppressed by the small Yukawa couplings of the external quarks and leptons in all processes we are studying. Therefore, we will focus on the effects of the additional $Z'$, since these are expected to dominate, and refer the reader to Refs. [@Ng:1992st; @Liu:1993gy; @Liu:1994rx; @GomezDumm:1994tz; @Diaz:2004fs] for a more detailed analysis of the Yukawa coupling terms. Note also that the relation (\[WW\]) between the coupling constants imposes additional constraints on the symmetry breaking scale $v_{\sigma}$ (and, correspondingly, on the $Z'$ mass), in order to avoid the Landau pole that arises at $\sin^2 \theta_W=1/4$. A careful analysis [@LP] shows that this scale can be several TeV. To be explicit, we take $5~\mathrm{TeV}$ as an upper bound, which is close to the number given there for the case in which exotically charged quarks are included, as is the case here.
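Both statements can be verified numerically. The following sketch (ours, with the illustrative value $\sin^2\theta_W \simeq 0.231$) builds the neutral mixing matrix of Eqs. (\[Zmix\])-(\[Amix\]) in the $(W_3, W_8, B)$ gauge basis, checks that it is orthogonal, and evaluates the coupling ratio (\[WW\]), which blows up as $\sin^2\theta_W \to 1/4$:

```python
import numpy as np

def mixing_matrix(s2w):
    """Rows (Z, Z', A) in the (W_3, W_8, B) gauge basis, Eqs. (Zmix)-(Amix)."""
    s, c = np.sqrt(s2w), np.sqrt(1.0 - s2w)
    t = s / c                                 # tan(theta_W)
    r = np.sqrt(1.0 - 3.0 * t**2)
    return np.array([
        [c,   -np.sqrt(3.0) * s * t, -s * r],             # Z
        [0.0, -r,                     np.sqrt(3.0) * t],  # Z'
        [s,    np.sqrt(3.0) * c * t,  c * r],             # A
    ])

def coupling_ratio(s2w):
    """g_X^2 / g^2 from Eq. (WW); diverges as s2w -> 1/4 (Landau pole)."""
    return 6.0 * s2w / (1.0 - 4.0 * s2w)

s2w = 0.231                                   # illustrative sin^2(theta_W)
M = mixing_matrix(s2w)
print(np.allclose(M @ M.T, np.eye(3)))        # True: the rotation is orthogonal
print(round(coupling_ratio(s2w), 1))          # ~18.2
print(round(coupling_ratio(0.249), 1))        # ~373.5, exploding near s2w = 1/4
```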
The fact that the third quark family transforms differently under the $SU(3)_L$ leads to a flavor dependent $Z'$ coupling, as shown in Table \[TABLEFermions\], where we have collected the neutral quark - gauge boson vertices in the weak eigenstate basis, writing $s_W \equiv \sin \theta_W$ and $c_W \equiv \cos \theta_W.$ In addition, we give also the coupling of the $Z'$ to leptons, which will also be required later on. This table is inspired by the similar table given in [@Perez:2004jc], which is, however formulated in terms of vector and axial vector couplings. The complete Lagrangian for the neutral currents, given in terms of these couplings, then reads: $$\begin{aligned}
\label{LFergesgamma}
{\cal L}_{\rm Fermion}^{\rm NC}= & & i e \sum_f Q_f (\overline{f} \gamma_\mu f) A^\mu \nonumber\\
& &+ i \sum_f \left( \overline{f} \gamma_\mu (g_{l.h.}^{f Z} \gamma_L + g_{r.h.}^{f Z} \gamma_R) f Z^\mu + \overline{f} \gamma_\mu (g_{l.h.}^{f Z'} \gamma_L + g_{r.h.}^{f Z'} \gamma_R) f {Z'}^\mu \right) \,,\end{aligned}$$ with $\gamma_{L/R}=\frac{1}{2}(1\mp\gamma_5)$. Note that the lepton couplings are suppressed by a factor of $\sqrt{1-4s^2_W}$, while the quark vertices are enhanced, since this factor appears there in the denominator. The $Z'$ of the minimal 331 model therefore has a somewhat leptophobic nature, which will become apparent in our numerical analysis.
The difference between the first two and the third generation induces FCNCs transmitted by the $Z'$ boson at tree level. The structure of these couplings becomes apparent when the couplings of all quarks are collected into one universal neutral current, from which unitary mixing transformations drop out, as in the case of the SM neutral current. One additional term then remains, containing only third generation quarks and describing the difference of the couplings between the third and the first two generations. Transforming these left-over terms to the mass eigenstate basis yields a flavor changing interaction of the form $$\label{FCNC}
\mathcal{L}_{FCNC}=(g^{b,Z'}_{l.h.}-g^{d,Z'}_{l.h.})[\overline{u}\gamma_\mu\gamma_L
U_L^\dagger\pmatrix{0&&\cr&0&\cr&&1}U_Lu+\overline{d}\gamma_\mu\gamma_L
\tilde V_L^\dagger\pmatrix{0&&\cr&0&\cr&&1}\tilde V_Ld] {Z'}^{\mu}\,.$$ The matrices $U_L$ and $\tilde V_L$ diagonalize the up- and down-type Yukawa couplings, respectively, and therefore obey $$U_L^{\dagger} \tilde V_L = V_{CKM}.
\label{CKM}$$ We have added the tilde to distinguish between the SM CKM matrix and the mixing matrix for the down type quarks and will omit the subscript $L$ in what follows.
The charged current vertices in this basis are then $$\label{CCWvert}
J_{W^+}^\mu=\overline{u}\gamma^\mu\gamma_L U_L^{\dagger} \tilde V d =\overline{u}\gamma^\mu\gamma_LV_{CKM}d$$ $$\begin{aligned}
\label{CCvert}
J_{Y^+}^\mu&=&\overline{d}\gamma^\mu\gamma_L \tilde V^\dagger
\pmatrix{1&0\cr0&1\cr0&0}D+\overline{T}\gamma^\mu\gamma_L\pmatrix{0&0&1}
U_Lu\nonumber\\
J_{Y^{++}}^\mu&=&\overline{u}\gamma^\mu\gamma_LU_L^\dagger
\pmatrix{1&0\cr0&1\cr0&0}D-\overline{T}\gamma^\mu\gamma_L\pmatrix{0&0&1}\tilde V d \, .
\label{eq:qcc}\end{aligned}$$ The corresponding charged currents in the leptonic sector are given as Feynman Rules in App. \[sec:FR\], where we also give the explicit Feynman Rules for the FCNC vertices. Following [@Liu:1994rx], we show these couplings in a basis in which the heavy $D$ and $S$ quarks are mass as well as gauge eigenstates, which explains the absence of an explicit mixing matrix for these heavy quarks. To simplify the notation, we have combined them into a doublet, denoted simply as $D$ in the above formulae, and put the heavy $T$ into a separate singlet.
In contrast, the left-handed part of the neutral current coupling to the $Z$ boson is given by $$\mathcal{L}_Z = \frac{g}{\cos \theta_W} (T_3- Q_f \sin^2 \theta_W) \bar q_L \gamma^{\mu} q_L Z_{\mu}\,,$$ as in the SM, and does not discriminate between generations, so that these vertices remain flavor conserving. To find a sensible parameterization for the matrix $\tilde V$, we should first count the number of additional parameters appearing in this matrix. Looking at all the possible interaction terms, one finds that, after the phase transformations of the up- and down-type quarks have been used to simplify the CKM matrix, there are three more possible phases that arise from transformations of the $D,S,T$ quarks, as seen in (\[CCvert\]), which leaves one with six additional parameters, namely three mixing angles and three phases. However, from (\[FCNC\]) it is obvious that only the $\tilde V_{3j}$ elements are required when calculating FCNCs, and it is possible to find a parameterization that further reduces the number of parameters appearing there. It reads $$\begin{aligned}
\label{eq:param}
\tilde V&=&\pmatrix{\tilde V_{1d} & \tilde V_{1s} & \tilde V_{1b} \cr \tilde V_{2d} & \tilde V_{2s} & \tilde V_{2b} \cr \tilde V_{3d} & \tilde V_{3s} & \tilde V_{3b}} \\
&=& \pmatrix{c_{12} c_{13} & s_{12} c_{23} e^{i \delta_3}-c_{12} s_{13} s_{23} e^{i(\delta_1- \delta_2)} & c_{12} c_{23} s_{13} e^{i \delta_1} + s_{12} s_{23} e^{i(\delta_2 + \delta_3)} \cr
-c_{13} s_{12} e^{-i \delta_3} & c_{12} c_{23} + s_{12} s_{13} s_{23} e^{i(\delta_1- \delta_2 - \delta_3)} & -s_{12} s_{13} c_{23} e^{i(\delta_1- \delta_3)} + c_{12} s_{23} e^{i \delta_2} \cr
-s_{13} e^{-i \delta_1} & -c_{13} s_{23} e^{-i \delta_2} & c_{13} c_{23} } \nonumber\ ,\end{aligned}$$ where only two additional CP violating phases, $\delta_1$ and $\delta_2$, appear in the third row; they are responsible for the additional CP violating effects to be discussed below. Note that these CP violating phases have been neglected in all previous analyses of FCNCs in 331 models. Note also that the mixing angle $\theta_{12}$ does not appear in the relevant matrix elements. In choosing such a parameterization, one has to be careful to choose one that can actually be achieved by rotating the heavy $D$, $S$ and $T$ quarks; a general unitary matrix with the correct number of parameters may not necessarily be allowed. However, we have checked that the parameterization (\[eq:param\]) is. A similar parameterization, sharing several features but ignoring weak phases, can be found in [@Liu:1994rx].
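As a cross-check (ours, not from the original analysis), $\tilde V$ can be generated as a product of elementary complex rotations $R_{12}(\theta_{12},\delta_3)\,R_{13}(\theta_{13},\delta_1)\,R_{23}(\theta_{23},\delta_2)$, which is unitary by construction and reproduces, in particular, the third row of (\[eq:param\]), the only row entering the FCNC vertices:

```python
import numpy as np

def rot(i, j, theta, delta):
    """Elementary complex rotation in the (i, j) plane of a 3x3 matrix."""
    R = np.eye(3, dtype=complex)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = np.sin(theta) * np.exp(1j * delta)
    R[j, i] = -np.sin(theta) * np.exp(-1j * delta)
    return R

def v_tilde(t12, t13, t23, d1, d2, d3):
    """tilde V = R12(t12, d3) @ R13(t13, d1) @ R23(t23, d2), unitary by construction."""
    return rot(0, 1, t12, d3) @ rot(0, 2, t13, d1) @ rot(1, 2, t23, d2)

rng = np.random.default_rng(7)
t12, t13, t23, d1, d2, d3 = rng.uniform(0.0, 2.0 * np.pi, size=6)
V = v_tilde(t12, t13, t23, d1, d2, d3)

assert np.allclose(V.conj().T @ V, np.eye(3))                 # unitary
third_row = [-np.sin(t13) * np.exp(-1j * d1),                 # tilde V_3d
             -np.cos(t13) * np.sin(t23) * np.exp(-1j * d2),   # tilde V_3s
              np.cos(t13) * np.cos(t23)]                      # tilde V_3b
assert np.allclose(V[2], third_row)
print("unitary; third row matches (eq:param)")
```

Note that only $\theta_{13}$, $\theta_{23}$, $\delta_1$ and $\delta_2$ enter the third row, which makes explicit that $\theta_{12}$ (and $\delta_3$) drop out of the FCNC couplings.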
Let us finally also comment on the corresponding vertices in the up-type sector of the model. In this case, there are no further phase transformations that can be performed, so that the matrix $U_L$ can be just any arbitrary unitary matrix with, correspondingly, nine parameters, i.e. three angles and six phases, subject to the constraints from (\[CKM\]). Additionally, the observables associated with $D$ mixing and decay are afflicted with rather large uncertainties coming from long distance QCD effects. Therefore, we will not investigate these quantities any further in the course of this work.
Fermion $Q_f$ $g^{f,Z}_{l.h.}$ $g^{f,Z}_{r.h.}$ $g^{f,Z'}_{l.h.}$ $g^{f,Z'}_{r.h.}$
--------- ---------------- -------------------------------- ---------------------------- ------------------------------------------------------- ---------------------------------------------------
$l^-$ $-1$ $-\frac{g (1-2s^2_W)}{2 c_W}$ $\frac{g s_W^2}{c_W}$ $\frac{g \sqrt{1-4s^2_W}}{2 \sqrt{3} c_W}$ $\frac{g \sqrt{1-4s^2_W}}{\sqrt{3} c_W}$
$\nu_l$ $0$ $\frac{g}{2 c_W}$ $0$ $\frac{g \sqrt{1-4s^2_W}}{2\sqrt{3} c_W}$ $0$
$u,c$ $+\frac{2}{3}$ $\frac{g (3-4s^2_W)}{6 c_W}$ $-\frac{2 g s_W^2}{3 c_W}$ $-\frac{g (1-2s^2_W)}{2\sqrt{3}c_W \sqrt{1-4s^2_W}}$ $\frac{2 g s^2_W}{\sqrt{3}c_W\sqrt{1-4s^2_W}}$
$d,s$ $-\frac{1}{3}$ $ -\frac{g (3-2s^2_W)}{6 c_W}$ $\frac{g s_W^2}{3 c_W}$ $ -\frac{g (1-2 s_W^2)}{2\sqrt{3}c_W\sqrt{1-4s^2_W}}$ $-\frac{g s_W^2}{\sqrt{3}c_W\sqrt{1-4s^2_W}}$
$D,S$ $-\frac{4}{3}$ $\frac{4 g s^2_W}{3 c_W}$ $\frac{4 g s^2_W}{3 c_W}$ $\frac{g (1-5s^2_W)}{\sqrt{3}c_W\sqrt{1-4s^2_W}}$ $-\frac{4 g s_W^2}{\sqrt{3} c_W \sqrt{1-4s^2_W}}$
$b$ $-\frac{1}{3}$ $-\frac{g (3-2 s^2_W)}{6 c_W}$ $\frac{g s_W^2}{3 c_W}$ $\frac{g}{2\sqrt{3}c_W \sqrt{1-4s^2_W}}$ $-\frac{g s^2_W}{\sqrt{3}c_W\sqrt{1-4s^2_W}}$
$t$ $+\frac{2}{3}$ $\frac{g (3-4s^2_W)}{6 c_W}$ $-\frac{2 g s_W^2}{3 c_W}$ $\frac{g}{2\sqrt{3}c_W\sqrt{1-4s^2_W}}$ $\frac{2 g s^2_W}{\sqrt{3}c_W \sqrt{1-4s^2_W} }$
$T$ $+\frac{5}{3}$ $-\frac{5 g s^2_W}{3 c_W}$ $-\frac{5 g s^2_W}{3 c_W}$ $-\frac{g (1-6 s^2_W)}{\sqrt{3}c_W\sqrt{1-4s^2_W}}$ $\frac{5 g s_W^2}{\sqrt{3} c_W \sqrt{1-4s^2_W}}$
: \[TABLEFermions\] List of couplings for the neutral currents in the minimal $331$ model. In the corresponding Feynman Rules, an additional factor $i$ will appear. We abbreviate $s_W \equiv \sin \theta_W$ and $c_W \equiv \cos \theta_W.$
Formulae for Observables {#sec:obs}
========================
In this section, we will collect the theoretical expressions for all observables relevant to our analysis. In particular, we give the $Z'$ contributions that modify the SM amplitudes. These will be investigated numerically in Sect. \[sec:numerics\].
Modifications in Meson Mixing Amplitudes
----------------------------------------
We will first be concerned with observables related to $B_{d/s}^0 - \bar B_{d/s}^0$ and $K^0 - \bar K^0$ mixing. These are the mass differences $\Delta M_K$, $\Delta M_d$ and $\Delta M_s$, as well as the CP violating quantities $\epsilon_K$, $A^{mix}_{CP}(B_d^0 \to J/\psi K_S)$ and $A^{mix}_{CP}(B_s^0 \to J/\psi \phi)$. In all cases, we will concentrate on the contribution from the heavy $Z'$ bosons, while the heavier charged gauge bosons appear only at the one loop level. They can be probed, for example, in the inclusive decay $b \to s \gamma$, where the tree level terms remain absent [@Agrawal:1995vp], or similarly through decays such as $Z \to b \bar b$ [@Perez:2004jc; @Gonzalez-Sprinberg:2005zd]. On the other hand, there are contributions to muon decay from these heavy charged gauge bosons. Since the coupling of these heavy bosons is exactly the same as the $W^\pm$ coupling to the leptons, this new piece can just be absorbed into a redefinition of the coupling constant as follows: $G_F = G_F^{\mu}/(1+(M_W/M_Y)^2)$, where $G_F^{\mu}$ is the coupling constant measured in muon decay, while $G_F$ is the “true” coupling, obeying $G_F/\sqrt{2}=g^2/(8 M_W^2)$, with $g$ the $SU(3)_L$ gauge coupling. To reduce the number of parameters appearing, we will assume that both the $Y^\pm$ and $Z'$ masses are given entirely by the contributions stemming from the largest VEV, and express the $Y^\pm$ mass through $M_{Y^\pm}^2=3 (1-4 \sin^2 \theta_W)/(4 \cos^2 \theta_W)\, M_{Z'}^2$. This procedure leads, for example, to $G_F/G_F^{\mu}=0.92$ for $M_{Z'}=1~\mathrm{TeV}$. Note that these effects appear only in the lepton sector, since here the third particle of the triplet is again a SM particle. In the quark sector, however, there are no new tree-level contributions from the new charged gauge bosons, since these always couple to a heavy quark. Let us finally quote [@Ng:1992st], where a lower bound of $M_{Y^\pm}>270 {\, {\rm GeV}}$ is found from muon decay.
Since, in our approximation of the $Y^\pm$ mass, this charged gauge boson is roughly a factor of 3.7 lighter than the $Z'$, we shall also use $1 \: \mathrm{TeV}$ as a lower bound for $M_{Z'}$ in our analysis. A similar bound on $M_{Y^\pm}$ has been obtained from electroweak precision tests in [@Long:1999yv].
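For orientation, these numbers can be reproduced in a few lines (a sketch under our illustrative inputs $\sin^2\theta_W \simeq 0.231$ and $M_W \simeq 80.4$ GeV, using the tree-level relation $M_{Y^\pm}^2 = 3(1-4\sin^2\theta_W)/(4\cos^2\theta_W)\,M_{Z'}^2$ that follows from keeping only $v_\sigma$):

```python
import math

S2W, M_W = 0.231, 80.4   # illustrative inputs: sin^2(theta_W), M_W in GeV

def m_y(m_zprime):
    """Charged bilepton mass from M_Y^2 = 3 (1 - 4 s^2) / (4 c^2) * M_Z'^2."""
    return m_zprime * math.sqrt(3.0 * (1.0 - 4.0 * S2W) / (4.0 * (1.0 - S2W)))

def gf_ratio(m_zprime):
    """G_F / G_F^mu = 1 / (1 + (M_W / M_Y)^2) from the extra Y^+- tree graph."""
    return 1.0 / (1.0 + (M_W / m_y(m_zprime)) ** 2)

print(round(m_y(1000.0)))          # ~272 GeV, just above the bound M_Y > 270 GeV
print(round(gf_ratio(1000.0), 2))  # 0.92, the value quoted in the text
```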
From the FCNC Lagrangian and the neutral current couplings given above, we find the tree-level effective Hamiltonian for $\Delta F=2$ transitions, where $F=S$: $$\label{Heff}
H^{eff}_{\Delta S=2}= \frac{G_F}{\sqrt 2} \frac{1}{3} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W} \left( \frac{M_Z}{M_{Z'}} \right)^2 (\tilde V_{31}
\tilde V_{32}^*)^2 (\bar s d)_{V-A} (\bar s d)_{V-A} \, ,$$ while, in the $F=B$ case, the vertex factors are replaced by $\tilde V_{3q} \tilde V_{33}^*$ with $q=1,2$ for down and strange quarks, respectively. Since the $Z'$-induced FCNCs are purely left-handed, no new operators are generated; in general, however, there are new sources of flavor and CP violation in the matrix $\tilde V$, so that the model goes beyond the usual minimal flavor violating (MFV) scenarios (see [@Buras:2003jf] for a review and a discussion of the several definitions of MFV in use).
Next, we need to take into account the different nature of $B$ and $K$ mixings: While $B_{d/s}^0 - \bar B_{d/s}^0$ mixing proceeds through the absolute value of the corresponding matrix elements, $K^0 - \bar K^0$ mixing is described by the real part only (a distinction that has been missed in the literature; note that taking the absolute value is not correct even in the case of vanishing CP violation in $\tilde V$, because of the phase in $V_{td}$). Therefore, we have $$\begin{aligned}
\Delta M_K^{Z'} &=& \frac{G_F}{\sqrt 2} \frac{8}{9} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W} \left( \frac{M_Z}{M_{Z'}} \right)^2
\mathrm{Re} [ (\tilde V_{31} \tilde V_{32}^*)^2] \hat B_{K}F_{K}^2 m_{K} \,, \\\label{DeltaMq}
\Delta M_{q}^{331} &=&\left| \Delta M_q^{SM} e^{-i 2 \beta}+ \frac{G_F}{\sqrt 2} \frac{8}{9} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W}
\left( \frac{M_Z}{M_{Z'}} \right)^2 (\tilde V_{3q} \tilde V_{33}^*)^2 \hat B_{B_q}F_{B_q}^2 m_{B_q} \right| \,,\end{aligned}$$ where we have given only the $Z'$ contribution in the case of $\Delta M_K$, but the complete expression containing the SM as well as the new contribution in the case of $\Delta M_{d/s}$. The corresponding SM contributions are (see [@Buras:2005xt] for a review) $$\begin{aligned}
\Delta M_K^{SM} &=& \frac{G_F^2}{6 \pi^2} \hat B_{K}F_{K}^2 m_{K} M_W^2 \mathrm{Re} \left[ \eta_1 S_0(x_c) (V_{cs}^* V_{cd})^2+\eta_2 S_0(x_t)
(V_{ts}^* V_{td})^2 + \right.\\ \nonumber && \left.2 \eta_3 V_{cs}^* V_{cd} V_{ts}^* V_{td} S_0(x_c,x_t)\right] \\
\Delta M_{q}^{SM} &=& \frac{G_F^2}{6 \pi^2} \eta_B \hat B_{B_q}F_{B_q}^2 m_{B_q} M_W^2 S_0(x_t) |V_{tq}|^2\label{DeltaMqSM}\end{aligned}$$ where, in the SM prediction, $\eta_1=1.32\pm0.32$, $\eta_2=0.57\pm0.01$, $\eta_3=0.47\pm0.05$ and $\eta_B=0.55 \pm 0.01$ are the NLO QCD corrections, and the $S_0(x_i)$ are the leading order Inami-Lim functions describing the charm and top box diagrams.
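As an illustration of (\[DeltaMqSM\]), the sketch below estimates the SM value of $\Delta M_s$; the inputs $F_{B_s}\sqrt{\hat B_{B_s}} \simeq 0.262$ GeV, $S_0(x_t) \simeq 2.32$ and $|V_{ts}| \simeq 0.041$ are our illustrative choices, not values fixed by the text:

```python
import math

G_F   = 1.16637e-5    # GeV^-2
M_W   = 80.4          # GeV
ETA_B = 0.55          # NLO QCD correction quoted above
HBAR  = 6.582e-25     # GeV*s, converts GeV to inverse seconds

def delta_m_q_sm(m_bq, f_sqrt_b, s0_xt, vtq_sq):
    """Eq. (DeltaMqSM): SM B_q mass difference in GeV."""
    return (G_F**2 / (6.0 * math.pi**2)) * ETA_B * f_sqrt_b**2 \
        * m_bq * M_W**2 * s0_xt * vtq_sq

# Illustrative B_s inputs: m_Bs = 5.37 GeV, F*sqrt(B) = 0.262 GeV, |Vts| = 0.041
dms = delta_m_q_sm(5.37, 0.262, 2.32, 0.041**2)
print(round(dms / HBAR * 1e-12, 1))   # ~17.8 ps^-1, close to the measured value
```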
The contribution to the kaon CP violating parameter $\epsilon_K$ can also easily be calculated from the effective Hamiltonian (\[Heff\]). It is $$\epsilon_K^{Z'}=\exp{(i \pi/4)} \frac{G_F}{9} \frac{2~ M_K}{\Delta M_K} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W} \mathrm{Im} \left[ (\tilde V_{32}^* \tilde V_{31})^2 \right] \hat B_{K}F_{K}^2 \, ,$$ where we use the experimental value for $\Delta M_K$ in our numerical analysis. Note that the new contributions to both $\epsilon_K$ and $\Delta M_K$ are simply added to the SM contributions, i.e. there are no interference terms, while this is true in the case of $\Delta M_{d/s}$ only if the new contribution comes with the same phase as the SM contributions, as can be seen from (\[DeltaMq\]). Let us also here give the SM expression, reading $$\label{epsilonKSM}
\epsilon_K^{SM}=e^{i \frac{\pi}{4}} \frac{G_F^2}{12\pi^2} \frac{M_K}{\sqrt{2}\Delta M_K} M_W^2 [{\lambda_c^{\ast}}^2 \eta_1 S_0(x_c) + {\lambda_t^{\ast}}^2 \eta_2 S_0(x_t) + 2 \lambda_c^\ast \lambda_t^\ast \eta_3 S_0(x_c,x_t)] \hat B_{K}F_{K}^2 \,.$$
Next, before we turn to the analysis of CP violating $B$ decay asymmetries, let us give the contributions that modify the $B_{d}^0 - \bar B_{d}^0$ mixing phase, which is equal to $2 \beta$ in the SM, where $\beta$ is one of the angles of the unitarity triangle. Including the additional contributions from the $Z'$, we find $$\begin{aligned}
\label{Phidcorrection}
\Phi_d^{331}&=&-\arg \left( M_{12}^{SM}+M_{12}^{Z'} \right) \\ \nonumber
&=&-\arg \left( \frac{G_F^2}{6 \pi^2} \eta_B M_W^2 S_0(x_t) |V_{td}|^2 e^{-i 2 \beta}+
\frac{G_F}{\sqrt 2} \frac{8}{9} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W} (\tilde V_{33}^* \tilde V_{31})^2 \left( \frac{M_Z}{M_{Z'}} \right)^2 \right) \,.\end{aligned}
$$ In addition, there are also new contributions to decay amplitudes, in particular also to the amplitude of the decay $B \to J/\psi K_S$. In the SM, this decay proceeds through a tree diagram topology with no additional CP violating phase, so that the mixing induced CP asymmetry is given by $\sin 2\beta$. In a general model, $\beta$ is replaced by a value $\beta_{eff}$, which is given as $2 \beta_{eff}=\Phi_d+\Phi_{decay}$. Unfortunately, the $Z'$ also has right-handed couplings to the charm quark pair, so that we cannot simply add the coefficient of the new tree diagram to the SM contribution. We have therefore estimated the projection of the new contribution onto the left-handed SM operator, finding it to be entirely negligible, and we consider it a good approximation to omit these terms. Analogous modifications occur in the asymmetry of $B_s \to J/\psi \phi$, which in the SM is given by $\sin 2 \beta_s$ with $\beta_s=-2^{\circ}$. Including the new contribution, $$\Phi_s^{331}=-\arg \left( \frac{G_F^2}{6 \pi^2} \eta_B M_W^2 S_0(x_t) |V_{ts}|^2 e^{-i 2 \beta_s}+
\frac{G_F}{\sqrt 2} \frac{8}{9} \frac{\cos^4 \theta_W}{1-4 \sin^2 \theta_W} (\tilde V_{33}^* \tilde V_{32})^2 \left( \frac{M_Z}{M_{Z'}} \right)^2 \right) \,.
$$
We note, finally, that the observables discussed in this subsection are, in principle, sufficient to determine all the parameters appearing in our parameterization of the mixing matrix (\[eq:param\]). Also, the experimental situation for these observables will be summarized when we perform our numerical analysis in Section \[sec:numerics\].
Modification in Rare Decay Amplitudes
-------------------------------------
The observable quantities listed so far are all related to meson mixing, and have also all been measured (with the exception of $\beta_s$). Therefore, we will use them in the next section to constrain the parameter space of the model. Then, we will be interested in the implications of the bounds obtained in that analysis on several rare decay amplitudes. Most of the corresponding branching fractions have not yet been measured, but the measurements will tell us quite a lot about the new physics contributions, since the theoretical expressions for these decays are extremely clean. The rare decays which we will study are $K^+ \to \pi^+ \nu \bar \nu$, $K_L \to \pi^0 \nu \bar \nu$, $B_{d/s} \to \mu^+ \mu^-$ and $K_L \to \pi^0 l^+ l^-$, where $l$ can be a muon or an electron.
Let us then begin this subsection with some general remarks: The rare decays in question are governed by electroweak and photon penguins as well as leptonic box diagram contributions. These are described in the Standard Model by the corresponding Inami-Lim functions $C_0(x_t)$, $D_0(x_t)$ and $B_0(x_t)$. In the expressions for decay amplitudes, these always appear in the gauge invariant combinations $X_0(x_t)$, $Y_0(x_t)$ and $Z_0(x_t)$ [@Buchalla:1990qz], defined as: $$C_0(x_t)-4B_0(x_t)=X_0(x_t)$$ $$C_0(x_t)-B_0(x_t)=Y_0(x_t)$$ $$C_0(x_t)+{1\over4}D_0(x_t)=Z_0(x_t).$$ In models of the minimal flavor violating type, the new contributions to decay amplitudes can often be absorbed into a universal redefinition of these functions. On the other hand, these functions will be process-dependent in models that go beyond minimal flavor violation, as explicitly discussed for the Littlest Higgs model with T-parity in [@Blanke:2006eb]. We will see later that the situation is even slightly more complicated in the minimal 331 model. In the following, we will, whenever possible, give the appropriate redefinition of $X(x_t)$, $Y(x_t)$ and $Z(x_t)$ (functions without the subscript 0 always refer to the NLO functions, while those carrying it are the LO ones) as $$\begin{aligned}
X'_i(x_t)&=&X^{SM}(x_t)+\Delta X_i\,, \\
Y'_i(x_t)&=&Y^{SM}(x_t)+\Delta Y_i\,,\\
Z'_i(x_t)&=&Z^{SM}(x_t)+\Delta Z_i\,.\end{aligned}$$
We begin with the cleanest rare decays, i.e. $K\to \pi \nu \bar \nu$, and $B_{d/s} \to \mu^+ \mu^-$. For the decay $K\to \pi \nu \bar \nu$ there exists a charged and a neutral counterpart, $\kpn$ and $\klpn$ [@Buras:2004uu]. Both decays are theoretically extremely clean, since the leading QCD matrix element can be extracted from the well measured tree-level decay $K^+\to\pi^0 e^+\nu$ and additional long-distance QCD effects are rather well under control [@Isidori:2005xm]. The effective Hamiltonian consists of contributions from both charm and top-loops, and is then given by: $$H^{\rm SM}_{\rm eff}={G_{\rm F} \over{\sqrt 2}}{\alpha\over 2\pi
\sin^2\theta_W}
\sum_{l=e,\mu,\tau}\left( V^{\ast}_{cs}V_{cd} X^l_{\rm NL}+
V^{\ast}_{ts}V_{td} X(x_t)\right)
(\bar sd)_{V-A}(\bar\nu_l\nu_l)_{V-A} \,.$$ Defining $\lambda_i=V^*_{is}V_{id}$ and collecting the charm contributions in $P_c(X)=0.41\pm0.05$ [@Buras:2006gb; @Isidori:2005xm], the branching fraction for $\klpn$ and $\kpn$ can then be derived as $$\begin{aligned}
\label{bkpnn}
\mbox{BR}(K^+)&\equiv&\mbox{BR}(K^+\to\pi^+\nu\bar\nu) \\ \nonumber
&=&\kappa_+\cdot
\left[\left({\rm Im}\left(\frac{\lambda_t}{\lambda^5}X(x_t)\right)\right)^2+
\left({\rm Re}\left(\frac{\lambda_c}{\lambda}P_c(X)\right)+
{\rm Re}\left(\frac{\lambda_t}{\lambda^5}X(x_t)\right)\right)^2\right],\end{aligned}$$ $$\label{kapp}
\kappa_+=r_{K^+}\frac{3\alpha^2 \mbox{BR}(K^+\to\pi^0 e^+\nu)}{
2\pi^2\sin^4\theta_W}\lambda^8=(5.26\pm 0.06)\cdot 10^{-11}
\left[\frac{\lambda}{0.225}\right]^8\,,$$ and $$\label{bklpn}
\mbox{BR}(K_L) \equiv \mbox{BR}(K_L\to\pi^0\nu\bar\nu)=\kappa_L\cdot
\left({\rm Im} \left(\frac{\lambda_t}{\lambda^5}X(x_t)\right)\right)^2$$ $$\label{kapl2}
\kappa_L=\kappa_+ \frac{r_{K_L}}{r_{K^+}}
\frac{\tau(K_L)}{\tau(K^+)}=
(2.29\pm 0.03)\cdot 10^{-10}\left[\frac{\lambda}{0.225}\right]^8$$ The numbers for $r_{K_L}$ and $r_{K^+}$, describing the isospin breaking effects to the $K_{l3}$ decay, have recently been updated in [@Isidori:2006qy]. Due to the absence of the charm contribution, $\klpn$ is theoretically even cleaner than $\kpn$. Turning now to the contributions from new physics, we find that in both cases the leading term stems from a tree diagram transmitted by the $Z'$ boson. For the effective Hamiltonian, this leads to a new term of the form $$H_{eff}^{Z'}= \sum_{l=e,\mu,\tau} \frac{G_F}{\sqrt{2}} \frac{\tilde V_{32}^* \tilde V_{31}}{3} \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 (\bar s d)_{V-A}
(\bar \nu_l \nu_l)_{V-A}\,.$$ This can be absorbed into the modification of the function $X(x_t)$ as $$\Delta X_{K \pi \nu \nu}=\frac{s_W^2 c_W^2}{\alpha} \frac{2 \pi}{3} \frac{\tilde V_{32}^* \tilde V_{31} }{V_{ts}^* V_{td}} \left( \frac{M_Z}{M_{Z'}} \right)^2\,.$$ We have already written (\[bkpnn\]) and (\[bklpn\]) in such a way that using the thus modified function $X(x_t)$ gives the correct branching ratio in the 331 model.
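To see the size and scaling of this shift, $\Delta X_{K\pi\nu\nu}$ can be evaluated directly; the mixing product and CKM factor used below are hypothetical placeholders of a plausible magnitude, so only the quadratic $1/M_{Z'}^2$ decoupling is a firm statement:

```python
import math, cmath

ALPHA, S2W, M_Z = 1.0 / 127.9, 0.231, 91.19   # illustrative electroweak inputs

def delta_x(vt_prod, ckm_factor, m_zprime):
    """Tree-level Z' shift of the X function for K -> pi nu nu."""
    return (S2W * (1.0 - S2W) / ALPHA) * (2.0 * math.pi / 3.0) \
        * (vt_prod / ckm_factor) * (M_Z / m_zprime) ** 2

# Hypothetical values, for illustration only:
vt  = 2.0e-4 * cmath.exp(1j * 0.5)     # tilde V_32^* tilde V_31
ckm = 3.2e-4 * cmath.exp(-1j * 0.3)    # V_ts^* V_td
print(abs(delta_x(vt, ckm, 1000.0)))                              # O(0.25) here
print(abs(delta_x(vt, ckm, 2000.0) / delta_x(vt, ckm, 1000.0)))   # 0.25: quadratic decoupling
```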
The present experimental situation of these decays can be summarized as follows [@Anisimovsky:2004hr; @Ahn:2006uf]: $$\mbox{BR}(\kpn)=(14.7^{+13.0}_{-8.9})\cdot 10^{-11}, \qquad \mbox{BR}(\klpn)<2.1 \cdot 10^{-7} \quad (90\% \mbox{CL}) \,,$$ while the SM predictions can be quoted as [@Buras:2006gb] $$\mbox{BR}(\kpn)=(8.0 \pm 1.1)\cdot 10^{-11}, \qquad \mbox{BR}(\klpn)=(2.9 \pm 0.4) \cdot 10^{-11} \,.$$
Turning next to $B_{d/s} \to \mu^+ \mu^-$, the SM effective Hamiltonian is given by $$H_{eff}^{B_{d/s}\mu \mu}= -\frac{G_F}{\sqrt{2}}\frac{\alpha}{2 \pi s_W^2} (V^*_{tb}V_{td/s}) Y(x_t) (\bar b q)_{V-A}(\bar \mu \mu)_{V-A}\,,$$ which leads to the following formulae for the branching fractions: $$\label{bblls}
\mbox{BR}(B_q\to \mu^+\mu^-)=
\tau_{B_q} \frac{G_F^2}{\pi} m_{B_q}
\left(\frac{\alpha F_{B_q} m_{\mu}}{4 \pi \sin^2 \theta_W} \right)^2 \sqrt{1-4 \frac{m_{\mu}^2}{m_{B_q}^2}}
|V^\ast_{tb}V_{tq} Y(x_t)|^2$$ Due to the uncertainties in the decay constants, these decays are theoretically slightly less clean than the $K \to \pi \nu \bar \nu$ decays. Similarly to the $K \to \pi \nu \bar \nu$ decays, the new contribution to $B_{d/s} \to \mu^+ \mu^-$ is given by: $$\begin{aligned}
H_{eff}^{Z'}&=& \frac{G_F}{\sqrt{2}} \frac{\tilde V_{33}^* \tilde V_{31/32}}{3} \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 (\bar b q)_{V-A}(\bar \mu \mu)_{V-A}+ \\
& & \frac{G_F}{\sqrt{2}} \frac{2 \tilde V_{33}^* \tilde V_{31/32}}{3} \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 (\bar b q)_{V-A}(\bar \mu \mu)_{V+A}\end{aligned}$$ Since only the axial-vector component in the lepton current contributes to the decay, we can project the $V+A$ contribution onto the $V-A$ one to arrive at $$H_{eff}^{Z'}= -\frac{G_F}{\sqrt{2}} \frac{\tilde V_{33}^* \tilde V_{31/32}}{3} \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 (\bar b q)_{V-A}(\bar \mu \mu)_{V-A}$$ and the modification in $Y(x_t)$ is $$\Delta Y_{B\mu \mu}=\frac{s_W^2 c_W^2}{\alpha} \frac{2 \pi}{3} \frac{\tilde V_{33}^* \tilde V_{31/32} }{V_{tb}^* V_{td/ts}} \left( \frac{M_Z}{M_{Z'}}
\right)^2 \,.$$ Again, (\[bblls\]) is written in such a way that the modification of $Y(x_t)$ leads to the correct result in the 331 model. The experimental bounds on these decays read [@Bmumu] $$\mbox{BR}(B_s\to \mu^+\mu^-)< 1\cdot 10^{-7}\qquad \mbox{BR}(B_d\to \mu^+\mu^-)<3 \cdot 10^{-8} \quad (90\% \mbox{CL}) \,,$$ where the most recent SM predictions are [@Blanke:2006ig] $$\mbox{BR}(B_s\to \mu^+\mu^-)=(3.35\pm0.32)\cdot 10^{-9} \qquad \mbox{BR}(B_d\to \mu^+\mu^-)=(1.03 \pm 0.09) \cdot 10^{-10} \,.$$
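For a rough numerical check of (\[bblls\]), the following sketch evaluates the SM branching ratio; the inputs $\tau_{B_s} \simeq 1.5$ ps, $F_{B_s} \simeq 0.24$ GeV, $|V_{tb}^* V_{ts}| \simeq 0.040$ and $Y(x_t) \simeq 0.98$ are our illustrative assumptions, not values taken from the text:

```python
import math

G_F, ALPHA, S2W = 1.16637e-5, 1.0 / 127.9, 0.231   # GeV^-2, alpha(M_Z), sin^2(theta_W)
HBAR, M_MU = 6.582e-25, 0.10566                    # GeV*s, muon mass in GeV

def br_bq_mumu(tau_ps, m_bq, f_bq, ckm_sq, y_sq):
    """Eq. (bblls) with |V_tb^* V_tq|^2 |Y(x_t)|^2 split into ckm_sq * y_sq."""
    pref  = (tau_ps * 1e-12 / HBAR) * (G_F**2 / math.pi) * m_bq
    axial = (ALPHA * f_bq * M_MU / (4.0 * math.pi * S2W)) ** 2
    phase = math.sqrt(1.0 - 4.0 * M_MU**2 / m_bq**2)
    return pref * axial * phase * ckm_sq * y_sq

print(br_bq_mumu(1.5, 5.37, 0.24, 0.040**2, 0.98**2))   # ~3.8e-9, near the SM value
```

The result lands at a few times $10^{-9}$, in the same ballpark as the SM prediction quoted above.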
Finally, we also give the contributions to the decay $K_L \to \pi^0 e^+ e^-$. In the SM, the short-distance CP violating part of the effective Hamiltonian is given at tree level (of the matrix elements) by: $$H_{eff}^{K\pi ll}=-\frac{G_F}{\sqrt{2}} V_{ts}^* V_{td}(y_{7V} Q_{7V}+y_{7A} Q_{7A}) \,,$$ where $Q_{7V}=(\bar s d)_{V-A} \bar e\gamma^{\mu}e$ and $Q_{7A}=(\bar sd)_{V-A}\bar e\gamma^{\mu} \gamma^5e$ are the contributing vector and axial-vector operators, while the matching conditions of the Wilson coefficients $y_{7V}$ and $y_{7A}$ are $$y_{7V}=\frac{\alpha}{2 \pi} \left( \frac{Y_0(x_t)}{s_W^2} -4 Z_0(x_t)+P_0\right)$$ $$y_{7A}=-\frac{\alpha}{2 \pi} \frac{Y_0(x_t)}{s_W^2}$$ Here we have followed the normalizations of [@Buchalla:2003sj; @Isidori:2004rb], where $P_0=2.89\pm0.06$, and have neglected a small term $P_E$.
In principle, the NP amplitude here is given just as in the case of $K \to \pi \nu \bar \nu$ by a tree-level $Z'$ exchange, but, in this case there is also a right-handed contribution, leading the complete amplitude to be $$H_{eff}^{Z'}=\frac{G_F}{\sqrt{2}} \left( \frac{M_Z c_W}{M_Z'} \right)^2 \left(Q_{7V}+\frac{1}{3}Q_{7A} \right) (\tilde V_{32}^* \tilde V_{31})\,.$$ Instead of absorbing these new contributions into modifications of the Inami Lim Functions, we will here absorb them into the matching conditions of the Wilson coefficients[^2], i.e. $$\begin{aligned}
\Delta y_A &=& -\frac{1}{3} \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 \frac{(\tilde V_{32}^* \tilde V_{31})}{V_{ts}^* V_{td}}\\
\Delta y_V &=& - \left( \frac{M_Z c_W}{M_{Z'}} \right)^2 \frac{(\tilde V_{32}^* \tilde V_{31})}{V_{ts}^* V_{td}}\end{aligned}$$ We refrain from giving the complete formulae for the branching ratios, since these are rather lengthy, and refer the reader to [@Mescia:2006jd] for the explicit expressions, including also the long-distance indirectly CP-violating terms and their interference with the short-distance contributions. Finally, let us also quote here the corresponding SM predictions and current experimental limits. They are [@Mescia:2006jd] $$\mbox{BR}(K_L \to \pi^0 e^+ e^-)=(3.54^{+0.98}_{-0.85}) \cdot 10^{-11} \,, \qquad \mbox{BR}(K_L \to \pi^0 \mu^+ \mu^-)= (1.41^{+0.28}_{-0.26}) \cdot 10^{-11} \,,$$ and [@Alavi-Harati:2003mr; @Alavi-Harati:2000hs] $$\mbox{BR}(K_L \to \pi^0 e^+ e^-)< 28 \cdot 10^{-11} \,, \qquad \mbox{BR}(K_L \to \pi^0 \mu^+ \mu^-)< 38 \cdot 10^{-11} \quad (90\% \mbox{CL}) \,,$$ respectively, where the SM prediction corresponds to positive interference between mixed and direct CP violation, which is favored [@Buchalla:2003sj; @Friot:2004yr].
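As a sanity check of the relative factor of 3 between the two shifts, one can evaluate them numerically; the mixing-matrix and CKM products below are hypothetical placeholders used only to illustrate the structure.

```python
# Hedged sketch of the Z'-induced shifts Delta y_V and Delta y_A quoted above.
# mix = Vtilde_32^* Vtilde_31 and ckm = V_ts^* V_td are placeholder inputs.
M_Z, C_W2 = 91.19, 0.769           # GeV; cos^2(theta_W), assumed values

def delta_y(mix, ckm, m_zprime):
    """Return (Delta y_V, Delta y_A) for a given Z' mass and mixing products."""
    common = (M_Z ** 2 * C_W2 / m_zprime ** 2) * (mix / ckm)
    return -common, -common / 3.0

dyV, dyA = delta_y(mix=1e-5, ckm=3e-4, m_zprime=1000.0)
```

The vector shift is three times the axial-vector one, which is the origin of the stronger effect in $K_L \to \pi^0 e^+ e^-$ discussed in the numerical analysis below.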
Numerical Analysis {#sec:numerics}
==================
General Remarks
---------------
In this section, we analyze numerically the expressions given in the previous section. Before we do so, let us briefly review the framework and give the input we use. The CKM matrix is constructed from the measurements of tree-level dominated decays, namely the experimental values of the Unitarity Triangle (UT) side $R_b$, as determined from the measured values of $|V_{ub}|$ and $|V_{cb}|$, as well as $|V_{us}|$ and the UT angle $\gamma$. As we have seen above, all further constraints may be polluted by new contributions from $Z'$ exchange[^3]. To be specific, the tree-level extraction of $\gamma$ from $B \to D^{(*)} K$ decays leads to $$\gamma=(71 \pm 16)^{\circ}, \qquad \gamma=-(109\pm 16)^{\circ} \,,$$ i.e. there is a two-fold ambiguity in this determination of $\gamma$, with the second solution in contradiction to the SM. This solution is disfavored by the combination of $\cos (2 \beta +\phi_d)$ and the semileptonic asymmetries $A^{d/s}_{SL}$ [@Bona:2006sa]. Therefore, we will work only with the first solution and construct our unitarity triangle from it. The further input values are collected in Table \[tab:input\] [@Blanke:2006eb]. The values of $|V_{ub}|$ and $|V_{cb}|$ are obtained from an average of both inclusive and exclusive determinations. Note that obtaining the “SM predictions” in the following by setting the $X(x_t)$ and $Y(x_t)$ functions to their SM values leads to different predictions for the decay rates than the SM predictions quoted above. This is due to the different CKM factors used: we are working with only the tree-level input parameters, while the earlier SM predictions used all input available in the UT fit.
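The tree-level construction of the UT apex described above can be sketched in a few lines; the numerical inputs below are illustrative averages, not the entries of Table \[tab:input\].

```python
import math

# Hedged sketch of the tree-level CKM construction: R_b from |V_ub|, |V_cb|,
# |V_us|, plus the angle gamma, fix the UT apex (rho-bar, eta-bar).
V_us, V_cb, V_ub = 0.2255, 0.0417, 0.0038   # illustrative input values
gamma = math.radians(71.0)                  # central value of the first solution

# Side R_b of the unitarity triangle
R_b = (1 - V_us ** 2 / 2) * V_ub / (V_us * V_cb)
# Apex coordinates of the unitarity triangle
rho_bar, eta_bar = R_b * math.cos(gamma), R_b * math.sin(gamma)
```

With these inputs one obtains $R_b \approx 0.39$ and an apex near $(\bar\rho,\bar\eta)\approx(0.13,0.37)$, untouched by possible $Z'$ contributions to loop-induced observables.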
We perform the subsequent numerical analysis in two steps:
- In the first step, we consider the observables $\Delta M_K$, $\Delta M_{d/s}$, $\varepsilon_K$ and $\sin 2\beta$. All these quantities are related in some way to $K^0 - \bar K^0$ and $B^0_q - \bar B^0_q$ mixing, and have been measured with significant precision. Therefore, we can use them to constrain the parameter space of the minimal 331 model. In this context, we also study the $B_s^0 - \bar B_s^0$ mixing phase $\beta_s$, which can, in principle, be measured through $A^{\rm mix}_{CP}(B^0_s \to J/\psi \phi)$, but is as yet unknown.
- In the second step, we study the implications of these bounds for several rare decays, in particular the decays $\kpn$, $\klpn$ and $B_{d/s} \to \mu^+\mu^-$. In this context, we are mainly interested in obtaining potential upper bounds for these decays, as well as in finding correlations that would allow an unambiguous test of the model.
Constraints from $\Delta M_K$, $\varepsilon_K$ and $B_q^0 - \bar B_q^0$ Mixing
------------------------------------------------------------------------------
In this subsection, we focus on the bounds on the model that can already be obtained by studying well-measured quantities. In the corresponding theoretical expressions, several parameters always appear in the bounds, i.e. the mass of the $Z'$ boson as well as the corresponding combination of mixing-matrix elements. Therefore, one can pursue two possible analyses: The first possibility, which has been followed repeatedly in the literature [@GomezDumm:1994tz; @Rodriguez:2004mw], is to assume a certain texture of the mixing matrix (in most cases this has been assumed to be of Fritzsch type, while another texture has been used in [@Diaz:2004fs]), which then allows one to obtain bounds on the $Z'$ mass. Several times, this has led to bounds that potentially conflict with the upper bounds obtained from the Landau pole. On the other hand [@Liu:1993gy], one can set $M_{Z'}$ to this upper bound and thereby obtain some information on the size of the corresponding mixing-matrix elements. In order to be able to deal with the most general situation, we prefer not to make use of any specific parameterization of the mixing matrix, but rather follow the second possible approach, in a somewhat more general manner when considering the implications for rare decays. For the moment, we fix the $Z'$ mass to $M_{Z'}=1 ~\mathrm{TeV}$ and $M_{Z'}=5 ~\mathrm{TeV}$ as two representative values, which give us bounds on the real and imaginary parts of $(V_{31} V_{32}^*)^2$ if we consider the bounds from $\Delta M_K$ and $\varepsilon_K$, respectively. On the technical level, we proceed in a manner that is inspired by the analysis [@Blanke:2006eb] of the Littlest Higgs model with T-parity, where the uncertainties in the theoretical input are absorbed into a generously assigned experimental error.
We use, as possible deviations from the central value, $40\%$ for $\Delta M_{d/s}$ as well as for $\varepsilon_K$, $50 \%$ for $\Delta M_K$ and $4^{\circ}$ for $\beta$. These $4^{\circ}$ correspond to an uncertainty of about $8\%$ in $\sin 2 \beta$, as in [@Blanke:2006eb][^4]. A slight modification of $\sin 2 \beta$ would certainly be welcome in view of the small discrepancy between the value of $\sin 2 \beta$ from $B \to J/\psi K_S$ and the one obtained from a UT fit without this input. This discrepancy can be attributed to a small experimental value of $\sin 2 \beta |_{J/\psi K_S}$ or a large value of $|V_{ub}/V_{cb}|$. We also keep the CKM parameters fixed at their central values, since we are mainly interested in the effects that are induced by new physics, not those that arise from parameter variation.
We find (taking $M_{Z'}=5 ~\mathrm{TeV}$ for definiteness; a similar pattern appears for other values) $\mathrm{Re}[(V_{31} V_{32}^*)^2]<9.2 \cdot 10^{-6}$ and $\mathrm{Im}[(V_{31} V_{32}^*)^2]<4.8 \cdot 10^{-8}$, from which we conclude that the imaginary part of this amplitude is much more strongly constrained than the real part. Therefore, to saturate the bounds, we should consider an entirely real or entirely imaginary value of $V_{31} V_{32}^*$. There is then, from $\Delta M_K$, a bound on $|V_{32}|$ which depends on $|V_{31}|$, as shown in Fig. \[V32MK\]. Notice that the elements $|V_{31}|$ and $|V_{32}|$ cannot both be large simultaneously. This is true for both of the chosen values of $M_{Z'}$ that we are showing.
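The logic of this constraint pattern can be phrased as a simple acceptance test. The bounds are the ones quoted above for $M_{Z'}=5~\mathrm{TeV}$; taking them on the magnitudes of the real and imaginary parts is our simplifying assumption for this sketch.

```python
# Sketch of the Delta M_K / epsilon_K constraint pattern for M_Z' = 5 TeV:
# the imaginary part of (V31 V32*)^2 is far more constrained than the real
# part, so the product can be sizable only if (almost) purely real or
# purely imaginary.
RE_BOUND = 9.2e-6   # on |Re[(V31 V32*)^2]|, from Delta M_K
IM_BOUND = 4.8e-8   # on |Im[(V31 V32*)^2]|, from epsilon_K

def allowed(v31_v32):
    """Check whether a complex product V31 V32* passes both mixing bounds."""
    sq = v31_v32 ** 2
    return abs(sq.real) < RE_BOUND and abs(sq.imag) < IM_BOUND
```

A purely real or purely imaginary product of magnitude $3\cdot 10^{-3}$ passes, while the same magnitude at an intermediate phase is excluded by $\varepsilon_K$.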
![\[V32MK\] An upper bound on $|V_{32}|$ coming from $\Delta M_K$ for $M_{Z'}=1~\mathrm{TeV}$(dotted) and $M_{Z'}=5~\mathrm{TeV}$(solid).](V31V32final.eps)
The corresponding bounds from $B^0_q - \bar B^0_q$ mixing are somewhat more subtle to deal with, since the new contributions do not simply add here, so that interference terms are also important. An estimate of the bounds can be obtained by assuming
- That the new contributions and the SM one are directly aligned, in which case the deviation from the SM corresponds directly to the new contribution, or
- That the new contribution is constructed in such a way that it is perpendicular to the SM contributions in the complex plane, i.e. comes with a phase $\beta/\beta_s \pm 90^{\circ}.$
In the first case, the absolute value of the new contribution is minimal, while in the second it is maximal. On the other hand, taking an aligned contribution allows one to circumvent the bound coming from $\sin 2 \beta |_{J/\psi K_S}$, which is much more stringent than the one from $\Delta M_d$. To show the complementarity of the two bounds, we plot, in Fig. \[V33dMBd\], the bound coming from $\Delta M_d$ in the case of aligned contributions and the bound coming from $\sin 2\beta$ in the case of orthogonal contributions. We find that the bound from $\sin 2\beta$ is stronger than the one from $\Delta M_d$, so that a contribution that is aligned with the SM one can be larger.
![\[V33dMBd\]The upper bound on $|V_{33}|$ coming from $\Delta M_d$ assuming that the SM and NP contributions are aligned (red), and the complementary bound from $\sin 2 \beta$ if they are perpendicular (black). Both are given for $M_{Z'}=1~\mathrm{TeV}$ (dotted) and $M_{Z'}=5~\mathrm{TeV}$(solid).](V31V33final.eps)
On the other hand, the mixing phase $\beta_s$ has not been measured, and cannot be used to constrain the combination $V_{32} V_{33}^*$. This has two implications:
- First, we can have, in this case, a “mirror solution” of $\Delta M_s$, in which the new contribution is antiparallel to the SM one, but twice as large. At present, this situation cannot be excluded with the observables we are studying. It is, however, possible that the large absolute value of this new contribution would violate bounds from $b \to s~ \gamma$. The new contributions to $b \to s~ \gamma$ are, however, loop suppressed, not only through the couplings involved, but additionally by heavy propagators, so that we expect the influence to be only marginal.
- There is no strong bound on an “orthogonal contribution”, which means that the phase $\beta_s$, as measured through $A_{CP}^{\rm mix}(B_s^0 \to J/\psi \phi)$, may be rather large. In fact, we find that the present range of $\beta_s$ is entirely unconstrained, since there is an allowed range connecting the mirror solution with the SM-like ones. Clearly, a measurement of this phase would severely constrain the available parameter space.
Finally, we point out that both $\Delta M_s$ and $\Delta M_d$ can be equally well enhanced or suppressed, since the sign of the new contributions can simply be switched by a change of sign in the mixing matrix $\tilde V_L$, so that no preferred behavior of the prediction can be obtained. On the other hand, if the data in either process should indicate an enhancement or suppression, it could always be accommodated within the minimal 331 model.
Implications for Rare Decays
----------------------------
Let us now study the implications of the bounds derived above for the modification of rare decay amplitudes. The strategy of the analysis will be to saturate the bounds by fixing the corresponding combination $(V_{ij} V_{kl})/M^2_{Z'}$, thereby leaving $M_{Z'}$ as the only free variable in the expressions for the rare decays. In this way, we find upper bounds for the rare decays as a function of the mass of the $Z'$ boson. For an earlier study of $\kpn$ in the 331 model, see [@Long:2001bc]. Our analysis goes beyond that one in that we consider not only the tree-level process but also the one-loop SM amplitude and the interference between the two. This is definitely appropriate, since the SM is expected to give the main contribution in most FCNC processes.
Beginning with the rare $K$ decays, we can use the information obtained from the previous section, that the real part of $((V_{ij} V_{kl})/M_{Z'})^2$ is much less constrained than the imaginary part, so that we set: $$\mathrm{Re}[(V_{ij} V_{kl})^2]=(\mathrm{Re}[V_{ij} V_{kl}])^2\,,$$ which effectively amounts to setting the imaginary part to zero. Alternatively, one could set $\mathrm{Re}[V_{ij} V_{kl}]=0$, where then the new contribution is purely imaginary. We will discuss this setup when we look at $\klpn$ in more detail. For $\kpn$, however, we will indeed be concerned only with the purely CP conserving case.
Proceeding in this manner, we find an upper bound on $K^+ \to \pi^+ \bar \nu \nu$ as shown in Fig. \[KplMZpr\]. In addition, we also show the central value of the experimental result. This experimental measurement lies above the SM prediction, but is well compatible within theoretical and experimental uncertainties. We find that only rather low values of the $Z'$ mass can reach this central number.
![\[KplMZpr\] Upper bound on the decay $K^+ \to \pi^+ \bar \nu \nu$ taking into account the constraints from $\Delta M_K$. The SM value is denoted with a dashed line, while the present experimental central value is given by the red line.](KplMZfinal.eps){height="6cm"}
Concerning the decay $K_L \to \pi^0 \nu \bar \nu$, we find it most instructive to show the upper bound that is obtained in the case of a purely CP violating $Z'$ contribution. In this case, the bound on $\mathrm{Im}(V_{31} V_{32})$ is also given by the bound from $\Delta M_K$, and is shown in Fig. \[KLMZpr\]. Again, we find that large enhancements are, in principle, possible, in particular for values of the $Z'$ mass that lie below about $2~\mathrm{TeV}$. Therefore, it is clear that visible signals in both $K \to \pi \bar \nu \nu $ decays can still be expected. In particular, values such as the current experimental central value of $\kpn$ are entirely possible.
We have, in both cases, not shown the possibility of a suppression of both branching fractions.
![\[KLMZpr\] Upper bound on the decay $K_L \to \pi^0 \bar \nu \nu$ taking into account the constraints from $\Delta M_K$ in the case of a purely CP violating $Z'$ contribution. In case of CP conserving $Z'$ couplings, this bound becomes more stringent.](KLMZfinal.eps){height="6cm"}
In addition, as discussed in a slightly different context in [@Buras:2004ub], a measurement of both decays is sufficient to determine both the absolute value and the phase of the unknown quantity $A\equiv (\tilde V_{31} \tilde V_{32})/M_{Z'}^2$, along the lines of Fig. \[KLKP\]. Here, the dashed circles correspond to variations of the phase for various values of $A$, while the colored rays correspond to fixed values of the phase $\delta_{12} \equiv \delta_2-\delta_1$. We show here only a restricted area of the possible branching fractions, but it is clear that a measurement of both decays uniquely fixes all parameters in question.
In this context, it is interesting to see which values of the branching fractions are actually allowed by the bounds coming from $\Delta M_K$ and $\varepsilon_K$. Therefore, we now show again the $\klpn$-$\kpn$ plane in Fig. \[KLKPscat\], with those areas cut out which are ruled out by the respective bounds. Here, the red star corresponds to $M_{Z'}=5~\mathrm{TeV}$, while the blue star shows those values that are allowed for $M_{Z'}=1~\mathrm{TeV}$. Notice that there are, similar to the pattern seen in the Littlest Higgs model with T-parity [@Blanke:2006eb], several allowed branches in this plane. This is due exactly to the effect mentioned above, namely that the bound from $\varepsilon_K$ is stronger than the one from $\Delta M_K$; the branches correspond to those areas where the phase of the new contributions is such that it does not modify $\varepsilon_K$ strongly. This is nicely seen in the comparison of both figures in the $\kpn-\klpn$ plane, and should actually be a general effect of any model. Notice that, due to the leptophobic character of the $Z'$ boson, the possible modifications of both branching fractions are not very large in comparison to the LHT model. Also, this figure nicely demonstrates how the allowed region decreases as the $Z'$ mass is increased.
![\[KLKP\]A projection onto the $\klpn$-$\kpn$ plane. Measuring both branching fractions allows one to unambiguously determine both the phase and the magnitude of the new physics contribution. We vary $A\equiv (\tilde V_{31} \tilde V_{32})/M_{Z'}^2$ in steps of $5 \cdot 10^{-11}$.](KLKPfinal.eps){height="7cm"}
![\[KLKPscat\]A projection onto the $\klpn$-$\kpn$ plane including the upper bounds from $\Delta M_K$ and $\epsilon_K$ for $M_{Z'}=5~\mathrm{TeV}$ (red) and $M_{Z'}=1~\mathrm{TeV}$ (blue).](KplKLscat2final.eps){height="7cm"}
Let us now turn to the decays $B_{d/s}\to \mu^+ \mu^-$. Here, we can use the bounds coming from $\Delta M_d$ and from $\sin 2 \beta$ to obtain an upper bound on the branching ratio $B_{d}\to \mu^+ \mu^-$. The result of this exercise is shown in Fig. \[bdmumu\]. Interestingly, this result makes a suppression of the branching ratio seem much more likely than an enhancement, and, in any case, a strong enhancement of this branching ratio would unambiguously rule out the minimal 331 model. A similar result can be obtained for $B_{s}\to \mu^+ \mu^-$ using the corresponding bounds from $\Delta M_s$, which we have added in Figure \[bdmumu\]. Also in this case, we find that there is not much room for a significant enhancement. Investigating now also the implications for a suppression of these branching fractions, we find that these can be larger, but a very large effect here is also excluded.
Finally, let us comment on the relation between $B_d \to \mu^+ \mu^-$ and $B_s \to \mu^+ \mu^-$ derived in [@Buras:2003td]. Here, one finds: $$\frac{\mathrm{BR}(B_s \to \mu^+ \mu^-)}{\mathrm{BR}(B_d \to \mu^+ \mu^-)}=\frac{\hat B_d}{\hat B_s} \frac{\tau(B_s)}{\tau(B_d)} \frac{\Delta M_s}{\Delta M_d} r$$ This relation has the advantage that the form factors $F_{B_q}$ drop out, so that the uncertainties are reduced significantly. It is valid with $r=1$ in the SM and in any extension with an MFV structure. In our model, however, we can expect significant departures from this relation, i.e. a value of $r$ that is not necessarily unity. Exploring the possible violation of this relation, we are, of course, interested in the range that $r$ can take. For this we scan over the entire allowed parameter range to obtain all possible values of $r$. The result of this investigation is shown in Figure \[BmumuDMq\], where the constraints from $\Delta M_d$, $\Delta M_s$ and $\sin 2 \beta$ are all included. We find that the SM relation can be broken by about $50\%$, with $r$ ranging from about $0.5$ to $2$, while this range seems to be rather independent of the $Z'$ mass. We could have expected these strong modifications, since the mass differences are significantly more sensitive to the new contributions due to the leptophobic structure. As a general conclusion to this section, we can therefore state this: In the minimal 331 model, we expect stronger modifications in any quantity in which leptons are not involved, i.e. in particular in the CP-violating asymmetries measuring $\beta$ and $\beta_s$, as well as in the mass differences.
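The relation above can be inverted to extract $r$ from measured (or predicted) quantities. The sketch below uses illustrative inputs (bag-factor ratio, lifetimes in ps, mass differences in ps$^{-1}$, and the SM branching ratios quoted earlier); none of these numbers is a fit output of this paper.

```python
# Hedged sketch of the r-relation between B_{d/s} -> mu+ mu- and Delta M_{d/s}.
Bd_over_Bs = 0.95                # hat B_d / hat B_s, assumed value
tau_s, tau_d = 1.466, 1.530      # B_s, B_d lifetimes in ps (illustrative)
dMs, dMd = 17.77, 0.507          # mass differences in ps^-1 (illustrative)
br_s, br_d = 3.35e-9, 1.03e-10   # SM branching-ratio predictions quoted above

def r_factor(br_ratio):
    """Solve BR_s/BR_d = (B_d/B_s)(tau_s/tau_d)(dMs/dMd) * r for r."""
    return br_ratio / (Bd_over_Bs * (tau_s / tau_d) * (dMs / dMd))

r_sm = r_factor(br_s / br_d)
```

With SM-like inputs $r$ comes out close to unity, as it must for an MFV structure; in the minimal 331 model the scan described above drives $r$ as far as $0.5$ or $2$.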
![\[bdmumu\]Allowed range for the branching ratio $B_d \to \mu^+ \mu^-$ obtained from $\Delta M_d$ and $\sin 2 \beta$ and for the branching ratio $B_s \to \mu^+ \mu^-$ from $\Delta M_s$ .](Bdtomumufinal.eps "fig:"){height="4.7cm"} ![\[bdmumu\]Allowed range for the branching ratio $B_d \to \mu^+ \mu^-$ obtained from $\Delta M_d$ and $\sin 2 \beta$ and for the branching ratio $B_s \to \mu^+ \mu^-$ from $\Delta M_s$ .](Bstomumufinal.eps "fig:"){height="4.7cm"}
![\[BmumuDMq\] Deviation from unity of the factor $r$ introduced in the text, as depending on $M_{Z'}$. ](rOverMZpfinal.eps){height="5cm"}
We conclude our numerical analysis with the compilation of Table \[Obs\], where we have collected the possible enhancements and suppressions of several observable quantities, scanning the input parameters in a manner similar to the analysis of $r$ above. We observe that the value of $\sin 2 \beta$, as obtained from the combined $K \to \pi \nu \bar \nu$ decays [@Buchalla:1994tr], can receive significant modifications, as can both leptonic decays $K \to \pi^0 l^+ l^-$, which may also be rather strongly modified by the new contributions. Note that in all these cases there are, in particular, very strict lower bounds, valid for the whole $Z'$ mass range, that cannot be circumvented. The stronger enhancement of the $K_L \to \pi^0 e^+ e^-$ branching fraction as compared to that of $K_L \to \pi^0 \mu^+ \mu^-$ reflects the fact that $\Delta y_{7V}$ is larger than $\Delta y_{7A}$ by a factor of 3. Also, a general feature of many models is that the decay $K_L \to \pi^0 e^+ e^-$ is subject to weaker modifications than the $\klpn$ decay, which is clearly not the case in the minimal 331 model. We therefore show the contour in the observable plane of these two decays in Fig. \[KeeKnunu\], which displays this feature rather nicely. Also, this contour allows an immediate test of the 331 model, if both decays are measured. The same is true for the correlation of $K_L \to \pi^0 e^+ e^-$ and $K_L \to \pi^0 \mu^+ \mu^-$, which we add in Fig. \[KeeKmumu\], in the spirit of [@Isidori:2004rb].
![\[KeeKnunu\]Contour in the $K_L \to \pi^0 e^+ e^-$-$\klpn$ plane.](KpieeVsKL.eps){height="6cm"}
![\[KeeKmumu\]Analog to Fig. \[KeeKnunu\] in the $K_L \to \pi^0 e^+ e^-$-$K_L \to \pi^0 \mu^+ \mu^-$ plane. A measurement of any two decays tests the minimal 331 model.](BKpimumuVsBKpiee.eps){height="6cm"}
Conclusions {#sec:conclusions}
===========
We have analyzed in detail the flavor structure of the minimal 331 model, including for the first time explicitly the effects of new CP violating phases, as well as the new data for $\Delta M_s$. This allowed us to analyze a larger set of observable quantities than has been done before. Here, we have concentrated on the contributions from the exchange of the new $Z'$ gauge boson, which mediates FCNC processes at tree level. We have used the experimentally measured quantities $\Delta M_K$, $\varepsilon_K$, $\Delta M_{d/s}$ and $\sin 2 \beta |_{J/\psi K_S}$ to constrain the size of the new mixing-matrix elements, depending on the mass of the $Z'$ boson. We have then used these results to obtain bounds for several very clean rare decay processes, i.e. the decays $\kpn$, $\klpn$, $K_L \to \pi^0 l^+ l^-$ and $B_{d/s}\to \mu^+ \mu^- $. These upper bounds depend on the $Z'$ mass and can be used to exclude the minimal 331 model, or at least certain ranges of the $Z'$ mass. Let us summarize the results of the different steps in our analysis as follows:
- FCNC processes are very well suited to constrain and explore the minimal 331 model, since the new contributions to EWP observables appear only at loop level, while the new FCNC effects appear already at tree level and are thus more significant.
- In the mixing sector of the neutral kaon system, we find that the imaginary part of $(\tilde V_{32}^* \tilde V_{31})^2$ is much more strongly constrained than the real part. Therefore, to saturate these bounds, we can take a purely real or purely imaginary $(\tilde V_{32}^* \tilde V_{31})$.
- Concerning $\Delta M_{d/s}$, we find that modifications of both observables can occur as enhancements or suppressions in equal measure, and that the measurements already significantly constrain the respective mixing-matrix elements. We find, however, that the bounds from $\sin 2 \beta$ are somewhat stronger than those from $\Delta M_d$, depending on the relative phase of the new contributions. Additionally, the phase $\phi_s$, as measured in the mixing-induced asymmetry of $B^0_s \to J/\psi \phi$, can be large, since it is basically unconstrained as of now. At the same time, the new contributions could solve a potential discrepancy between the measured values of $\sin 2 \beta$ and $|V_{ub}|$, in case the corresponding discrepancy persists.
- There are potentially significant modifications of both $K \to \pi \nu \bar \nu$ decays, depending on the phase structure of the new mixing matrix and, of course, the $Z'$ mass. In fact, measuring both decays allows one to unambiguously determine the new phase as well as the absolute value of the combination $\tilde V^*_{32} \tilde V_{31}/M_{Z'}^2$. The present experimental central value for $\mathrm{BR}(\kpn)$ can be reached, but only for rather low values of $M_{Z'}$. Also, we point out that there are two “branches” in the $\klpn-\kpn$ plane, similar to the signature in the Littlest Higgs model with T-parity, but the possible enhancements are significantly smaller than in that model. On the other hand, the signature in $K_L \to \pi^0 e^+ e^-$ is stronger than in the LHT model, in particular in relation to the enhancement of $\klpn$. This difference is due to the fact that vector and axial-vector contributions partially cancel in the $V-A$ combination, to which $\klpn$ is sensitive, while the individually large modification of the vector component affects $K_L \to \pi^0 e^+ e^-$.
- Next, we have then analyzed the impact of the bounds from $\Delta M_{d/s}$ on the decays $B_{d/s}\to \mu^+ \mu^-$. Here, we find that large enhancements seem impossible, while significant suppressions of both branching ratios can be obtained.
- Finally, we have briefly investigated some correlations and relations between several decays that hold in the SM, but are expected to be violated in the minimal 331 model, in particular, if new CP violating phases are present. For example, we find that the relation [@Buras:2003td] between $\Delta M_{d/s}$ and $\mathrm{BR}(B_{d/s}\to \mu^+ \mu^-)$ can be rather strongly violated by up to $50\%$.
- The most general conclusion to draw from this analysis is that, in general, we expect stronger modifications in those observables that do not involve leptonic couplings, since these are suppressed in comparison to the quark couplings. In this context, the phase $\beta_s$, as measured in the mixing-induced asymmetry of $B^0_s \to J/\psi \phi$, becomes extremely interesting.
Finally, we would like to point out that the minimal 331 model is only one example of a model with an additional $Z'$ boson, but it has many features that any such model should share, such as the correspondence of the bounds from $\Delta M_i$ to effects in the rare decays, which will stay the same in any such model, subject to small modifications from different lepton couplings. Here, we would again like to point out that the lepton coupling to the $Z'$ is suppressed in our model by a factor of $\sqrt{1-4 \sin^2 \theta_W}$, so that stronger effects should be expected in a generic model. On the other hand, the illustrated patterns in the rare decay sector remain the same, in particular the possibility of obtaining information on the phase structure and absolute values from measurements of both $K \to \pi \nu \bar \nu $ decays. The same is true for the correlation between $\Delta M_{d/s}$ and $\sin 2 \beta_{d/s}$, implicitly stated in Eqs. (\[DeltaMq\]) and (\[Phidcorrection\]). Therefore, our analysis of the minimal 331 model can also serve as an example analysis of this more general situation.
[**Acknowledgments**]{}\
We would like to thank A.J. Buras for valuable discussions and C. Hagedorn for several valuable comments on the manuscript. F.S. acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG) and from the “Bundesministerium für Bildung und Forschung (BMBF)" under contract 05HT6WOA.
Feynman Rules for Vertices {#sec:FR}
==========================
In this section, we list all the Feynman Rules relevant to our calculation. We define $P_L \equiv \frac{\gamma^{\mu}}{2} (1-\gamma^5)$ and $P_R \equiv \frac{\gamma^{\mu}}{2} (1+\gamma^5)$.
### Quark - $Z'$ - vertices {#quark---z---vertices .unnumbered}
(Feynman diagrams not reproduced here: flavor-changing vertices $Z'_{\mu}\,\bar u_i u_j$ and $Z'_{\mu}\,\bar d_i d_j$, and flavor-diagonal vertices $Z'_{\mu}\,\bar u_i u_i$, $Z'_{\mu}\,\bar d_i d_i$, $Z'_{\mu}\,\bar T T$ and $Z'_{\mu}\,\bar D_i D_i$.)
### Lepton - $Y^\pm$ - vertices {#lepton---ypm---vertices .unnumbered}
(Feynman diagram not reproduced here: vertex $Y_{\mu}\,\bar \nu_l\, l^C$.)
### Lepton - $Y^{\pm\pm}$ - vertices {#lepton---ypmpm---vertices .unnumbered}
(Feynman diagram not reproduced here: vertex $Y_{\mu}\,\bar l\, l^C$.)
[999]{}
P. H. Frampton, Phys. Rev. Lett. [**69**]{} (1992) 2889.
F. Pisano and V. Pleitez, Phys. Rev. D [**46**]{} (1992) 410; R. Foot, O. F. Hernandez, F. Pisano and V. Pleitez, Phys. Rev. D [**47**]{} (1993) 4158. P. Langacker and M. Plumacher, Phys. Rev. D [**62**]{} (2000) 013006.
J. T. Liu and D. Ng, Phys. Rev. D [**50**]{} (1994) 548; D. Gomez Dumm, F. Pisano and V. Pleitez, Mod. Phys. Lett. A [**9**]{} (1994) 1609; J. A. Rodriguez and M. Sher, Phys. Rev. D [**70**]{} (2004) 117702; H. N. Long and T. Inami, Phys. Rev. D [**61**]{} (2000) 075002.
M. Blanke, A. J. Buras, A. Poschenrieder, C. Tarantino, S. Uhlig and A. Weiler, JHEP [**0612**]{} (2006) 003; M. Blanke, A. J. Buras, A. Poschenrieder, S. Recksiegel, C. Tarantino, S. Uhlig and A. Weiler, JHEP [**0701**]{} (2007) 066.
D. Ng, Phys. Rev. D [**49**]{} (1994) 4805; R. Foot, H. N. Long and T. A. Tran, Phys. Rev. D [**50**]{} (1994) 34; H. N. Long, Phys. Rev. D [**53**]{} (1996) 437; H. N. Long, Phys. Rev. D [**54**]{} (1996) 4691.
J. C. Montero, F. Pisano and V. Pleitez, Phys. Rev. D [**47**]{} (1993) 2918; H. N. Long and V. T. Van, J. Phys. G [**25**]{} (1999) 2319.
D. A. Gutierrez, W. A. Ponce and L. A. Sanchez, Eur. Phys. J. C [**46**]{} (2006) 497.
R. A. Diaz, R. Martinez and F. Ochoa, Phys. Rev. D [**72**]{} (2005) 035018; F. Ochoa and R. Martinez, Phys. Rev. D [**72**]{} (2005) 035010; F. Ochoa and R. Martinez, hep-ph/0508082; A. Carcamo, R. Martinez and F. Ochoa, Phys. Rev. D [**73**]{} (2006) 035007.
Y. Okamoto and M. Yasue, Phys. Lett. B [**466**]{} (1999) 267; T. Kitabayashi and M. Yasue, Phys. Rev. D [**63**]{} (2001) 095002; M. B. Tully and G. C. Joshi, Phys. Rev. D [**64**]{} (2001) 011301.
J. C. Montero, C. A. De S. Pires and V. Pleitez, Phys. Rev. D [**65**]{} (2002) 095001; N. V. Cortez and M. D. Tonasse, Phys. Rev. D [**72**]{} (2005) 073005; T. V. Duong and E. Ma, Phys. Lett. B [**316**]{} (1993) 307.
J. C. Montero, V. Pleitez and M. C. Rodriguez, Phys. Rev. D [**65**]{} (2002) 035006.
J. C. Montero, V. Pleitez and M. C. Rodriguez, Phys. Rev. D [**70**]{} (2004) 075004.
J. T. Liu and D. Ng, Z. Phys. C [**62**]{} (1994) 693; N. T. Anh, N. A. Ky and H. N. Long, Int. J. Mod. Phys. A [**16**]{} (2001) 541.
R. A. Diaz, R. Martinez and F. Ochoa, Phys. Rev. D [**69**]{} (2004) 095009; A. G. Dias, R. Martinez and V. Pleitez, Eur. Phys. J. C [**39**]{} (2005) 101.
J. T. Liu, Phys. Rev. D [**50**]{} (1994) 542; J. Agrawal, P. H. Frampton and J. T. Liu, Int. J. Mod. Phys. A [**11**]{} (1996) 2263; M. A. Perez, G. Tavares-Velasco and J. J. Toscano, Phys. Rev. D [**69**]{} (2004) 115004.
G. A. Gonzalez-Sprinberg, R. Martinez and O. Sampayo, Phys. Rev. D [**71**]{} (2005) 115003.
A. J. Buras, Acta Phys. Polon. B [**34**]{} (2003) 5615.
A. J. Buras, hep-ph/0505175.
G. Buchalla, A. J. Buras and M. K. Harlander, Nucl. Phys. B [**349**]{} (1991) 1.
A. J. Buras, F. Schwab and S. Uhlig, hep-ph/0405132. G. Isidori, F. Mescia and C. Smith, Nucl. Phys. B [**718**]{} (2005) 319;
A. J. Buras, M. Gorbahn, U. Haisch and U. Nierste, Phys. Rev. Lett. [**95**]{} (2005) 261805; JHEP [**0611**]{} (2006) 002.
G. Isidori, F. Mescia, P. Paradisi, C. Smith and S. Trine, JHEP [**0608**]{} (2006) 064.
V. V. Anisimovsky [*et al.*]{} \[E949 Collaboration\], Phys. Rev. Lett. [**93**]{} (2004) 031801.
J. K. Ahn [*et al.*]{} \[E391a Collaboration\], Phys. Rev. D [**74**]{} (2006) 051105 \[Erratum-ibid. D [**74**]{} (2006) 079901\].
http://www-cdf.fnal.gov/physics/new/bottom/060316.blessed-bsmumu3/.
M. Blanke, A. J. Buras, D. Guadagnoli and C. Tarantino, JHEP [**0610**]{} (2006) 003.
G. Buchalla, G. D’Ambrosio and G. Isidori, Nucl. Phys. B [**672**]{} (2003) 387. G. Isidori, C. Smith and R. Unterdorfer, Eur. Phys. J. C [**36**]{} (2004) 57. F. Mescia, C. Smith and S. Trine, JHEP [**0608**]{} (2006) 088.
A. Alavi-Harati [*et al.*]{} \[KTeV Collaboration\], Phys. Rev. Lett. [**93**]{} (2004) 021805.
A. Alavi-Harati [*et al.*]{} \[KTeV Collaboration\], Phys. Rev. Lett. [**84**]{} (2000) 5279.
M. Bona [*et al.*]{} \[UTfit Collaboration\], Phys. Rev. Lett. [**97**]{} (2006) 151803.
W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G. [**33**]{} (2006) 1.
E. Barberio [*et al.*]{} \[Heavy Flavor Averaging Group (HFAG)\], hep-ex/0603003. For continuous updates, see [http://www.slac.stanford.edu/xorg/hfag/]{}
S. Friot, D. Greynat and E. De Rafael, Phys. Lett. B [**595**]{} (2004) 301.
H. N. Long, L. P. Trung and V. T. Van, J. Exp. Theor. Phys. [**92**]{} (2001) 548 \[Zh. Eksp. Teor. Fiz. [**119**]{} (2001) 633\] A. J. Buras, R. Fleischer, S. Recksiegel and F. Schwab, Nucl. Phys. B [**697**]{} (2004) 133.
A. J. Buras, Phys. Lett. B [**566**]{} (2003) 115.
G. Buchalla and A. J. Buras, Phys. Lett. B [**333**]{} (1994) 221.
[^1]: Another Higgs, in a 6 representation is required for the lepton masses, but we will ignore it in the following, since it plays no role in our analysis.
[^2]: This is because, due to the existence of right- and left-handed couplings, it is not possible to define one [*universal*]{} $C$ function for this decay.
[^3]: Tree level contributions analogous to the appearing in muon decay are not possible, since we are dealing with charged quark transitions here.
[^4]: While we use a somewhat newer experimental value of $\Delta M_s$ (the number from [@Blanke:2006eb] is $\Delta M_s = 17.7 \pm 0.4$), we choose to retain the assigned percentage of the uncertainty due to the fact that the theoretical error vastly dominates (0.4 are only $2\%$ of 17.4).
|
---
author:
-
date: 'Received 2009 March 10, accepted 2009 August 27'
title: '**A Survey of IUE Spectra of the Active Binary System UX Arietis**'
---
Introduction
============
[UX Ari]{} is a bright non-eclipsing double-lined spectroscopic binary (${\it P}_{orb} = 6.43791$ days) of spectral type G5 V + K0 IV [@carlos71]. The primary star, in this case the K0 IV component, shows chromospheric activity and is responsible for the majority of the activity shown by the system, as in most such [RS CVn]{} systems. Some photometric and orbital characteristics of [UX Ari]{} are summarized in Table 1.
@huen89 summarized the previous observations that had been done by numerous investigators, including photometric, spectroscopic, X-ray, radio and ultraviolet features. They suggested that the excess absorption was due to the mass-transfer activity resulting from the Roche lobe over-flow of the K star and accretion onto G star by analyzing their $H_{\alpha}$ and $H_{\beta}$ observations obtained as fibre-optic, echelle, CCD spectra. @vogt91 derived an accurate measurement of differential rotation with the opposite feature to that of the Sun using Doppler Imaging Technique. They also showed the spot distribution, which was quite complex, and the primary component, which had a large, stable polar spot.
Parameter References
-------------------- ------------------------------------------- ------------
$\alpha_{2000}$ 03$^{\rm h}$26$^{\rm m}$ 35$^{\rm s}$.36 a
$\delta_{2000}$ +28$^{\rm o}$ 42$^{\rm '}$ 55$^{\rm "}$.2 a
Distance \[pc\] 50.23 a
${\it P}_{orb.}$ 6$^{\rm d}$.43791 b
Sp.Type hot : G5 V c
cool : K0 IV
Masses/M$_{\odot}$ hot : $\geq 0.93$ c
cool : $\geq 0.71$
Radii/R$_{\odot}$ hot : 0.93 c
cool : $> 4.7$
M$_{v}$ 2$^{\rm m}$.5 c
[*V*]{} 6$^{\rm m}$.38 c
[*B-V*]{} 0$^{\rm m}$.91 c
[*U-B*]{} 0$^{\rm m}$.48 c
[*i*]{} 60$^{\rm o}$ c
: The characteristics of [UX Ari]{}[]{data-label="tab1"}
$^{(a)}$[The Hipparcos and Tycho Catalogue.]{}\
$^{(b)}$[Carlos & Popper(1971).]{}\
$^{(c)}$[Strassmeier et al.(1993).]{}
@duem01 attempted to improve the orbital measurements of [UX Ari]{} by using the published radial velocities together with their high-accuracy data. They improved the set of orbital parameters and found that the $\gamma$ velocity of the system has a systematic variation with time. They concluded that [UX Ari]{} seemed to be a triple system. The excess emission/absorption in some chromospheric lines (see e.g. Montes et al. 1995a,b for $H_{\alpha}$ emission, and Huenemoerder et al. 1989 for $H_{\alpha}$ absorption) or the excess in continuum levels of various spectral ranges, if they exist, could provide evidence for the source of activity being dependent on rotation (related to magnetic activity), or an accretion stream from Roche-lobe overflow of the primary, or some process that occurs in hot circumstellar gas [see @rhomb77], respectively. Therefore, the examination of the ultraviolet excess would also be advantageous.
@ekm93 studied 65 IUE spectra observed in the 1978-1987 period(with IUESIPS reduction). It was shown that emission-line fluxes vary with the orbital phase and that the dependence of the line fluxes on orbital phase was well correlated with the photometric light variation. This correlation might indicate more active chromospheric regions above the photospheric spot regions. @ekm93 also measured fluxes of the individual IUESIPS emission lines of the component stars of [UX Ari]{} and calculated that the contributions to the activity of the system for G5 and K0 were about 1/4 and 3/4 respectively. Another characteristic of the IUE spectra (with IUESIPS reduction) was an absorption feature observed on the peak of the IUSIPS k profiles of the K0 IV component, which was observed to shift together with the emission profile as the star revolved in its orbit [@ekm93]. Based on this absorption feature, it is suggested that the circumstellar matter around the K0 IV component may be responsible for this absorption.
@nichols96 summarized a new calibration together with new image processing techniques. They pointed out that the wavelength and absolute flux errors in the IUESIPS processing could be corrected by NEWSIPS progressing and that it would be better to re-analyze all the IUE spectra with NEWSIPS reduction. With this aim, in the present paper all 194 images of IUE-NEWSIPS spectra of [UX Ari]{} observed in 1978-1996 period have been analyzed to check the validity of the previous findings of @ekm93. This paper shows that all the integrated emission-line fluxes of short-wavelength low-dispersion spectra have a variation with time and orbital phase, but that the variation with time was not as clear as that with the orbital phase. Examination of the ultraviolet excess shows that some UV excess in [UX Ari]{} exists and varies from 1% up to 24% in time. Comparison of the MgII radial velocities with those of visible spectra (Carlos & Popper 1971;Duemmler & Aarum 2001) showed that the scattering in the UV data is likely to come from chromospheric activity caused by a magnetic dynamo that produced loops in active region.
IUE data and spectral analysis
==============================
The IUE spectra of [UX Ari]{} have been taken from the NASA IUE archieve using the IDL (Interactive Data Language) Program. All the spectra have undergone NEWSIPS reduction. The spectra consist of 22 LWP, 2 LWR and 86 SWP images in low dispersion, and 69 LWP, 12 LWR and 3 SWP images in high resolution. The log of IUE images are given in Table 2. The images studied by other authors in the past were denoted by asteriks in the ’Comment’ column of Table 2. IUE obtained spectra at both low (6 Åresolution) and high dispersion ($\lambda / \delta\lambda \sim 10000$), with the short-wavelength prime (SWP, 1151 - 2000 Å), long wavelength prime (LWP, 1850 - 3400 Å), and long-wavelength redundant (LWR, 1850 - 3400 Å) cameras [@nichols96].
The spectra show emission lines originating in the chromosphere and transition region. The flux in a given line was obtained by computing the area contained in the spectral region above the continuum or background levels near the wings of the line. The emission line fluxes were computed based on Gaussian profile-fitting procedures. The fitting procedures were made by means of the CURFIT program of @beving69. Some results of the Gaussian fits are shown in Figure 1 for low dispersion image, SWP02375, and in Figure 6 for high-resolution images, LWP14085 and LWP14130, in the range that includes the Mg II h and k lines. The overall shapes of the line profiles can be reasonably well matched by 1 or 2 Gaussian components for low dispersion SWP spectra and by 3 Gaussian components for Mg II h and k profiles in high-resolution images. In the case of Si IV or Si II lines there are two Gaussian components for the multiplets in the fitting procedure. The strong emission in these lines originates from the K0 IV star rather than G5 V star of [UX Ari]{} system. In the case of the Mg II h and k profiles, a Gaussian profile has been attributed to each component of the system in the fitting procedures. A third Gaussian absorption profile represents the interstellar absorption component (Figure 6). In the Gaussian fitting procedures, since the effect of interstellar absorption can be removed and this removal does not show any significant effect in comparison with the observational errors on flux and wavelength of the IUE images, the uncertainties in the strength and velocity of Mg II h and k lines can be estimated independently based on the observational errors. Therefore, in this analysis we can be confident that there were no effects of interstellar absorption on determination of the secondary’s contribution to the activity of the system. The orbital phases that correspond to the mid-time of IUE observations were computed with the ephemeris
$$\rm HJD = (2440133.766 + 6.43791\times{\it E})days,$$
for which the zero phase corresponds to conjunction with the primary (K0 IV) component in front [@carlos71].
----------- ------ -------------- ---------- ------------------- ---------- ------ -------------- ---------- ---------
Image Disp HJD(mid) Exp.Time Comment Image Disp HJD(mid) Exp.Time Comment
(sec.) (sec.)
LWR02081 H 2443736.0752 720 \* LWP11752 L 2447068.3635 90
SWP02301 L 2443736.1016 2700 \* LWP11753 L 2447068.4835 90
LWR02082 H 2443736.1354 1800 \* LWP11754 L 2447068.6115 90
LWR02111 H 2443739.8502 1800 \* LWP11755 L 2447068.7315 90
SWP02336 L 2443739.8983 5400 \* LWP11756 H 2447068.7764 3000
LWR02136 H 2443741.9434 1800 \* LWP11757 H 2447068.8277 1500
SWP02351 L 2443741.9853 4200 \* LWP11758 H 2447068.8727 1500
LWR02158 H 2443743.9244 1800 \* LWP11760 L 2447068.9725 90
SWP02375 L 2443743.9643 4200 \* LWP11761 L 2447068.9975 90
LWR03344 H 2443874.5894 1800 \* LWP11762 H 2447069.0424 3000
SWP03766 L 2443874.6273 4200 \* LWP11763 L 2447069.0845 90
SWP03855 L 2443882.7704 1800 \* LWP11764 L 2447069.2335 90
LWR03432 H 2443882.7963 1080 \* LWP11765 L 2447069.3595 90
LWR06261 H 2444207.2292 900 LWP11766 L 2447069.4825 90
SWP07267 L 2444207.2668 4800 \* LWP11767 L 2447069.5875 90
LWR06329S L 2444215.9747 120 LWP11768 L 2447069.6135 90
LWR06329L L 2444215.9784 240 LWP11769 L 2447069.6375 90
SWP07342 L 2444216.0098 4800 \* LWP11770 L 2447069.6635 90
LWR06330 H 2444216.0524 1800 LWP11771 H 2447069.7233 5400
SWP07423 L 2444225.4259 12600 \* LWP14051 H 2447419.8604 3000
LWR10244 H 2444693.3949 1200 \* LWP14052 H 2447419.9127 1500
SWP13612 L 2444693.4234 3000 \* LWP14053 H 2447419.9472 720
SWP15211 H 2444886.6123 27000 \*,noisy,excluded LWP14084 H 2447422.9887 1500
LWR11729 H 2444886.7817 1500 \* LWP14085 H 2447423.0317 1500
SWP15240 H 2444889.5829 24000 \*,noisy,excluded LWP14086 H 2447423.0926 4080
LWR11756 H 2444889.7319 1200 LWP14130 H 2447432.8294 3000
SWP26730 L 2446334.6765 600 \* LWP14131 H 2447432.8817 1500
SWP26730 L 2446334.6875 600 \* LWP14132 H 2447432.9334 3000
SWP26730 L 2446334.7095 600 \* LWP14152 H 2447435.9527 1500
LWP06815 H 2446334.7232 900 LWP14153 H 2447435.9967 1500
SWP26731 L 2446334.7415 600 \* LWP14220 H 2447448.7844 3000
SWP26731 L 2446334.7555 600 \* LWP14221 H 2447448.8367 1500
SWP26731 L 2446334.7675 600 \* LWP14222 H 2447448.8777 1500
LWP06816 H 2446334.7892 900 LWP14263 H 2447451.9177 1500
SWP26732 L 2446334.8095 600 \* LWP14264 H 2447451.9587 1500
SWP26732 L 2446334.8225 600 \* LWP14265 H 2447452.0141 3480
SWP26732 L 2446334.8345 600 \* LWP18569 H 2448116.5273 1080
LWP06817 H 2446334.8552 900 SWP39449 L 2448116.5547 1500
SWP26733 L 2446334.8755 600 \*,noisy,excluded SWP39460 L 2448118.0615 1806
SWP26733 L 2446334.8925 600 \*,noisy,excluded LWP18584 H 2448118.0829 1200
SWP26733 L 2446334.9035 600 \*,noisy,excluded SWP39470 L 2448118.9079 2400
LWP06818 H 2446334.9252 900 LWP18597 H 2448119.8816 1320
SWP26734 L 2446334.9435 600 noisy, excluded LWP18607 H 2448121.0516 1320
SWP26734 L 2446334.9575 600 noisy, excluded SWP39476 L 2448121.0769 2400
SWP26734 L 2446334.9658 144 noisy, excluded SWP42405 L 2448505.9212 2100
LWP06819 H 2446334.9952 900 LWP21171 H 2448505.9433 1080
SWP26735 L 2446335.0145 600 noisy, excluded SWP42416 L 2448507.9249 2400
SWP26735 L 2446335.0265 600 noisy, excluded LWP21187 H 2448507.9485 600
LWP09864 H 2446801.4872 900 LWP21200 H 2448508.9073 1080
SWP30026 L 2446801.5055 600 \* SWP42427 L 2448508.9379 2400
SWP30026 L 2446801.5165 600 \* LWP21208 H 2448509.8913 1080
SWP30026 L 2446801.5285 600 \* SWP42435 L 2448509.9209 2400
LWP09865 H 2446801.5512 900 \* LWP21222 H 2448511.8890 1380
SWP30027 L 2446801.5725 600 \* SWP42448 L 2448511.9179 2400
SWP30027 L 2446801.5825 600 \* LWP21236 H 2448513.8990 1380
SWP30027 L 2446801.5955 600 \* SWP42461 L 2448513.9269 2400 \*
LWP09866 H 2446801.6172 900 \* LWP28935 H 2449585.1275 600
SWP30028 L 2446801.6375 600 \* SWP51857 L 2449585.1502 900
SWP30028 L 2446801.6485 600 \* SWP51858 L 2449585.1842 900
SWP30028 L 2446801.6615 600 \* LWP28940 H 2449585.9735 600
LWP09867 H 2446801.6852 900 \* SWP51866 L 2449585.9952 900
SWP30029 L 2446801.7045 600 \* SWP51867 L 2449586.0272 900
SWP30029 L 2446801.7165 600 \* SWP51872 L 2449586.9082 900
SWP30029 L 2446801.7275 600 \* LWP28943 H 2449586.9205 600
LWP09868 H 2446801.7502 900 \* SWP51873 L 2449586.9432 900
SWP30030 L 2446801.7705 600 \* SWP51884 L 2449587.9022 900
SWP30030 L 2446801.7805 600 \* LWP28950 H 2449587.9145 600
LWP11745 H 2447067.8534 3000 SWP51885 L 2449587.9392 900
LWP11746 H 2447067.9037 1500 SWP51961 L 2449592.8712 900
LWP11747 L 2447067.9365 90 LWP29029 H 2449592.8835 600
LWP11748 L 2447067.9595 90 SWP51962 L 2449592.9092 900
LWP11749 H 2447068.0067 1500 SWP51975 L 2449593.8772 900
LWP11750 H 2447068.0673 4200 LWP29040 H 2449593.8895 600
SWP31952 H 2447068.6157 89280 noisy, excluded SWP51976 L 2449593.9112 900
LWP11751 L 2447068.2375 90 LWP29052 H 2449594.9765 600
----------- ------ -------------- ---------- ------------------- ---------- ------ -------------- ---------- ---------
---------- ------ -------------- ---------- --------- ---------- ------ -------------- ---------- -----------------
Image Disp HJD(mid) Exp.Time Comment Image Disp HJD(mid) Exp.Time Comment
(sec.) (sec.)
SWP51986 L 2449594.9992 900 SWP52056 L 2449602.9102 900
SWP51996 L 2449597.0522 900 LWP29118 H 2449602.9275 600
LWP29070 H 2449597.0695 600 SWP52057 L 2449602.9492 900
SWP51997 L 2449597.0922 900 LWP29127 H 2449604.0545 600
LWP29071 H 2449597.1125 600 SWP52063 L 2449604.0772 900
LWP29077 H 2449598.0515 600 LWP29128 H 2449604.1025 600
SWP52007 L 2449598.0742 900 LWP29137 H 2449605.0615 600
LWP29078 H 2449598.0945 600 SWP52070 L 2449605.0872 900
SWP52008 L 2449598.1135 600 LWP29138 H 2449605.1145 600
SWP52016 L 2449599.0652 900 SWP52078 L 2449606.0732 900
LWP29085 H 2449599.0765 600 LWP29142 H 2449606.0925 600
SWP52017 L 2449599.0992 900 LWP29149 H 2449606.8835 600
SWP52023 L 2449600.0622 900 SWP52086 L 2449606.9052 900 noisy, excluded
LWP29091 H 2449600.0805 600 SWP52087 L 2449606.9342 900 noisy, excluded
SWP52024 L 2449600.1012 900 LWP31888 L 2450100.3785 90
SWP52034 L 2449601.0572 900 SWP56587 L 2450100.4143 5400 \*
LWP29099 H 2449601.0735 600 LWP31894 L 2450103.3545 90
SWP52035 L 2449601.0942 900 SWP56624 L 2450103.3913 5400
LWP29100 H 2449601.1135 600 LWP31895 L 2450103.3885 90
SWP52046 L 2449602.0722 900 LWP31896 L 2450103.4245 90
LWP29109 H 2449602.0895 600 SWP56630 L 2450105.3971 4680
SWP52047 L 2449602.1092 900 LWP31903 L 2450105.3995 90
---------- ------ -------------- ---------- --------- ---------- ------ -------------- ---------- -----------------
$^{*}$[Images studied by other authors(see http://archieve.stsci.edu/iue/search.php) in the past.]{}\
Short-wavelength, Low - Dispersion Spectra
------------------------------------------
The most prominent feature seen in the spectra is $Ly\alpha$ profile. Since this line is blended with geocoronal $Ly\alpha$ and overexposed throughout the observations, it is not included in the line analysis. Due to 6-$\AA$ resolution, some of the emission lines are unresolved or partially resolved multiplets. The identified emission lines of SWP spectra (see Figure 1) are OI ($\lambda$ $\lambda$1302,1305), CI ($\lambda$1657), SiII ($\lambda$ $\lambda$1808,1817), CII ($\lambda$ $\lambda$1334, 1335), HeII ($\lambda$1639), NV ($\lambda$1238), SiIV ($\lambda$ $\lambda$1393, 1402), and CIV ($\lambda$ $\lambda$1548, 1550). Figures 2 to 4 show the integrated emission Carbon line fluxes of low-dispersion spectra as a function of time (orbital epoch) and orbital phase. These lines originate in chromosphere and transition region. The same trend was seen for the other line fluxes (the lines mentioned above).
{width="75.00000%"}
{width="65.00000%"}
There is a flare event near the orbital phase $\sim 0.07$ (SWP03766). Apart from this flare event there are two rises in flux at phases $\sim 0.20$ and $\sim 0.70$ (Figures 2-4). It can be clearly seen that there are variations in the chromospheric and transition-region line fluxes with time and orbital phase. Since some of these low dispersion spectra (five images) are very noisy and have indeterminate lines, they were excluded from the analysis. These images are SWP26733, SWP26734, SWP26735, SWP52086 and SWP52087. The scattering of the emission line fluxes of the images, taken at the same epoch (see Figures 2-4) arose from the variation of flux with orbital phase.
{width="65.00000%"}
Short-Wavelength, High - Dispersion Spectra
-------------------------------------------
There exist only three images of [UX Ari]{} system taken by the SWP camera in high resolution: SWP15211, SWP15240 and SWP31952. NV ($\lambda$1239), OI ($\lambda$ $\lambda$ 1302, 1305), CII ($\lambda$ $\lambda$ 1334, 1335), SiIV ($\lambda$ $\lambda$ 1394, 1403), CIV ($\lambda$ $\lambda$ 1548, 1550), HeII ($\lambda$1640), CI ($\lambda$1657), SiII ($\lambda$ $\lambda$ 1808, 1817), SiIII ($\lambda$1892), and CIII ($\lambda$1909) lines were looked for in these three images to evaluate the fluxes of these lines together with those of the low dispersion spectra. Unfortunately, all three images have a lot of reseau (in the ITF), permenant ITF artifact, saturated pixels, warning tracks, RAW - SCREEN cosmic rays/bright spots, positively extrapolated ITF, and very noisy data in the range of these lines. Hence, these lines were hardly seen in the images SWP15211, SWP15240, and SWP31952 which were taken at orbital epochs 738.259, 738.721 and 1077.190, respectively. Therefore, these three images were excluded from the analysis.
{width="65.00000%"}
Long-Wavelength, Low - Dispersion Spectra
-----------------------------------------
The long-wavelength low dispersion spectra (24 images) were examined for ultraviolet excess, by comparing the ultraviolet continuum level of [UX Ari]{} (G5 V + K0 IV) with the levels of [$\kappa$ Cet]{} (G5 V) and [$\eta$ Cep]{} (K0 IV) in the same spectral range between $2100 \AA$ and $3200 \AA$. After polynomial fitting for the continuum levels (see Figure 5) and computing the integrated flux measured on Earth (f)between $2100 \AA$ and $3200 \AA$, the continuum level analysis can be carried out by converting the flux measured on Earth to the surface integrated flux (F) of a star by:
$$F = \left( \frac{d}{R} \right)^{2} f ,$$
where d is the distance from Earth and R is the radius of the star [@gray92]. If R is in units of solar radii and d is in pc, this relation can be written as
$$F = 1.96249 x 10^{15}\left(\frac{d}{R}\right)^{2} f .$$
Equation (2) can be written as $$\frac{F}{f} = \left( \frac{d}{R} \right)^{2} = \left( \frac{2}{\Theta} \right)^{2},$$ where ${\rm \Theta} (= 2R / d)$ is the angular diameter of the star, in radians. For double-lined uneclipsing binary stars, like UX Ari, ${\rm \Theta}$ could be taken as ${\rm \Theta} = 2 (R_{a} + R_{b})^2 / d$. Here R$_{a}$ and R$_{b}$ are the radius of the component stars of a binary system. Since $(R_{a} + R_{b}) \ll d$, $(R_{a} + R_{b})^2 \approx (R_{a})^2 + (R_{b})^2$. Then, Equation (2) can be written for binary system as $$F = \left( \frac{d^2}{R_{a}^{2} + R_{b}^{2}} \right) f.$$ The distance of [UX Ari]{} is given as 50 pc by @stras88 and @stras93. The recent and most reliable measurement of the distance (50.23 pc) is given in the Hipparcos Catalogue [@perry97]. Since the standard error of the parallax of [UX Ari]{} is 1.25 mas, given by @perry97, the distance of 50 pc can be adopted as a good estimate.
There are three remarkable estimations on the radii of the components of [UX Ari]{}. Therefore, the three conditions could be taken into consideration for [UX Ari]{} system:
1\. With a distance of 50 pc and component radii $ R_{G5} = 0.93 R_{\odot}$, $ R_{K0} = 3 R_{\odot}$ [@stras88], Equation (3) gives:
$$F = 4.906225 x 10^{17} f ,$$
where $ R^{2} = R_{G5}^{2} + R_{K0}^{2} $ for [UX Ari]{} system.
2\. With the same distance as given in (1), but radii $ R_{G5} = 0.93 R_{\odot} $ and $ R_{K0} = 4.7 R_{\odot} $ [@stras93], this relation becomes:
$$F = 2.137332334 x 10^{17} f .$$
3\. With a distance of 50 pc and radii $ R_{G5} = 0.80 R_{\odot} $, $ R_{K0} = 6.2 R_{\odot} $ [@huen89], this relation is
$$F = 1.255431167 x 10^{17} f .$$
By using Equations 4, 5 and 6 the integrated surface fluxes of the UV continuum were computed for each aspect given above, between 2100 and 3200 $\AA$.
For the comparison stars, [$\eta$ Cep]{} and [$\kappa$ Cet]{}, this relation can be obtained as follows. At a distance of 14.34 pc [@perry97] and the radius of $ R = 4 R_{\odot} $ [@black94] for [$\eta$ Cep]{} ( K0 IV ) Equation 3 gives:
$$F = 2.522236304 x 10^{16} f ,$$
and, with the distance of 9.16 pc [@perry97] and the radius of $ R = 0.9313 R_{\odot} $ [@black94] for the [$\kappa$ Cet]{} ( G5 V ) Equation 3 gives:
$$F = 1.898537562 x 10^{17} f .$$
The integrated continuum fluxes measured on Earth and corresponding surface fluxes between 2100 and 3200 $\AA$ spectral range obtained by means of the relations given above are listed in Table 3 for [UX Ari]{} system, and in Table 4 for [$\eta$ Cep]{} and [$\kappa$ Cet]{} together with IUE images. It can be seen that the fluxes obtained from LWR04857S are much lower than that those obtained from other two images. The reason is that the LWR04857S image was taken by using small aperture of the spectrograph while others were taken by large aperture.
The effective temoeratures and the radii of comparison stars, [$\eta$ Cep]{} and [$\kappa$ Cet]{}, are comparible with those of the component stars of [UX Ari]{}. Namely,\
$\bullet$ [$\eta$ Cep]{}(K0 IV); T$_{e}$= 4967 K [@soub08];\
$\bullet$ [UX Ari]{}(K0 IV); T$_{e}$= 4750 K [@vogt91];\
$\bullet$ [$\eta$ Cep]{}(K0 IV); R = 4 R$_{\odot}$ [@black94];\
$\bullet$ [UX Ari]{}(K0 IV); R = 4.7 R$_{\odot}$ [@stras93];\
$\bullet$ [$\kappa$ Cet]{}(G5 V); T$_{e}$= 5708 K [@soub08];\
$\bullet$ [UX Ari]{}(G5 V); T$_{e}$= 5700 K [@vogt91];\
$\bullet$ [$\kappa$ Cet]{}(G5 V); R = 0.93 R$_{\odot}$ [@black94];\
$\bullet$ [UX Ari]{}(G5 V); R = 0.93 R$_{\odot}$ [@stras93];\
Therefore, recalling that the sum of the fluxes of two components can be used in computing the magnitude of a binary sysytem, like [UX Ari]{}, by\
$$m_{1} - m_{s} = -2.5 log \left(\frac{f_{1}}{f_{1}+f_{2}}\right),$$ where the subscription ’1’ and ’2’ are refer to the component stars, and ’s’ refers to the system, the theoretical continuum flux of the [UX Ari]{} system can be estimated by adjusting for its distance of 50 pc and using the observed continuum fluxes of [$\eta$ Cep]{}(K0 IV) and [$\kappa$ Cet]{}(G5 V):
$$f_{theo} (Ari) = f_{\eta Cep} + f_{\kappa Cet} .$$
----------- ------------- ---------- --------------- ------------ ------------------------------- ------------------------------- -------------------------------
Image HJD(mid) Epoch f UV excess F$_{Str88}$ F$_{Huen89}$ F$_{Str93}$
(x10$^{-10}$) in $f(\%)$ ($10^{7} erg cm^{-2} s^{-1}$) ($10^{7} erg cm^{-2} s^{-1}$) ($10^{7} erg cm^{-2} s^{-1}$)
LWR06329S 2444215.975 634.089 1.44(0.06) - 7.07(0.28) 1.81(0.07) 3.08(0.12)
LWR06329L 2444215.978 634.090 2.57(0.05) 4.3 12.60(0.24) 3.22(0.06) 5.49(0.11)
LWP11747 2447067.937 1077.084 2.97(0.05) 20.6 14.57(0.26) 3.73(0.07) 6.35(0.11)
LWP11748 2447067.960 1077.088 3.05(0.05) 24.0 14.98(0.26) 3.83(0.07) 6.53(0.12)
LWP11751 2447068.238 1077.131 2.91(0.07) 18.4 14.30(0.32) 3.66(0.08) 6.23(0.14)
LWP11752 2447068.364 1077.150 2.89(0.06) 17.4 14.18(0.27) 3.63(0.07) 6.18(0.12)
LWP11753 2447068.484 1077.169 2.83(0.05) 15.1 13.90(0.27) 3.56(0.07) 6.06(0.12)
LWP11754 2447068.612 1077.189 2.84(0.05) 15.3 13.92(0.27) 3.56(0.07) 6.07(0.12)
LWP11755 2447068.732 1077.208 2.79(0.06) 13.5 13.71(0.27) 3.51(0.07) 5.97(0.12)
LWP11760 2447068.973 1077.245 2.81(0.06) 14.2 13.79(0.27) 3.53(0.07) 6.01(0.12)
LWP11761 2447068.998 1077.249 2.86(0.05) 16.0 14.01(0.24) 3.59(0.06) 6.10(0.11)
LWP11763 2447069.085 1077.262 2.86(0.06) 16.0 14.02(0.27) 3.59(0.07) 6.11(0.12)
LWP11764 2447069.234 1077.286 2.88(0.05) 16.9 14.12(0.27) 3.61(0.07) 6.15(0.12)
LWP11765 2447069.360 1077.305 2.92(0.06) 18.6 14.33(0.27) 3.67(0.07) 6.24(0.12)
LWP11766 2447069.483 1077.324 2.84(0.05) 15.3 13.92(0.27) 3.56(0.07) 6.07(0.12)
LWP11767 2447069.588 1077.341 2.93(0.06) 19.1 14.39(0.28) 3.68(0.07) 6.27(0.12)
LWP11768 2447069.614 1077.345 2.93(0.06) 19.1 14.38(0.28) 3.68(0.07) 6.27(0.12)
LWP11769 2447069.638 1077.348 2.87(0.06) 16.5 14.08(0.28) 3.60(0.07) 6.13(0.12)
LWP11770 2447069.664 1077.352 2.95(0.06) 19.7 14.46(0.29) 3.70(0.07) 6.30(0.13)
LWP31888 2450100.379 1548.113 2.51(0.05) 1.8 12.29(0.23) 3.15(0.06) 5.35(0.10)
LWP31894 2450103.355 1548.575 2.32(0.05) - 11.39(0.23) 2.92(0.06) 4.96(0.10)
LWP31895 2450103.389 1548.581 2.39(0.05) - 11.72(0.23) 3.00(0.06) 5.11(0.10)
LWP31896 2450103.425 1548.586 2.48(0.05) 1.0 12.17(0.24) 3.11(0.06) 5.30(0.10)
LWP31903 2450105.400 1548.893 2.55(0.05) 3.4 12.49(0.23) 3.20(0.06) 5.44(0.10)
----------- ------------- ---------- --------------- ------------ ------------------------------- ------------------------------- -------------------------------
[lccccc]{} Image & HJD(mid)& Star & f & F & f$_{50}$\
& & &($10^{-10}$ erg $cm^{-2}$ $s^{-1}$)&($10^{7}$ erg $cm^{-2}$ $s^{-1}$)&($10^{-10}$ erg $cm^{-2}$ $s^{-1}$)\
LWR12739 & 2445037.093 & $\eta$ Cep & 23.32(0.18)& 5.88(0.04)& 1.92(0.01)\
\
LWR04857S & 2444048.950 & $\kappa$ Cet & 10.56(0.14)& 20.06(0.26)& 0.35(0.05)\
LWR04857L & 2444048.955 & $\kappa$ Cet & 17.13(0.16)& 32.52(0.29)& 0.57(0.05)\
LWR04858 & 2444048.982 & $\kappa$ Cet & 15.29(0.82)& 29.03(0.16)& 0.51(0.03)\
This theoretical flux can be used for the examination of ultraviolet excess of [UX Ari]{} by comparison with the observed continuum fluxes. Here, the theoretical surface fluxes of the comparison stars (see Column 5 of Table 4), [$\kappa$ Cet]{} and [$\eta$ Cep]{}, were used for evaluating the observed fluxes with adjustment made for the distance of UX Ari. The comparison of observed fluxes (see Column 4 of Table 3) with those of 50 pc observed fluxes (adjustment to the distance of [UX Ari]{}, see the last column of Table 4) of [$\kappa$ Cet]{} and [$\eta$ Cep]{} show that there is some UV excess in UX Ari, which varies from 1% up to 24%, with the exception of two images (namely LWP31894 and LWP31895). In this evaluation LWR06329S image was excluded because it was taken with small aperture of the spectrograph. The adjustment 50 pc - observed fluxes of comparison stars were computed using the corresponding surface fluxes given in the Column 5 of Table 4.
Another Test of the UV Excess in [UX Ari]{}
-------------------------------------------
Although the surface fluxes are the most important data in this spectral analysis, it is also useful to look at the result of the testing of the UV excess by taking only the observed fluxes at Earth into consideration, as in the spectrophotometric analysis of @rhomb77.
By using Equation (12) written as
$$f_{theo} (Ari) = C_{1} f_{\eta Cep} + C_{2} f_{\kappa Cet} ,$$
where
{width="75.00000%"}
$$C_{1} = \left( \frac {R_{K0 UXAri}} {R_{Eta Cep}} \right)^{2} \left( \frac {d_{Eta Cep}} {d_{UX Ari}} \right)^{2} = 0.11356226,$$
with R$_{K0 UXAri}$= 4.7 R$_{\odot}$ and $$C_{2} = \left( \frac{R_{G5 UXAri}} {R_{Kap Cet}} \right)^{2} \left( \frac {d_{Kap Cet}}{d_{UX Ari}} \right)^{2} = 0.033468606,$$ with R$_{K0 UXAri}$= 0.93 R$_{\odot}$. Using the values of integrated fluxes measured at Earth of $1.92\times 10^{-10}$ for [$\eta$ Cep]{} and of $0.54\times 10^{-10}$ (mean value) for [$\kappa$ Cet]{} (see Table 4), we have $$f_{theo} (Ari) = f_{Eta Cep} + f_{Kap Cet} = 2.46\times 10^{-10}.$$ By examination of the ratio $f_{UX Ari}$ / $f_{theo}$ (Ari) = 1.01 to 1.24. Depending on wavelength, the calculation using the observed UV fluxes at Earth in Equation 12 (second approach) shows also that there is some ultraviolet excess in the [UX Ari]{} system.
{width="75.00000%"}
Long-Wavelength, High - Dispersion Spectra
------------------------------------------
The most prominent features seen in these spectra are the well-known chromospheric Mg II h and k emission lines. In all LWP and LWR high resolution images, these Mg II h and k line profiles of both K0 IV and G5 V stars, appeared to be compatible with the corresponding orbital phases (Figures 6 and 7). Therefore, the Mg II h and k flux variation can also be evaluated depending on orbital phase. Based on the fitting procedure mentioned at the beginning of Section 2, the integrated line fluxes, the equivalent widths and the radial velocities for all components of the Mg II profiles were computed. The integrated line fluxes of Mg II k and MgII h and k, from G5 V and K0 IV component, are plotted in Figures 8 and 9 as a function of time (in the sense of epoch) and orbital phase, respectively. Similar trends were seen for the total Mg II line fluxes of both components (G5 V + K0 IV) of [UX Ari]{}. The scattering of Mg II line fluxes that appeared in Figure 8 is similar to that in Figures 2-4. This scattering was also attributed to the flux variation (which showed the maxima near 0.20 and 0.70 orbital phases) with the orbital phase. The Mg II h and k radial velocity curves of [UX Ari]{} system are shown in Figure 10 together with the results of @duem01 and of @carlos71 obtained from the visible spectral range. It is seen that the velocities of K0 IV component are in a better aggrement with the optical data than the velocities of G5 V component. Recalling that the effect of interstellar absorption has been removed by the Gaussian profile fitting procedures (see Section 2), this greater scattering in the G5 V velocities is likely due to the physical interaction between the K0 IV and G5 V compenent of [UX Ari]{}, just as the mass exchange via coronal/magnetic loops. 
The velocity $\gamma$ of the centre of mass of the system was found to be $38.22 \pm 2.36$ km s$^{-1}$ from the sinusoidal fit to the Mg II h radial-velocity curve and $31.01 \pm 2.44$ km s$^{-1}$ from the same analysis of the Mg II k radial-velocity curve. The mean value of $\gamma$ from the Mg II h and k radial-velocity curves of [UX Ari]{} is therefore $34.62 \pm 4.86$ km s$^{-1}$. This $\gamma$ is somewhat greater (by about 8 km s$^{-1}$) than the value of 26.5 km s$^{-1}$ of @duem01 and @carlos71.
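The sinusoidal fit used to extract $\gamma$ reduces to linear least squares, since $v(\phi) = \gamma + A\cos 2\pi\phi + B\sin 2\pi\phi$ is linear in $(\gamma, A, B)$. A minimal sketch with synthetic velocities (the helper name `fit_rv` and the assumed systemic velocity and semi-amplitude are illustrative, not the measured Mg II data):

```python
import numpy as np

def fit_rv(phase, v):
    """Fit v(phi) = gamma + A cos(2 pi phi) + B sin(2 pi phi); gamma is the
    systemic velocity and K = sqrt(A^2 + B^2) the semi-amplitude."""
    M = np.column_stack([np.ones_like(phase),
                         np.cos(2 * np.pi * phase),
                         np.sin(2 * np.pi * phase)])
    coef, *_ = np.linalg.lstsq(M, v, rcond=None)
    gamma, A, B = coef
    return gamma, np.hypot(A, B)

# illustrative synthetic radial-velocity curve (km/s)
rng = np.random.default_rng(2)
phase = rng.uniform(0.0, 1.0, 40)
v = 34.6 + 60.0 * np.sin(2 * np.pi * phase) + 2.0 * rng.standard_normal(40)
gamma, K = fit_rv(phase, v)
```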
Discussion and Conclusions
==========================
The conclusions of this study together with related discussion are as follows:
All integrated emission-line fluxes of the short-wavelength low-dispersion spectra showed a clear variation with time and orbital phase, but the variation with time was not as clear as that with orbital phase (Figures 2 to 4). For example, the spectra taken in 1978 (near the epoch of 560) showed somewhat larger scatter in the fluxes, but when these data are plotted versus orbital phase, the flux distribution shows a clearer variation. The same behaviour is seen at the other epochs (963, 1035, 1301 and 1468). Apart from the flare event observed on 1979 Jan 1 (SWP03766), there were some flux enhancements (especially in the lines originating in the middle and upper chromosphere) in 1987 (epoch 1035.732), 1991 (epoch 1300.757) and 1994 (near epoch 1470). However, no clear periodicity of the flux variation in time was detected from the 18 years of data. An application of a period search by discrete Fourier transform to the OI (middle-chromosphere line), CII (upper-chromosphere line), SiIV (transition-region line) and MgII k emission-line fluxes did not give significant results, owing to large gaps in the IUE data. On the other hand, the evaluation of the highest flux levels of the emission lines (occurring at some epochs) showed that the first enhancement was in 1987 (epoch 1035), nine years after the first data were obtained in 1978. If the period were 9 years, the next flux enhancement should have occurred in 1996 (epoch 1548) instead of 1994 (epoch 1468). Therefore, the variation with time may have a periodicity of 7-9 years [which is close to the well-known 10-year cycle of the [RS CVn]{} phenomena; see @rodono80]. However, this variation was not as clear as that with orbital phase, probably because the distribution of the data is insufficient to determine such a periodicity.
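The discrete-Fourier-transform period search used above can be sketched for irregularly spaced epochs as follows. The data here are synthetic, with an assumed 9-year cycle injected (the IUE data themselves were too sparse for a significant detection); the helper name `dft_power` is an assumption of this sketch.

```python
import numpy as np

def dft_power(t, y, freqs):
    """Classical periodogram |sum_k y_k exp(-2 pi i f t_k)|^2 / N for
    irregularly spaced epochs (mean-subtracted)."""
    y = y - y.mean()
    c = np.cos(2.0 * np.pi * np.outer(freqs, t)) @ y
    s = np.sin(2.0 * np.pi * np.outer(freqs, t)) @ y
    return (c ** 2 + s ** 2) / len(t)

# synthetic 18-yr flux series with a 9-yr cycle at irregular epochs
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 18.0, 60))             # years
y = 1.0 + 0.3 * np.sin(2 * np.pi * t / 9.0) + 0.05 * rng.standard_normal(60)

freqs = np.linspace(0.02, 0.5, 500)                 # cycles / year
best_period = 1.0 / freqs[np.argmax(dft_power(t, y, freqs))]
```

With only two cycles covered by the baseline the peak is broad, which mirrors the difficulty of establishing a 7-9 year periodicity from the actual, much gappier IUE record.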
There were two clear flux increments, around 0.20P and 0.70P, in all chromospheric and transition-region line flux variations with orbital phase. The first flux increment, near 0.20P, was composed of the data from the spectra taken in 1981 (SWP13162), in 1990 (SWP39460) and in 1994 (SWP51866, SWP51867, SWP51961, SWP51962, SWP52016, SWP52017). The second flux increment, near 0.70P, was composed of the data from the spectra taken in 1979 (SWP07267), in 1987 (SWP30026, SWP30027, SWP30028, SWP30029, SWP30030), in 1991 (SWP42416) and in 1994 (SWP52046, SWP52047). The [*V*]{} light-curve amplitudes of [UX Ari]{} in these years were 0.16 mag (1981), 0.07 mag (1990), 0.19 mag (1994), 0.04 mag (1979), 0.19 mag (1987) and 0.06 mag (1991) [@rave95]. Therefore, these flux increments do not seem to correlate with the [*V*]{} light-curve amplitudes, but there is good agreement with the configuration of the component stars near the quadratures. The same situation also appeared in the MgII h and k emission-line fluxes. Using several optical chromospheric activity indicators (the HeI D$_{3}$, NaI D$_{1}$, D$_{2}$, H$_{\alpha}$ and CaII IRT lines), @gu02 also detected this high activity level of [UX Ari]{} around the second quadrature. They suggested that this may originate in the coupling of the chromospheric activity of the secondary with the mass-transfer activity of the two components. Another important consideration is that the HeII ($\lambda$1640) fluxes may contribute to the flux enhancement, both through collisional excitation (Athay 1965; Jordan 1975), indicating a temperature of $\sim 8 \times 10^{4}$ K, and through recombination following photoionization by coronal X-rays [@zirin76]. The contribution of recombination to the HeII flux can reach up to 80% in more active regions [@rego83]. Another contributor to HeII is the FeII $\lambda$1640.15 emission (Jordan 1975; Kohl 1977).
Therefore, the HeII ($\lambda$1640) emission feature cannot be considered a pure chromospheric indicator for [UX Ari]{}.
Examination of the ultraviolet excess in [UX Ari]{}, using the 24 long-wavelength, low-dispersion spectra of [UX Ari]{} (see Table 3) and of the comparison stars [$\kappa$ Cet]{} and [$\eta$ Cep]{} (see Table 4) in the spectral range between $2100 \AA$ and $3200 \AA$, showed that there is some ultraviolet excess in the [UX Ari]{} system, varying from 1% up to 24%. However, two of these images, LWP31894 and LWP31895, showed no ultraviolet excess for the [UX Ari]{} system. These 24 spectra were taken in the 1979-1996 period and covered most of the orbital phases. This examination is based on a comparison of the theoretical continuum surface fluxes (computed from the [$\kappa$ Cet]{} and [$\eta$ Cep]{} spectra) with the [UX Ari]{} continuum surface fluxes in the spectral range mentioned above. Using the same comparison stars ([$\kappa$ Cet]{} and [$\eta$ Cep]{}) and based on their 1975 observations, @rhomb77 measured the ultraviolet excess in [UX Ari]{} spectrophotometrically and found that the cool star contributes $75\% \pm 5\%$ of the total light of the system at $4700 \AA$. Their spectrophotometric observations were carried out at orbital phases 0.715 and 0.791, and they attributed the wavelength-dependent ultraviolet excess in [UX Ari]{} to free-free emission from hot circumstellar gas in the system. There is thus clear agreement between the results of @rhomb77 and this study on the existence of an ultraviolet excess in [UX Ari]{}.
![The integrated MgII k line fluxes of the components of the [UX Ari]{} system as a function of time. The fluxes are in units of erg cm$^{-2}$ s$^{-1}$.[]{data-label="fig8"}](Fig8_uxari.eps){width="7.5cm"}
![(a) The MgII h line fluxes, and (b) the MgII k line fluxes of the components of the [UX Ari]{} system as a function of orbital phase. The fluxes are in units of erg cm$^{-2}$ s$^{-1}$.[]{data-label="fig9"}](Fig9_uxari.eps){width="7.5cm"}
The integrated UV continuum fluxes of [UX Ari]{} have their lowest level among the long-wavelength, low-dispersion IUE spectra near the epoch of 634; these two images (LWR06329L and LWR06329S) were taken about eleven months after the flare event that occurred on 1979 Jan 1. At the time of the LWR06329 images (1979 Dec 8), the [*V*]{} light curve of [UX Ari]{} had an amplitude of about 0.04 mag [@rave95]. The integrated UV continuum fluxes of [UX Ari]{} near the epoch of 1077 (see Table 3) show the variation with orbital phase and have the highest flux level among the long-wavelength low-dispersion IUE spectra. After the flare event of January 1987, detected from simultaneous IUE and VLA observations [@lang88], during the time interval from 1987 September 29 to 1987 October 1 (the dates of images LWP11747 to LWP11770; see Table 3), the [*V*]{} light curve of [UX Ari]{} had an amplitude of about 0.19 mag [@rave95]. Near the epoch of 1548 (1996 January), the integrated UV continuum fluxes of [UX Ari]{} (see Table 3) show the variation with orbital phase and have lower flux levels than those of the IUE spectra taken in 1987 (near the epoch of 1077). In particular, the three images taken sequentially (LWP31894, LWP31895 and LWP31896) on 1996 Jan 20 show some rise in the UV continuum fluxes near the 0.6 orbital phase. Unfortunately, no photometric, X-ray or radio observations of [UX Ari]{} were made in the time interval of epoch 1548 to enable comparison with these UV continuum fluxes.
![(a) The MgII k radial velocity curves of the [UX Ari]{} system together with the curves of @duem01 (with the (DA) legends), which are listed in their Table 2, and (b) the MgII h radial velocity curves of [UX Ari]{}.[]{data-label="fig10"}](Fig10_uxari.eps){width="7.5cm"}
Similar to the emission-line fluxes of the low-dispersion spectra, the MgII h and k emission-line fluxes of the long-wavelength high-resolution spectra (evaluated from 79 images) vary with orbital phase (Figure 9). There are also some flux increments near the 0.20 and 0.70 orbital phases. From the individual MgII emission-line fluxes of the component stars of [UX Ari]{} (see Figure 9), it was found that the contributions of the G5 V and K0 IV components to the activity of the system are, on average, about 20% and 80%, respectively, although these ratios varied with time and orbital phase. As mentioned in the introduction of this paper, these contributions were estimated at about 25% for G5 V and 75% for K0 IV from 25 images with IUESIPS reduction, which is not much different from the evaluation given above. Therefore, the activity of the system comes not only from the K0 IV component but also partially from the G5 V component of [UX Ari]{}. That is, both components show activity phenomena, with most of the contribution to the activity of the system coming from the K0 IV component. As direct evidence for the activity level of the secondary star of [UX Ari]{}, this result confirms the findings and discussion on the secondary component given by @aarum3, based on the CaII K core emission of the secondary.
Although an absorption feature, observed on the peak of the MgII k profiles of the K0 IV component and shifting together with the emission profile as the star revolves in its orbit, was identified in the IUE spectra with IUESIPS reduction [@ekm93], this absorption feature did not appear in the IUE spectra with NEWSIPS reduction (Figure 6). This discrepancy is likely to originate from the absolute flux errors in the IUESIPS processing summarized by @nichols96.
A flare event was observed on 1979 Jan 1 at 0.062 orbital phase (LWR03344). This flare event also appeared in the short-wavelength low-dispersion IUE image SWP03766, taken on the same date at 0.068 orbital phase. These two images were also studied by @simon80, who gave a plausible explanation for the flare emission: the downflowing material from the K0 IV component onto the G5 V star, with velocities ranging up to 475 km s$^{-1}$, possibly originates in stellar prominences, at the base of coronal loops associated with the active regions on the surface of the K0 IV star, or in material streaming between the stars. Their flux estimates for this flare spectrum were $3.8 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ for MgII k and $3.2 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ for MgII h, slightly different from the values estimated in this study ($4.8 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ for MgII k and $4.7 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ for MgII h). These differences mainly arise from the different reduction procedures (IUESIPS/NEWSIPS). In the Gaussian profile fitting procedure for this flare spectrum (LWR03344), a fourth Gaussian profile (denoted by the brown solid line in Figure 7) was added to match the flare event for both the h and k emission lines. The flare contribution to the integrated emission-line fluxes comes not only from this fourth Gaussian component but also from the G5 V and K0 IV components. The total effect of this flare must be shared, in appropriate amounts, between the G5 V and K0 IV components and the junction of the coronal loops between the component stars of the [UX Ari]{} system, which is located nearer to the G5 V star [see Figure 4 of @simon80].
Comparison of the radial velocities of the MgII k emission-line profiles of the components of the [UX Ari]{} system with the radial velocities of @duem01 and of @carlos71, obtained from the visible spectral range, showed good agreement for the K0 IV component, but the MgII velocities of the G5 V component were, mostly near the quadratures, considerably lower than the velocities obtained from the visible spectral range (Figure 10a). In addition, the MgII velocities of the G5 V component showed much greater scatter than those of the K0 IV component. Given that the effect of interstellar absorption was removed by the Gaussian profile fitting procedure, this large scatter in the velocities of the G5 V component is likely due to physical interaction between the K0 IV and G5 V components, which is seen actively in the UV spectral region. This large scatter, together with the velocities being lower than the visible data, suggests chromospheric activity via a magnetic dynamo producing the active-region loops. Moreover, the chromospheric instability of the G5 V star could be due to interaction between the G5 V and K0 IV components via magnetic coronal loops. The mean value of the velocity $\gamma$ of the centre of mass of the system was found to be $34.62 \pm 4.86$ km s$^{-1}$ from the MgII h and k radial-velocity curves, which is somewhat greater (by about 8 km s$^{-1}$) than the value of 26.5 km s$^{-1}$ of @duem01 and @carlos71.\
All the spectral characteristics of [UX Ari]{} taken together support the model of inhomogeneous gyro-synchrotron emission arising from electrons interacting with inhomogeneous magnetic fields [@mutel87]. Accordingly, the UV emission flux variation with orbital phase in [UX Ari]{} may be strongly correlated with the size and configuration of dark spots (see Vogt & Hatzes 1991 and Aarum & Engvold 2003) related to the magnetic origin of the activity phenomena. Given the contributions of the G5 V and K0 IV components (of order 20% and 80%, respectively) to the MgII activity of the system, it is suggested that the spot distribution be considered not only on the surface of the K0 IV component but also on that of the G5 V component of [UX Ari]{}. Some constraints on the secondary component were given by @aarum3.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Randy Thompson for his kind help in converting the NEWSIPS files to ASCII format using IDL on the IUE account. I also thank Mesut Y[i]{}lmaz and Tolga Çolak for their assistance in compiling the manuscript with LaTeX. Finally, I would like to thank the referee for his/her comments on several points, which improved the results of this study. This research has made use of the Simbad database, operated at CDS, Strasbourg, France, and of NASA’s Astrophysics Data System Bibliographic Services.
[99]{}
Aarum Ulvås, V., Engvold, O., 2003, [A&A]{}, 402, 1043
Athay, R. G., 1965, [ApJ]{}, 142, 755
Bevington, P. R., 1969, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, New York, p. 237
Blackwell, D. E., Lynas-Gray, A. E., 1994, [A&A]{}, 282, 899
Carlos, R. C., Popper, D. M., 1971, [PASP]{}, 83, 504
Duemmler, R., Aarum, V., 2001, [A&A]{}, 370, 974
Ekmekçi, F., 1993, PhD Thesis, Ankara Uni. Graduate School of Natural and Applied Sciences, Dept. of Astronomy and Space Sciences
Gray, D. F., 1992, The Observation and Analysis of Stellar Photospheres, Second Edition, Cambridge Univ. Press, New York, p. 340
Gu, S.-h., Tan, H.-s., Shan, H.-g., Zhang, F.-h., 2002, [A&A]{}, 388, 889
Huenemoerder, D. P., Buzasi, D. L., Ramsey, L. W., 1989, [AJ]{}, 98, 1398
Jordan, C., 1975, [MNRAS]{}, 170, 429
Kohl, J. L., 1977, [ApJ]{}, 211, 958
Lang, K. R., Willson, R. F., 1988, [ApJ]{}, 328, 610
Montes, D., Fernández-Figueroa, M. J., De Castro, E., Cornide, M., 1995a, [A&AS]{}, 109, 135
Montes, D., Fernández-Figueroa, M. J., De Castro, E., Cornide, M., 1995b, [A&A]{}, 294, 165
Mutel, R. L., Morris, D. H., Doiron, D. J., Lestrade, J. F., 1987, [AJ]{}, 93, 1220
Nichols, J. S., Linsky, J. L., 1996, [AJ]{}, 111, 517
Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al., 1997, [A&A]{}, 323L, 49
Raveendran, A. V., Mohin, S., 1995, [A&A]{}, 301, 788
Rego, M., Gonzalez-Riestra, R., Fernández-Figueroa, M. J., 1983, [A&A]{}, 119, 227
Rhombs, C. G., Fix, J. D., 1977, [ApJ]{}, 216, 503
Rodonó, M., 1980, [MmSAI]{}, 51, 623
Simon, T., Linsky, J. L., Schiffer, F. H., 1980, [ApJ]{}, 239, 911
Soubiran, C., Bienaymé, O., Mishenina, T. V., Kovtyukh, V. V., 2008, [A&A]{}, 480, 91
Strassmeier, K. G., Hall, D. S., Zeilik, M., et al., 1988, [A&AS]{}, 72, 291
Strassmeier, K. G., Hall, D. S., Fekel, F. C., Scheck, M., 1993, [A&AS]{}, 100, 173
Vogt, S. S., Hatzes, A. P., 1991, in IAU Coll. No. 130, The Sun and Cool Stars: Activity, Magnetism, Dynamos, eds. Tuominen, I., Moss, D., Rudiger, G., Springer, Berlin Heidelberg New York, p. 297
Zirin, H., 1976, [ApJ]{}, 208, 414
---
abstract: 'We give an overview of recent developments in the problem of reconstructing a band-limited signal from non-uniform sampling from a numerical analysis view point. It is shown that the appropriate design of the finite-dimensional model plays a key role in the numerical solution of the non-uniform sampling problem. In the one approach (often proposed in the literature) the finite-dimensional model leads to an ill-posed problem even in very simple situations. The other approach that we consider leads to a well-posed problem that preserves important structural properties of the original infinite-dimensional problem and gives rise to efficient numerical algorithms. Furthermore a fast multilevel algorithm is presented that can reconstruct signals of unknown bandwidth from noisy non-uniformly spaced samples. We also discuss the design of efficient regularization methods for ill-conditioned reconstruction problems. Numerical examples from spectroscopy and exploration geophysics demonstrate the performance of the proposed methods.'
author:
- 'Thomas Strohmer[^1]'
title: '**Numerical Analysis of the Non-uniform Sampling Problem**'
---
Subject Classification: 65T40, 65F22, 42A10, 94A12\
Keywords: non-uniform sampling, band-limited functions, frames, regularization, signal reconstruction, multi-level method.
Introduction {#s:intro}
============
The problem of reconstructing a signal $f$ from non-uniformly spaced measurements $f(t_j)$ arises in areas as diverse as geophysics, medical imaging, communication engineering, and astronomy. A successful reconstruction of $f$ from its samples $f(t_j)$ requires a priori information about the signal, otherwise the reconstruction problem is ill-posed. This a priori information can often be obtained from physical properties of the process generating the signal. In many of the aforementioned applications the signal can be assumed to be (essentially) band-limited.
Recall that a signal (function) is band-limited with bandwidth $\Omega$ if it belongs to the space $\BOM$, given by $$\BOM = \left\{ f \in \LtR : \hat{f}(\omega) = 0
\,\,\text{for}\,\,|\omega|>\Omega \right\}\,,
\label{bandlim}$$ where $\hat{f}$ is the Fourier transform of $f$ defined by $$\hat{f}(\omega)
= \int \limits_{-\infty}^{+\infty} f(t) e^{-2 \pi i \omega t} \, dt\,.$$ For convenience and without loss of generality we restrict our attention to the case $\Omega = \frac{1}{2}$, since any other bandwidth can be reduced to this case by a simple dilation. Therefore we will henceforth use the symbol $\BO$ for the space of band-limited signals.
It is now more than 50 years since Shannon published his celebrated sampling theorem [@Sha48]. His theorem implies that any signal $f \in \BO$ can be reconstructed from its regularly spaced samples $\{f(n)\}_{n \in \Zst}$ by $$f(t) = \sum_{n \in \Zst} f(n)
\frac{\sin \pi(t - n)}{\pi(t-n)}\,.
\label{shannon}$$
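For illustration, the truncated cardinal series is straightforward to evaluate numerically. The following is a sketch with a synthetic band-limited signal (the helper name and the truncation level are arbitrary choices; `np.sinc` implements $\sin \pi t / \pi t$, matching the kernel above):

```python
import numpy as np

def cardinal_series(f, t, N):
    """Truncated Shannon series: f(t) ~ sum_{|n|<=N} f(n) sinc(t - n)."""
    n = np.arange(-N, N + 1)
    return np.dot(f(n), np.sinc(t - n))

# a band-limited test signal in B_{1/2}: a shifted sinc
f = lambda t: np.sinc(t - 0.3)
err = abs(cardinal_series(f, 1.234, N=200) - f(1.234))
```

The truncation error decays only slowly (like $1/N$ in general), which already hints that the naive finite-dimensional model deserves scrutiny.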
In practice, however, we seldom enjoy the luxury of equally spaced samples. The solution of the nonuniform sampling problem poses many more difficulties, the crucial questions being:
- Under which conditions is a signal $f \in \BO$ uniquely defined by its samples $\{f(t_j)\}_{j \in \Zst}$?
- How can $f$ be stably reconstructed from its samples $f(t_j)$?
These questions have led to a vast literature on nonuniform sampling theory with deep mathematical contributions; see [@DS52; @Lan67; @BM67; @BSS88; @FG94], to mention only a few. There is also no lack of methods claiming to efficiently reconstruct a function from its samples [@Yen56; @YT67; @Ben92; @FGS95; @Win92; @Mar93a; @FG94]. These numerical methods naturally have to operate in a finite-dimensional model, whereas theoretical results are usually derived for the infinite-dimensional space $\BO$. From a numerical point of view the “reconstruction” of a bandlimited signal $f$ from a finite number of samples $\{f(t_j)\}_{j=1}^{r}$ amounts to computing an approximation to $f$ (or $\hat{f}$) at sufficiently densely (regularly) spaced grid points in an interval $(t_1, t_r)$.
Hence in order to obtain a “complete” solution of the sampling problem the following questions have to be answered:
- Does the approximation computed within the finite-dimensional model actually converge to the original signal $f$, when the dimension of the model approaches infinity?
- Does the finite-dimensional model give rise to fast and stable numerical algorithms?
These are the questions that we have in mind when presenting an overview of recent advances and new results in the nonuniform sampling problem from a numerical analysis viewpoint.
In Section \[ss:truncated\] it is demonstrated that the celebrated frame approach leads to fast and stable numerical methods only when the finite-dimensional model is carefully designed. The approach usually proposed in the literature leads to an ill-posed problem even in very simple situations. We discuss several methods to stabilize the reconstruction algorithm in this case. In Section \[ss:trigpol\] we derive an alternative finite-dimensional model, based on trigonometric polynomials. This approach leads to a well-posed problem that preserves important structural properties of the original infinite-dimensional problem and gives rise to efficient numerical algorithms. Section \[s:numeric\] describes how this approach can be modified in order to reconstruct band-limited signals in the practically very important case in which the bandwidth of the signal is not known. Furthermore we present regularization techniques for ill-conditioned sampling problems. Finally, Section \[s:applications\] contains numerical experiments from spectroscopy and geophysics.
Before we proceed we introduce some notation that will be used throughout the paper. If not otherwise mentioned $\|h\|$ always denotes the $\LtR$-norm ($\ltZ$-norm) of a function (vector). For operators (matrices) $\|T\|$ is the standard operator (matrix) norm. The condition number of an invertible operator $T$ is defined by $\kappa (T) = \|T\| \|T^{-1}\|$ and the spectrum of $T$ is $\sigma (T)$. $I$ denotes the identity operator.
Nonuniform sampling, frames, and numerical algorithms {#s:theory}
-----------------------------------------------------
The concept of frames is an excellent tool to study nonuniform sampling problems [@Fei89; @BH90; @Ben92; @Hig96; @FG94; @Zay93]. The frame approach has the advantage that it gives rise to deep theoretical results and also to the construction of efficient numerical algorithms – [*if*]{} (and this point is often ignored in the literature) the finite-dimensional model is properly designed.
Following Duffin and Schaeffer [@DS52], a family $\{\fk\}_{j \in \Zst}$ in a separable Hilbert space $\Hsp$ is said to be a frame for $\Hsp$, if there exist constants (the [*frame bounds*]{}) $A,B>0$ such that $$\label{framedef}
A \|f\|^2 \le \sum_{j} |\langle f, \fk \rangle|^2 \le B \|f\|^2 \,,
\qquad \forall f \in \Hsp.$$ We define the [*analysis operator*]{} $T$ by $$T: f \in \Hsp \rightarrow Ff = \{ \langle f, \fk \rangle\}_{j \in \Zst}\,,
\label{frameanal}$$ and the [*synthesis operator*]{}, which is just the adjoint operator of $T$, by $$T^{\ast}: c \in \ltZ \rightarrow
T^{\ast} c = \sum_{j} c_{j} \fk\,.
\label{framesyn}$$ The [*frame operator*]{} $S$ is defined by $S = T^{\ast} T$, hence $Sf = \sum_{j} \langle f, \fk \rangle \fk$. $S$ is bounded by $A I \le S \le B I$ and hence invertible on $\Hsp$.
We will also make use of the operator $T T^{\ast}$ in form of its Gram matrix representation $R: \ltZ \rightarrow \ltZ $ with entries $R_{j,l} = \langle f_j, f_l \rangle$. On $\range (T) = \range (R)$ the matrix $R$ is bounded by $A I \le R \le B I$ and invertible. On $\ltZ$ this inverse extends to the [*Moore-Penrose inverse*]{} or pseudo-inverse $R^{+}$ (cf. [@EHN96]).
Given a frame $\{\fk\}_{j \in \Zst}$ for $\Hsp$, any $f \in \Hsp$ can be expressed as $$\label{frameexp}
f = \sum_{j \in \Zst} \langle f, \fk \rangle \fdk
= \sum_{j \in \Zst} \langle f, \fdk \rangle \fk \,,$$ where the elements $\fdk :=S^{-1} \fk$ form the so-called dual frame and the frame operator induced by $\fdk$ coincides with $S^{-1}$. Hence if a set $\{\fk\}_{j \in \Zst}$ establishes a frame for $\Hsp$, we can reconstruct any function $f \in \Hsp$ from its moments $\langle f, \fk \rangle$.
One possibility to connect sampling theory to frame theory is by means of the [*sinc*]{}-function $$\sinco(t) = \frac{\sin \pi t}{\pi t}\,.
\label{sinc}$$ Its translates give rise to a [*reproducing kernel*]{} for $\BO$ via $$f(t) = \langle f, \sinco(\cdot - t) \rangle \quad \forall t, f \in \BO\,.
\label{sincconv}$$ Combining this reproducing-kernel property with the frame expansion above, we obtain the following well-known result [@Fei89; @BH90].
If the set $\sincframe$ is a frame for $\BO$, then the function $f \in \BO$ is uniquely defined by the sampling set $\{f(t_j)\}_{j \in \Zst}$. In this case we can recover $f$ from its samples by $$\label{recon1}
f(t) = \sum_{j \in \Zst} f(t_j) \gamma_j \,,
\qquad \text{where}\,\,\, \gamma_j = S^{-1} \sinco(\cdot - t_j)\,,$$ or equivalently by $$\label{recon2}
f(t) = \sum_{j \in \Zst} c_j \sinco(t-t_j)\,,
\qquad \text{where}\,\,\, Rc=b\,,$$ with $R$ being the frame Gram matrix with entries $R_{j,l}= \sinco(t_j - t_l)$ and $b=\{b_j\}=\{f(t_j)\}$.
The challenge is now to find easy-to-verify conditions for the sampling points $t_j$ such that $\sincframe$ (or equivalently the exponential system $\{e^{2 \pi i t_j \omega}\}_{j \in \Zst}$) is a frame for $\BO$. This is a well-traversed area (at least for one-dimensional signals), and the reader should consult [@Ben92; @FG94; @Hig96] for further details and references. Unless otherwise mentioned, from now on we will assume that $\sincframe$ is a frame for $\BO$.
Of course, neither of the reconstruction formulas above can actually be implemented on a computer, because both involve the solution of an infinite-dimensional operator equation, whereas in practice we can only compute a finite-dimensional approximation. Although the design of a valid finite-dimensional model poses severe mathematical challenges, this step is often neglected in theoretical as well as numerical treatments of the nonuniform sampling problem. We will see in the sequel that the way we design our finite-dimensional model is crucial for the stability and efficiency of the resulting numerical reconstruction algorithms.
In the next two sections we describe two different approaches for obtaining finite-dimensional approximations to these formulas. The first and more traditional approach, discussed in Section \[ss:truncated\], applies a finite section method to the Gram matrix equation $Rc=b$. This approach leads to an ill-posed problem involving the solution of a large unstructured linear system of equations. The second approach, outlined in Section \[ss:trigpol\], constructs a finite-dimensional model for the operator equation by means of trigonometric polynomials. This technique leads to a well-posed problem that is tied to efficient numerical algorithms.
Truncated frames lead to ill-posed problems {#ss:truncated}
===========================================
According to the reconstruction formula of the previous section we can reconstruct $f$ from its sampling values $f(t_j)$ via $f(t) = \sum_{j \in \Zst} c_j\, \sinco(t - t_j)$, where $c=\Rp b$ with $b_j = f(t_j), j \in \Zst$. Since $R$ is a self-adjoint operator it can be diagonalized via its singular system (eigensystem) $(\lambda_n, u_n)$ as follows [@EHN96] $$R x = \sum_{n=1}^{\infty} \lambda_n \langle x, u_n \rangle u_n\,,
\label{svdexp}$$ with a corresponding complete orthogonal set of vectors $u_n$. The Moore-Penrose inverse $R^+$ can be expressed as $$R^+ y = \sum_{n=1}^{\infty} \frac{\langle y, u_n \rangle}{\lambda_n} u_n\,,
\label{pinvexp}$$ where, as usual, only the non-zero singular values of $R$ are used in the above sum. In order to compute a finite-dimensional approximation to $c = \{c_j\}_{j \in \Zst}$ we use the finite section method [@GF74]. For $x \in \ltZ$ and $n \in \Nst$ we define the orthogonal projection $\Pn$ by $$\label{defP}
\Pn x = (\dots, 0,0, x_{-n}, x_{-n+1},\dots , x_{n-1}, x_n, 0,0, \dots)$$ and identify the image of $\Pn$ with the space $\Cst^{2n+1}$. Setting $\Rn = \Pn R \Pn$ and $\bn = \Pn b$, we obtain the $n$-th approximation $\cn$ to $c$ by solving $$\Rn \cn = \bn\,.
\label{finitesec}$$
It is clear that using the truncated frame $\{\sinco(\cdot - t_j)\}_{j=-n}^{n}$ in for an approximate reconstruction of $f$ leads to the same system of equations.
If $\sincframe$ is an exact frame (i.e., a Riesz basis) for $\BO$ then we have the following well-known result.
Let $\sincframe$ be an exact frame for $\BO$ with frame bounds $A,B$ and $Rc=b$ and $\Rn \cn = \bn$ as defined above. Then $\Rni$ converges strongly to $\Ri$ and hence $\cn \rightarrow c$ for $\ntoinf$.
Since the proof of this result given in [@Chr96b] is somewhat lengthy we include a rather short proof here.
Note that $R$ is invertible on $\ltZ$ and $A \le R \le B$. Let $x \in \Cst^{2n+1}$ with $\|x\| =1$, then $\langle \Rn x, x \rangle = \langle \Pn R \Pn x, x \rangle =
\langle Rx,x \rangle \ge A$. In the same way we get $\|\Rn \| \le B$, hence the matrices $\Rn$ are invertible and uniformly bounded by $A \le \Rn \le B$ and $$\frac{1}{B} \le \Rni \le \frac{1}{A} \qquad \text{for all} \,\,n \in \Nst.$$ The Lemma of Kantorovich [@RM94] yields that $\Rni \rightarrow \Ri$ strongly.
If $\sincframe$ is a non-exact frame for $\BO$ the situation is more delicate. Let us consider the following situation.
[**Example 1:**]{} Let $f \in \BO$ and let the sampling points be given by $t_j = \frac{j}{m}, j \in \Zst, 1 < m \in \Nst$, i.e., the signal is regularly oversampled at $m$ times the Nyquist rate. In this case the reconstruction of $f$ is trivial, since the set $\{\sinco(\cdot - t_j)\}_{j \in \Zst}$ is a tight frame with frame bounds $A=B=m$. Shannon’s Sampling Theorem implies that $f$ can be expressed as $f(t) = \sum_{j \in \Zst} c_j \, \sinco(t-t_j)$ where $c_j = \frac{f(t_j)}{m}$ and the numerical approximation is obtained by truncating the summation, i.e., $$f_n(t) = \sum_{j =-n}^{n} \frac{f(t_j)}{m}\, \sinco(t-t_j)\,.$$
Using the truncated frame approach one finds that $R$ is a Toeplitz matrix with entries $$R_{j,l}=\frac{\sin\frac{\pi}{m} (j-l)}{\frac{\pi}{m}(j-l)}
\,, \qquad j,l \in \Zst\,,$$ in other words, $\Rn$ coincides with the prolate matrix [@sle78; @Var93]. The unpleasant numerical properties of the prolate matrix are well-documented. In particular we know that the singular values $\lambda_n$ of $\Rn$ cluster around $0$ and $m$, with $\log n$ singular values in the transition region. Since the singular values of $\Rn$ decay exponentially to zero, the finite-dimensional reconstruction problem has become [*severely ill-posed*]{} [@EHN96], although the infinite-dimensional problem is “perfectly posed”, since the frame operator satisfies $S = mI$, where $I$ is the identity operator.
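This spectral behaviour is easy to reproduce numerically. The following sketch (the oversampling factor and section size are arbitrary illustrative choices) builds the prolate-type matrix and inspects its singular values; after normalizing by the frame bound $A=B=m$ they cluster near $1$ and $0$:

```python
import numpy as np

m, n = 2, 50                                   # oversampling factor, half-size
j = np.arange(-n, n + 1)
Rn = np.sinc((j[:, None] - j[None, :]) / m)    # entries sin(pi(j-l)/m)/(pi(j-l)/m)
s = np.linalg.svd(Rn, compute_uv=False)        # descending singular values

s_norm = s / m          # normalized by the frame bound: clusters near 1 and 0
cond = s_norm[0] / s_norm[-1]   # condition number explodes as n grows
```

Already for this modest section size the smallest singular values sit at the level of machine precision, so the finite section is numerically singular even though the underlying infinite-dimensional operator is a multiple of the identity.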
Of course the situation does not improve when we consider non-uniformly spaced samples. In this case it follows from standard linear algebra that $\sigma(R) \subseteq \{0\} \cup [A,B]$, or expressed in words, the non-zero singular values of $R$ are bounded away from zero. However for the truncated matrices $\Rn$ we have $$\sigma (\Rn)\subseteq (0,B]$$ and the smallest of the singular values of $\Rn$ will go to zero for $\ntoinf$, see [@Har98].
Let $A=U\Sigma V^{\ast}$ be the singular value decomposition of a matrix $A$ with $\Sigma = \diag(\{\lambda_{k}\})$. Then the Moore-Penrose inverse of $A$ is $A^+ = V \Sigma^+ U^{\ast}$, where (e.g., see [@GL96]) $$\Sigma^{+} = \diag(\{\lambda_{k}^{+}\})\,, \quad
\lambda_{k}^{+} =
\begin{cases}
1/\lambda_k & \text{if}\,\, \lambda_k \neq 0, \\
0 & \text{otherwise.}
\end{cases}
\label{pinv}$$ For $\Rn = U_n \Sigma_n V_n^{\ast}$ this means that the singular values close to zero will give rise to extremely large coefficients in $\Rnp$. In fact $\|\Rnp\| \toinf$ for $\ntoinf$ and consequently $\cn$ does not converge to $c$.
In practice $\|\Rnp\|$ is always bounded due to finite precision arithmetic, but it is clear that it will lead to meaningless results for large $n$. If the sampling values are perturbed due to round-off error or data error, then those error components which correspond to small singular values $\lambda_k$ are amplified by the (then large) factors $1/\lambda_k$. Although for a given $\Rn$ these amplifications are theoretically bounded, they may be practically unacceptably large. Such phenomena are well known in regularization theory [@EHN96]. A standard technique to compute a stable solution for an ill-conditioned system is the truncated singular value decomposition (TSVD) [@EHN96]. In our case this means we compute a regularized pseudo-inverse $\Rnpe = V_n \Sigma_n^{+,\thresh} U_n^{\ast}$ where $$\Sigma^{+,\thresh} = \diag(\{d_{k}^{+}\})\,, \quad
d_{k}^{+} =
\begin{cases}
1/\lambda_k & \text{if} \,\,\lambda_k \ge \thresh, \\
0 & \text{otherwise.}
\end{cases}
\label{pinvtrunc}$$ In [@Har98] it is shown that for each $n$ we can choose an appropriate truncation level $\thresh$ such that the regularized inverses $\Rnpe$ converge strongly to $\Rp$ for $\ntoinf$ and consequently $\underset{\ntoinf}{\lim} \|f - \fn\| = 0$, where $$\fn(t) = \sum_{j=-n}^{n} c_{j}^{(n,\thresh)} \sinco(t-t_j) \notag$$ with $$c^{(n,\thresh)} = \Rnpe \bn\,. \notag$$ The optimal truncation level $\thresh$ depends on the dimension $n$, the sampling geometry, and the noise level. Thus it is not known a priori and has in principle to be determined for each $n$ independently.
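A minimal sketch of the TSVD-regularized pseudo-inverse (assuming `numpy`; the $3\times 3$ test system is synthetic, with singular values $1$, $0.5$, $10^{-12}$ and the perturbation placed along the worst left singular vector):

```python
import numpy as np

# TSVD regularization: invert only singular values >= tau, zero the rest
# (real matrices for simplicity).
def tsvd_solve(A, b, tau):
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    s_inv[s >= tau] = 1.0 / s[s >= tau]
    return Vt.T @ (s_inv * (U.T @ b))

# Synthetic ill-conditioned system with singular values 1, 0.5, 1e-12.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q1 @ np.diag([1.0, 0.5, 1e-12]) @ Q2.T
x_true = np.array([1.0, 2.0, 3.0])
b_noisy = A @ x_true + 1e-8 * Q1[:, 2]   # noise along the worst direction
x_tsvd = tsvd_solve(A, b_noisy, tau=1e-6)
x_naive = np.linalg.solve(A, b_noisy)    # noise amplified by 1e-8/1e-12
```

The naive solution is off by roughly $10^{-8}/10^{-12} = 10^4$ in norm, while the truncated solution stays of the size of $x$.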
Since $\thresh$ is of vital importance for the quality of the reconstruction, but no theoretical explanations for the choice of $\thresh$ are given in the sampling literature, we briefly discuss this issue. For this purpose we need some results from regularization theory.
Estimation of regularization parameter {#ss:est}
--------------------------------------
0 In realistic situations the samples are usually perturbed by noise, with $$\sum_{j=1}^{r}|f(t_j) - f^{\delta}(t_j)|^2 \le \delta^2$$ where $f^{\delta}(t_j)$ denotes a perturbed sample. The noise level $\delta$ is in practice in general much larger than the truncation level for noise-free data. Thus $\thresh$ will mainly depend on $\delta$. For noise-free data the accuracy is determined by the roundoff level via the machine precision of the computer.
Let $Ax = y^{\delta}$ be given where $A$ is ill-conditioned or singular and $y^{\delta}$ is a perturbed right-hand side with $\|y - y^{\delta}\| \le \delta \|y\|$. Since in our sampling problem the matrix under consideration is symmetric, we assume for convenience that $A$ is symmetric. From a numerical point of view ill-conditioned systems behave like singular systems and additional information is needed to obtain a satisfactory solution to $Ax=y$. This information is usually stated in terms of “smoothness” of the solution $x$. A standard approach to qualitatively describe smoothness of $x$ is to require that $x$ can be represented in the form $x=Sz$ with some vector $z$ of reasonable norm, and a “smoothing” matrix $S$, cf. [@EHN96; @Neu98]. Often it is useful to construct $S$ directly from $A$ by setting $$S = A^p\,, \qquad p \in \Nst_0 \,.
\label{smoothness}$$ Usually, $p$ is assumed to be fixed, typically at $p=1$ or $p=2$.
We compute a regularized solution to $Ax=y^{\delta}$ via a truncated SVD and want to determine the optimal regularization parameter (i.e., truncation level) $\tau$.
Under the assumption that $$x = Sz \, , \quad \|Ax-y^{\delta}\| \le \Delta \|z\|
\label{}$$ it follows from Theorem 4.1 in [@Neu98] that the optimal regularization parameter $\tau$ for the TSVD is $$\hat{\tau}=\left(\frac{\gamma_1 \delta}{\gamma_2 p}\right)^{\frac{1}{p+1}}\,,
\label{optreg}$$ where $\gamma_1=\gamma_2 =1$ (see Section 6 in [@Neu98]).
However $z$ and $\Delta$ are in general not known. Using $\|Ax-y^{\delta}\| \le \delta \|y\|$ and $\|y\|=\|Ax\|=\|A Sz\|=\|A^{p+1} z\|$ we obtain $ \|y\| \le \|A\|^{p+1}\|z\|$. Furthermore, setting $\delta \|y\| = \Delta \|z\|$ implies $$\Delta \le \delta \|A\|^{p+1}\,.
\label{Delta}$$ Hence, combining this bound with the expression for $\hat{\tau}$ above, we get $$\hat{\tau} \le \left( \frac{\delta \|A\|^{p+1}}{p}\right)^{\frac{1}{p+1}}
= \|A\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}\,.
\label{tauest}$$
Applying these results to solving $\Rn \cn = \bn$ via TSVD as described in the previous section, we get $$\hat{\tau} \le \|\Rn\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}
\le \|R\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}
= B \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}\,,
\label{threshopt}$$ where $B$ is the upper frame bound. Fortunately estimates for the upper frame bound are much easier to obtain than estimates for the lower frame bound.
Thus, using the standard settings $p=1$ or $p=2$, a good choice for the regularization parameter is $$\thresh \in [B\,\delta^{1/2},\,B(\delta/2)^{1/3}]\,.
\label{thresh}$$ Extensive numerical simulations confirm this choice, see also Section \[s:applications\].
For instance, for the reconstruction problem of Example 1 with noise-free data and machine precision $\eps = \delta = 10^{-16}$, the estimate above yields $\thresh \in [10^{-8},10^{-6}]$. This coincides very well with numerical experiments.
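In code, the bound reads as follows (a sketch; the name `tsvd_threshold` is ours):

```python
# A-priori bound on the TSVD truncation level:
# tau_hat <= B * (delta/p)**(1/(p+1)), for smoothness orders p = 1, 2.
def tsvd_threshold(B, delta, p):
    return B * (delta / p) ** (1.0 / (p + 1))
```

For $B=1$ and $\delta = 10^{-16}$ this gives $10^{-8}$ ($p=1$) and about $3.7\cdot 10^{-6}$ ($p=2$), matching the interval quoted above.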
If the noise level $\delta$ is not known, it has to be estimated. This difficult problem will not be discussed here. The reader is referred to [@Neu98] for more details.
Although we have arrived now at an implementable algorithm for the nonuniform sampling problem, the disadvantages of the approach described in the previous section are obvious. In general the matrix $\Rn$ does not have any particular structure, thus the computational costs for the singular value decomposition are $\ord(n^3)$, which is prohibitively large in many applications. It is definitely not a good approach to transform a well-posed infinite-dimensional problem into an ill-posed finite-dimensional problem for which a stable solution can only be computed by using a “heavy regularization machinery”.
The methods in [@Yen56; @YT67; @Win92; @San94; @BH90] coincide with or are essentially equivalent to the truncated frame approach, therefore they suffer from the same instability problems and the same numerical inefficiency.
CG and regularization of the truncated frame method {#ss:cgtrunc}
---------------------------------------------------
As mentioned above one way to stabilize the solution of $\Rn \cn = \bn$ is a truncated singular value decomposition, where the truncation level serves as regularization parameter. For large $n$ the costs of the singular value decomposition become prohibitive for practical purposes.
We propose the conjugate gradient method [@GL96] to solve $\Rn \cn = \bn$. It is in general much more efficient than a TSVD (or Tikhonov regularization as suggested in [@Win92]), and at the same time it can also be used as a regularization method.
The standard error analysis for CG cannot be used in our case, since the matrix is ill-conditioned. Rather we have to resort to the error analysis developed in [@NP84; @Han95].
When solving a linear system $Ax=y$ by CG for noisy data $y^{\delta}$ the following happens. The iterates $x_k$ of CG may diverge for $k \rightarrow \infty$, however the error propagation remains limited in the beginning of the iteration. The quality of the approximation therefore depends on how many iterative steps can be performed until the iterates begin to diverge. The idea is to stop the iteration at about the point where divergence sets in. In other words, the iteration count plays the role of the regularization parameter, and it is controlled by an appropriate stopping rule [@Nem86; @Han95].
In our case assume $\|\bnd-\bn\| \le \delta \|\bn\|$, where $b_j^{(n,\delta)}$ denotes a noisy sample. We terminate the CG iterations when the iterates $(\cnd)_k$ satisfy for the first time [@Han95] $$\|\bnd - \Rn (\cnd)_k\| \le \tau \delta \|\bn\|
\label{stopcg}$$ for some fixed $\tau >1$.
It should be noted that one can construct “academic” examples where this stopping rule does not prevent CG from diverging, see [@Han95]; “most of the time”, however, it gives satisfactory results. We refer the reader to [@Nem86; @Han95] for a detailed discussion of various stopping criteria.
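A sketch of CG with this discrepancy stopping rule (assuming `numpy`; for simplicity the test matrix is a small symmetric positive definite diagonal matrix rather than a truncated frame matrix):

```python
import numpy as np

# CG with the discrepancy principle: iterate until the residual drops
# below tau * delta * ||b||  (tau > 1), then stop.
def cg_discrepancy(A, b, delta, tau=1.5, maxiter=200):
    x = np.zeros_like(b)
    r = b.copy()
    d = r.copy()
    tol = tau * delta * np.linalg.norm(b)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x

A = np.diag([3.0, 2.0, 1.0])
b_clean = A @ np.ones(3)
b_noisy = b_clean + np.array([1e-3, 0.0, 0.0])   # ||noise|| <= delta*||b||
x = cg_discrepancy(A, b_noisy, delta=1e-3)
```

On this well-conditioned toy system CG stops after at most three steps with a residual below the discrepancy level; for an ill-conditioned $\Rn$ the same rule terminates the iteration before the noise takes over.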
There is a variety of reasons, besides the ones we have already mentioned, that make the conjugate gradient method and the nonuniform sampling problem a “perfect couple”. See Sections \[ss:trigpol\], \[ss:ml\], and \[ss:regul\] for more details.
By combining the truncated frame approach with the conjugate gradient method (with appropriate stopping rule) we finally arrive at a reconstruction method that is of some practical relevance. However the only existing method at the moment that can handle large scale reconstruction problems seems to be the one proposed in the next section.
Trigonometric polynomials and efficient signal reconstruction {#ss:trigpol}
=============================================================
In the previous section we have seen that the naive finite-dimensional approach via truncated frames is not satisfactory, since it leads to severe stability problems already in the ideal case of regular oversampling. In this section we propose a different finite-dimensional model, which reflects the structural properties of the sampling problem much better, as will be seen below.
The idea is simple. In practice only a finite number of samples $\{f(t_j)\}_{j=1}^{r}$ is given, where without loss of generality we assume $-M \le t_1 < \dots < t_r \le M$ (otherwise we can always re-normalize the data). Since no data of $f$ are available from outside this region we focus on a local approximation of $f$ on $[-M,M]$. We extend the sampling set periodically across the boundaries, and identify this interval with the (properly normalized) torus $\Tst$. To avoid technical problems at the boundaries in the sequel we will choose the interval somewhat larger and consider either $[-M-1/2,M+1/2]$ or $[-N,N]$ with $N= M+\frac{M}{r-1}$. For theoretical considerations the choice $[-M-1/2,M+1/2]$ is more convenient.
Since the dual group of the torus $\Tst$ is $\Zst$, periodic band-limited functions on $\Tst$ reduce to trigonometric polynomials (of course technically $f$ then no longer belongs to $\BO$ since it is no longer in $\LtR$). This suggests using trigonometric polynomials as a realistic finite-dimensional model for a numerical solution of the nonuniform sampling problem. We consider the space $\PM$ of trigonometric polynomials of degree $M$ of the form $$p(t) = (2M+1)^{-1/2} \sum_{k=-M}^{M} a_{k} e^{2\pi i kt/(2M+1)}\,.
\label{pm}$$ The norm of $p \in \PM$ is $$\|p\|^2 =\int \limits_{-M-1/2}^{M+1/2} |p(t)|^2\, dt =\sum_{k=-M}^{M} |a_k|^2 \,.$$ Since the distributional Fourier transform of $p$ is $\hat{p} = (2M+1)^{-1/2} \sum_{k=-M}^{M} a_k \delta_{k/(2M+1)}$ we have $\supp \hat{p} \subseteq \{ k/(2M+1) , |k| \le M\} \subseteq [-1/2, 1/2]$. Hence $\PM$ is indeed a natural finite-dimensional model for $\BO$.
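This Parseval relation is quickly checked numerically, using the $(2M+1)^{-1/2}$ normalization that also appears in the reconstruction algorithm below (a sketch assuming `numpy`):

```python
import numpy as np

# Parseval check for the model space P_M: the L^2 norm of p over one
# period equals the l^2 norm of its coefficient vector.
M = 3
P = 2 * M + 1                                        # period
a = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])    # a_{-M},...,a_M
k = np.arange(-M, M + 1)
t = np.arange(64) * P / 64                           # equispaced grid
p = np.exp(2j * np.pi * np.outer(t, k) / P) @ a / np.sqrt(P)
# the mean over >= 4M+1 equispaced points integrates |p|^2 exactly
integral = P * np.mean(np.abs(p) ** 2)
```

Here `integral` agrees with $\sum_k |a_k|^2 = 44$ up to roundoff.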
In general the $f(t_j)$ are not the samples of a trigonometric polynomial in $\PM$, moreover the samples are usually perturbed by noise, hence we may not find a $p \in \PM$ such that $p(t_j)=b_j = f(t_j)$. We therefore consider the least squares problem $$\underset{p \in \PM }{ \min } \sum_{j=1}^{r} |p(t_j) -b_j|^2 w_j\,.
\label{LSP}$$ Here the $w_j >0$ are user-defined weights, which can be chosen for instance to compensate for irregularities in the sampling geometry [@FGS95].
By increasing $M$ so that $ r \le 2M+1$ we can certainly find a trigonometric polynomial that interpolates the given data exactly. However in the presence of noise, such a solution is usually rough and highly oscillating and may poorly resemble the original signal. We will discuss the question of the optimal choice of $M$ if the original bandwidth is not known and in presence of noisy data in Section \[ss:regul\].
The following theorem provides an efficient numerical reconstruction algorithm. It is also the key for the analysis of the relation between the finite-dimensional approximation in $\PM$ and the solution of the original infinite-dimensional sampling problem in $\BO$.
[@Gro93a; @FGS95] \[th:act\] Given the sampling points $-M \le t_1 < \dots < t_{r} \le M$, samples $\{b_j\}_{j=1}^r$, and positive weights $\{w_j\}_{j=1}^{r}$ with $2M+1 \le r$.\
Step 1: Compute the $(2M+1)\times (2M+1)$ Toeplitz matrix $T_M$ with entries $$(T_M)_{k,l}=\frac{1}{2M+1}\sum_{j=1}^{r} w_j e^{-2\pi i (k-l) t_j/(2M+1)}
\qquad \mbox{for $|k|,|l| \le M$}
\label{toepmat}$$ and $y_M \in \Cst^{(2M+1)}$ by $$\label{rightside}
(y_M)_k=\frac{1}{\sqrt{2M+1}} \sum_{j=1}^{r} b_j w_j e^{-2\pi i k t_j/(2M+1)}
\qquad \mbox{for $|k| \le M$} \,.$$ Step 2: Solve the system $$\label{toepsys}
T_M a_M = y_M \,.$$ Step 3: Then the polynomial $\plsp \in \PM$ that solves is given by $$\plsp(t)=\frac{1}{\sqrt{2M+1}} \sum_{k=-M}^M (a_M)_k e^{2 \pi i kt/(2M+1)} \,.
\label{lsppol}$$
[**Numerical Implementation of Theorem/Algorithm \[th:act\]:**]{}\
Step 1: The entries of $T_M$ and $y_M$ of equations and can be computed in $\ord(M \log M + r \log(1/\eps))$ operations (where $\eps$ is the required accuracy) using Beylkin’s unequally spaced FFT algorithm [@Bey95].\
Step 2: We solve $T_M a_M = y_M$ by the conjugate gradient (CG) algorithm [@GL96]. The matrix-vector multiplication in each iteration of CG can be carried out in $\ord (M \log M)$ operations via FFT [@CN96]. Thus the solution of takes $\ord (k M \log M)$ operations, where $k$ is the number of iterations.\
Step 3: Usually the signal is reconstructed on regularly spaced nodes $\{u_i\}_{i=1}^{N}$. In this case the values $\plsp(u_i)$ can be computed by FFT. For non-uniformly spaced nodes $u_i$ we can again resort to Beylkin’s USFFT algorithm.
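The three steps can be sketched in a few lines (assuming `numpy`; direct $\ord(rM)$ summation and a dense solver replace the USFFT and CG, which is adequate for small sizes; the name `act_reconstruction` is ours):

```python
import numpy as np

# Sketch of Theorem/Algorithm th:act.
def act_reconstruction(t, b, w, M):
    P = 2 * M + 1                                   # period
    k = np.arange(-M, M + 1)
    E = np.exp(-2j * np.pi * np.outer(k, t) / P)    # (2M+1) x r
    T = (E * w) @ E.conj().T / P                    # Step 1: T_M ...
    y = (E * w) @ b / np.sqrt(P)                    # ... and y_M
    a = np.linalg.solve(T, y)                       # Step 2: T_M a_M = y_M
    def p(u):                                       # Step 3: the polynomial
        return np.exp(2j * np.pi * np.outer(np.atleast_1d(u), k) / P) @ a / np.sqrt(P)
    return a, p

# Exact recovery of a degree-2 polynomial from 9 nonuniform samples:
M, P = 2, 5
k = np.arange(-M, M + 1)
t = np.array([-2.3, -1.7, -1.1, -0.6, 0.05, 0.7, 1.2, 1.8, 2.2])
a_true = np.array([1.0, 2.0, 0.5, 2.0, 1.0], dtype=complex)
b = np.exp(2j * np.pi * np.outer(t, k) / P) @ a_true / np.sqrt(P)
a, p = act_reconstruction(t, b, np.ones(len(t)), M)
```

With exact (noise-free) samples of a polynomial in $\PM$ and $r \ge 2M+1$ sufficiently dense nodes the least squares fit reproduces the coefficients exactly.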
There exists a large number of fast algorithms for the solution of Toeplitz systems. Probably the most efficient algorithm in our case is CG. We have already mentioned that the Toeplitz system can be solved in $\ord (kM \log M)$ operations via CG. The number of iterations $k$ depends essentially on the clustering of the eigenvalues of $T_M$, cf. [@CN96]. It follows from the computation in Example 1 below and perturbation theory [@Chr96a] that, if the sampling points stem from a perturbed regular sampling set, the eigenvalues of $T_M$ will be clustered around $\beta$, where $\beta$ is the oversampling rate. In such cases we can expect a very fast rate of convergence. The simple frame iteration [@Mar93a; @Ben92] is not able to take advantage of such a situation.
For the analysis of the relation between the solution $\plsp$ of Theorem \[th:act\] and the solution $f$ of the original infinite-dimensional problem we follow Gröchenig [@Gro99]. Assume that the samples $\{f(t_j)\}_{j \in \Zst}$ of $f \in \BO$ are given. For the finite-dimensional approximation we consider only those samples $f(t_j)$ for which $t_j$ is contained in the interval $[-M-\frac{1}{2}, M+\frac{1}{2}]$ and compute the least squares approximation $\plsp$ with degree $M$ and period $2M+1$ as in Theorem \[th:act\]. It is shown in [@Gro99] that if $\sigma (T_M) \subseteq [\alpha, \beta]$ for all $M$ with $\alpha >0$ then $$\underset{M \toinf}{\lim} \int \limits_{[-M, M]} |f(t) - \plsp(t)|^2 \, dt =0 ,
\label{trigconv}$$ and also $\lim \plsp(t) = f(t)$ uniformly on compact sets.
Under the Nyquist condition $\gamma := \sup_j (t_{j+1}-t_{j}) < 1$ and using weights $w_j = (t_{j+1}-t_{j-1})/2$, Gröchenig has shown that $$\sigma (T_M) \subseteq [(1-\gamma)^2, 6]\,,
\label{condest}$$ independently of $M$, see [@Gro99]. These results validate the usage of trigonometric polynomials as finite-dimensional model for nonuniform sampling.
[**Example 1 – reconsidered:**]{} Recall that in Example 1 of Section \[ss:truncated\] we have considered the reconstruction of a regularly oversampled signal $f \in \BO$. What does the reconstruction method of Theorem \[th:act\] yield in this case? Let us check the entries of the matrix $T_M$ when we take only those samples in the interval $[-n,n]$. The period of the polynomial becomes $2N$ with $N=n+\frac{n}{r-1}$ where $r$ is the number of given samples. Then $$(T_M)_{k,l} = \frac{1}{2N} \sum_{j=1}^{r} e^{2\pi i (k-l)t_j/(2N)}
= \frac{m}{2nm+1}\sum_{j=-nm}^{nm} e^{2\pi i (k-l) \frac{j}{2nm+1}} = m\,\delta_{k,l}
\label{kron}$$ for $k,l = -M,\dots,M$, where $\delta_{k,l}$ is Kronecker’s symbol with the usual meaning $\delta_{k,l}=1$ if $k=l$ and $0$ else. Hence we get $$T_M = m I\,,$$ where $I$ is the identity matrix on $\Cst^{2M+1}$, thus $T_M$ resembles the structure of the infinite-dimensional frame operator $S$ in this case (including exact approximation of the frame bounds). Recall that the truncated frame approach leads to an “artificial” ill-posed problem even in such a simple situation.
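This computation is easy to verify numerically (a sketch assuming `numpy`; e.g. $n=4$, $m=3$, $M=5$):

```python
import numpy as np

# For the regularly oversampled set t_j = j/m, j = -nm,...,nm, on [-N, N]
# with N = n + 1/(2m) (so the period is 2N = (2nm+1)/m), the matrix T_M
# reduces to m times the identity, mirroring S = mI.
def toeplitz_regular(n, m, M):
    t = np.arange(-n * m, n * m + 1) / m
    twoN = (2 * n * m + 1) / m
    k = np.arange(-M, M + 1)
    E = np.exp(-2j * np.pi * np.outer(k, t) / twoN)
    return E @ E.conj().T / twoN

T = toeplitz_regular(4, 3, 5)
```

Here $T_M$ equals $3\,I$ on $\Cst^{11}$ up to roundoff, in contrast to the severely ill-conditioned prolate matrix of the truncated frame approach.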
The advantages of the trigonometric polynomial approach over the truncated frame approach are manifold. With the truncated frame approach we have to deal with an ill-posed problem which has no specific structure, hence its solution is numerically very expensive. With the trigonometric polynomial approach we have to solve a problem with rich mathematical structure, whose stability depends only on the sampling density, a situation that resembles the original infinite-dimensional sampling problem.
In principle the coefficients $\alsp=\{\alspk\}_{k=-M}^{M}$ of the polynomial $\plsp$ that minimizes could also be computed by directly solving the Vandermonde type system $$WV \alsp = W b\,,
\label{vandermonde}$$ where $V_{j,k}=\frac{1}{\sqrt{2M+1}} e^{-2 \pi i k t_j/(2M+1)}$ for $j=1,\dots, r,\, k=-M,\dots, M$ and $W$ is a diagonal matrix with entries $W_{j,j}= \sqrt{w_j}$, cf. [@RAG91]. Several algorithms are known for a relatively efficient solution of Vandermonde systems [@BP70; @RAG91]. However this is one of the rare cases, where, instead of directly solving , it is advisable to explicitly establish the system of normal equations $$T_M a_M = y_M\,,
\label{normal}$$ where $T_M=V^{\ast} W^2 V$ and $y_M = V^{\ast} W^2 b$.
The advantages of considering the system $T_M a_M = y_M$ instead of the Vandermonde system are manifold:
- The matrix $T_M$ plays a key role in the analysis of the relation of the solution of and the solution of the infinite-dimensional sampling problem , see and above.
- $T_M$ is of size $(2M+1) \times (2M+1)$, independently of the number of sampling points. Moreover, since $(T_M)_{k,l}=\frac{1}{2M+1}\sum_{j=1}^{r} w_j e^{-2\pi i (k-l) t_j/(2M+1)}$ depends only on the difference $k-l$, it is of Toeplitz type. These facts give rise to fast and robust reconstruction algorithms.
- The resulting reconstruction algorithms can be easily generalized to higher dimensions, see Section \[ss:multi\]. Such a generalization to higher dimensions seems not to be straightforward for fast solvers of Vandermonde systems such as the algorithm proposed in [@RAG91].
An interesting finite-dimensional model is proposed in [@FLS98]. The Bernstein-Boas formula yields an explicit way to reconstruct a function $f \in \BO$ from its (sufficiently dense) nonuniform samples $\{f(t_k)\}_{k \in \Zst}$, cf. [@Sei95]. This formula involves the numerically intractable computation of infinite products. However, since only a finite number of samples can be used in a numerical reconstruction, one may assume that the sequence of sampling points has regular structure outside a finite interval. This allows one to replace the infinite products by finite products, which yields an approximation formula for $f$ that uses only the samples $f(t_n)$ with $t_n$ close to $t$, together with an estimate for the approximation error [@FLS98].
Although their approach is computationally more expensive than the algorithm proposed in Section \[ss:trigpol\], it may be an attractive alternative if only a small number of samples in a short interval $[0,L]$ is available and if at the same time the signal to be reconstructed is “strongly” non-periodic on $[0,L]$.
We point out that other finite-dimensional approaches are proposed in [@FLS98; @CC98]. These approaches may provide interesting alternatives in the few cases where the algorithm outlined in Section \[ss:trigpol\] does not lead to good results. These cases occur when only a few samples of the signal $f$ are given in an interval $[a,b]$ say, and at the same time we have $|f(a) - f(b)| \gg 0$ and $|f'(a) - f'(b)| \gg 0$, i.e., if $f$ is “strongly non-periodic” on $[a,b]$. However the computational complexity of the methods in [@FLS98; @CC98] is significantly larger.
Multi-dimensional nonuniform sampling {#ss:multi}
-------------------------------------
The approach presented above can be easily generalized to higher dimensions by a diligent book-keeping of the notation. We consider the space of $d$-dimensional trigonometric polynomials $\PMd$ as finite-dimensional model for $\BO^d$. For given samples $f(t_j)$ of $f \in \BO^d$, where $t_j \in \Rdst$, we compute the least squares approximation $\plsp$ similar to Theorem \[th:act\] by solving the corresponding system of equations $T_M a_M = y_M$.
In 2-D for instance the matrix $T_M$ becomes a block Toeplitz matrix with Toeplitz blocks [@Str97]. For a fast computation of the entries of $T$ we can again make use of Beylkin’s USFFT algorithm [@Bey95]. And similar to 1-D, multiplication of a vector by $T_M$ can be carried out by 2-D FFT.
Also the relation between the finite-dimensional approximation in $\PMd$ and the infinite-dimensional solution in $\BO^{d}$ is similar as in 1-D. The only mathematical difficulty is to give conditions under which the matrix $T_M$ is invertible. Since the fundamental theorem of algebra does not hold in dimensions larger than one, the condition $(2M+1)^d \le r$ is necessary but no longer sufficient for the invertibility of $T_M$. Sufficient conditions for the invertibility, depending on the sampling density, are presented in [@Gro99a].
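A small sketch of the 2-D structure (assuming `numpy`; the name `toeplitz_2d` is ours, and direct summation replaces the USFFT): the entries depend only on the index difference $k-l$, so the matrix consists of equal diagonal blocks which are themselves Toeplitz.

```python
import numpy as np

# 2-D analogue of T_M: rows/columns indexed by k = (k1,k2) with
# |k1|,|k2| <= M, flattened row-major; entries depend only on k - l,
# giving a block Toeplitz matrix with Toeplitz blocks.
def toeplitz_2d(t, w, M):
    P = 2 * M + 1
    k = np.arange(-M, M + 1)
    K1, K2 = [g.ravel() for g in np.meshgrid(k, k, indexing="ij")]
    E = np.exp(-2j * np.pi *
               (np.outer(K1, t[:, 0]) + np.outer(K2, t[:, 1])) / P)
    return (E * w) @ E.conj().T / P ** 2

rng = np.random.default_rng(1)
t = rng.uniform(-1.5, 1.5, size=(40, 2))
T = toeplitz_2d(t, np.ones(40), M=1)       # 9 x 9, with 3 x 3 blocks
```

Multiplication by such a matrix can be carried out by 2-D FFT, exactly as in the 1-D case.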
Bandwidth estimation and regularization {#s:numeric}
=======================================
In this section we discuss several numerical aspects of nonuniform sampling that are very important from a practical viewpoint, although only a few answers to these problems can be found in the literature.
A multilevel signal reconstruction algorithm {#ss:ml}
--------------------------------------------
In almost all theoretical results and numerical algorithms for reconstructing a band-limited signal from nonuniform samples it is assumed that the bandwidth is known a priori. This information however is often not available in practice.
A good choice of the bandwidth for the reconstruction algorithm becomes crucial in case of noisy data. It is intuitively clear that choosing too large a bandwidth leads to over-fitting of the noise in the data, while too small a bandwidth yields a smooth solution that under-fits the data. And of course we want to avoid determining the “correct” $\Omega$ by trial-and-error methods. Hence the problem is to design a method that can reconstruct a signal from non-uniformly spaced, noisy samples without requiring a priori information about the bandwidth of the signal.
The multilevel approach derived in [@SS97] provides an answer to this problem. The approach applies to an infinite-dimensional as well as to a finite-dimensional setting. We describe the method directly for the trigonometric polynomial model, where the determination of the bandwidth $\Omega$ translates into the determination of the polynomial degree $M$ of the reconstruction. The idea of the multilevel algorithm is as follows.
Let the noisy samples $\{b^{\delta}_j\}_{j=1}^{r}=\{f^{\delta}(t_j)\}_{j=1}^r$ of $f \in \BO$ be given with $\sum_{j=1}^{r}|f(t_j)-b^{\delta}(t_j)|^2 \le \delta^2 \|b^{\delta}\|^2$ and let $Q_M$ denote the orthogonal projection from $\BO$ into $\PM$. We start with initial degree $M=1$ and run Algorithm \[th:act\] until the iterates $p_{1,k}$ satisfy for the first time the [*inner*]{} stopping criterion $$\sum_{j=1}^{r}|p_{1,k}(t_j) - b^{\delta}_j|^2 \le
2 \tau (\delta \|b^{\delta}\| + \|Q_1 f - f\|)\|b^{\delta}\|
\notag
$$ for some fixed $\tau >1$. Denote this approximation (at iteration $k_*$) by $p_{1,k_*}$. If $p_{1,k_*}$ satisfies the [*outer*]{} stopping criterion $$\sum_{j=1}^{r}|p_{1,k_*}(t_j) - b^{\delta}_j|^2 \le
2 \tau \delta \|b^{\delta}\|^2
\label{stopout}$$ we take $p_{1,k_*}$ as final approximation. Otherwise we proceed to the next level $M=2$ and run Algorithm \[th:act\] again, using $p_{1,k_*}$ as initial approximation by setting $p_{2,0} =p_{1,k_*}$.
At level $M=N$ the inner level-dependent stopping criterion becomes $$\sum_{j=1}^{r}|p_{N,k}(t_j) - b^{\delta}_j|^2 \le
2 \tau (\delta \|b^{\delta}\| + \|Q_N f - f\|)\|b^{\delta}\|,
\label{stopin}$$ while the outer stopping criterion does not change since it is level-independent.
The inner stopping rule guarantees that the iterates of CG do not diverge. It also ensures that CG does not iterate too long at a certain level, since if $M$ is too small further iterations at this level will not lead to a significant improvement; therefore we switch to the next level. The outer stopping criterion controls over-fit and under-fit of the data, since in the presence of noisy data it does not make sense to ask for a solution $p_M$ that satisfies $\sum_{j=1}^{r}|p_{M}(t_j) - b^{\delta}_j|^2=0$.
Since the original signal $f$ is not known, the expression $\|f - Q_N f\|$ in the inner stopping criterion cannot be computed. In [@SS97] the reader can find an approach to estimate $\|f - Q_N f\|$ recursively.
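A simplified sketch of the multilevel loop (assuming `numpy`; we solve each level directly by least squares instead of a few CG steps, drop the unknown term $\|Q_M f - f\|$ from the inner criterion, and keep the period fixed by the sampling interval):

```python
import numpy as np

# Raise the degree M until the outer discrepancy criterion
# sum |p_M(t_j) - b_j|^2 <= 2*tau*delta*||b||^2 holds.
def multilevel_fit(t, b, delta, period, M_max=20, tau=1.5):
    target = 2 * tau * delta * np.linalg.norm(b) ** 2
    for M in range(1, M_max + 1):
        k = np.arange(-M, M + 1)
        V = np.exp(2j * np.pi * np.outer(t, k) / period) / np.sqrt(period)
        a, *_ = np.linalg.lstsq(V, b, rcond=None)
        if np.sum(np.abs(V @ a - b) ** 2) <= target:
            break
    return M, a

# A degree-3 test polynomial, 60 jittered samples, 0.5% relative noise:
P = 8.0
t = np.linspace(-4, 4, 60, endpoint=False) + 0.03 * np.sin(np.arange(60))
k3 = np.arange(-3, 4)
a_true = np.array([4, 1, 1, 1, 1, 1, 4], dtype=complex)
b_clean = np.exp(2j * np.pi * np.outer(t, k3) / P) @ a_true / np.sqrt(P)
noise = np.cos(np.arange(60))
b = b_clean + 0.005 * np.linalg.norm(b_clean) * noise / np.linalg.norm(noise)
M_found, a_fit = multilevel_fit(t, b, delta=0.01, period=P)
```

On this synthetic example the loop stops at the correct degree: at $M<3$ the residual is dominated by the unresolved $k=\pm 3$ components, while at $M=3$ only the noise remains.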
Solution of ill-conditioned sampling problems {#ss:regul}
---------------------------------------------
A variety of conditions on the sampling points $\{t_j\}_{j \in \Zst}$ are known under which the set $\sincframe$ is a frame for $\BO$, which in turn implies (at least theoretically) perfect reconstruction of a signal $f$ from its samples $f(t_j)$. This does however not guarantee a stable reconstruction from a numerical viewpoint, since the ratio of the frame bounds $B/A$ can still be extremely large and therefore the frame operator $S$ can be ill-conditioned. This may happen for instance if the maximal gap $\gamma$ between successive sampling points approaches $1$, in which case $\cond(T)$ may become large. The sampling problem may also become numerically unstable or even ill-posed if the sampling set has large gaps, which is very common in astronomy and geophysics. Note that in this case the instability of the system $T_M a_M = y_M$ does [*not*]{} result from an inadequate discretization of the infinite-dimensional problem.
There exists a large number of (circulant) Toeplitz preconditioners that could be applied to the system $T_M a_M = y_M$, however it turns out that they do not improve the stability of the problem in this case. The reason lies in the distribution of the eigenvalues of $T_M$, as we will see below.
Following [@Tyr96], we call two sequences of real numbers $\{\lambda^{(n)}_{k}\}_{k=1}^{n}$ and $\{\nu^{(n)}_{k}\}_{k=1}^{n}$ [*equally distributed*]{}, if $$\underset{\ntoinf}{\lim} \frac{1}{n} \sum_{k=1}^{n}
[F(\lambda^{(n)}_{k}) - F(\nu^{(n)}_{k}) ] = 0
\label{defdist}$$ for any continuous function $F$ with compact support.
Let $C$ be an $(n \times n)$ circulant matrix with first column $(c_0,\dots,c_{n-1})$; we write $C = \circ (c_0,\dots,c_{n-1})$. The eigenvalues of $C$ are given by $\lambda_k = \sum_{l=0}^{n-1} c_l e^{2\pi i kl/n}$. Observe that the Toeplitz matrix $A_n$ with first column $(a_0,a_1,\dots, a_n)$ can be embedded in the circulant matrix $$C_n =\circ (a_0,a_1,\dots, a_n, \bar{a_n},\dots, \bar{a_1})\,.
\label{circembed}$$ Theorems 4.1 and 4.2 in [@Tyr96] state that the eigenvalues of $A_n$ and $C_n$ are equally distributed as $f(x)$ where $$f(x) = \sum_{k=-\infty}^{\infty} a_k e^{2 \pi i kx}\,.
\label{fcirc}$$ The partial sum of the series is $$f_n(x) = \sum_{k=-n}^{n} a_k e^{2 \pi i kx}\,.
\label{fcircm}$$
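The embedding argument rests on the fact that the eigenvalues of a circulant matrix are the DFT of its first column, which is easy to verify (a sketch assuming `numpy`):

```python
import numpy as np

# Eigenvalues of circ(c_0,...,c_{n-1}): the DFT of the first column.
def circ(c):
    n = len(c)
    return np.array([[c[(j - l) % n] for l in range(n)] for j in range(n)])

c = np.array([3.0, 1.0, 0.0, 1.0])        # symmetric: real eigenvalues
lam_fft = np.fft.fft(c)                   # lambda_k = sum_l c_l e^{-2pi i kl/n}
lam_eig = np.linalg.eigvals(circ(c))
```

Both computations give the eigenvalue set $\{5,3,3,1\}$, i.e. the values $3 + 2\cos(\pi k/2)$ for $k=0,\dots,3$.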
To understand the clustering behavior of the eigenvalues of $T_M$ in case of sampling sets with large gaps, we consider a sampling set in $[-M,M)$ that consists of one large block of samples and one large gap, i.e., $t_j = \frac{j}{Lm}$ for $j=-mM,\dots,mM$ with $m,L \in \Nst$. (Recall that we identify the interval with the torus.) Then the entries $z_k$ of the Toeplitz matrix $T_M$ (with $w_j=1$) are $$z_k=\frac{1}{2M+1}\sum_{j=-mM}^{mM} e^{-2\pi i k \frac{j}{Lm}/(2M+1)},
\quad k=0,\dots,2M\,.$$ To investigate the clustering behavior of the eigenvalues of $T_M$ for $M \toinf$, we embed $T_M$ in a circulant matrix $C_M$ as above. Evaluating the partial sum at the points $x_k = k/(4M+1)$ gives $$f_{mM}(x_k) = \frac{1}{Lm(2M+1)}\sum_{l=-mM}^{mM} \sum_{j=-mM}^{mM}
e^{2 \pi il [k/(4M+1) - j/((2M+1)mL)]}$$ whence $f_{mM} \rightarrow {\bf 1}_{[-1/(2L),1/(2L)]}$ for $M \toinf$, where ${\bf 1}_{[-a,a]}(x) = 1$, if $-a < x < a$ and 0 else.
Thus the eigenvalues of $T_M$ are asymptotically clustered around zero and one. For general nonuniform sampling sets with large gaps the clustering at 1 will disappear, but of course the spectral cluster at 0 will remain. In this case it is known that the preconditioned problem will still have a spectral cluster at the origin [@YC93] and preconditioning will not be efficient.
Fortunately there are other possibilities to obtain a stabilized solution of $T_M a_M = y_M$. The condition number of $T_M$ essentially depends on the ratio of the maximal gap in the sampling set to the Nyquist rate, which in turn depends on the bandwidth of the signal. We can improve the stability of the system by adapting the degree $M$ of the approximation accordingly. Thus the parameter $M$ serves as a regularization parameter that balances stability and accuracy of the solution. This technique can be seen as a specific realization of [*regularization by projection*]{}, see Chapter 3 in [@EHN96]. In addition, as described in Section \[ss:cgtrunc\], we can utilize CG as a regularization method for the solution of the Toeplitz system in order to balance approximation error and propagated error. The multilevel method introduced in Section \[ss:ml\] combines both features. By optimizing the level (bandwidth) and the number of iterations in each level it provides an efficient and robust regularization technique for ill-conditioned sampling problems. See Section \[s:applications\] for numerical examples.
In many applications the physical process that generates the signal implies not only that the signal is (essentially) band-limited but also that the spectrum of the signal has a certain rate of decay. For instance geophysical potential fields have exponentially decaying Fourier transforms. This a priori knowledge can be used to improve the accuracy of the approximation. Instead of the usual regularization methods, such as Tikhonov regularization, we propose a different, computationally much more efficient method.
Assume that the decay of the Fourier transform of $f$ can be bounded by $|\hat{f}(\omega)| \le \phi(\omega)$. Typical choices in practice are $\phi(\omega) = e^{-C|\omega|}$ or $\phi(\omega) = C(1+|\omega|^2)^{-1}$. For a given system $T_M a = y$ define the diagonal matrix $P$ by $P_{l,l} = \phi(l)$. Instead of solving $Ta=y$ we consider the “weighted problem” $$P T a = Py
\label{precond1}$$ or $$TP c = y\,,\qquad a = Pc
\label{precond2}$$ In the first case the solution is $$a_P = (PT)^+ Py$$ and in the second case we have $$a_P = P (TP)^+ y \,.$$ Of course, if $T$ is invertible both solutions coincide with the solution of $Ta=y$. However if $T$ is not invertible, then both equations lead to a weighted minimal norm least squares solution. Note that $P$ is not chosen to minimize the condition number of the problem, since as outlined above standard preconditioning will not work in this case.
Both systems can be solved by conjugate gradient methods, hence the computational effort of such an approach is of the same order as for Algorithm \[th:act\]. A detailed numerical analysis of the convergence properties of this approach has still to be completed. For a numerical example see Section \[ss:geo\].
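A sketch of the weighted approach (assuming `numpy`; the $3\times 3$ system and the decay model $\phi(l)=e^{-|l|}$ are synthetic): for invertible $T$ both weighted solutions reduce to the solution of $Ta=y$.

```python
import numpy as np

# P = diag(phi(l)) encodes the assumed spectral decay; the two weighted
# systems give the same solution as T a = y whenever T is invertible.
def weighted_solve(T, y, phi):
    M = T.shape[0] // 2
    P = np.diag(phi(np.arange(-M, M + 1)))
    a1 = np.linalg.pinv(P @ T) @ (P @ y)     # solve (P T) a = P y
    a2 = P @ (np.linalg.pinv(T @ P) @ y)     # solve (T P) c = y, a = P c
    return a1, a2

T = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.3],
              [0.0, 0.3, 2.0]])
y = np.array([1.0, 2.0, 3.0])
a1, a2 = weighted_solve(T, y, lambda l: np.exp(-np.abs(l)))
a_ref = np.linalg.solve(T, y)
```

The interesting case is a singular or severely ill-conditioned $T$, where the two pseudo-inverse solutions become weighted minimal norm least squares solutions.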
Applications {#s:applications}
============
We present two numerical examples to demonstrate the performance of the described methods. The first one concerns a 1-D reconstruction problem arising in spectroscopy. In the second example we approximate the Earth’s magnetic field from noisy scattered data.
An example from spectroscopy {#ss:spectro}
----------------------------
The original spectroscopy signal $f$ is known at 1024 regularly spaced points $t_j$. This discrete sampling sequence will play the role of the original continuous signal. To simulate the situation of a typical experiment in spectroscopy we consider only 107 randomly chosen sampling values of the given sampling set. Furthermore we add noise to the samples with noise level (normalized by division by $\sum_{j=1}^{1024}|f(t_j)|^2$) of $\delta=0.1$. Since the samples are contaminated by noise, we cannot expect to recover the (discrete) signal $f$ completely. The bandwidth is approximately $\Omega =5$, which translates into a polynomial degree of $M \approx 30$. Note that in general $\Omega$ (and hence $M$) may not be available. We will also consider this situation, but in the first experiments we assume that we know $\Omega$. The error between the original signal $f$ and an approximation $f_n$ is measured by computing $\|f- f_n\|^2/\|f\|^2$.
First we apply the truncated frame method with regularized SVD as described in Section \[ss:truncated\]. We choose the truncation level for the SVD via formula . This is the optimal truncation level in this case, providing an approximation with least squares error $0.0944$. Figure \[fig:spect\](a) shows the reconstructed signal together with the original signal and the noisy samples. Without regularization we get a much worse “reconstruction” (which is not displayed).
We apply CG to the truncated frame method, as proposed in Section \[ss:cgtrunc\], with stopping criterion (for $\tau =1$). The algorithm terminates after only 3 iterations. The reconstruction error of $0.1097$ is slightly higher than for the truncated SVD (see also Figure \[fig:spect\](b)), but the computational effort is much smaller.
Algorithm \[th:act\] (with $M=30$) also terminates after 3 iterations. The reconstruction is shown in Figure \[fig:spect\](c); the least squares error ($0.0876$) is slightly smaller than for the truncated frame method, while the computational effort is significantly smaller.
We also simulate the situation where the bandwidth is not known a priori and demonstrate the importance of a good estimate of the bandwidth. We apply Algorithm \[th:act\] using too small a degree ($M = 11$) and too large a degree ($M = 40$). (We get qualitatively the same results when using a too small or too large bandwidth in the truncated frame method.) The approximations are shown in Figs. \[fig:spect\](d) and (e); the approximation errors are $0.4648$ and $0.2805$, respectively. Now we apply the multilevel algorithm of Section \[ss:ml\], which does not require any initial choice of the degree $M$. The algorithm terminates at “level” $M=22$; the approximation is displayed in Fig. \[fig:spect\](f). The error is $0.0959$, thus within the error bound $\delta$, as desired. Hence, without requiring explicit information about the bandwidth, we are able to obtain the same accuracy as for the methods above.
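The multilevel principle used here can be paraphrased in a few lines (a schematic sketch, not the actual algorithm of Section \[ss:ml\]; `fit_at_degree` stands for any solver returning the degree-$M$ least squares approximant at the sampling points, and the toy data below are our own):

```python
import numpy as np

def multilevel_degree(fit_at_degree, y_noisy, delta, M_max=50):
    """Increase the polynomial degree M level by level and stop as soon as
    the residual at the sampling points drops to the noise level delta.
    Returns the selected degree and the fitted values."""
    for M in range(1, M_max + 1):
        y_fit = fit_at_degree(M)
        if np.linalg.norm(y_fit - y_noisy) <= delta:
            return M, y_fit
    return M_max, y_fit

# Toy usage: ordinary polynomial fit of noisy samples of a smooth signal.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, 200))
noise = 0.01 * rng.standard_normal(200)
y = np.cos(2 * np.pi * t) + noise
delta = 1.1 * np.linalg.norm(noise)

M_sel, y_fit = multilevel_degree(
    lambda M: np.polyval(np.polyfit(t, y, M), t), y, delta)
```

Stopping at the noise level prevents the degree from growing into the ill-conditioned regime, which is exactly the balancing of stability and accuracy described above.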
Approximation of geophysical potential fields {#ss:geo}
---------------------------------------------
Exploration geophysics relies on surveys of the Earth’s magnetic field for the detection of anomalies which reveal underlying geological features. Geophysical potential-field data are generally observed at scattered sampling points. Geoscientists, used to looking at their measurements on maps or profiles and aiming at further processing, therefore need a representation of the originally irregularly spaced data on a regular grid.
The reconstruction of a 2-D signal from its scattered data is thus one of the first and crucial steps in geophysical data analysis, and a number of practical constraints such as measurement errors and the huge amount of data make the development of reliable reconstruction methods a difficult task.
It is known that the Fourier transform of a geophysical potential field $f$ has decay $|\hat{f}(\omega)| = \ord (e^{-|\omega|})$. This rapid decay implies that $f$ can be very well approximated by band-limited functions [@RS98]. Since in general we may not know the (essential) bandwidth of $f$, we can use the multilevel algorithm proposed in Section \[ss:ml\] to reconstruct $f$.
The multilevel algorithm also takes care of the following problem. Geophysical sampling sets are often highly anisotropic, and large gaps in the sampling geometry are very common. These large gaps can make the reconstruction problem ill-conditioned or even ill-posed. As outlined in Section \[ss:regul\], the multilevel algorithm iteratively determines the optimal bandwidth that balances the stability and accuracy of the solution.
Figure \[fig:geo\](a) shows a synthetic gravitational anomaly $f$. The spectrum of $f$ decays exponentially, thus the anomaly can be well represented by a band-limited function, using a “cut-off level” of $|\hat{f}(\omega)| \le 0.01$ for the essential bandwidth of $f$.
We have sampled the signal at 1000 points $(u_j,v_j)$ and added 5% random noise to the sampling values $f(u_j,v_j)$. The sampling geometry – shown in Figure \[fig:geo\] as black dots – exhibits several features one encounters frequently in exploration geophysics [@RS98]. The essential bandwidth of $f$ would suggest choosing a polynomial degree of $M=12$ (i.e., $(2M+1)^2 = 625$ spectral coefficients). With this choice of $M$ the corresponding block Toeplitz matrix $T_M$ would become ill-conditioned, making the reconstruction problem unstable. As mentioned above, in practice we usually do not know the essential bandwidth of $f$. Hence we will not make use of this knowledge in order to approximate $f$.
We apply the multilevel method to reconstruct the signal, using only the sampling points $\{(u_j,v_j)\}$, the samples $\{f^{\delta}(u_j,v_j)\}$ and the noise level $\delta=0.05$ as a priori information. The algorithm terminates at level $M=7$. The reconstruction is displayed in Figure \[fig:geo\](c), the error between the true signal and the approximation is shown in Figure \[fig:geo\](d). The reconstruction error is $0.0517$ (or $0.193$ mGal), thus of the same order as the data error, as desired.
\[fig:geo\]
\[fig:geo2\]
[10]{}
J. Benedetto. Irregular sampling and frames. In C. K. Chui, editor, [*Wavelets: A Tutorial in Theory and Applications*]{}, pages 445–507. Academic Press, 1992.
J. Benedetto and W. Heller. , 10:103–125, 1990.
A. Beurling and P. Malliavin. On the closure of characters and the zeros of entire functions. , 118:79–93, 1967.
G. Beylkin. On the fast [F]{}ourier transform of functions with singularities. , 2(4):363–381, 1995.
A.A. [Björck]{} and V. Pereyra. Solution of [V]{}andermonde systems of equations. , 24:893 – 903, 1970.
P. L. Butzer, W. [Splettstöer]{}, and R. L. Stens. The sampling theorem and linear prediction in signal analysis. , pages 1–70, 1988.
P.G. Casazza and O. Christensen. Approximation of the inverse frame operator and applications to [W]{}eyl-[H]{}eisenberg frames. , accepted for publication.
R. Chan and M. Ng. Conjugate gradient methods for [T]{}oeplitz systems. , 38(3):427–482, 1996.
O. Christensen. Frames containing [Riesz]{} bases and approximation of the frame coefficients using finite dimensional methods. , 199:256–270, 1996.
O. Christensen. Moment problems and stability results for frames with applications to irregular sampling and [G]{}abor frames. , 3(1):82–86, 1996.
R. Duffin and A. Schaeffer. A class of nonharmonic [F]{}ourier series. , 72:341–366, 1952.
H.W. Engl, M. Hanke, and A. Neubauer. . Kluwer Academic Publishers Group, Dordrecht, 1996.
H. G. Feichtinger. Coherent frames and irregular sampling. , NATO ASI Series C, Vol. 315:427–440, 1989. NATO conference, Pisa.
H. G. Feichtinger, K. Gr[ö]{}chenig, and T. Strohmer. Efficient numerical methods in non-uniform sampling theory. , 69:423–440, 1995.
H.G. Feichtinger and K.H. Gr[ö]{}chenig. Theory and practice of irregular sampling. In J. Benedetto and M. Frazier, editors, [*Wavelets: Mathematics and Applications*]{}, pages 305–363. CRC Press, 1994.
K.M. Flornes, Y.I. Lyubarskii, and K. Seip. A direct interpolation method for irregular sampling. , 7(3):305–314, 1999.
I.C. Gohberg and I.A. Fel’dman. . American Mathematical Society, Providence, R.I., 1974. Translated from the Russian by F. M. Goldware, Translations of Mathematical Monographs, Vol. 41.
G.H. Golub and C.F. van Loan. Johns Hopkins, London, Baltimore, 1996.
K. Gr[ö]{}chenig. A discrete theory of irregular sampling. , 193:129–150, 1993.
K. Gr[ö]{}chenig. Irregular sampling, [T]{}oeplitz matrices, and the approximation of entire functions of exponential type. , 68:749–765, 1999.
K. Gr[ö]{}chenig. Non-uniform sampling in higher dimensions: [From]{} trigonometric polynomials to band-limited functions. In J.J. Benedetto and P.J.S.G Ferreira, editors, [*Modern Sampling Theory: Mathematics and Applications*]{}. [Birkhäuser]{}, Boston, to appear.
M. Hanke. . Longman Scientific & Technical, Harlow, 1995.
M.L. Harrison. . PhD thesis, University of Maryland – College Park, 1998.
J.R. Higgins. . Oxford University Press, 1996.
H. Landau. Necessary density conditions for sampling and interpolation of certain entire functions. , 117:37–52, 1967.
F.A. Marvasti. Nonuniform sampling. In R. J. Marks II, editor, [*Advanced Topics in [S]{}hannon Sampling and Interpolation Theory*]{}, pages 121–156. Springer Verlag, 1993.
A. S. Nemirovski[ĭ]{}. Regularizing properties of the conjugate gradient method in ill-posed problems. , 26(3):332–347, 477, 1986.
A.S. Nemirovski[ĭ]{} and B.T. Polyak. Iterative methods for solving linear ill-posed problems under precise information [I]{}. , 22:1–11, 1984.
A. Neumaier. Solving ill-conditioned and singular linear systems: a tutorial on regularization. , 40(3):636–666, 1998.
M. Rauth and T. Strohmer. Smooth approximation of potential fields from noisy scattered data. , 63(1):85–94, 1998.
L. Reichel, G. Ammar, and W. Gragg. Discrete least squares approximation by trigonometric polynomials. , 57:273–289, 1991.
R.D. Richtmeyer and K.W. Morton. . Krieger Publishing Company, Malabar, Florida, 1994.
I.W. Sandberg. The reconstruction of band-limited signals from nonuniformly spaced samples. , 41(1):64–66, 1994.
O. Scherzer and T. Strohmer. A multi–level algorithm for the solution of moment problems. , 19(3–4):353–375, 1998.
C. Shannon. A mathematical theory of communication. , 27:379–423, 623–656, 1948.
D. Slepian. Prolate spheroidal wave functions, [[F]{}ourier analysis and uncertainty V]{}: the discrete case. , 57:1371–1430, 1978.
T. Strohmer. Computationally attractive reconstruction of band-limited images from irregular samples. , 6(4):540–548, 1997.
E.E. Tyrtyshnikov. A unifying approach to some old and new theorems on distribution and clustering. , 232:1–43, 1996.
J.M. Varah. The prolate matrix. , 187:269–278, 1993.
D.J. Wingham. The reconstruction of a band-limited function and its [F]{}ourier transform from a finite number of samples at arbitrary locations by singular value decomposition. , 40:559–570, 1992.
K. Yao and J. O. Thomas. On some stability and interpolatory properties of nonuniform sampling expansions. , 14:404–408, 1967.
J.L. Yen. On nonuniform sampling of bandwidth-limited signals. , CT-3:251–257, 1956.
M.C. Yeung and R.H. Chan. Circulant preconditioners for [T]{}oeplitz matrices with piecewise continuous generating functions. , 61(204):701–718, 1993.
A.I. Zayed. . CRC Press, Boca Raton, 1993.
[^1]: Department of Mathematics, University of California, Davis, CA-95616; [email protected]. The author was supported by NSF DMS grant 9973373.
[^2]: In H.Weyl’s definition $\lambda^{(n)}_{k}$ and $\nu^{(n)}_{k}$ are required to belong to a common interval.
---
abstract: 'We experimentally demonstrate that critical Casimir forces in colloidal systems can be continuously tuned by the choice of boundary conditions. The interaction potential of a colloidal particle in a mixture of water and 2,6-lutidine has been measured above a substrate with a gradient in its preferential adsorption properties for the mixture’s components. We find that the interaction potentials at constant temperature but different positions relative to the gradient continuously change from attraction to repulsion. This demonstrates that critical Casimir forces respond not only to minute temperature changes but also to small changes in the surface properties.'
author:
- Ursula Nellen
- Laurent Helden
- Clemens Bechinger
title: Tunability of Critical Casimir Interactions by Boundary Conditions
---
In 1978 Fisher and de Gennes pointed out that if two objects are immersed in a fluid close to its critical point, long-ranged forces due to confined critical fluctuations act between their surfaces [@fis78]. Such critical Casimir forces arise due to the confinement of fluctuations in the order parameter of the fluid between the objects. In the case of e.g. a classical binary liquid mixture close to its demixing point, the order parameter corresponds to the concentration difference between the two components of the mixture. The strength and range of critical Casimir forces is set by the fluid’s bulk correlation length $\xi$ which diverges upon approaching the critical temperature $T_{C}$. Therefore, close to $T_{C}$, the interaction strongly depends on the temperature as has been recently confirmed in several experiments [@gam09b; @her08; @raf07; @gan06; @fuk05; @gar02; @gar99; @muk99].
In addition to their temperature dependence, critical Casimir forces are very sensitive to the boundary conditions (BC) which are determined by the adsorption preferences of the mixture’s components at the confining surfaces: not only the magnitude, but even the sign of critical Casimir interactions can be altered by corresponding symmetric or asymmetric BC. So far, theoretical studies largely concentrated on BC, where one species of molecules in the binary liquid mixture forms a saturated monolayer at the confining surfaces [@han98; @kre97]. Depending on whether both surfaces strongly adsorb the same $(- -)$ or different species $(- +)$, this results in attractive or repulsive forces which have been recently observed in several experiments [@tro09; @soy08; @raf07; @gan06; @fuk05; @gar02; @gar99; @muk99].
In this Letter we report the first critical Casimir measurements for continuously tunable boundary conditions. This has been achieved by measuring the interaction energy of a single colloidal particle suspended in a critical water-2,6-lutidine mixture above a solid surface with a gradient in its adsorption preference for the two liquid components. Upon lateral displacement of the particle relative to the substrate we find a smooth transition from attractive to repulsive critical Casimir forces. The observed scaling functions are found to lie between those of the limiting cases of $(- -)$ and $(- +)$ BC.
Surfaces with a spatial variation of adsorption preference for lutidine and water molecules were fabricated by immersing hydrophilic silica substrates into a mixture (1300:1) of hexane and octadecyltrichlorosilane (OTS). After about 30 minutes, a monolayer of OTS molecules binds to the surface and thus renders it hydrophobic [@ulm96]. Measurements of the contact angle confirm that this treatment alters the adsorption preference from that of water to lutidine. We obtained samples with a smooth lateral gradient regarding the OTS coverage by partially shielding the substrate with a thin metal blade and exposing it to an oxygen-nitrogen plasma, so that OTS molecules are fractionally removed. This gradient can be visualized by cooling the sample below the dew point. The corresponding breath figure (Fig. \[fig:1\]a) shows small droplets with a large contact angle on the hydrophobic side where the sample was fully covered by the blade (right) and much larger droplets with small contact angles (left) where the OTS molecules were removed and the sample is hydrophilic. The gradient in the surface properties was further characterized by force-distance curves obtained with an atomic force microscope (AFM) under ambient conditions and a freshly plasma-cleaned hydrophilic tip. Upon approaching the surface, at small distances the tip is suddenly attracted towards the surface due to van der Waals and capillary forces (Fig. \[fig:1\]b). The strong contribution of capillary forces is supported by the fact that both strength $\Delta$ and range of the attraction decrease towards the hydrophobic side, i.e. with increasing $\Delta x$ (Fig. \[fig:1\]c). Similar to the breath figures, the AFM-measurements show that the above method yields substrates with smooth chemical gradients which laterally extend over a distance of several hundreds of microns.
![(Color online) (a) Water droplets on a silica substrate with a gradient in its wetting properties below the dew point (breath figure). Small and large droplets indicate hydrophobic and hydrophilic regions. (b) AFM force-distance curves obtained with a hydrophilic cantilever at different positions $\Delta x$ on the surface. Curves were arranged such that the regions of constant compliance overlap at zero deflection signal. (c) Attraction strength $\Delta$ vs. $\Delta x$ along the chemical gradient. Symbols correspond to those used in (b).[]{data-label="fig:1"}](Figure1){width="8.5cm"}
We fabricated thin sample cells ($150\, \mu m$ height) with the described substrates as the bottom plate and inserted a diluted suspension of colloidal particles in a water-2,6-lutidine (WL) mixture at critical composition, i.e. a lutidine mass fraction of $c_L^c \approx 0.286$. Such mixtures have a lower critical point at $T_C\approx 307\, K$ [@bey85]. As colloids we used negatively charged melamine spheres (MF) with radius $R=1.35 \mu m$ [^1]. Due to their high surface charge density the particles are strongly hydrophilic, i.e. they show a preference for water adsorption.
Interaction potentials between a single colloid and a substrate were measured with total internal reflection microscopy (TIRM). The entire sample cell was mounted onto a glass prism such that an incident p-polarized laser beam ($\lambda =473\, nm$, $P\approx2\, mW$) is totally reflected at the substrate-fluid interface. Under these conditions an evanescent field is created which decays exponentially into the fluid. Our experiments were performed with a penetration depth of $153\, nm$ by adjusting the angle of incidence accordingly. When the height $z$ of the colloid above the surface is in the region illuminated by the evanescent field, it partially scatters the evanescent light. For the chosen conditions, evanescent light scattering on critical fluctuations in the mixture can be neglected compared to the light scattered by the colloidal particle. From the scattered intensity, which is monitored with a photomultiplier, the particle-substrate height distribution $P(z)$ can be inferred. Employing the Boltzmann factor, the height-resolved interaction potential for a colloid close to the substrate is derived. The lateral motion of the particle was reduced to about $\pm 1\, \mu m$ with a weakly focussed laser beam ($\lambda=532\, nm$) acting as optical tweezers from above. Since this value is orders of magnitude smaller than the lateral extension of the chemical gradient, the boundary conditions can be considered as homogeneous on the area probed during a single measurement. For further details regarding TIRM and the experimental setup we refer to the literature [@her08; @wal97; @pri90].
Temperature control of the sample cell was achieved by a two-step procedure. We connected the sample via a copper frame to a heat bath operated at a constant temperature slightly below $T_C$. In addition, we used an electrical heater connected to a temperature controller. In contrast to previous experiments, where the temperature of the binary liquid mixture was stabilized with respect to a platinum resistor placed outside the liquid, here the light scattering intensity from the critical fluctuations was used as input for the temperature controller [@sch04]. For this purpose an additional laser beam ($\lambda =658\, nm$) was coupled into the cell to propagate parallel to the substrate in the fluid. With this setup we achieved a temperature stability of about $\pm 2\, mK$. Since the scattering signal tends to diverge at the critical temperature, we can determine $T_C$ with a significantly improved accuracy of $\pm 5\, mK$.
![(Color online) Interaction potentials between a MF particle and a silica substrate with a chemical gradient, obtained at different lateral positions $\Delta x$. Closed symbols were taken at $T_{C}-T=220\, mK$ corresponding to $\xi=19\, nm$. The open symbols show a measurement for $T_{C}-T=1.50\, K$ where critical Casimir forces are negligible. Inset: interaction potential as measured (squares) and after subtraction of the linear contributions (circles) which are due to optical and gravitational forces. The solid line is a fit to Eq. \[eq:dlvo\] with $A=2770\, k_BT$, $\kappa^{-1}=11.1\, nm$ and $G^*= 13.5\, k_BT/ \mu m$. \[fig:2\] ](Figure2.eps){width="8.5cm"}
The inset of Fig. \[fig:2\] (upper curve) shows a typical interaction potential between a single MF particle and a surface far below $T_C$ where critical Casimir forces are negligible. The shape of the potential can be fitted to $$\label{eq:dlvo}
\Phi (z)=A \exp(-\kappa z)+G^*z$$ with $A$ the amplitude of electrostatic interactions between the negatively charged particle and substrate, $\kappa$ the inverse Debye screening length of the mixture and $G^*$ the effective weight of the colloid due to gravity and light pressure from the optical tweezers. Since the linear contribution from gravitational and optical forces does not vary between individual measurements, in the following it has been subtracted from all data in this paper. Since the particle-wall interaction potential (Fig. \[fig:2\] inset) is well fitted by Eq. \[eq:dlvo\] and parameters in agreement with literature values [@gal92; @gru01], possible contributions from van der Waals forces are negligible in the present experiment. A more detailed discussion can be found in [@gam09c; @dan07].
Interaction potentials at constant temperature $T_{C}-T=220\, mK$ and different lateral positions $\Delta x=x-x_0$ relative to the substrate (with $x_0$ a reference position at the strongly hydrophilic side of the gradient) are shown in Fig. \[fig:2\]. Since the colloidal particle is strongly hydrophilic, symmetric BC should apply at small values of $\Delta x$, i.e. on the strongly hydrophilic side. Under these conditions critical Casimir forces are attractive. In combination with the short-ranged electrostatic force, this leads to potential wells with depths of several times the thermal energy $k_{B}T$. With increasing $\Delta x$, i.e. upon approaching the hydrophobic side of the gradient, the BC become increasingly asymmetric and the critical Casimir forces become weaker. Accordingly, the potential wells become shallower and are shifted towards larger distances. Close to the hydrophobic region of the gradient lutidine is preferred by the substrate and the critical Casimir forces should be repulsive. Indeed, for $\Delta x=580\, \mu m$ such a repulsion is observed in our data, as can be seen by direct comparison with the particle-wall interaction potential far below $T_C$ (open symbols), where critical Casimir forces are negligible.
![(Color online) Temperature dependence of critical Casimir forces at fixed position $\Delta x=0$. The inset shows the distance range where electrostatic interactions are negligible. Fits to theoretical predictions for $(- -)$ BC are shown as solid lines.[]{data-label="fig:3"}](Figure3n){width="8.5cm"}
For $(- -)$ and $(- +)$ BC the critical Casimir potential of a colloidal sphere with radius $R$ at height $z$ above a homogeneous surface is given by [@her08; @han98] $$\Phi_{Cas}\left(z,T\right)=\frac{R}{z}\vartheta\left(\frac{z}{\xi}\right)
\label{eq:1}$$ with the correlation length $$\xi =\xi_0 \left ( \frac{T_C-T}{T_C} \right )^{-\nu}, \label{eq:2}$$ $\xi_0$ reflecting the typical length scale set by the intermolecular pair potential in the mixture, $\nu = 0.63$ the critical exponent of the 3D Ising universality class and $\vartheta$ the corresponding scaling functions which have been inferred from Monte-Carlo simulations [@vas07] [^2]. To confirm that the potentials indeed result from critical Casimir forces, we first investigated the temperature dependence of the potential at $\Delta x=0$, where $(- -)$ BC apply (Fig. \[fig:3\]). In the inset we show the experimental data for the region where electrostatic and van der Waals interactions are negligible. The solid lines are fits according to Eq. \[eq:1\], which show good agreement. It should be emphasized that only $\xi_{0}$ has been used as an adjustable parameter. Best agreement with our experimental data was found for $\xi_{0}\approx 0.2\, nm$, which is in good agreement with other measurements in critical water-lutidine mixtures [@gam09c; @gul72].
![(Color online) Measured scaling functions for different lateral positions $\Delta x$ on the substrate. Symbols of the same kind but with different fillings correspond to measurements at different temperatures and collapse to a single curve each. The square symbols were determined from the corresponding curves in Fig. \[fig:3\]. Theoretical calculations for the limiting cases of $(- -)$ and $(- +)$ BC are shown as solid black lines. [@her08; @vas07]. Dashed lines represent the same curves shifted along the $\frac{z}{\xi}$-axis to obtain best agreement with data for weaker adsorption preference.[]{data-label="fig:4"}](Figure4n){width="8.5cm"}
According to Eq. \[eq:1\] the information about the BC is entirely encoded in the scaling function $\vartheta$. Therefore, we determined $\vartheta$ from the measured critical Casimir interaction potentials for different substrate positions $\Delta x$ (Fig. \[fig:4\]). Note that symbols with identical shape but different fillings correspond to scaling functions obtained at the same position $\Delta x$ but for different temperatures. Data taken at different temperatures collapse onto a single curve in this representation. With increasing $\Delta x$ the scaling functions change systematically from negative to positive values. This is consistent with the sign change of critical Casimir interactions observed along the chemical gradient as shown in Fig. \[fig:2\]. For comparison we added as solid lines the theoretical predictions for the scaling functions for $(- -)$ and $(- +)$ BC. As can be seen, the measured values for $\vartheta$ lie in between these limiting cases. On the hydrophilic side of the substrate we obviously reached $(- -)$ BC while we did not reach $(- +)$ BC on the hydrophobic side. This indicates that the lutidine adsorption on the OTS treated substrate is not saturated.
Scaling behavior is observed for all positions $\Delta x$ which is [*not a priori*]{} clear because Eq. \[eq:1\] is strictly valid only for $(- -)$ and $(- +)$ BC [@sch08]. This indicates that additional scaling variables which may arise in the presence of undersaturated adsorption layers are not relevant for data collapse at the $\frac{z}{\xi}$-range sampled in our experiments [@cho02]. Mean field theory calculations predict that the scaling functions for BC close to the strong adsorption limit can be obtained by a shift along the $\frac{z}{\xi}$-axis [@mac09; @bin83]. This is in remarkable agreement with the dashed lines in Fig. \[fig:4\] which just correspond to shifted $\vartheta$ functions for $(- -)$ and $(- +)$ BC obtained by a least mean square fit to the data.
At present, it remains unclear how experimentally accessible parameters for the quantitative characterization of boundary conditions can be related to e.g. the surface field $h_1$ which is often used to theoretically describe continuously varying BC [@mac03; @des95; @dur87]. Ellipsometry studies on critical adsorption of binary mixtures under weak surface field conditions suggest that $h_1$ is proportional to the surface energy difference of the two liquid components [@cho02], while other approaches tried to connect the surface field with the difference in the chemical potential [@des95]. We hope that our work will stimulate further theoretical investigations in this direction.
In summary, we have shown that critical Casimir forces can be continuously varied by appropriate BC of the confining surfaces. Experimentally, this was achieved by lateral variation of the surface coverage of a single layer of OTS molecules on the substrate, which changes both the magnitude and the sign of the critical Casimir force between a colloidal particle and the surface. In addition to the exquisite temperature dependence, this remarkable sensitivity to the surface properties of the interacting objects distinguishes critical Casimir forces as a versatile interaction type which not only adds novel perspectives to the use of colloidal suspensions as model systems but also opens new possibilities for the fabrication of colloidal crystals, which hold significant interest for technical applications.
We thank S. Dietrich, A. Maciołek, T. Mohry, and A. Gambassi for stimulating discussions, T. Geldhauser for assistance with AFM measurements and the Deutsche Forschungsgemeinschaft for financial support.
[^1]: MF-COOH-S1285, $R=1.35 \pm 0.05\, \mu m $ microparticles GmbH, Berlin, Germany. According to the manufacturer the surface potential in water is 70-100 mV.
[^2]: The Derjaguin approximation was used to adapt simulation results for wall-wall geometry to the sphere-wall geometry of the experiment. This is justified since R is much larger than its distance $z$ and the maximum correlation length $\xi_{max}=40nm$.
---
abstract: 'We prove that every finite dimensional algebra over an algebraically closed field is either derived tame or derived wild. The proof is based on the technique of boxes and reduction algorithm. It implies, in particular, that any degeneration of a derived wild algebra is derived wild; respectively, any deformation of a derived tame algebra is derived tame.'
address:
- 'Departamento de Matemática, ICEx, Universidade Federal de Minas Gerais, Av. Antônio Carlos, 6627, CP 702, CEP 30123-970, Belo Horizonte-MG, Brasil'
- 'Department of Mechanics and Mathematics, Kyiv Taras Shevchenko University, 01033 Kyiv, Ukraine'
author:
- 'Viktor I. Bekkert'
- 'Yuriy A. Drozd'
title: 'Tame–wild dichotomy for derived categories'
---
[^1]
Introduction {#introduction .unnumbered}
============
The notions of tame and wild problems are now rather popular in various branches of representation theory and related topics, especially because of the so-called *tame-wild dichotomy* (cf. e.g. [@d0; @dg] and other papers). Namely, in most cases it so happens that either indecomposable representations depend on at most one parameter or their description becomes in some sense “universal,” i.e. contains a classification of representations of all finitely generated algebras. Recently these notions have also been studied for derived categories, and the tame-wild dichotomy has been proved in some cases (though rather restrictive ones), cf. [@gk; @br; @ge]. In this paper we prove such a dichotomy for derived categories of arbitrary finite dimensional algebras over an algebraically closed field. The technique used, just as in [@d0; @dg] (see also the survey [@d1]), is that of “matrix problems,” more precisely, boxes and the reduction algorithm. There are some new features: we have to consider boxes whose underlying category is no longer free. Fortunately, the arising relations are of a rather special nature, which leads to the notion of *sliced boxes*. Actually, the tame-wild dichotomy is proved for such boxes, from which the result for derived categories is obtained in almost the same way as the tame-wild dichotomy for representations of algebras was obtained from that for free boxes. As is rather usual, we formulate the result for *locally finite dimensional categories*. If such a category has only finitely many indecomposable objects, this language is equivalent to that of finite dimensional algebras, though a bit more convenient. But categories with infinitely many indecomposables naturally arise in representation theory (for instance, when we consider coverings), so we prefer to use this language, especially as this generality entails no extra difficulties in the proofs.
Derived categories {#s1}
==================
We consider categories and algebras over a fixed algebraically closed field $\Mk$. A $\Mk$-category $\kA$ is called *locally finite dimensional* (shortly *lofd*) if the following conditions hold:
1. All spaces $\kA(x,y)$ are finite dimensional for all objects $x,y$.
2. $\kA$ is *fully additive*, i.e. it is additive and all idempotents in it split.\
Conditions 1,2 imply that the category $\kA$ is *Krull–Schmidt*, i.e. each object uniquely decomposes into a direct sum of indecomposable objects; moreover, it is *local*, i.e. for each indecomposable object $x$ the algebra $\kA(x,x)$ is local. We denote by $\ind\kA$ a set of representatives of isomorphism classes of indecomposable objects from $\kA$.
3. For each object $x$ the set $\setsuch{y\in\ind\kA}{\kA(x,y)\ne0\text{ or }\kA(y,x)\ne0}$ is finite.
We denote by $\vec$ the category of finite dimensional vector spaces over $\Mk$ and by $\md\kA$ the category of *finite dimensional $\kA$-modules*, i.e. functors $M:\kA\to\vec$ such that $\setsuch{x\in\ind\kA}{Mx\ne0}$ is finite. We also denote by $D(\kA)$ the derived category of the category $\md\kA$ and by $D^b(\kA)$ its full subcategory consisting of bounded complexes. The latter is again a lofd category.
For an arbitrary category $\kC$ we denote by $\add\kC$ the minimal fully additive category containing $\kC$. For instance, one can consider $\add\kC$ as the category of finitely generated projective $\kC$-modules; especially, $\add\Mk=\vec$. We denote by $\Rep(\kA,\kC)$ the category of functors $\Fun(\kA,\add\kC)$ and call its objects *representations* of the category $\kA$ in the category $\kC$. Obviously, $\Rep(\kA,\kC)\iso\Rep(\add\kA,\kC)$. If the category $\kA$ is lofd, we denote by $\rep(\kA,\kC)$ the full subcategory of $\Rep(\kA,\kC)$ consisting of the representations $M$ with *finite support* $\supp M=\setsuch{x\in\ind\kA}{Mx\ne0}$. In particular, $\rep(\kA,\Mk)=\md\kA$.
We denote by $\kD(\kA)$ (respectively, $\kD^-(\kA),\,\kD^b(\kA)\,$) the *derived category* (respectively, the right bounded and the (two-sided) bounded derived category) of the category $\md\kA$, where $\kA$ is a lofd category. Recall that $\kA$ embeds as a full subcategory into $\md\kA$. Namely, each object $x$ corresponds to the functor $\kA^x=\kA(x,\_\,)$. These functors are projective in the category $\md\kA$; if $\kA$ is fully additive, these are all projectives (up to isomorphism). On the other hand, $\md\kA$ embeds as a full subcategory into $\kD^b(\kA)$: a module $M$ is treated as a complex with a unique nonzero component, equal to $M$, at the $0$-th position. It is also known that $\kD^-(\kA)$ can be identified with the category $\kK^-(\kA)$ whose objects are right bounded complexes of projective modules and whose morphisms are homomorphisms of complexes modulo homotopy [@gm]. If $\gdim\kA<\8$, every bounded complex has a bounded projective resolution, hence $\kD^b(\kA)$ can be identified with $\kK^b(\kA)$, the category of bounded projective complexes modulo homotopy, but this is no longer the case if $\gdim\kA=\8$. Moreover, if $\kA$ is lofd, we may confine ourselves to *minimal* complexes, i.e. always suppose that $\im d_n\subseteq\rad P_{n-1}$ for all $n$. We denote by $\sJ$ the *radical* of the category $\kA$, i.e. the set of morphisms having no invertible components with respect to some (hence, any) decomposition of their sources and targets into direct sums of indecomposables. Then $\rad M=\sJ M$ for each $M\in\md\kA$.
Even if $\gdim\kA=\8$, one easily shows [@d2] that $\kD^b(\kA)$ can be identified with the direct limit $\varinjlim_N\kQ^N(\kA)$ of the categories $\kQ^N(\kA)$ defined as follows.
1. Objects of $\kQ^N(\kA)$ are right bounded complexes $(P_\bp,d_\bp)$ of projective modules from $\md\kA$ with $P_n=0$ for $n>N$.
2. Morphisms of $\kQ^N(\kA)$ are homomorphisms of complexes modulo *quasi-homotopy*, where two homomorphisms $f,g:(P_\bp,d_\bp)\to(P'_\bp,d'_\bp)$ are said to be *quasi-homotopic* if there are homomorphisms of modules $s_n:P_n\to P'_{n+1}$ such that $f_n-g_n=d'_{n+1}s_n+s_{n-1}d_n$ for all $n<N$.
3. The functor $\kQ^N(\kA)\to \kQ^{N+1}(\kA)$ maps a complex $(P_\bp,d_\bp)$ to the complex $$0\to \hat P_{N+1}\stackrel h\larr P_N\to P_{N-1}\to \dots \to P_m\to 0,$$ where $h$ maps $\hat P_{N+1}$ onto $\Ker d_N$. (Such a complex is defined up to an isomorphism inside $\kQ^{N+1}(\kA)$.)
Note that these functors are full embeddings; thus all functors $\kQ^N(\kA)\to\kD^b(\kA)$ are full embeddings too, so we may treat $\kD^b(\kA)$ as a sort of union $\bigcup_N\kQ^N(\kA)$. Especially, in all classification problems, we may replace the study of the category $\kD^b(\kA)$ by that of the categories $\kQ^N(\kA)$. If $\kA$ is lofd, any complex from $\kQ^N(\kA)$ is isomorphic (in this category) to a minimal complex $P_\bp$ such that $\Ker d_N\subseteq\rad P_N$. We denote by $\kQ^N_0(\kA)$ the full subcategory of $\kQ^N(\kA)$ only consisting of such complexes. Thus $\kD^b(\kA)\iso\varinjlim_N\kQ^N_0(\kA)$.
\[QN\] Two complexes from $\kQ^N_0(\kA)$ are isomorphic in $\kQ^N(\kA)$ if and only if they are isomorphic as complexes.
If two complexes $P_\bp,\,P'_\bp$ from $\kQ_0^N(\kA)$ are isomorphic in $\kQ^N(\kA)$, there is a diagram $$\xymatrix{
{P_N} \ar[rr]^{d_N} \ar@<.5ex>[d]^{\phi_N} &&
{P_{N-1}} \ar[rr]^{d_{N-1}} \ar@<.5ex>[d]^{\phi_{N-1}} &&
{P_{N-2}} \ar[r] \ar@<.5ex>[d]^{\phi_{N-2}} & {\dots} \\
{P'_N} \ar[rr]^{d'_N} \ar@<.5ex>[u]^{\psi_N} &&
{P'_{N-1}} \ar[rr]^{d'_{N-1}} \ar@<.5ex>[u]^{\psi_{N-1}} &&
{P'_{N-2}} \ar[r] \ar@<.5ex>[u]^{\psi_{N-2}} & {\dots}
\ ,}$$ where all upgoing and downgoing squares commute. Moreover, all products $\psi_n\phi_n\ (n<N)$ are of the form $1+\si_{n-1} d_n+d_{n+1}\si_n$, thus isomorphisms, as well as all products $\phi_n\psi_n\ (n<N)$. Hence all $\phi_n,\,\psi_n\ (n<N)$ are isomorphisms. As $\phi_{N-1}(\im d_N)\subseteq\im d'_N$ and $\psi_{N-1}(\im d'_N)\subseteq\im d_N$, it follows that $\im d_N\iso\im d'_N$. Since $\Ker d_N\subseteq\rad P_N$, the module $P_N$ is a projective cover of $\im d_N$, and $P'_N$ is a projective cover of $\im d'_N$. Therefore $P_N\iso P'_N$. Moreover, $\phi_Nd'_N=\phi_{N-1}d_N:P_N\to\im d'_N$ is an epimorphism, hence $\im\phi_N+\Ker d'_N=P'_N$, so $\phi_N$ is an epimorphism, thus an isomorphism.
We introduce the notions of derived tame and derived wild lofd categories in the following way. These definitions do not formally coincide with those of some earlier papers, such as [@br; @ge; @gk], but are equivalent to them and, due to the preceding considerations, more convenient to deal with.
\[tw\] Let $\kA$ be a lofd category.
1. The *rank* of an object $x\in\kA$ (or of the corresponding projective module $\kA^x$) is the function $\fR(x):\ind\kA\to\mZ$ such that $x\iso\bigoplus_{y\in\ind\kA}\fR(x)(y)y$. The *vector rank* $\fR_\bp(P_\bp)$ of a bounded complex of projective $\kA$-modules is the sequence $(\dots,\fR(P_n),\fR(P_{n-1}),\dots)$ (actually it only has finitely many nonzero entries).
2. We call a *rational family* of bounded minimal complexes over $\kA$ a bounded complex $(P_\bp,d_\bp)$ of finitely generated projective $\kA\*\sR$-modules, where $\sR$ is a *rational algebra*, i.e. $\sR=\Mk[t,f(t)^{-1}]$ for a nonzero polynomial $f(t)$, such that $\im d_n\subseteq\sJ P_{n-1}$ for all $n$. For such a complex we define $P_\bp(m,\la)$, where $m\in\mN,\,\la\in\Mk,\,f(\la)\ne0$, as the complex $(P_\bp\*_\sR\sR/(t-\la)^m,d_\bp\*1)$. It is indeed a complex of projective $\kA$-modules. We put $\fR_\bp(P_\bp)=\fR_\bp(P_\bp(1,\la))$ (this vector rank does not depend on $\la$).
3. We call a lofd category $\kA$ *derived tame* if there is a set $\dP$ of rational families of bounded complexes over $\kA$ such that:
1. For each vector rank $\fR_\bp$ the set $\dP(\fR_\bp)=\setsuch{P_\bp\in\dP}{\fR_\bp(P_\bp)=\fR_\bp}$ is finite.
2. For each vector rank $\fR_\bp$ all indecomposable complexes $(P_\bp,d_\bp)$ of projective $\kA$-modules of this vector rank, except finitely many isomorphism classes, are isomorphic to $P_\bp(m,\la)$ for some $P_\bp\in\dP$ and some $m,\la$.
The set $\dP$ is called a *parameterising set* of $\kA$-complexes.
4. We call a lofd category $\kA$ *derived wild* if there is a bounded complex $P_\bp$ of projective modules over $\kA\*\Si$, where $\Si$ is the free $\Mk$-algebra in 2 variables, such that, for all finite dimensional $\Si$-modules $L,L'$,
1. $P_\bp\*_\Si L\iso P_\bp\*_\Si L'$ if and only if $L\iso L'$.
2. $P_\bp\*_\Si L$ is indecomposable if and only if so is $L$.
(It is well-known that then an analogous complex of $\kA\*\Ga$-modules exists for every finitely generated $\Mk$-algebra $\Ga$.)
Note that, according to these definitions, every *derived discrete* (in particular, *derived finite*) lofd category [@vo] is derived tame (with the empty set $\dP$). Simple geometric considerations, like in [@d3], show that no lofd category can be both derived tame and derived wild. We are going to demonstrate the following result.
\[main\] Every lofd category over an algebraically closed field is either derived tame or derived wild.
This theorem will be proved in Section \[s3\]. Note that, in particular, it establishes the following corollaries, which were proved in [@d2] under the supposition that every finite dimensional algebra is either derived tame or derived wild.
\[12\] Let $\kA$ be a flat family of finite dimensional algebras based on an algebraic variety $X$. Then the set $\setsuch{x\in X}{\kA(x) \text{ \em is derived wild}}$ is a union of a countable sequence of closed subsets.
\[13\] Suppose that a finite dimensional algebra $\sA$ *degenerates* to another algebra $\sB$ (or, equivalently, $\sB$ *deforms* to $\sA$), i.e. there is a flat family of algebras $\kA$ based on a variety $X$ such that $\kA(x)\simeq\sA$ for all $x$ from a dense open subset $U\subseteq X$ and there is a point $y\in X$ such that $\kA(y)\simeq\sB$. If $\sA$ is derived wild, so is $\sB$; respectively, if $\sB$ is derived tame, so is $\sA$.
If the families are not assumed flat, these assertions are no longer true [@br] (see [@d2; @d4] for further comments).
Related boxes {#s2}
=============
Recall [@d0; @d1] that a *box* is a pair $\dA=(\kA,\kV)$ consisting of a category $\kA$ and an $\kA$-coalgebra $\kV$. We denote by $\mu$ the comultiplication in $\kV$, by $\eps$ its counit and by $\oV=\Ker\eps$ its *kernel*. We always suppose that $\dA$ is *normal*, i.e. there is a *section* $\om:x\mapsto\om_x\ (x\in\ob\kA)$ such that $\eps(\om_x)=1_x$ and $\mu(\om_x)
=\om_x\*\om_x$ for all $x$. A category $\kA$ is called *free* if it is isomorphic to a path category $\Mk\Ga$ of an oriented graph (quiver) $\Ga$, and *semi-free* if $\kA=\Mk\Ga[\sL^{-1}]$, where $\sL$ is a set of *loops*, i.e. arrows $a:x\to x$ from $\Ga$. The arrows $a:x\to y$ with $x\ne y$ will be called *edges*. If $\Ga$ contains no arrows at all, the category $\kA$ is called *trivial*; if $\Ga$ only has loops, with at most one loop at every vertex, $\kA$ is called *minimal*. A normal box $\dA=(\kA,\kV)$ is called *free* (*semi-free*) if so is the category $\kA$ and the kernel $\oV$ is a free $\kA$-bimodule. If we fix a set of free generators $\De$ of $\oV$, we call the elements of $\De$ the *dashed arrows* of the box $\dA$, while the arrows of $\Ga$ are called *solid arrows*. The union $\arr\dA=\Ga\cup\De$ is called a *set of free* (or *semi-free*) *generators* of the free (semi-free) box $\dA$. We also call the objects of $\kA$ the *vertices* of $\dA$, denote by $\ver\dA$ the set of vertices, and write $\arr^0\dA=\Ga,\ \arr^1\dA=\De$. Note that a choice of free (semi-free) generators is usually not unique, and most proofs related to boxes use a change of free (semi-free) generators.
Recall that the *differential* of a normal box $\dA=(\kA,\kV)$ is the pair $\dd=(\dd_0,\dd_1)$ of mappings, $\dd_0:\kA\to\oV,\ \dd_1:\oV\to\oV\*_\kA\oV$, namely $$\begin{aligned}
\dd_0 a&=a\om_x-\om_ya \quad\text{ for } a\in \kA(x,y),\\
\dd_1 v&=\mu(v)-v\*\om_x-\om_y\*v \quad\text{for } v\in\oV(x,y).
\end{aligned}$$ Usually we omit the index and write both $\dd a$ and $\dd v$. A set of arrows $\arr\dA$ of a semi-free box is said to be *triangular* if there is a function $h:\arr\dA\to\mN$ (called the *height*) such that, for any $a\in\arr\dA$ (either solid or dashed), $\dd a$ belongs to the sub-box generated by the arrows $b\in\arr\dA$ with $h(b)<h(a)$; in particular, $\dd a=0$ if $h(a)=0$. If such a set of arrows exists, we call the box $\dA$ *triangular*.
A normal box $\dA$ such that the category $\kA$ is trivial, is called *so-trivial* (trivial with respect to solid arrows). If $\kA$ is a minimal category and $\dd a=0$ for each solid loop, we call the box $\dA$ *so-minimal*.
In what follows we also use boxes which are not free (or semi-free), but are factors of such boxes. Namely, let $\dA=(\kA,\kV)$ be a semi-free box and $\sI\subseteq\kA$ be an ideal of the category $\kA$ such that $\dd a\in\sI\oV+\oV\sI$ for all $a\in\sI$. Denote by $\dA/\sI$ the box $(\tA,\tV)$, where $\tA=\kA/\sI$ and $\tV=\kV/(\sI\kV+\kV\sI)$, with the natural comultiplication and counit. Note that in this case the kernel of the box $\dA/\sI$ is a free $\tA$-bimodule; namely, it is isomorphic to $\oV/(\sI\oV+\oV\sI)$. If $\dA$ is a triangular semi-free box and the ideal $\sI$ is contained in the ideal generated by all products $ab$, where $a,b$ are solid arrows, we call $\ti\dA=\dA/\sI$ a *convenient* box. The vertices and arrows of $\ti\dA$ are, by definition, those of $\dA$. In particular, the notions of a *triangular set of arrows* and of a *triangular box* carry over to convenient boxes. Actually, we need a rather specific kind of convenient box, defined as follows.
\[slice\]
1. A free box $\dA$ is called *sliced* if there is a function $s:\ver\dA\to\mZ$ such that
1. $s(y)<s(x)$ for every solid arrow $a:x\to y$; we set $s(a)=s(x)$;
2. $s(x)=s(y)$ for every dashed arrow $\ga:x\dar y$; we set $s(\ga)=s(x)=s(y)$.
2. A box $\ti\dA=\dA/\sI$, where $\dA=(\kA,\kV)$ is a free box and $\sI\subset\kA$ is an ideal in $\kA$ such that $\dd a\in\sI\oV+\oV\sI$ for all $a\in\sI$, is called *sliced* if so is the box $\dA$.
We call the function $s$ a *slicing* of the box $\dA$ or $\ti\dA$.
Note that if a free box $\dA$ is sliced, there are neither loops nor oriented cycles in it. Therefore, if an ideal $\sI$ is not contained in the ideal generated by the paths of length $2$, we can simply drop any arrow that occurs in an element of $\sI$. Hence sliced boxes are always convenient.
A *representation* of a box $\dA=(\kA,\kV)$ over a category $\kC$ is defined as a functor $M:\kA\to\add\kC$. A *morphism* of such representations $f:M\to N$ is defined as a homomorphism of $\kA$-modules $\kV\*_\kA M\to N$. If $g:N\to L$ is another morphism, their product is defined as the composition $$\begin{CD}
\kV\*_\kA M @>\mu\*1>> \kV\*_\kA\kV\*_\kA M @>1\*f>>\kV\*_\kA N@>g>> L.
\end{CD}$$ Thus we obtain the *category of representations* $\Rep(\dA,\kC)$. If $\dA$ is a free (or a convenient) box, we denote by $\rep(\dA,\kC)$ the full subcategory of $\Rep(\dA,\kC)$ consisting of representations with finite support $\supp M=\setsuch{x\in\ver\dA}{Mx\ne0}$. If $\kC=\vec$, we write $\Rep(\dA)$ and $\rep(\dA)$.
Given a lofd category $\kA$, we are going to construct a sliced box $\dB=\dB(\kA)=(\kB,\kW)$ such that its representations classify the objects of the derived category $\kD^b(\kA)$.
We denote by $\kS$ the trivial category with the set of objects $$\ob\kS=\setsuch{(x,n)}{x\in\ind\kA,\,n\in\mZ}$$ and consider the $\kS$-bimodule $\kJ$ such that $$\kJ\big((x,n),(y,m)\big)= \begin{cases}
0 &\text{if } m\ne n-1,\\
\sJ(x,y)^* &\text{if }m=n-1,
\end{cases}$$ where $\sJ$ is the radical of $\kA$ and $V^*$ denotes the dual vector space to $V$. Let $\tB=\kS[\kJ]$ be the tensor category of this bimodule; equivalently, it is the free category having the same set of objects as $\kS$ and the union of bases of all $\kJ\big((x,n),(y,m)\big)$ as a set of free generators. Denote by $\kU$ the $\kS$-bimodule such that $$\kU\big((x,n),(y,m)\big)= \begin{cases}
0 &\text{if } n\ne m,\\
\kA(x,y)^* &\text{if } n=m
\end{cases}$$ and set $\tU=\tB\*_\kS\kU\*_\kS\tB$. Dualizing the multiplication $\kA(y,z)\*\kA(x,y)\to\kA(x,z)$, we get homomorphisms $$\begin{aligned}
\la_r&: \tB\larr \tB\*_\kS\tU,\\
\la_l&: \tB\larr \tU\*_\kS\tB,\\
\tmu&: \tU\larr \tU\*_\kS\tU.
\end{aligned}$$ In particular, $\tmu$ defines on $\tU$ a structure of $\tB$-coalgebra. Moreover, the sub-bimodule $\kU_0$ generated by $\im(\la_r-\la_l)$ is a coideal in $\tU$, i.e. $\tmu(\kU_0)\subseteq\kU_0\*_{\tB}\tU\+\tU\*_{\tB}\kU_0$. Therefore, $\tW=\tU/\kU_0$ is also a $\tB$-coalgebra, so we get a box $\ti\dB=(\tB,\tW)$. One easily checks, like in [@d0], that it is free and triangular.
Dualizing multiplication also gives a mapping $$\label{e21}
\nu:\sJ(x,y)^*\larr\bigoplus_z\sJ(z,y)^*\*\sJ(x,z)^*.$$ Namely, if we choose bases $\set\al,\,\set\be,\,\set\ga$ in the spaces, respectively, $\sJ(x,y),$ $\sJ(z,y),\,\sJ(x,z)$, and dual bases $\set{\al^*},\,\set{\be^*},\,\set{\ga^*}$ in their duals, then $\be^*\*\ga^*$ occurs in $\nu(\al^*)$ with the same coefficient as $\al$ occurs in $\be\ga$. Note that the right-hand space above coincides with $\tB\big((x,n),(y,n-2)\big)$ for each $n$. Let $\sI$ be the ideal in $\tB$ generated by the images of $\nu$ in all these spaces, and let $\dB=\ti\dB/\sI=(\kB,\kW)$, where $\kB=\tB/\sI,\ \kW=\tW/(\sI\tW+\tW\sI)$. One easily checks that $\dd\sI\subseteq\sI\tW+\tW\sI$, so $\dB$ is a convenient box. If necessary, we write $\dB(\kA)$ to emphasise that this box has been constructed from a given category $\kA$. Certainly, $\dB$ is a sliced triangular box, and the following result holds.
\[box\] The category of finite dimensional representations $\rep(\dB(\kA))$ is equivalent to the category $\kP^b_{\min}(\kA)$ of bounded minimal projective $\kA$-complexes.
We denote $\sJ^x=\sJ(x,\_\,)=\rad\kA^x$. Then $\Hom_\kA(\kA^x,\sJ^y)\simeq\sJ(x,y)$. A representation $M\in\rep(\dB)$ is given by vector spaces $M(x,n)$ and linear mappings $$M_{xy}(n):\sJ(x,y)^*=\kA\big((x,n),(y,n-1)\big)\to\Hom\big(M(x,n),M(y,n-1)\big),$$ where $x,y\in\ind\kA,\,n\in\mZ$, subject to the relations $$\label{e22}
\sum_z \fM\big(M_{zy}(n)\*M_{xz}(n+1)\big)\nu(\al)=0$$ for all $x,y,n$ and all $\al\in\sJ_{xy}$, where $\fM$ denotes the multiplication of mappings $$\begin{gathered}
\Hom\big(M(z,n),M(y,n-1)\big)\*\Hom\big(M(x,n+1),M(z,n)\big)\to\\
\to\Hom\big(M(x,n+1),M(y,n-1)\big).
\end{gathered}$$ For such a representation, set $P_n=\bigoplus_x \kA^x\*M(x,n)$. Then $\rad P_n=\bigoplus_x \sJ^x\*M(x,n)$ and $$\begin{aligned}
\Hom_\kA(P_n,\rad P_{n-1})&\simeq \bigoplus_{x,y} \Hom_\kA\big(\kA^x\*M(x,n),\sJ^y\*M(y,n-1)\big)\simeq\\
&\simeq \bigoplus_{x,y} \Hom\big(M(x,n),\Hom_\kA\big(\kA^x,\sJ^y\*M(y,n-1)\big)\big)\simeq\\
&\simeq \bigoplus_{x,y} M(x,n)^*\*\sJ(x,y)\*M(y,n-1) \simeq\\
&\simeq \bigoplus_{x,y} \Hom\big(\sJ^*(x,y),\Hom\big(M(x,n),M(y,n-1)\big)\big).
\end{aligned}$$ Thus the set $\setsuch{M_{xy}(n)}{x,y\in\ind\kA}$ defines a homomorphism $d_n:P_n\to P_{n-1}$ and vice versa. Moreover, one easily verifies that this condition is equivalent to the relation $d_nd_{n+1}=0$. Since every projective $\kA$-module can be given in the form $\bigoplus_x\kA^x\*V_x$ for some uniquely defined vector spaces $V_x$, we get a one-to-one correspondence between finite dimensional representations of $\dB$ and bounded minimal complexes of projective $\kA$-modules. In the same way one also establishes a one-to-one correspondence between morphisms of representations and morphisms of the corresponding complexes, compatible with their multiplication, which completes the proof.
Note that we can pick out subcategories of $\rep(\dB)$ that describe each of the categories $\kQ^N(\kA)$. Namely, denote by $\rep^N(\dB)$ the full subcategory of $\rep(\dB)$ consisting of all representations $M$ such that $M(x,n)=0$ for $n>N$. Let $\sT_N$ be the ideal of $\rep^N(\dB)$ generated by the identity morphisms of the *trivial representations* $S_{x,N}$, where $S_{x,N}(x,N)=\Mk$ and $S_{x,N}(y,n)=0$ if $(y,n)\ne(x,N)$. Obviously, the equivalence of the categories $\rep(\dB)$ and $\kP_{\min}^b(\kA)$ maps representations from $\rep^N(\dB)$ onto the complexes $P_\bp$ with $P_n=0$ for $n>N$. Moreover, it maps $S_{x,N}$ to the complex $T^{x,N}_\bp$ with $T^{x,N}_N=\kA^x$ and $T^{x,N}_n=0$ for $n\ne N$. Note that a morphism of complexes from $\kQ^N(\kA)$ is quasi-homotopic to zero if and only if it factorises through a direct sum of complexes $T^{x,N}_\bp$. This gives the following
\[boxN\] The category $\kQ^N(\kA)$ is equivalent to the factor category $\rep^N(\dB)/\sT_N$.
Evidently, $\,\ind\big(\rep^N(\dB)/\sT_N\big)=\ind\big(\rep^N(\dB)\big)\setminus\setsuch{S_{x,N}}{x\in\ind\kA}$.
\[twbox\] A lofd category $\kA$ is derived tame (derived wild) if and only if so is the box $\dB(\kA)$.
Proof of the Main Theorem {#s3}
=========================
Now we are able to prove the main theorem. Namely, according to Corollary \[twbox\], it follows from the analogous result for sliced boxes.
\[mbox\] Every sliced triangular box is either tame or wild.
Actually, just as in [@d0] (see also [@d1]), we shall prove this theorem in the following form.
\[mini\] Suppose that a sliced triangular box $\dA=(\kA,\kV)$ is not wild. For every dimension $\fD$ of its representations there is a functor $F_\fD:\kA\to\add\kM$, where $\kM$ is a minimal category, such that every representation $M:\kA\to\vec$ of $\dA$ of dimension $\Dim (M)\le\fD$ is isomorphic to the inverse image $F^*N=N\circ F$ for some functor $N:\kM\to\vec$. Moreover, $F$ can be chosen *strict*, which means that $F^*N\simeq F^*N'$ implies $N\simeq N'$ and $F^*N$ is indecomposable if so is $N$.
\[r21\] We can consider the induced box $\dA^F=(\kM,\kM\*_\kA\kV\*_\kA\kM)$. It is a so-minimal box, and $F^*$ defines a full and faithful functor $\rep(\dA^F)\to\rep(\dA)$. Its image consists of all representations $M:\kA\to\vec$ that factorise through $F$.
As we only consider finite dimensional representations, we may assume that the set of objects is finite. Then we may assume that all values of a slicing $s:\ver\dA\to\mZ$ belong to $\mN$, and there are finitely many of them. Let $m=\max\setsuch{s(x)}{x\in\ver\dA}$. We use induction on $m$. If $m=1$, $\dA$ is free, and our claim has been proved in [@d0]. So we may suppose that the theorem is true for smaller values of $m$, especially, it is true for the restriction $\dA'=(\kA',\kV')$ of the box $\dA$ onto the subset $\sV=\setsuch{x\in\ver\dA}{s(x)<m}$. Thus there is a strict functor $F':\kA'\to\add\kM$, where $\kM$ is a minimal category, such that every representation of $\dA'$ of dimension smaller than $\fD$ is of the form ${F'}^*N$ for $N:\kM\to\vec$. Consider now the amalgamation $\kB=\kA\bigsqcup^{\kA'}\kM$ and the box $\dB=(\kB,\kW)$, where $\kW=\kB\*_\kA\kV\*_\kA\kB$. The functor $F'$ extends to a functor $F:\kA\to\kB$ and induces a homomorphism of $\kA$-bimodules $\kV\to\kW$; so it defines a functor $F^*:
\rep(\dB)\to\rep(\dA)$, which is full and faithful. Moreover, every representation of $\dA$ of dimension smaller than $\fD$ is isomorphic to $F^*N$ for some $N$, and all possible dimensions of such $N$ are restricted by some vector $\fB$. Therefore, it is enough to prove the claim of the theorem for the box $\dB$.
Note that the category $\kB$ is generated by the loops from $\kM$ and the images of the arrows from $\kA(a,b)$ with $s(a)=m$ (we call them *new arrows*). It implies that all possible relations between these morphisms are of the form $\sum_\al g_\al(\be)\al=0$, where $\be\in\kB(b,b)$ is a loop (necessarily minimal, i.e. with $\dd\be=0$), $g_\al$ are some polynomials, and $\al$ runs through the set of new arrows from $a$ to $b$ for some $a$ with $s(a)=m$. Consider all of these relations for a fixed $a$; let them be $\sum_\al g_{\al,k}(\be)\al=0$. Their coefficients form a matrix $\big(g_{\al,k}(\be)\big)$. Using transformations of the set $\set\al$ and of the set of relations, we can make this matrix diagonal, i.e. bring all relations to the form $f_\al(\be)\al=0$ for some polynomials $f_\al$. If one of the $f_\al$ is zero, the box $\dB$ has a sub-box $$\xymatrix{
{a} \ar[rr]^{\al} && b \ar@(ur,dr)[]^{\be} },$$
with $\dd\al=\dd\be=0$, which is wild; hence $\dB$ and $\dA$ are also wild. Otherwise, let $f(\be)\ne0$ be a common multiple of all $f_\al(\be)$ and $\La=\set{\lst \la r}$ be the set of roots of $f(\be)$. If $N\in\rep(\dB)$ is such that $N(\be)$ has no eigenvalues from $\La$, then $f(N(\be))$ is invertible; thus $N(\al)=0$ for all $\al:a\to b$. So we can apply the *reduction of the loop* $\be$ with respect to the set $\La$ and the dimension $d(a)$, as in [@d0 Propositions 3,4] or [@d1 Theorem 6.4]. It gives a new box that has the same number of loops as $\dB$, but the loop corresponding to $\be$ is “isolated,” i.e. no other arrows start or end at its vertex. In the same way we can isolate all the loops, obtaining a semi-free triangular box $\dC$ and a morphism $G:\dB\to\dC$ such that $G^*$ is full and faithful and all representations of $\dB$ of dimensions smaller than $\fB$ are of the form $G^*L$. As the theorem is true for semi-free boxes, this completes the proof.
[99]{} Th. Brüstle. *Tree Algebras and Quadratic Forms*. Habilitation Thesis. Universität Bielefeld, 2002.
Yu. A. Drozd. On tame and wild matrix problems. *Matrix Problems.* Institute of Mathematics, Kiev, 1977, 104–114.
Yu. A. Drozd. Tame and wild matrix problems. *Representations and quadratic forms.* Institute of Mathematics, Kiev, 1979, 39–74. (English translation: *Amer. Math. Soc. Translations* [**128**]{} (1986) 31–55.)
Yu. A. Drozd. Reduction algorithm and representations of boxes and algebras. *Comptes Rendues Math. Acad. Sci. Canada* [**23**]{} (2001), 97-125.
Yu. A. Drozd. Semi-continuity for derived categories. arXiv:math.RT/0212015 (to appear in *Algebras and Representation Theory*).
Yu. A. Drozd. Derived tame and derived wild algebras. arXiv:math.RT/0310171 (to appear in *Algebra and Discrete Mathematics*).
Yu. A. Drozd and G.-M. Greuel. Tame-wild dichotomy for Cohen–Macaulay modules. *Math. Ann.* [**294**]{} (1992), 387–394.
Ch. Geiß. Derived tame algebras and Euler-forms. *Math. Z.* [**239**]{} (2002), 829–862.
Ch. Geiß and H. Krause. On the notion of derived tameness. *J. Algebra Appl.* [**1**]{} (2002), 133–157.
S. I. Gelfand and Yu. I. Manin. *Methods of Homological Algebra.* Springer–Verlag, 1996.
D. Happel. *Triangulated Categories in the Representation Theory of Finite Dimensional Algebras*. London Mathematical Society Lecture Notes Series, [**119**]{}, Cambridge University Press, Cambridge, 1988.
D. Vossieck, The algebras with discrete derived category. *J. Algebra* [**243**]{} (2001), 168–176.
[^1]: The first author was supported by FAPESP (Grant N 98/14538-0) and CNPq (Grant 301183/00-7).
---
author:
- 'Elias M. Stein and Brian Street[^1]'
bibliography:
- 'radon.bib'
title: 'Multi-parameter singular Radon transforms'
---
Introduction
============
Statement of main results {#SectionResults}
-------------------------
When $\gamma$ is real analytic {#SectionRealAnal}
==============================
Product Kernels {#SectionProductKernel}
===============
Scaling and the Frobenius theorem {#SectionScaling}
=================================
The $L^2$ theorem {#SectionL2}
=================
The Littlewood-Paley theory {#SectionLittlewood}
===========================
Auxiliary operators
===================
Completion of the proof
=======================
More general results {#SectionGeneral}
====================
[^1]: The second author was partially supported by NSF DMS-0802587.
---
abstract: 'Electromagnetic Navigation Systems (eMNS) can be used to control a variety of multiscale devices within the human body for remote surgery. Accurate modeling of the magnetic fields generated by the electromagnets of an eMNS is crucial for the precise control of these devices. Existing methods assume a linear behavior of these systems, leading to significant modeling errors within nonlinear regions exhibited at higher magnetic fields. In this paper, we use a random forest (RF) and an artificial neural network (ANN) to model the nonlinear behavior of the magnetic fields generated by an eMNS. Both machine learning methods outperformed the state-of-the-art linear multipole electromagnet method (LMEM). The RF and the ANN model reduced the root mean squared error of the LMEM when predicting the field magnitude by around 40% and 80%, respectively, over the entire current range of the eMNS. At high current regions, especially between 30 and 35 A, the field-magnitude RMSE improvement of the ANN model over the LMEM was over 35 mT. This study demonstrates the feasibility of using machine learning methods to model an eMNS for medical applications, and its ability to account for complex nonlinear behavior at high currents. The use of machine learning thus shows promise for improving surgical procedures that use magnetic navigation.'
author:
- |
Ruoxi Yu$^{1}$, Samuel L. Charreyron$^{2}$, Quentin Boehler$^{2}$, Cameron Weibel$^{2}$,\
Carmen C. Y. Poon$^{1}$ and Bradley J. Nelson$^{2}$[^1] [^2]
bibliography:
- 'references.bib'
title: '**Modeling Electromagnetic Navigation Systems for Medical Applications using Random Forests and Artificial Neural Networks** '
---
INTRODUCTION
============
Magnetic Navigation Systems (MNS) use magnetic fields to wirelessly control biomedical devices inside the body. These may be untethered magnetic micro or nanorobots that are pulled by magnetostatic forces due to spatially varying magnetic fields [@Ullrich2013], or that “swim” in fluids due to time-varying magnetic fields [@Servant2015]. Additionally, magnetic navigation can be used for steering tethered surgical devices such as ophthalmic microcatheters [@Charreyron2018] or endoscopes [@Scaglioni2019]. Magnetic navigation has seen clinical adoption for cardiovascular interventions, with MNS from Aeon Scientific [@Chautems2017] and Stereotaxis Inc. [@Ernst2004] achieving clinical certification and performing operations on several thousand patients. Magnetic navigation can also be adopted to control wireless capsules for noninvasive examination of the large gastric cavity [@MACE], and therapeutic functions along the gastrointestinal tract, such as haemostasis [@tWCE] and endoscopic submucosal dissection [@DBLP:journals/tii/LauLCYLP16], can potentially be improved with the integration of such systems.
Magnetic navigation can either be performed by sets of moving or rotating permanent magnets [@Wright2017], or by systems comprising several electromagnets [@Kummer2010], also known as Electromagnetic Navigation Systems (eMNS). Modeling a MNS consists of determining the magnetic field flux density at different locations within the workspace, given different varying control parameters such as the permanent magnet placement or the electromagnet currents. By modeling the magnetic fields acting on the steered tools, such as microrobots or catheters, forward kinematic models relating the control variables and the state of the tool can be obtained. The kinematic models can then be inverted to determine the control variables for a desired tool configuration. Therefore, accurately modeling the magnetic fields of a MNS is important for precisely steering the tool. Accurate magnetic models are even more important for tracking devices in a MNS, due to the significant position-dependency of magnetic fields [@DiNatali2016; @Son2016]. By combining precise measurements of onboard magnetic sensors and an accurate magnetic field map, the pose of a tool in a MNS can be tracked without using line-of-sight, magnetic resonance imaging, or fluoroscopy.
The magnetic vector fields generated by ferromagnets can be modeled using finite-element-method simulation [@Sikorski2017], by interpolating the measured values over space [@Ongaro2018], or using a physics-based multipole model [@Petruska2017] that is fit to the measured magnetic field data. When using electromagnets, one must characterize the relationship between the currents applied to the electromagnet coil windings and the resultant magnetic fields. Electromagnets often comprise ferromagnetic cores for magnifying the fields generated by the coils. In previous modeling approaches, the magnetization of such cores are assumed to depend linearly on the magnetic fields that were used to magnetize them. Thus, by the principle of superposition, one can represent the magnetic field flux-density $\mathbf{b} \in \mathbb{R}^3$ generated at a given position $\mathbf{p} \in \mathbb{R}^3$ as the product of an actuation matrix $A \in \mathbb{R}^{3\times N_c}$ and a vector of currents $\mathbf{i} \in \mathbb{R}^{N_c}$, i.e. $$\begin{aligned}
\mathbf{b}(\mathbf{p}, \mathbf{i}) &= A(\mathbf{p}) ~ \mathbf{i}, \label{eq:linear}\end{aligned}$$ where $\mathbf{i}$ corresponds to the currents in the current windings of the $N_c$ electromagnets.
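As a minimal numerical sketch (not the calibration procedure of the paper), the actuation matrix $A(\mathbf{p})$ at a fixed position can be recovered from current–field pairs by ordinary least squares; the data here are synthetic stand-ins:

```python
import numpy as np

# With the linear model b = A(p) i at a fixed position p, stack the
# applied current vectors as rows of I (M x Nc) and the measured fields
# as rows of B (M x 3); then A solves the least-squares problem
# min ||I A^T - B||^2.
rng = np.random.default_rng(0)
Nc, M = 8, 200
A_true = rng.normal(size=(3, Nc))          # "unknown" actuation matrix
I = rng.uniform(-35, 35, size=(M, Nc))     # applied current vectors
B = I @ A_true.T                           # noiseless field measurements
A_est = np.linalg.lstsq(I, B, rcond=None)[0].T
```

With noiseless synthetic data the recovered matrix matches `A_true` exactly; with real measurements the residual reflects sensor noise and the nonlinearities discussed below.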
However, as the external magnetization fields increase, ferromagnetic materials exhibit saturation, and the relationship between coil currents and the generated magnetic field is no longer linear. Due to these nonlinearities, the superposition principle does not hold, and the effects of the different coils cannot be separated. A more general expression $g$ for the magnetic fields must be used to take into account the effects of all coil currents, i.e. $$\begin{aligned}
\mathbf{b}(\mathbf{p}, \mathbf{i}) &= g(\mathbf{p}, \mathbf{i}). \label{eq:nonlinear}\end{aligned}$$
For systems with a small number of electromagnets, $g$ can be determined by measuring discrete magnetic fields that span the entire space of currents, and then interpolating a smooth function between the field measurements. However, for systems consisting of more than three electromagnets, such an approach becomes computationally infeasible due to the “curse of dimensionality".
In this work, we propose two machine learning methods to model the magnetic fields generated by the CardioMag, an eMNS exhibiting strong saturation over 70% of its magnetic field generation capacity. For comparison, a state-of-the-art linear model is used as the baseline.
This paper is organized as follows. We first introduce the baseline model and the applied machine learning models in \[sec:models\]. We subsequently detail the data collection and model training processes in \[sec:experiments\]. We present the obtained results in \[sec:results\] followed by a discussion in \[sec:discussion\], and conclude in \[sec:conclusion\].
MODELING METHODS {#sec:models}
================
Two supervised machine learning methods, a random forest (RF) and an artificial neural network (ANN), are used to model an eMNS. Both approaches are compared to a linear multipole electromagnetic method (LMEM) introduced in the literature, which constitutes our baseline.
Linear Multipole Electromagnet Method (Baseline)
------------------------------------------------
The LMEM uses a multi-source spherical multipole expansion to describe the magnetic scalar potential produced by a set of electromagnets with ferromagnetic cores [@Petruska2017]. This formulation ensures that the magnetic field associated with this scalar potential is curl-free and divergence-free, which constitute two fundamental physical properties of the field. This is because the multipole expansion is a solution to Laplace’s equation, which defines the magnetic scalar potential. This method has been previously used to model the CardioMag.
The model assumes a linear relationship between the magnetic fields and the coil currents, and superimposes the contribution of each electromagnet to predict the magnetic field. It neither considers the nonlinearities that occur within the saturation region of the ferromagnetic cores, nor the perturbations in the magnetic field resulting from other unidentified sources.
Machine Learning Methods
------------------------
In this work, we use data collected from magnetic flux density sensors placed over the workspace of the CardioMag, for a number of electromagnet current configurations, to train both machine learning models. The task at hand is to predict the generated 3-D magnetic field flux density at a specific position, given the electrical currents measured on the electromagnets. The prediction output is a vector containing the continuous 3-D magnetic field flux density. Since multivariate regression is needed to predict these three values describing the field, a RF and an ANN were used in this study.
A RF [@Breiman2001] is an ensemble learning method, where predictions are made by growing multiple decision trees. Each tree performs a binary split of samples at each node by considering a subset of features. For regression problems, final predictions are made in a RF by averaging the results of all trees. An ANN [@Mitchell:1997:ML:541177] contains many connected neurons arranged in layers to produce network outputs. ANNs are motivated by biological neural systems and can be trained to minimize the error between the network output values and the target values using the backpropagation algorithm.
EXPERIMENTS {#sec:experiments}
===========
In this section, we introduce the data collection process, including the hardware setup as well as the data collection protocol, and the model training and the evaluation process.
Hardware Setup
--------------
The eMNS to be modeled was introduced in [@Petruska2017] and is depicted in Fig. \[cmag\].A. It comprises $N_c = 8$ electromagnets surrounding a $10 \times 10 \times 10$ cubic workspace. The maximum current in each electromagnet is , and the maximum power in the whole system is .
![Data collection setup for an eMNS. A: The CardioMag, an eMNS B: Magnetic sensor array C: Magnetic field measurements with the sensor array for a random current set[]{data-label="cmag"}](setup)
To obtain 3-D magnetic field measurements within the workspace, an array of magnetic sensors was built, as shown in Fig. \[cmag\].B. The array consists of 125 identical Hall-effect magnetic sensors (TLV493D-A1B6, Infineon) [@infineon:TLE4473GV55-2], arranged in a $5 \times 5 \times 5$ cubic grid with 5 cm spacing in each direction.
Data Collection
---------------
A set of uniformly random current vectors with values between - and was first generated. Current vectors exceeding the maximum system power were discarded. A total number of 3,590 distinct current vectors were generated for the dataset. The sensor array was placed in the center of the workspace, as shown in Fig. \[cmag\].B. The pre-generated current vectors were applied to the system and the resultant magnetic fields were recorded by the sensors at a frequency of .
The raw measurements from the magnetic sensors were then preprocessed to construct a dataset for experiments. Since the electromagnets exhibit a dynamic response, the transient region of measurements was discarded to retain static measurements only. The currents measured on the coil windings exhibited insignificant white noise, with a mean standard deviation of . Due to the slow dynamic response of the coils, such high-frequency noise had little effect on the generated magnetic fields. Nonetheless, the current measurements were smoothed by averaging their values over the measurement window. The mean standard deviation of the magnetic field measurement was . Magnetic field measurements were also averaged to reduce the effect of such measurement noise.
The dataset consisted of $M = $ 427,210 samples obtained from 119 sensors[^3]. Table \[tbl\_dataset\] shows the statistics of the dataset. Each recorded sample $j \in [1,M]$ consisted of: 1) a position vector $\mathbf{p}^j = \begin{bmatrix}x^j & y^j & z^j \end{bmatrix}^T$ of the sensor; 2) a current vector $\mathbf{i}^j = \begin{bmatrix}i^j_1 & \dots & i^j_8 \end{bmatrix}^T$, where $i^j_k$ indicates the smoothed current applied to the $k$-th coil with $k \in [1,8]$; 3) a magnetic field vector $\mathbf{b}^j = \begin{bmatrix}b^j_x & b^j_y & b^j_z \end{bmatrix}^T$ measured at $\mathbf{p}^j$. The magnitude of the magnetic field corresponds to the magnetic flux density magnitude $\|\mathbf{b}^j\|$. A sample extracted from the dataset is depicted in Fig. \[cmag\].C, where each arrow represents the measured magnetic field at a sensor position in space.
**Parameter** **Minimum** **Maximum** **Unit**
------------------------- ------------- ------------- ----------
x -10.12 10.12 cm
y -10.61 12.33 cm
z 1.94 22.20 cm
$i_k$ for $k \in [1,8]$ -35.00 34.99 A
$b_x$ -179.35 178.46 mT
$b_y$ -166.81 170.45 mT
$b_z$ -179.50 183.89 mT
: Statistical information about the dataset.[]{data-label="tbl_dataset"}
Model Training
--------------
The collected current vectors were randomly divided into a training and testing dataset with a 9:1 ratio. For all models, the input data consisted of an 11-dimensional vector concatenating the position in the workspace $\mathbf{p} \in \mathbb{R}^3$ and the electromagnet current vector $\mathbf{i} \in \mathbb{R}^8$. Each model output a 3-D magnetic field $\mathbf{b} \in \mathbb{R}^3$. To generate a training dataset for the LMEM, we followed the original requirements in [@Petruska2017] where the maximum current in each coil was limited to 5 A. As neural networks are sensitive to the scale of inputs, all features ($\textbf{p}$ and $\textbf{i}$) were scaled between 0 and 1 using the min-max scaling method based on the statistics calculated from the training dataset similar to those shown in Table \[tbl\_dataset\].
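A minimal sketch of the min-max scaling step described above, using statistics computed from the training set only (the two-column toy data are illustrative, not the dataset of Table \[tbl\_dataset\]):

```python
import numpy as np

# Scale each feature column to [0, 1] using the training-set minimum
# and maximum, so that test data are scaled with the same statistics.
def minmax_scale(X, X_train):
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return (X - lo) / (hi - lo)

# Toy example: one position coordinate (cm) and one coil current (A).
X_train = np.array([[0.0, -35.0], [10.0, 35.0], [5.0, 0.0]])
X_scaled = minmax_scale(X_train, X_train)
```

In the study the same transform would be applied to all 11 input features ($\textbf{p}$ and $\textbf{i}$).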
The RF model was implemented using the scikit-learn package [@Pedregosa2011]. A five-fold cross-validation grid search was performed to select hyperparameters for the model. The searched parameter grid covered the number of trees between 10 and 100, the maximum depth of each tree between 10 and 30, the minimum number of samples to split at each node between 2 and 20, the maximum number of features to consider at each node between 3 and 5, and the minimum number of samples at a leaf node between 1 and 15. The best performing model had 100 trees with a maximum depth of 25, and a maximum of 5 features to consider. It required at least 2 samples at each node and 1 sample at each leaf node.
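The reported best-performing configuration maps directly onto scikit-learn's `RandomForestRegressor`; the snippet below is a sketch on synthetic stand-in data, not the measured dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the 11-D inputs [p (3), i (8)] and 3-D field targets.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 11))
y = X[:, 3:] @ rng.normal(size=(8, 3))   # fake linear current-to-field map

# Hyperparameters of the best model found by the grid search above.
rf = RandomForestRegressor(
    n_estimators=100, max_depth=25, max_features=5,
    min_samples_split=2, min_samples_leaf=1, random_state=0,
)
rf.fit(X, y)
pred = rf.predict(X)
```

The fitted forest also exposes `rf.feature_importances_`, which is how the relative importance values discussed later can be obtained.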
The ANN model was implemented using the Keras library [@CholletFrancois2015]. The model structure was adopted from a study [@Christensen2017] with similar feature dimensions. With 11 neurons in the input layer, the implemented ANN contained three hidden layers with 100, 50 and 25 neurons, respectively. The hyperbolic tangent function (tanh) was selected as the activation function in each hidden layer. Finally, the output layer had three neurons with a linear activation function. During training, 10% of the training data were set aside for validation. The ANN model was trained using the Adam [@adam] optimizer with an initial learning rate of 0.001, to minimize the mean squared error between the predicted and the measured magnetic fields. The batch size was chosen as 128 samples, and the number of epochs was 50. To prevent overfitting, early stopping was applied when the validation loss did not decrease for 5 consecutive epochs. The model with the lowest validation loss was selected for testing.

The size of the training data is an important factor limiting the performance of machine learning models. In this study, we conduct further experiments to evaluate the impact of the size of the training data on the prediction accuracy of the RF and ANN models. Subsets containing 10% to 90% of the training samples were randomly selected from the original training dataset to construct multiple smaller training subsets. The RF and ANN models were independently trained on these training subsets, using the same model hyperparameters and training process as described previously. All trained models were then tested on the original testing dataset for comparison.
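The paper implements the network in Keras; as a dependency-light sketch, the same architecture (three tanh hidden layers of 100, 50 and 25 neurons, linear output, Adam with learning rate 0.001, 10% validation split, early stopping) can be mirrored with scikit-learn's `MLPRegressor` on synthetic stand-in data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(400, 11))      # scaled [p, i] features
y = X[:, 3:] @ rng.normal(size=(8, 3))     # fake 3-D field targets

ann = MLPRegressor(
    hidden_layer_sizes=(100, 50, 25), activation="tanh",
    solver="adam", learning_rate_init=0.001, batch_size=128,
    max_iter=50, early_stopping=True, n_iter_no_change=5,
    validation_fraction=0.1, random_state=0,
)
ann.fit(X, y)
pred = ann.predict(X)
```

Here `max_iter=50` plays the role of the 50-epoch budget and `n_iter_no_change=5` that of the early-stopping patience; the Keras version would use `callbacks.EarlyStopping` instead.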
Evaluation Metrics
------------------
To evaluate the prediction performance of the models, two general goodness-of-fit metrics were used to compare the measured and predicted magnetic field. These included the $R^2$-score and the root mean squared error (RMSE) for each component computed as
$$\begin{aligned}
R_\star^2 = 1 - \frac{\sum_{j=1}^{N} ({b}^j_{\star}- \hat{b}^j_{\star})^2}{\sum_{j=1}^{N} ({b}^j_{\star} - \bar{{b}}_\star)^2},\end{aligned}$$
and $$\begin{aligned}
\text{RMSE}_{\star} = \sqrt{\frac{\sum_{j=1}^{N}(b^j_{\star} - \hat{b}^j_{\star})^2}{N}},\end{aligned}$$ where $b^j_\star$ and $\hat{b}^j_\star$ are respectively the measured and model predicted values for the $j$-th sample and the $\star$ component; $\bar{b}_\star$ is the mean of the measured magnetic field over the $N$ samples composing the testing dataset. Additionally, the prediction performance on the magnetic field magnitude was also evaluated using these two metrics denoted as $R^2_\text{norm}$ and $\text{RMSE}_\text{norm}$. An $R^2$ value of 1 indicates that the model predictions perfectly fit the measurements, whereas a RMSE close to 0 suggests a good model.
To evaluate the models’ prediction performance at different locations, the mean absolute percentage error of the magnetic field magnitude at location **p** is calculated by $$\begin{aligned}
\text{MAPE}_{norm}^\textbf{p} = \frac{100\%}{K} \sum_{k=1}^{K} \left |\frac{ \|\mathbf{b}^\mathbf{p}_k\| - \|\hat{\mathbf{b}}^\mathbf{p}_k\| }{\|\mathbf{b}^\mathbf{p}_k\|}\right |,\end{aligned}$$ where $\|\textbf{b}^\textbf{p}_k\|$ and $\|\hat{\textbf{b}}^\textbf{p}_k\|$ are respectively the measured and predicted magnetic flux density magnitudes at location **p**, and $k$ is the index of the current vector, with a total of $K$ current vectors tested at each location.
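The three metrics translate directly into code; the following is a straightforward NumPy implementation of the definitions above:

```python
import numpy as np

# Component-wise R^2 score for one field component.
def r2_component(b, b_hat):
    ss_res = np.sum((b - b_hat) ** 2)
    ss_tot = np.sum((b - b.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Component-wise root mean squared error.
def rmse(b, b_hat):
    return np.sqrt(np.mean((b - b_hat) ** 2))

# Mean absolute percentage error of the field magnitude at one location;
# B and B_hat are (K, 3) arrays of measured / predicted fields.
def mape_norm(B, B_hat):
    n = np.linalg.norm(B, axis=1)
    n_hat = np.linalg.norm(B_hat, axis=1)
    return 100.0 * np.mean(np.abs((n - n_hat) / n))

b = np.array([1.0, 2.0, 3.0, 4.0])
perfect_r2 = r2_component(b, b)   # 1.0 for a perfect fit
```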
RESULTS {#sec:results}
=======
The overall testing performance of the LMEM, RF, and ANN models is summarized in Table \[tbl\_overall\_test\]. The LMEM achieved over 0.75 $R^2$ for all field components, but only 0.29 $R^2$ for the field magnitude. The LMEM produced a component-wise RMSE of at least , and a field-magnitude RMSE of nearly . The RF and ANN models achieved significantly better results. The RF model achieved over 0.85 $R^2$ for all field components and 0.74 $R^2$ for the field magnitude. The RF model produced around a 30% improvement over the baseline model in terms of the component-wise RMSE and a 40% improvement in terms of the field-magnitude RMSE. The ANN model achieved 0.99 $R^2$ in predicting each field component and the magnitude, and showed an 80% improvement over the LMEM in terms of both the component-wise and field-magnitude RMSE.
**Model** $\mathbf{R^2_x}$ $\mathbf{R^2_y}$ $\mathbf{R^2_z}$ $\mathbf{R^2_{norm}}$ $\mathbf{RMSE_x (mT)} $ $\mathbf{RMSE_y (mT)} $ $\mathbf{RMSE_z (mT)} $ $\mathbf{RMSE_{norm} (mT)} $
----------- ------------------ ------------------ ------------------ ----------------------- ------------------------- ------------------------- ------------------------- ------------------------------
LMEM 0.86 0.81 0.76 0.29 14.34 15.81 14.51 23.90
RF 0.92 0.92 0.86 0.74 10.89 10.00 11.11 14.43
ANN 0.99 0.99 0.99 0.99 3.10 2.68 2.72 3.01
To further examine the models’ prediction performance at different currents, testing samples were grouped into current levels according to the maximum electromagnet current in the current vector, $i^j_{\max} = \max(|i^j_1|, \dots ,|i^j_8|)$. Predictions of the generated field magnitudes were then evaluated independently for the different current levels, as shown in Fig. \[fig\_Results\_diff\_amps\]. For both metrics, the performance of the LMEM tended to decrease as the current level increased. The RF and ANN models had relatively stable performance across all current levels in terms of $R^2_\text{norm}$, although the RF model showed an increasing $\text{RMSE}_\text{norm}$ as the currents increased. The ANN model performed better than the RF model across all current levels. When the applied currents were small, where the linear assumptions of the LMEM still held, the LMEM and the ANN model showed similar performance, while the RF performed worse. For testing samples with a maximum current over , the ANN model performed better than the LMEM. The superior performance of the RF model over the LMEM appeared when the maximum current exceeded . When the maximum applied current was within 30-35 , the improvements of the RF and ANN models over the LMEM were and respectively in terms of field-magnitude RMSE.
![Prediction performance comparison in the testing dataset stratified by current levels.[]{data-label="fig_Results_diff_amps"}](images/metrics_by_current_icra)
To examine the spatial modeling error distribution, predictions of the three models were evaluated at all sensor locations. The $\text{MAPE}_{norm}$ was calculated at each location as depicted in Fig. \[spatial\_error\] for all samples in the testing dataset. Both LMEM and RF produced significantly higher $\text{MAPE}_{norm}$ than the ANN model at all evaluated locations. The RF model showed slightly better prediction performance than the LMEM.

Fig. \[fig\_train\_size\] shows the testing performance of the RF and ANN models when trained with different amounts of training data. In general, both machine learning methods showed an increase in performance with an increasing amount of training data. Compared with the ANN model, the RF model exhibited a more significant performance improvement when supplied with more training data. For both models, and especially for the ANN model, the performance gain started to diminish when the training subsets exceeded 40% of the original training data.
![The impact of the training set size on prediction performance.[]{data-label="fig_train_size"}](images/less_training_single_trial_icra)
DISCUSSION {#sec:discussion}
==========
An eMNS can be designed for a specific surgical application, and different numbers or configurations of electromagnets can be used to maximize the resultant magnetic fields within a sufficiently large workspace for the operation. Modeling a given eMNS can be carried out prior to deployment with a protocol similar to the one described in this study. Our modeling approach can be used to model any eMNS regardless of the workspace size as well as the number and properties of the electromagnets.
Overall, both implemented machine learning models performed better than the LMEM on the entire testing dataset. The LMEM was able to model the magnetic fields precisely at low currents, but not at currents higher than 15-20 A. This was expected, since the linear assumption does not hold at these currents. The development of the LMEM requires prior knowledge of the geometry and strength of the dipoles which model the ferromagnetic sources of the eMNS, which in some cases are difficult to define. Although the samples covered a wide range of current levels and spatial locations, relatively stable prediction performance was achieved by both machine learning models, especially the ANN model.
The ANN model performed better than the RF model for all evaluation metrics in all scenarios. Since the regression output of a RF model is predicted by averaging the results from all trees, and the prediction of each tree depends on the samples that arrive at the leaf node, only a finite number of potential prediction outputs are possible once the model is trained. When used in practice, additional steps are required to interpolate the RF field predictions between locations and current vectors. The ANN model, on the other hand, can directly output continuous prediction values within the range of the activation function in the output layer. In this case, the ANN model may be a better prediction method for modeling the continuous magnetic fields generated by an eMNS. Although the RF was not the most precise method for modeling the eMNS, it could compute the relative importance of the features for predicting the magnetic fields. The higher the value, the more important the feature is in the prediction. The feature importance values returned from our RF model were as follows: $i_8$ (0.15), $x$ (0.12), $i_2$ (0.12), $i_4$ (0.11), $i_6$ (0.11), $y$ (0.09), $z$ (0.08), $i_1$ (0.06), $i_7$ (0.06), $i_3$ (0.05), $i_5$ (0.05). All features contributed at a similar level of importance to the magnetic field prediction. However, from the final RF model’s perspective, a location’s coordinate along the x-direction was slightly more important than the other two directions. Moreover, currents of the even-numbered coils were, in general, more important than those of the odd-numbered coils. These values provide insights into understanding the MNS behavior for those who are unfamiliar with the system. In addition, when considering additional factors which may relate to the magnetic field prediction, these values can be used for selecting important predictors for modeling approaches.
The sample size of the training data is critical for both the RF and ANN models to achieve good performance. In this study, we evaluated the influence of the size of the training data on model performance. As anticipated, the performance of both the RF and ANN models improved when the models were supplied with more training data. Since the RF model cannot extrapolate target values, increasing the training size may increase the range of values it can predict, hence leading to better performance.
The target modeling performance will depend on the specific application of the eMNS, and whether the modeled magnetic field map is going to be used to determine control variables applied to the system or the states of the devices. In general, there is no ceiling on the performance improvement, but some potential applications like localization would require performance that is much higher than what is achieved by the LMEM.
CONCLUSIONS {#sec:conclusion}
===========
We presented two machine learning approaches to model a medical eMNS, namely RF and ANN models. The results of both machine learning models were compared to a state-of-the-art LMEM. Both the RF and ANN models achieved better overall performance than the LMEM in terms of $R^2$ and RMSE. The ANN model achieved even better modeling performance than the RF model, with an improvement over the LMEM of over 80%, as opposed to 40%, in terms of field-magnitude RMSE. The RMSE improvement of the ANN model over the LMEM exceeded when the applied current was in the range of 30-35 A, while the improvement of the RF model was around . Machine learning shows promise for improving the precision of surgical procedures that use magnetic navigation by improving the magnetic field prediction.
ACKNOWLEDGMENT {#acknowledgment .unnumbered}
==============
This work was done when R. Yu was visiting the Multi-Scale Robotics Lab, ETH Zurich. The research activity was supported in part by the CUHK Research Postgraduate Student Grants for Overseas Academic Activities, Hong Kong Innovation and Technology Fund, and General Research Fund.
We would like to thank NVIDIA for providing us with a Titan Xp through the GPU grant program. This work was also supported by the Swiss National Science Foundation through grant number 200021 165564.
[^1]: $^{1}$R. Yu and C.C.Y. Poon are with the Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Shatin, Hong Kong SAR.
[^2]: $^{2}$S.L. Charreyron, Q. Boehler, C. Weibel, and B.J Nelson are with the Multi-Scale Robotics Lab, ETH Zurich, Zurich, Switzerland.
[^3]: Six sensors were malfunctioning during the data collection process, and their measurements were thus removed from the dataset.
---
abstract: 'We examine the scaling regime for the detrended fluctuation analysis (DFA) - the most popular method used to detect the presence of long memory in data and the fractal structure of time series. First, the scaling range for DFA is studied for uncorrelated data as a function of length $L$ of time series and regression line coefficient $R^2$ at various confidence levels. Next, an analysis of artificial short series with long memory is performed. In both cases the scaling range $\lambda$ is found to change linearly – both with $L$ and $R^2$. We show how this dependence can be generalized to a simple unified model describing the relation $\lambda=\lambda(L, R^2, H)$ where $H$ ($1/2\leq H \leq 1$) stands for the Hurst exponent of long range autocorrelated data. Our findings should be useful in all applications of DFA technique, particularly for instantaneous (local) DFA where enormous number of short time series has to be examined at once, without possibility for preliminary check of the scaling range of each series separately.'
author:
- 'Dariusz Grech$^{(1)}$[^1] and Zygmunt Mazur$^{(2)}$'
title: ' On the scaling ranges of detrended fluctuation analysis for long-memory correlated short series of data'
---
\(1) Institute of Theoretical Physics, University of Wroc[ł]{}aw, Pl. M.Borna 9, PL-50-204 Wroc[ł]{}aw, Poland
\(2) Institute of Experimental Physics, University of Wroc[ł]{}aw, Pl. M.Borna 9,\
PL-50-204 Wroc[ł]{}aw, Poland
**Keywords**: scaling range, detrended fluctuation analysis, Hurst exponent, power laws, time series, long memory, econophysics, complex systems\
**PACS:** 05.45.Tp, 05.40.-a, 05.45.-a, 89.75.Da, 89.65.Gh, 89.90.+n
Introduction and description of the method.
===========================================
Detrended fluctuation analysis (DFA) [@DFA_1; @DFA_2; @DFA_3] is now considered the main tool in searching for fractal [@fractals_1; @fractals_2; @fractals_3], multifractal [@multifr_1; @multifr_2] and long memory effects in ordered data. More than one thousand articles on DFA and its applications have been published so far. The detrended technique has been widely applied to various topics, to mention just a few: genetics (see e.g. [@DFA_2; @gen_1; @gen_2; @gen_3]), meteorology (see e.g. [@meteo_1; @meteo_2; @meteo_3]), cardiac dynamics (see e.g. [@heart_1; @heart_2]), astrophysics (see e.g. [@astro]), finance (see e.g. [@finance_1; @finance_2; @finance_3; @finance_4; @finance_5; @finance_6; @finance_7]) and many others. The indisputable advantage of DFA over other available methods for estimating the Hurst exponent $H$ [@Hurst1; @Hurst2] in series of data, like the rescaled range method (R/S) [@multifr_1; @Hurst1; @Hurst2; @R/S], is that DFA has been shown to be resistant, to some extent, to non-stationarities in time series [@DFA_nonstat].
We will not describe the DFA technique in detail here, as this has been done in many other publications (see e.g. [@DFAdescr1; @DFAdescr2; @DFAdescr3; @DFAdescr4]). Instead, we will focus mainly on the issues relevant to the so-called scaling range, which is the subject of this article.
Briefly, the DFA method contains the following steps: (i) the time series $x(t)$ ($t=1,2,...,L$) of data (random walk) is divided into non-overlapping boxes (time windows) of length $\tau$ each, (ii) the linear trend[^2] is found within each box and then subtracted from the signal giving so called detrended signal $\hat{x}(t)$, (iii) the mean-square fluctuation $F^2(\tau)$ of the detrended signal is calculated in each box and then $F^2(\tau)$ is averaged over all boxes of size $\tau$, (iv) the procedure is repeated for all box sizes $\tau$ ($1<\tau<L$).
One expects that the power law
$$\langle F^2(\tau)\rangle_{box}\sim \tau^{2H}$$
is fulfilled for a stationary signal[^3], where $\langle.\rangle_{box}$ is the expectation value - here, the average taken over all boxes of size $\tau$. The latter equation allows one to make a linear fit in log-log scale to extract the value of the $H$ exponent, which is necessary in various applications. Alternatively, one can look at the above relationship as a link between the variance of the detrended random walk $\hat{x}(t)$ and its duration time $t$, i.e. $\langle \hat{x}^2(t)\rangle \sim t^{2H}$, which reflects the precise definition of the Hurst exponent in stochastic processes. The $H$ exponent clearly indicates the nature of the randomness of this process. One deals with uncorrelated steps in the data series if $H=1/2$, while for other values of $H$ these steps are respectively anticorrelated ($0<H<1/2$) or autocorrelated with (positive) long memory ($1/2\leq H \leq 1$).
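Steps (i)-(iv) and the log-log fit implied by Eq. (1) can be sketched as follows; this is a minimal illustration with simple non-overlapping boxes, not the exact implementation used by the authors:

```python
import numpy as np

# Minimal DFA: divide the walk into non-overlapping boxes of size tau,
# subtract the linear trend in each box, average the mean-square
# fluctuation F^2(tau) over boxes, and fit log F^2 vs log tau.
def dfa_hurst(x, taus):
    F2 = []
    for tau in taus:
        n_box = len(x) // tau
        t = np.arange(tau)
        f2 = []
        for b in range(n_box):
            seg = x[b * tau:(b + 1) * tau]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        F2.append(np.mean(f2))
    # Eq. (1): <F^2(tau)> ~ tau^(2H), so the log-log slope equals 2H.
    slope = np.polyfit(np.log(taus), np.log(F2), 1)[0]
    return slope / 2.0

rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(size=20000))   # uncorrelated increments
taus = np.arange(8, 256, 8)                # tau_min = 8, as in the text
H = dfa_hurst(walk, taus)                  # expected close to 1/2
```

For uncorrelated Gaussian increments the estimate comes out near $H=1/2$, consistent with the classification above.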
The edge part of the time series is usually not covered by any box. Some authors suggest overcoming this difficulty by performing DFA in two opposite directions in the time series, i.e. first according to the increasing and then according to the decreasing time arrow (see e.g. [@kantelhardt]). The average of the mean-square fluctuations from such divisions is then taken for the evaluation of the time series properties.
We proposed another solution in Refs. [@grech_1; @grech_2]. If the remaining part of the time series $\Delta L$ has the length $\tau/2\leq\Delta L<\tau$, we cover it by an additional box of size $\tau$ partly overlapping the preceding data. If $\Delta L<\tau/2$, we do not take into account the part of the data contained in $\Delta L$. Such a recipe is particularly useful in the ’local’ version of DFA [@finance_1; @grech_1; @grech_2; @DFA_loc2; @DFA_loc4; @grech_3; @kristoufek], where the time arrow is important. Throughout this article we will apply the latter approach.
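The box-covering recipe above can be sketched as follows (a minimal illustration of the rule, not the authors' code):

```python
# Cover L points with non-overlapping boxes of size tau. If the
# remainder dL satisfies tau/2 <= dL < tau, add one extra box of size
# tau that partly overlaps the preceding data; if dL < tau/2, the
# remaining points are dropped.
def box_starts(L, tau):
    starts = list(range(0, (L // tau) * tau, tau))
    dL = L % tau
    if dL >= tau / 2:
        starts.append(L - tau)   # overlapping final box
    return starts

assert box_starts(100, 30) == [0, 30, 60]        # dL = 10 < 15: dropped
assert box_starts(110, 30) == [0, 30, 60, 80]    # dL = 20 >= 15: extra box
```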
If time series are infinitely long, the formula in Eq.(1) holds for all $\tau$'s. However, in practice we always deal with finite, and sometimes rather short, time series. In particular, this is the case for the already mentioned instantaneous or local DFA analysis, where one wants to track the dynamics of fractal properties changing in time and (or) the time-dependent long memory in data. Covering the data series with boxes, we eventually face the situation where, for a small number of boxes covering the time series (i.e. for large $\tau$), the scaling of Eq.(1) is not revealed, due to the small statistics we deal with. In other words, in this case we are allowed to take $\tau$ only within some range $\tau_{min}\leq \tau\leq \tau_{max}$ called the scaling range. Within this range one expects “sufficiently good” performance of the power law, thus leading to the extraction of the $H$ exponent via a linear fit. But what does this “sufficiently good” performance exactly mean? In most research activities authors end up with $\tau_{max}\sim 1/4 L$, where $L$ is the total length of the considered data. Is this still a good, or already too large, scaling range? This problem is somewhat circumvented in papers, but it does have an impact on the final results. The aim of this and a forthcoming article [@grech_fut] is to confront this issue. Our approach will be different from the one published in [@stanley_scr; @michalski_scr; @comp_meth_fluct_anal]. The goal is to find the qualitative and quantitative dependence between the scaling range $\lambda \equiv \tau_{max}$ and the main parameters of the time series: its length, the level of long memory described by the Hurst exponent $H$, and the goodness of the linear fit induced by the form of Eq.(1) in log-log scale. The latter is usually measured by the $R^2$ regression line coefficient. All this can be done at a desired confidence level ($CL$) indicating the minimal ratio of time series fulfilling the functional dependence $\lambda = \lambda (L, R^2, H)$.
We are going to find this relation below.
Throughout this paper we assumed that $\tau_{min} = 8$, because below this threshold a significant lack of scaling in DFA is observed, due to the emergence of artificial autocorrelations associated with too short bursts of data in the $\tau$ boxes. We start with the analysis of uncorrelated data in the next section and proceed with long-memory correlated time series in section 3. Section 4 derives a unified formula for the scaling range vs $L$ and $R^2$ for all $H\geq 1/2$. Although the presented considerations are done exclusively for the DFA method, they can be easily extended to other detrended methods introduced in the literature, in particular to those based on moving averages [@DMA1; @DMA2; @DMA3; @DMA4]. The latter analysis is left to another publication \[40\].
DFA scaling ranges for uncorrelated data
========================================
The starting point for the entire search is the statistical analysis of an ensemble of artificially generated time series of a given length. For this ensemble we find the percentage rate of series which fall below the specified level of the regression line fit parameter $R^2$. This rate obviously depends on the maximum size of the box $\tau_{max}$: the larger $\tau_{max}$, the larger the percentage rate of series not matching the assumed criterion for $R^2$. Fig. 1 illustrates this fact for two particular series lengths, $L=10^3$ and $L=3\times 10^3$, of uncorrelated data increments ($H=1/2$) drawn from the normalized Gaussian distribution. The rejection rate, i.e. the percentage rate of series not matching the assumed criterion for $R^2$, is shown there for different $R^2$ values.
We took two particular values of the rejection rate in further analysis: $2.5\%$ and $5.0\%$, connected with confidence levels $CL= 97.5\%$ and $CL=95.0\%$, respectively. All data have been gathered numerically on a set of $5\times 10^4$ artificially generated time series of lengths in the range $5\times 10^2\leq L\leq2\times 10^4$ for the above-stated confidence levels. The $\tau_{max}$ value corresponding to the required $CL$ at a given $R^2$ is identified with the scaling range and denoted $\lambda$.
Introducing for convenience a new parameter $u=1-R^2$, we may search for a $\lambda(L)$ dependence for $L\leq 2\times 10^4$, for different values of $u$ and for selected $CL$’s. The results are presented in a series of graphs in Figs. 2, 3 and reveal a very good linear relationship between the scaling range profile and the length of uncorrelated data[^4]:
$$\lambda(u,L) = A(u)L + B(u)$$
The functional dependence of the coefficients $A(u)$ and $B(u)$ on $u$ has to be further specified from the regression-line fit of the above equation. This procedure yields the values of $A$ and $B$ estimated over the spread of $u$ values, gathered in Fig.4.

We see from these graphs that the dependence of $A(u)$ is again linear for both cases of $CL=97.5\%$ and $CL=95\%$, while the value of $B$ varies very weakly with $u$, which justifies taking $B(u) = b = \rm const$.
Ultimately, the foregoing considerations lead to the following simple formula describing the full scaling range dependence on $L$ and $u$: $$\lambda(u,L) = (au+ a_0)L + b$$ with some unknown constants $a$, $a_0$ and $b$ to be fitted.
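As a minimal illustration of how Eq.(3) is meant to be used once $a$, $a_0$ and $b$ are fitted, the scaling range can be computed directly. The function name is ours, the integer truncation follows footnote [^4], and the numbers in the example call are the $H=1/2$, $CL=97.5\%$ entries of Table 1:

```python
def scaling_range(L, R2, a, a0, b):
    """Scaling range lambda(u, L) = (a*u + a0)*L + b with u = 1 - R^2
    (Eq.(3)); only the integer part is kept, and a non-positive value
    means no admissible scaling range at this confidence level."""
    u = 1.0 - R2
    lam = (a * u + a0) * L + b
    return int(lam) if lam > 0.0 else None

# H = 1/2, CL = 97.5% entries of Table 1: a = 6.02, a0 = 0.0034, b = -92
lam = scaling_range(L=3000, R2=0.98, a=6.02, a0=0.0034, b=-92.0)  # 279
```

Note that for very short series the negative offset $b$ can drive the result below zero, which is exactly the "no scaling range" situation discussed below.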
We made the fit to Eq.(3) requiring minimization of the mean absolute error (MAE) and, simultaneously, minimization of the maximal relative error (ME) over the fitting points[^5] $(L_i, u_j)$. The MAE, denoted $\Delta_{MAE}(\lambda)$, is understood as $$\Delta_{MAE}(\lambda)=\frac{1}{N_{(ij)}} \sum_{ij}\left|\frac{\lambda^{exp}_{ij}(L,u)-\lambda_{ij}(L,u)}{\lambda_{ij}(L,u)}\right|$$ where $\lambda_{ij}(L,u)\equiv \lambda(L_i,u_j)$ is taken from Eq.(3) for the particular choice $L=L_i$ and $u=u_j$, $\lambda^{exp}_{ij}(L,u)$ is the respective value simulated numerically for the given ensemble of time series, and $N_{(ij)}$ counts the different $(ij)$ pairs.
Similarly, the ME, marked below as $\Delta_{ME}$, is simply defined as $$\Delta_{ME} = \max_{(ij)}\left(\left|\lambda^{exp}_{ij}(L,u)-\lambda_{ij}(L,u)\right|/\lambda_{ij}(L,u)\right)$$ Note that some pairs $(L_i,u_j)$ are not permitted by the specific $CL$ demand[^6]; this is already seen in Fig.1. These points are therefore absent in Figs 2-3, 5-6.
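The two error measures of Eqs.(4)-(5) are simply the mean and the maximum of the same set of relative deviations, e.g. (a direct transcription, with hypothetical input arrays):

```python
import numpy as np

def fit_errors(lam_exp, lam_fit):
    """Mean (MAE, Eq.(4)) and maximum (ME, Eq.(5)) of the relative
    deviations between simulated scaling ranges lam_exp and the Eq.(3)
    predictions lam_fit, on the same grid of (L_i, u_j) points."""
    rel = np.abs((np.asarray(lam_exp, float) - np.asarray(lam_fit, float))
                 / np.asarray(lam_fit, float))
    return rel.mean(), rel.max()

# hypothetical deviations of 10% and 5%:
mae, me = fit_errors([110.0, 95.0], [100.0, 100.0])
```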
The fitting procedure led to the values of the parameters in Eq.(3) gathered in Table 1. Exemplary results of scaling ranges for a wide spread of $L$ and $u$ values are presented graphically in Figs.5,6. Whenever $\lambda(L,u)$ comes out negative in the fitted patterns for a particular series length, this should be interpreted as the absence of a scaling range at the given confidence level $CL$ for the required value of the regression-line coefficient $R^2$ within DFA.
DFA scaling range for long-memory correlated data
=================================================
The analysis presented in the previous section can be extended to time series manifesting long memory. Series with $0.5<H<0.9$ are of particular interest, since they correspond to the long-range autocorrelated data one often meets in practice in various areas.
To construct such signals we used the Fourier filtering method (FFM) [@ffm]. The level of autocorrelations in this approach is directly modulated by the choice of the autocorrelation function $C(s)$, which for stationary series with long memory satisfies the known power law [@corr-gamma]: $$C(s)\equiv \langle \Delta x(t+s)\Delta x(t) \rangle \sim H(2H-1)s^{2H-2}
\label{corr}$$ where $\Delta x(t)=x(t+1)-x(t)$, ($t=1, 2,..., L-1$) are the increments of the discrete time series, $s$ is the time lag between observations, $H$ is the Hurst exponent [@Hurst1; @Hurst2], and the average $\langle\cdot\rangle$ is taken over all data in the series.
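A minimal sketch of this construction is shown below: white Gaussian noise is filtered in frequency space so that its power spectrum scales as $f^{1-2H}$, the spectral counterpart of the autocorrelation power law of Eq.(6) for $1/2<H<1$. This illustrates the idea only; it is not the exact FFM implementation of [@ffm]:

```python
import numpy as np

def ffm_increments(L, H, rng=None):
    """Fourier-filtering sketch: color white Gaussian noise so that its
    power spectrum follows f^(1-2H), giving long-memory increments with
    Hurst exponent H. Returns a normalized series (zero mean, unit std)."""
    if rng is None:
        rng = np.random.default_rng()
    spec = np.fft.rfft(rng.standard_normal(L))
    f = np.fft.rfftfreq(L)
    f[0] = f[1]                            # avoid dividing by zero at f = 0
    spec *= f ** ((1.0 - 2.0 * H) / 2.0)   # amplitude filter: sqrt of f^(1-2H)
    x = np.fft.irfft(spec, n=L)
    return (x - x.mean()) / x.std()
```

For $H=1/2$ the filter is flat and the output stays uncorrelated; for $H>1/2$ low frequencies are amplified, producing persistent series.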
We start with an analysis similar to the one shown in Fig.1 for uncorrelated data. Fig.7 presents an example of a plot made for the ensemble of $5\times 10^4$ autocorrelated signals of length $L=10^3$ with $H=0.7$. The percentage rate of rejected time series not satisfying the assumed goodness $R^2$ of the DFA fit is shown there for several distinct $R^2$ values as a function of the maximal box size $\tau_{max}$. The outcome of such analysis for a range of simulated data lengths and for various Hurst exponents can be collected in a number of plots, as in Figs.8a,9a for the $\lambda(L)$ dependence and in Figs.8b,9b for the $\lambda(u)$ dependence. To keep the figures readable and due to lack of space, only plots for $u=0.02$ and $L=10^3$ are shown. The relations for other values look qualitatively the same. Taking into account the results of the previous section, we should not be surprised that these relationships are again linear. Thus the formula in Eq.(2) is more general, and the coefficients $A(u)$ and $B(u)$ are linear functions of $u$ also for series with memory. The latter relationships are drawn in detail for $H=0.6,\, 0.7,\, 0.8$ in Fig.10. In particular, we notice from Fig.10b that the $B(u)$ coefficient behaves for autocorrelated data as observed in the previous section for uncorrelated signals, i.e., $B(u)$ remains almost constant as a function of $u$. Moreover, its dependence on $H$ is also negligible. Thus, the formula postulated in Eq.(3) applies also to autocorrelated data, with the $a$, $a_0$ and $b$ coefficients fitted independently for each $H$.
We did such a fit for series with long memory, assuming the same criteria for MAE and ME as previously. The results are collected in Table 1 for the two confidence levels and are shown graphically in Figs.11-16. These figures generalize the plots shown for $H=0.5$ in Figs.5,6. The extremely good linear relationship of Eq.(3) holds for autocorrelated signals up to $L=10^4$. Only for highly autocorrelated series ($H> 0.8$) or very long ones ($L\geq 10^4$) did we notice a slight departure from the linear dependence[^7].
$H\setminus CL$ $a^{97.5\%}$ $a^{97.5\%}_0$ $b^{97.5\%}$ $\Delta^{97.5\%}_{MAE}$ $\Delta^{97.5\%}_{ME}$ $a^{95\%}$ $a^{95\%}_0$ $b^{95\%}$ $\Delta^{95\%}_{MAE}$ $\Delta^{95\%}_{ME}$
----------------- -------------- ---------------- -------------- ------------------------- ------------------------ ------------ -------------- ------------ ----------------------- ----------------------
$H=0.5$ 6.02 0.0034 -92 1.8% 5.1% 7.00 0.0031 -100 1.8% 5.1%
$H=0.6$ 6.14 0.0110 -95 1.6% 5.2% 7.22 0.0098 -105 1.9% 5.3%
$H=0.7$ 6.46 0.0124 -97 1.9% 4.6% 7.66 0.0084 -103 1.8% 4.6%
$H=0.8$ 6.88 0.0136 -100 1.5% 4.8% 8.12 0.0091 -104 2.5% 6.0%
: Results of the best fit for the coefficients in Eq.(3), found for series with various autocorrelation levels measured by the $H$ exponent and for the two chosen confidence levels: $97.5\%$ and $95\%$. The accuracies of the fitted parameters are, respectively: $\Delta a=\pm 10^{-3}$, $\Delta a_0 =\pm 10^{-5}$, $\Delta b=0$.[]{data-label="tab1"}
Towards unified model of scaling ranges
=======================================
Finally, we investigate whether there exists a unified formula with a minimal number of free parameters, able to describe the scaling ranges of both uncorrelated and autocorrelated data. So far we know that Eq.(3), with parameters fitted according to Table 1, describes the $\lambda(L,u)$ dependence very well for a given $H$. We should then discuss the form of the relationships $a(H)$, $a_0(H)$, and $b(H)$ in the relation
$$\lambda(u,L,H) = (a(H)u+ a_0(H))L + b(H)$$
Looking at the bottom panels of Figs.4, 10, one perceives immediately that the assumption $b(H) = const$ is justified. Similarly, we may easily notice from the data collected in Table 1 that $a_0 (H)/({a(H)u}) \lesssim\mathcal{O}({10^{-1}})$. This means that the component $a(H)u$ gives the leading contribution to the linear factor $a(H)u + a_0(H)$ in Eq.(7) for each value of $H$, and therefore one should focus mainly on the $a(H)$ dependence depicted in Fig.17. The latter relationship also appears to be linear, which allows us to represent Eq.(7) in its simplest unified form, containing only four free parameters ($\alpha, \beta, \alpha_0, \gamma$), as follows:
$$\lambda(u,L,H) = ((\alpha H + \beta)u+ \alpha_0)L + \gamma$$
Demanding minimization of MAE and ME during the fitting of the proposal in Eq.(8) to all data points $\lambda^{exp}(L_i,u_j,H)$ indicated in the previous sections, we arrive at the best-fit results for these free parameters shown in Table 2. The obtained unified formula can be particularly useful when interpolating to arbitrary autocorrelation levels $1/2<H<1$.

In fact, the fit based on Eq.(8) is of the same quality as the one produced by Eq.(3) (compare the MAE and ME errors in Tables 1 and 2). The difference between the two fits is so small that it cannot be noticed graphically. Therefore the fitting lines shown in the series of Figs.11-16 describe equally well the unified model based on the data from Table 2 and the ’local’ fit based on the data from Table 1. We may also easily conclude from Eq.(8) that the average relative change in the scaling range $\delta \lambda(\delta H)/\lambda (H)$ due to a small change $\delta H$ in the Hurst exponent is given as
$$\frac{\delta\lambda(\delta H,u)}{\lambda(H,u)} \simeq \alpha H u$$
and varies from $3\%$ (at $R^2=0.99$) to $10\%$ (at $R^2=0.97$) for any change $\delta H=0.1$ in the investigated signal.
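The unified recipe of Eq.(8) is a one-liner. In the sketch below the default parameter values are the $CL=97.5\%$ row of Table 2, and the example evaluates an $H=0.5$ series of length $L=3000$ required to reach $R^2=0.98$; for this case the result is close to the "local" Table 1 prediction, as stated above:

```python
def unified_scaling_range(L, R2, H,
                          alpha=3.40, beta=4.16, alpha0=0.0097, gamma=-96.0):
    """Unified scaling range lambda(u, L, H) of Eq.(8); the default
    parameter values are the CL = 97.5% row of Table 2."""
    u = 1.0 - R2
    return ((alpha * H + beta) * u + alpha0) * L + gamma

lam = unified_scaling_range(L=3000, R2=0.98, H=0.5)   # ~ 285 boxes
```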
$\alpha^{97.5\%}$ $\beta^{97.5\%}$ $\alpha^{97.5\%}_0$ $\gamma^{97.5\%}$ $\Delta^{97.5\%}_{MAE}$ $\Delta^{95\%}_{ME}$
------------------- ------------------ --------------------- ------------------- ------------------------- ----------------------
3.40 4.16 0.0097 -96 1.9% 5.9%
$\alpha^{95\%}$ $\beta^{95\%}$ $\alpha^{95\%}_0$ $\gamma^{95\%}$ $\Delta^{95\%}_{MAE}$ $\Delta^{95\%}_{ME}$
------------------- ------------------ --------------------- ------------------- ------------------------- ----------------------
3.95 5.03 0.0070 -106 2.7% 5.7%
: Results of the best fit for the coefficients of the unified formula in Eq.(8). The fit was done for all data coming from the investigated series, separately for the two chosen confidence levels: $97.5\%$ and $95\%$. The accuracies of the fitted parameters are estimated as: $\Delta \alpha = \Delta \beta = 10^{-3}$, $\Delta \alpha_0 = 10^{-5}$, $\Delta \gamma =0$.[]{data-label="tab1"}
Discussion and Conclusions
==========================
In this study we searched for the scaling range properties of the fundamental power law relating the fluctuations of the detrended random walk $F^2(\tau)$ to the length of the time window $\tau$ in which such fluctuations are measured. This power law, proposed within the DFA technique, gives us important information about the nature of randomness in a stochastic process via the link between the scaling exponent $H$ and the autocorrelation exponent between steps of the random walk. Therefore, precise knowledge of the scaling range dependence on the other involved parameters is essential and has an impact on the final outcomes of the DFA power law quoted in Eq.(1). We did our simulations on an ensemble of $5\times 10^4$ short and medium-length time series with $5\times10^2\leq L \leq 10^4$. We also varied their autocorrelation properties in order to reflect the properties of real random-walk signals mostly found in nature.
First, it has been found that for an uncorrelated process the scaling range $\lambda$ of the DFA power law is, to very good accuracy, a linear function of the data length and of the goodness of the linear fit to the power-law formula in Eq.(1). Moreover, this linear relationship extends also to time series with long memory. The uniform shape of the $\lambda (L, u)$ dependence for different memory levels in the data raises the question whether one simple unified formula exists describing the dependence of the scaling range on all parameters in play, i.e., $\lambda(L, u, H)$. We found such a formula, and showed that it fits the data obtained from numerical simulations no worse than the patterns previously found in this article for the $\lambda (L, u)$ dependence at separate values of $H$. The unified formula contains only four free parameters, which were calculated with high precision and are presented in Table 2. We also showed that the scaling range grows with the level of long memory present in the time series – on average by $3\div10\%$ for every $\delta H = 0.1$ (see Eq.(9)). The rather slight increase in the scaling range for series with memory, in comparison with uncorrelated data, may entitle us to approximate the scaling range for series with long memory by the model for uncorrelated data, i.e., with $H = 1/2$. The presented results can therefore be considered as the lower limit for the DFA scaling range profile.
The relations we found are strikingly simple and provide a useful recipe for determining scaling ranges, especially for short time series – wherever one needs to consider very large data sets arranged in shorter subseries. In particular, these results can be used in the search for an evolving (time-dependent) local Hurst exponent in a large number of moving time windows. The extension of this approach to other techniques of fluctuation analysis (FA) can also be done \[40\].
[50]{} C.-K. Peng, S. Havlin, H. E. Stanley, and A. L. Goldberger, Chaos 5, 82 (1995). C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, Phys. Rev. E 49, 1685 (1994). A. Bunde, S. Havlin, J. W. Kantelhardt, T. Penzel, J. H. Peter, and K. Voigt, Phys. Rev. Lett. 85, 3736 (2000). J. Feder, Fractals (Plenum, New York, 1988). H.-O. Peitgen, H. Jurgens, D. Saupe, Chaos and fractals (Springer, Berlin, 2004). D. Sornette, Critical phenomena in natural sciences (Springer, Berlin, 2004). B. B. Mandelbrot, Multifractals and 1/f noise: wild self-affinity in physics, selected works (1963-1976) (Springer, Berlin 1999). J. W. Kantelhardt, S. A. Zschiegner, E. Koscielny-Bunde, S. Havlin, A. Bunde, H. E. Stanley, Physica A 316, 87 (2002). C.-K. Peng, S. V. Buldyrev, A. L. Goldberger, S. Havlin, M. Simons, and H. E. Stanley, Phys. Rev. E 47, 3730 (1993). S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, H. E. Stanley, and M. Simons, Biophys. J. 65, 2673 (1993). H. E. Stanley, S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, and M. Simons, Physica A 273, 1 (1999). E. Koscielny-Bunde, A. Bunde, S. Havlin, H. E. Roman, Y. Goldreich, and H. J. Schellnhuber, Phys. Rev. Lett. 81, 729 (1998). P. Talkner and R. O. Weber, Phys. Rev. E 62, 150 (2000). M. Ausloos and K. Ivanova, Phys. Rev. E 63, 047201 (2001). P. Ch. Ivanov, M. G. Rosenblum, C.-K. Peng, J. E. Mietus, S. Havlin, H. E. Stanley, and A. L. Goldberger, Nature (London) 383, 323 (1996). P. Ch. Ivanov, L. A. Amaral, A. L. Goldberger, S. Havlin, M. G. Rosenblum, Z. R. Struzik, and H. E. Stanley, Nature (London) 399, 461 (1999). M. A. Moret, G. F. Zebende, E. Noguiera, Jr., and M. G. Pereira, Phys. Rev. E 68, 041104 (2003). N. Vandewalle, M. Ausloos, Physica A 246 (1997) 454. N. Vandewalle, M. Ausloos, Phys. Rev. E 58 (1998) 6832. Y. Liu, P. Gopikrishnan, P. Cizeau, M. Meyer, C.-K. Peng, and H. E. Stanley, Phys. Rev. E 60, 1390 (1999). N. Vandewalle, M. Ausloos, and P. Boveroux, Physica A 269, 170 (1999). M. Ausloos, K. Ivanova, Physica A 286 (2000) 353. M. Ausloos, K. Ivanova, Int. J. Mod. Phys. C 12 (2001) 169. M. Ausloos, K. Ivanova, Eur. Phys. J. B 20 (2001) 537. H. E. Hurst, Trans. Am. Soc. Civ. Eng. 116 (1951) 770. B. B. Mandelbrot, J. R. Wallis, Water Resour. Res. 5, No. 2, (1969) 321. B. B. Mandelbrot, J. W. van Ness, Fractional Brownian motions, fractional noises and applications, SIAM Review 10, 422 (1968). Z. Chen, P. Ch. Ivanov, K. Hu, H. E. Stanley, Phys. Rev. E 65, 041107 (2002). J. W. Kantelhardt, E. Koscielny-Bunde, H. H. A. Rego, S. Havlin, A. Bunde, Physica A 295, 441 (2001). K. Hu, P. Ch. Ivanov, Z. Chen, P. Carpena, H. E. Stanley, Phys. Rev. E 64, 011114 (2001). Z. Chen, K. Hu, P. Carpena, P. Bernaola-Galvan, H. E. Stanley, P. Ch. Ivanov, Phys. Rev. E 71, 011104 (2005). M. S. Taqqu, V. Teverovsky, W. Willinger, Fractals 3 (1995) 785. J. W. Kantelhardt, arXiv:0804.0747v1 \[physics.data-an\]. D. Grech, Z. Mazur, Physica A 336 (2004) 133. D. Grech, G. Pamu[ł]{}a, Physica A 387 (2008) 4299. M. Ausloos, Physica A 285 (2000) 48. J. Alvarez-Ramirez, J. Alvarez, E. Rodriguez, G. Fernandez-Anaya, Physica A 387 (2008) 6159. Ł. Czarnecki, D. Grech, G. Pamu[ł]{}a, Physica A 387 (2008) 6801. L. Kristoufek, Acta Phys. Pol. B 41 (2010) 1223. D. Grech and Z. Mazur, in preparation. L. Xu, P. Ch. Ivanov, K. Hu, Z. Chen, A. Carbone, and H. E. Stanley, Phys. Rev. E 71, 051101 (2005). S. Michalski, Physica A 387 (2008) 217. A. Bashan, R. Bartsch, J. W. Kantelhardt, S. Havlin, Physica A 387 (2008) 5080. E. Alessio, A. Carbone, G. Castelli, and V. Frappietro, Eur. Phys. J. B 27, 197 (2002). A. Carbone, G. Castelli, and H. E. Stanley, Phys. Rev. E 69, 026105 (2004). A. Carbone, H. E. Stanley, Physica A 340, 544 (2004). A. Carbone, G. Castelli, H. E. Stanley, Physica A 344, 267 (2004). H. A. Makse, S. Havlin, M. Schwartz, and H. E. Stanley, Phys. Rev. E 53, 5445 (1996). M. S. Taqqu, V. Teverovsky, and W. Willinger, Fractals 3, 785 (1995)
[^1]: [email protected]
[^2]: the subtracted trend can also be mimicked by a nonlinear polynomial function of order $k$ in so-called DFA-$k$ schemes; we will not discuss this issue in detail here
[^3]: this property holds also for non-stationary, positively autocorrelated ($H>1/2$) time series [@DFA_nonstat]
[^4]: obviously $\lambda(L,u) \in \mathbb{Z}$, so in fact only the integer part of the RHS of Eq.(2) should be taken for the determination of $\lambda(L,u)$
[^5]: [we considered $u_j=5\times 10^{-3}(1+j)$ where $j=1,2,...,9$ and $L_i$ covering the range from $L=5\times10^2$ up to $L=2\times 10^4$ as indicated on plots]{}
[^6]: we found that following pairs: $(L<3000, u_1), (L<1800, u_2), (L<1500, u_3), (L<1000, u_4)$,\
$(L<1000, u_5)$, $(L<800, u_6)$, $(L<800, u_7)$, $(L<600, u_8)$ do not match the $CL=97.5\%$ requirement, and: $(L<2400, u_1), (L<1800, u_2), (L<1200, u_3), (L<1000, u_4), (L<800, u_5), (L<800, u_6), (L<600, u_7)$ do not match the $CL=95\%$ demand
[^7]: the predicted scaling ranges from Eq.(3) were nevertheless lower in these cases than the ones coming from the direct simulation
---
abstract: 'The acceleration of electrons in 3C 279 is investigated through analyzing the injected electron energy distribution (EED) in a time-dependent synchrotron self-Compton + external Compton emission model. In this model, it is assumed that relativistic electrons are continuously injected into the emission region, and the injected EED \[$Q_e^\prime(\gamma^\prime)$\] follows a single power-law form with low- and high-energy cutoffs $\gamma_{\rm min}'$ and $\gamma_{\rm max}'$, respectively, and the spectral index $n$, i.e., $Q_e^\prime(\gamma^\prime)\propto\gamma^{\prime-n}$. This model is applied to 14 quasi-simultaneous spectral energy distributions (SEDs) of 3C 279. The Markov Chain Monte Carlo fitting technique is performed to obtain the best-fitting parameters and the uncertainties on the parameters. The results show that the injected EED is well constrained in each state. The value of $n$ is in the range of 2.5 to 3.8, which is larger than that expected from classic non-relativistic shock acceleration. However, the large value of $n$ can be explained by relativistic oblique shock acceleration. The flaring activity seems to be related to an increased acceleration efficiency, reflected in an increased $\gamma'_{\rm min}$ and electron injection power.'
author:
- |
Wen Hu,$^{1}$ Dahai Yan,$^{2}$[^1] Benzhong Dai,$^{3}$ Wei Zeng$^{3}$ and Qianglin Hu$^{1}$\
$^{1}$College of Mathematics and Physics, Jinggangshan University, Jiangxi Province, Jian 343009, People’s Republic of China\
$^{2}$Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Observatory, Chinese Academy of Sciences,\
Kunming 650011, People’s Republic of China\
$^{3}$Department of Astronomy, Key Laboratory of Astroparticle Physics,Yunnan Province, Yunnan University, Kunming 650091,\
People’s Republic of China
date: 'Accepted 2020 January 23. Received 2020 January 21; in original form 2020 January 8'
title: On the Injection of Relativistic Electrons in the Jet of 3C 279
---
\[firstpage\]
galaxies: jets — gamma rays: galaxies — radiation mechanisms: non-thermal
Introduction {#sec:intro}
============
Blazars are a subclass of active galactic nuclei (AGNs), with their relativistic jets pointing very close to our line of sight [@Urry1995; @Ulrich1997]. The non-thermal radiation produced in the relativistic jet covers from radio up to $\gamma$-ray bands. The jet emission is highly variable, with variability timescales from years down to several minutes. A blazar’s spectral energy distribution (SED) presents two humps. The low-energy hump, which is believed to be produced by synchrotron radiation of relativistic electrons, peaks between the infrared and X-ray bands. The high-energy hump, which could be produced by inverse-Compton (IC) scattering of the same relativistic electrons, peaks at gamma-ray energies.
Blazars are divided into flat spectrum radio quasars (FSRQs) and BL Lacertae objects (BL Lacs) based on the rest-frame equivalent width (EW) of their broad optical emission lines [@Stocke1991; @Stickel1991]. FSRQs have strong broad emission lines with $\rm EW>5$Å, while BL Lacs have weak or no emission lines. The synchrotron peak frequencies of FSRQs are usually $<10^{14}$ Hz, due to the strong cooling of relativistic electrons in intense external photon fields [@Ghisellini2008]. $\gamma$-rays from FSRQs could be ascribed to IC scattering of the relativistic electrons. The seed photons could come from the $\sim$sub-parsec (pc) size broad-line region (BLR) [e.g., @Sikora1994; @Zhang2012; @Bottcher2013; @Hu2015] and/or the $\sim$pc-scale dust torus (DT) [e.g., @Blazejowski2000; @Dermer2014; @Hu2017b; @Wu2018], depending on the location of the $\gamma$-ray emitting region [e.g., @Ghisellini2010]. The particle acceleration mechanism in blazar jets is still an open question. Particle acceleration in blazar jets has been explored by means of numerical simulations [e.g., @Sironi2009; @Sironi2015; @Summerlin2012; @Guo]. These numerical studies focus on the micro-physics and the acceleration efficiency. However, there is a gap between the numerical simulations and the observations.
Diffusive shock acceleration is the most discussed particle acceleration mechanism. The power-law form of the particle distribution is a key feature of this mechanism. For non-relativistic shock acceleration, the index of the particle distribution depends only on the shock compression ratio $r$, i.e., the power-law index is $n=(r+2)/(r-1)$ [e.g., @Drury1983; @Jones1991]. For a strong shock with $r=4$, the canonical $n\simeq2$ is obtained [e.g., @Drury1983; @Jones1991]. For acceleration at relativistic shocks, a wide variety of power-law indices is feasible, depending on the properties of the shock and the magnetic field [@Kirk1989; @Ellison1990; @Ellison2004; @Summerlin2012; @Baring2017].
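The relation between the index and the compression ratio, and its inverse, are simple algebra; as an illustration of our own (not from the cited works), the values $n\approx2.5-3.8$ found later in this paper would formally require $r\approx2.1-3.0$ in the non-relativistic formula, well below the strong-shock value $r=4$:

```python
def shock_index(r):
    """Power-law index from non-relativistic diffusive shock
    acceleration, n = (r + 2) / (r - 1), with compression ratio r."""
    return (r + 2.0) / (r - 1.0)

def compression_from_index(n):
    """Inverse relation: compression ratio implied by an observed index."""
    return (n + 2.0) / (n - 1.0)
```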
By fitting observed data with a proper emission model, one can obtain the emitting EED. It is the result of the competition between acceleration/injection and cooling, and it can be used to investigate the acceleration mechanism [e.g., @Massaro2006; @Yan2013; @Zhou2014]. However, this tactic is only suitable for blazars in which the cooling effect does not significantly re-shape the accelerated/injected electron distribution, like Mrk 421 and Mrk 501 [@Ushio2010; @Tramacere; @Yan2013; @Peng2014] (i.e., the high-synchrotron-peaked BL Lacs).

In FSRQs, the strong radiative cooling of electrons due to IC scattering off external photons has a big impact on the evolution of the emitting EED. Hence, the emitting EED cannot be directly connected to the acceleration process.
Here, we investigate the acceleration process of the electrons in the FSRQ 3C 279 through analyzing the injected EEDs in a time-dependent radiative model. Throughout the paper, we adopt the cosmological parameters $\rm H_0=69.6 ~km~s^{-1}~Mpc^{-1}$, $\Omega_M=0.286$, and $\Omega_\Lambda = 0.714$. This results in the luminosity distance $d_L=3113.6 ~\rm Mpc$ for 3C 279 with redshift $z=0.536$.
Method {#model}
======
We adopt a one-zone homogeneous leptonic jet model. It is assumed that emissions are produced in a spherical blob of radius $R^\prime$ filled with a uniform magnetic field $B^\prime$. The blob moves with a relativistic speed $\beta_\Gamma c= c(1-1/\Gamma^2 )^{1/2}$ and an angle $\theta$ with respect to the line of sight, where $c$ is the speed of light and $\Gamma$ is the bulk Lorentz factor of the blob. The observed radiations are strongly boosted by the relativistic Doppler factor $\delta_D=1/[\Gamma(1-\beta_\Gamma\cos\theta)]$. It is assumed $\theta\sim1/\Gamma$, resulting in $\delta_D\sim\Gamma$. Here and throughout this paper, primed quantities refer to the frame comoving with the blob and unprimed quantities refer to the observer’s frame.
Solving emitting EED
--------------------
In the model, we assume that the accelerated electrons are continuously injected into the blob. The isotropic electrons loss energy through synchrotron radiation and IC scattering, and may also escape out of the blob. The evolution of the electrons in the comoving frame of the blob is governed by [e.g., @Coppi1990; @Chiaberge1999] $$\label{kineticEQ}
\frac{\partial{N_e^\prime(\gamma')}}{\partial{t}}+\frac{\partial}{\partial{\gamma^\prime}}\left[\dot{\gamma}^\prime N_e^\prime(\gamma')\right]+\frac{N_e^\prime(\gamma')}{t'_{\rm esc}}=\dot{Q}_e^\prime\ ,$$ where $N_e^\prime(\gamma')$ is the number of the electrons per unit $\gamma^\prime$, $\dot{\gamma}'$ is the total energy-loss rate of the electrons, $t'_{\rm esc}$ is the escape timescale of the electrons, and $\dot{Q}_e^\prime$ is the source term describing the injection rate of the electrons in units of $s^{-1}$.
The injected EED is assumed to be a single power-law distribution, $$\label{eq2}
Q_e^\prime(\gamma^\prime)=Q_0'\gamma^{\prime-n},~\gamma_{\rm min}'\le\gamma'\le\gamma_{\rm max}',$$ with $$\label{eq3}
Q_0' =\left\{
\begin{tabular}{l}
$\frac{P'_e}{m_ec^2}\frac{2-n}{{\gamma_{\rm max}'}^{2-n}-{\gamma_{\rm min}'}^{2-n}};~n\neq2$ \\
$\frac{P'_e}{m_ec^2\ln\left(\gamma_{\rm max}'/\gamma_{\rm min}'\right)};~n=2$ \\
\end{tabular}
\right. ,$$ where $\gamma_{\rm min}'$ and $\gamma_{\rm max}'$ are respectively the low and high energy cutoffs, and $P'_e$ is the injection power in the units of $\rm erg/s$, and $n$ is the spectral index [@Bottcher2002].
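The normalization $Q_0'$ above simply fixes the total injected power; a direct transcription (with $m_ec^2$ in erg) reads:

```python
import numpy as np

M_E_C2 = 8.187e-7  # electron rest energy m_e c^2 in erg

def q0_norm(P_e, n, g_min, g_max):
    """Normalization Q0' of the injected power-law EED: fixed so that
    m_e c^2 * int_{g_min}^{g_max} Q0' gamma^(1 - n) dgamma equals the
    injection power P_e (erg/s); the n = 2 case is the log limit."""
    if abs(n - 2.0) < 1e-9:
        return P_e / (M_E_C2 * np.log(g_max / g_min))
    return P_e * (2.0 - n) / (M_E_C2 * (g_max ** (2.0 - n) - g_min ** (2.0 - n)))
```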
Three radiative energy losses of the electrons are considered:
\(1) synchrotron radiation cooling $$-\dot\gamma'_{\rm syn}=\frac{4\sigma_T}{3m_ec}{U_B'\gamma'}^2,$$ where $U_B'={B'}^2/8\pi$ is the magnetic field energy density.
\(2) synchrotron self-Compton radiation (SSC) cooling [e.g., @Jones1968; @Blumenthal1970; @Finke2008] $$-\dot\gamma'_{\rm ssc}=\frac{4\sigma_T}{3m_ec}{\gamma'}^2\int_0^\infty d\epsilon' u_{syn}'(\epsilon')f_{kn}(\epsilon',\gamma'),$$ where $u_{\rm syn}'(\epsilon')\simeq(\sigma_{\rm T}U_B')/(2\pi R'^2\epsilon')\gamma_s'^3N'_e(\gamma_s')$ is the spectral energy density of the synchrotron radiation. Here, $\gamma_s'=\sqrt{\epsilon'B_{\rm cr}/B'}$ is a synchrotron-emitting electron’s Lorentz factor where $B_{\rm cr}\simeq4.414\times10^{13}$ G is the critical magnetic field.
$$f_{kn}(\epsilon',\gamma')=\frac{9}{16}\int_{\gamma'_{\rm low}}^{\gamma'}d\gamma''F_c(x,q)\frac{\gamma'-\gamma''}{{\epsilon'}^2{\gamma'}^4},$$
where the lower limit for the integration is $\gamma'_{\rm low}\simeq\gamma'+\epsilon'-\frac{4{\gamma'}^2\epsilon'}{1+4\gamma'\epsilon'}$, and $$\label{fc}
F_{c}(x,q) = \Big[2q\ln{q}+q+1-2q^2 + \frac{(xq)^2}{2(1+xq)}(1-q)\Big]\ .$$ Here, $x=4\epsilon'\gamma'$, $q=\frac{\epsilon_\gamma'/\gamma'}{x(1-\epsilon_\gamma'/\gamma')}$, and $\epsilon_\gamma'=\gamma'+\epsilon'-\gamma''$ is the scattered photon energy required by the conservation of energy. The limits on $q$ are $\frac{1}{4\gamma'^2}\le q\le1$. (3) external-Compton (EC) cooling $$-\dot\gamma'_{\rm ec}=\frac{4\sigma_T}{3m_ec}\gamma^2\int_0^\infty d\epsilon u_{ext}(\epsilon)f_{kn}(\epsilon,\gamma)\ ,$$ where $u_{\rm ext}(\epsilon)$ is the spectral energy density of the external photon field. The quantities $\gamma=\delta_D\gamma'$ and $\epsilon$ refer to the stationary frame with respect to the black hole (BH). For the EC processes, we consider the seed photons from BLR and DT. In this work, BLR and IR DT radiations are assumed to be a dilute blackbody [e.g., @Liu2006; @Tavecchio2008], $$u_{ext}(\epsilon)=\frac{15U_0}{(\pi \Theta)^4}\frac{\epsilon^3}{\exp\left(\epsilon/\Theta\right)-1}\ ,$$ where $\Theta$ and $U_0$ are the dimensionless temperature and energy density of the BLR/DT radiation field, respectively. We consider the BLR radiation with $\Theta\simeq9.6\times10^4$ K/($5.93\times10^9$ K) (corresponding to $\sim2\times10^{15}$ Hz) and $\rm U_{0}\simeq2.7\times10^{-2}\ erg\ cm^{-3}$ [e.g., @Ghisellini2008], and the IR DT radiation with $\Theta\simeq1.4\times10^3$ K/($5.93\times10^9$ K) (corresponding to $\sim3\times10^{13}$ Hz) and $\rm U_{0}\simeq2.1\times10^{-4}\ erg\ cm^{-3}$ [e.g., @Ghisellini2009].
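The dilute blackbody above is normalized such that its integral over $\epsilon$ returns the total energy density $U_0$; a direct transcription is:

```python
import numpy as np

def u_blackbody(eps, Theta, U0):
    """Dilute blackbody spectral energy density per unit dimensionless
    photon energy eps; the 15 U0 / (pi Theta)^4 prefactor normalizes
    the integral over eps to the total energy density U0."""
    return 15.0 * U0 / (np.pi * Theta) ** 4 * eps ** 3 / np.expm1(eps / Theta)
```

With the BLR values quoted in the text, $\Theta\simeq1.62\times10^{-5}$ and $U_0\simeq2.7\times10^{-2}\rm\ erg\ cm^{-3}$.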
Therefore, the total cooling rate of the electrons is $\dot{\gamma}'=\dot\gamma'_{\rm syn}+\dot\gamma'_{\rm ssc}+\dot\gamma'_{\rm ec}$. We simply assume an energy-independent escape for the electrons, i.e., $t'_{\rm esc}=\eta_{\rm esc}R^\prime/c$, where it is required that $\eta_{\rm esc}>1$ [e.g., @Bottcher2002]. With the above information, Equation (\[kineticEQ\]) is solved by using the iterative scheme described by [@Graff2008] to obtain the steady-state EED.
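The structure of the steady state can be seen by integrating the cooling flux $\dot{\gamma}'N_e'$ downwards from the top of the grid. The sketch below is a simple first-order backward sweep for illustration only, not the iterative scheme of [@Graff2008] used in this work; with a synchrotron-like cooling rate $\dot{\gamma}'\propto-\gamma'^2$ it recovers the familiar cooled slope $n+1$ above $\gamma'_{\rm min}$:

```python
import numpy as np

def steady_state_eed(gamma, Q, gdot, t_esc):
    """Steady-state electron distribution N(gamma) solving
    d/dgamma[gdot * N] + N/t_esc = Q with N = 0 at the top of the grid.
    gdot < 0 is the total cooling rate; the flux F = gdot*N obeys
    dF/dgamma = Q - N/t_esc and is integrated from high to low gamma."""
    N = np.zeros_like(gamma)
    for i in range(len(gamma) - 2, -1, -1):
        dg = gamma[i + 1] - gamma[i]
        flux = gdot[i + 1] * N[i + 1] - dg * (Q[i + 1] - N[i + 1] / t_esc)
        N[i] = flux / gdot[i]
    return N
```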
Calculation of emission spectra
-------------------------------
The spectra of synchrotron radiation, SSC and EC are calculated with the formulas in @Finke2008 [@Dermer2009]. We here give the key formulas. The synchrotron spectrum is $$\nu f_{\nu}^{\rm syn}=\frac{ \delta_D^4\sqrt{3}e^3B^\prime}{4\pi hd_L^2}\chi(\tau)\epsilon^\prime\int_1^\infty d\gamma^\prime N_e^\prime(\gamma^\prime) R_s(\epsilon^\prime/\epsilon^\prime_c),$$ where $\epsilon'm_ec^2=(1+z)h\nu/\delta_D$, $e$ is the fundamental charge and $h$ is the Planck constant. In the spherical approximation, the factor $\chi(\tau)\equiv3u(\tau)/\tau$, where $\tau=2\kappa_{\epsilon'} R'$ is the synchrotron self-absorption (SSA) opacity and $u(\tau)=\frac{1}{2}\Big(1-\frac{2}{\tau^2}[1-(1+\tau)\exp(-\tau)]\Big)$. The SSA coefficient is given by $$\kappa_{\epsilon'}=-\frac{\sqrt{3}B'e^3\lambda_c^3}{8\pi hm_ec^3{\epsilon'}^2}\int_1^\infty d\gamma' R_s(\frac{\epsilon'}{\epsilon'_c})\Big[{\gamma'}^2\frac{\partial}{\partial\gamma'}\Big(\frac{N_e'(\gamma')}{{\gamma'}^2}\Big)\Big]\ ,$$ where $m_e$ is the rest mass of electron and $\lambda_c=h/m_ec=2.43\times10^{-10}~\rm cm$ is the electron Compton wavelength. Here, $\epsilon_c'=\frac{3eB'h}{4\pi m_e^2c^3}{\gamma^\prime}^2$ is the characteristic energy of synchrotron radiation in the units of $m_ec^2$, and $R_s(x)=(x/2)\int_0^\pi{d\theta}\sin\theta\int_{x/\sin\theta}^\infty{dtK_{5/3}(t)}$.
The SSC/EC spectrum is given by $$% \nonumber to remove numbering (before each equation)
\nu f_{\nu}^{\rm SSC/EC}=f_L\epsilon_\gamma'^2\int_0^\infty{}d\epsilon' \frac{u_{\rm syn/ext}'(\epsilon')}{\epsilon'^2}\int_{1}^{\infty} {}d\gamma^\prime{}\frac{N_e'(\gamma')}{\gamma'^2}F_{c}(x,q),$$ where $\epsilon_\gamma'm_ec^2=(1+z)h\nu/\delta_D$, $f_L=(3c\sigma_T\delta_D^4)/(16\pi d_L^2)$, and $u_{\rm ext}'(\epsilon')=\delta_{\rm D}^3u_{\rm ext}(\epsilon'/\delta_{\rm D})$ [e.g., @Dermer2009; @Ghisellini2009].
The model is characterized by eight parameters, i.e., $B', \delta_D, P'_e, n, \gamma_{\rm min}', \gamma_{\rm max}', \eta_{\rm esc}$ and $R'$. The radius of the emission region can be estimated from the minimum variability timescale $t_{\rm var}$, i.e., $R'=c\delta_D t_{\rm var}/(1+z)$.
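The last relation fixes $R'$ once $\delta_D$ and $t_{\rm var}$ are chosen; for example (the value $\delta_D=20$ here is purely illustrative, not a fitted result):

```python
C_CM_S = 2.998e10  # speed of light in cm/s

def blob_radius(delta_D, t_var_hours, z):
    """Comoving emission-region radius R' = c * delta_D * t_var / (1 + z)."""
    return C_CM_S * delta_D * t_var_hours * 3600.0 / (1.0 + z)

# t_var = 2 h and z = 0.536 as in the text; delta_D = 20 is illustrative:
R_prime = blob_radius(20.0, 2.0, 0.536)   # ~ 2.8e15 cm
```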
MCMC fitting technique
----------------------
In order to constrain the model parameters without bias, we adopt the MCMC technique, which is based on Bayesian statistics, to perform the fitting. The MCMC fitting technique is a powerful tool for exploring the multi-dimensional parameter space in blazar science [@Yan2013; @Yan2015]. Details on the MCMC technique can be found in [@Lewis2002; @Yuan2011; @Liu2012].
Application to 3C 279 {#results}
=====================
3C 279 is one of the best studied FSRQs. It has been intensively monitored from the radio band to $\gamma$-ray energies [e.g., @Wehrle1998; @Hartman1996; @Bottcher2007; @Collmar2010; @Abdo2010; @Hayashida2012; @Hayashida2015; @Pacciani2014; @Aleksic2015]. 3C 279 shows rapid variabilities at all wavelengths. The radio and optical emissions are highly polarized. The correlations between the optical polarization level/angle and $\gamma$-ray variabilities provide strong evidence for the SSC+EC model [e.g., @Abdo2010; @Paliya2015; @Hayashida2012]. [@Hayashida2012; @Hayashida2015] and [@Paliya2015] have constructed 16 high-quality SEDs for 3C 279 from (quasi-)simultaneous observations by the [*Fermi*]{} satellite together with many other facilities. Note that Period H in [@Hayashida2012] overlaps in time with the low-activity state in [@Paliya2015], and the X-ray data are lacking in Period B in [@Hayashida2015]. We therefore do not consider the SED in the low-activity state in [@Paliya2015] and the one in Period B in [@Hayashida2015]. We apply the method described in Section \[model\] to the remaining 14 high-quality SEDs. In our fitting, the radio data at $\lesssim200\ $GHz are neglected, because the low-frequency radio emission comes from the large-scale jet.
[@Paliya2015] showed that the $\gamma$-ray variability timescale $t_{\rm var}$ can be as short as $\sim1-2$ hours, and [@Hayashida2015] reported $t_{\rm var} \sim2$ hours in the flare state of Period D. In addition, variabilities down to timescales of a few hours were also reported in [@Hayashida2012]. Hence, to reduce the number of model parameters, we take $t_{\rm var}=2$ hours in the fittings. In testing our method, we found that the observed data are insensitive to $\gamma'_{\rm max}$; we therefore fix it to a large value, $\gamma'_{\rm max} = 3\times10^{4}$. This leaves six free parameters in the fittings.
Following [@Poole2008] and [@Abdo2011], a relative systematic uncertainty of 5% of the flux is added in quadrature to the statistical errors of the IR-optical-UV and X-ray data, since the errors of these data are dominated by systematics.
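In practice this means replacing each statistical error $\sigma_{\rm stat}$ on a flux point $f$ by $\sqrt{\sigma_{\rm stat}^2+(0.05\,f)^2}$; a minimal sketch:

```python
import math

def total_error(flux, stat_err, sys_frac=0.05):
    """Add a relative systematic (5% of the flux) in quadrature."""
    return math.sqrt(stat_err**2 + (sys_frac * flux)**2)

# e.g. a point with a 2% statistical error becomes systematics-dominated
err = total_error(1.0, 0.02)   # ~0.054
```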
$B^\prime$ (G) $\delta_{\rm D}$ (10) $\eta_{\rm esc}$ (10) $P_e'\ (10^{41}\ \rm erg/s)$ $\gamma_{\rm min}'\ (10^2)$ $n$ $\chi_{\rm DT}^2\ (dof)$
------------------ ------------------- ----------------------- ----------------------- ------------------------------ ----------------------------- ------------------- --------------------------
$\rm Period~ A$ $ 1.02 \pm 0.02 $ $ 3.78 \pm 0.08 $ $ 7.67 $ $ 4.27 \pm 0.19 $ $ 2.48 \pm 0.21 $ $ 2.82 \pm 0.04 $ 0.69(34)
95% CI 0.97 - 1.08 3.64 - 3.94 $\ge2.72$ 3.97 - 4.71 2.04 - 2.86 2.74 - 2.90
$\rm Period~ B$ $ 0.67 \pm 0.06 $ $ 4.10 \pm 0.14 $ $ 2.30 $ $ 10.21\pm 1.46 $ $ 2.89 \pm 0.44 $ $ 2.49 \pm 0.10 $ 1.47(17)
95% CI 0.56 - 0.80 3.84 - 4.41 $\ge0.18$ 7.73 - 13.99 1.95 - 3.80 2.31 - 2.70
$\rm Period~ C$ $ 1.40 \pm 0.07 $ $ 4.68 \pm 0.13 $ $ 2.78 \pm 0.77 $ $ 4.62 \pm 0.27 $ $ 2.66 \pm 0.20 $ $ 3.16 \pm 0.06 $ 3.10(36)
95% CI 1.27 - 1.52 4.45 - 4.93 1.24 - 4.26 4.13 - 5.16 2.26 - 3.04 3.03 - 3.28
$\rm Period~ D$ $ 1.19 \pm 0.05 $ $ 4.75 \pm 0.15 $ $ 4.75 $ $ 5.17 \pm 0.56 $ $ 4.66 \pm 0.40 $ $ 3.62 \pm 0.09 $ 1.97(16)
95% CI 1.09 - 1.29 4.48 - 5.07 $\ge0.83$ 4.14 - 6.30 3.93 - 5.55 3.45 - 3.80
$\rm Period~ E$ $ 1.46 \pm 0.19 $ $ 3.91 \pm 0.15 $ $ 5.90 $ $ 3.74 \pm 0.38 $ $ 3.83 \pm 0.23 $ $ 3.44 \pm 0.04 $ 2.82(21)
95% CI 1.23 - 2.03 3.50 - 4.15 $\ge1.31$ 3.04 - 4.56 3.34 - 4.28 3.35 - 3.53
$\rm Period~ F$ $ 1.44 \pm 0.17 $ $ 3.46 \pm 0.24 $ $ 5.73 $ $ 5.31 \pm 0.65 $ $ 3.73 \pm 0.51 $ $ 3.49 \pm 0.24 $ 0.26(12)
95% CI 1.14 - 1.80 3.01 - 3.95 $\ge1.08$ 4.26 - 6.86 2.89 - 4.85 3.01 - 3.95
$\rm Period~ G$ $ 0.97 \pm 0.11 $ $ 3.81 \pm 0.29 $ $ 5.82 $ $ 9.13 \pm 1.86 $ $ 6.33 \pm 1.44 $ $ 3.31 \pm 0.11 $ 0.66(13)
95% CI 0.76 - 1.21 3.29 - 4.41 $\ge1.32 $ 6.25 - 13.73 4.03 - 9.42 3.10 - 3.52
$\rm Period~ H$ $ 0.86 \pm 0.15 $ $ 3.57 \pm 0.28 $ $ 5.29 $ $ 5.21 \pm 0.78 $ $ 2.84 \pm 0.33 $ $ 3.53 \pm 0.24 $ 0.45(16)
95% CI 0.61 - 1.20 3.08 - 4.18 $\ge0.98$ 3.94 - 6.94 2.21 - 3.49 3.07 - 4.00
$\rm Flare1$ $ 1.06 \pm 0.08 $ $ 3.72 \pm 0.15 $ $ 5.91 $ $ 10.15\pm 1.59 $ $ 8.77 \pm 1.41 $ $ 3.35 \pm 0.09 $ 1.88(19)
95% CI 0.91 - 1.23 3.44 - 4.02 $\ge1.72$ 7.51 - 13.75 6.34 - 11.88 3.19 - 3.55
$\rm Flare2\dag$ $ 0.87 \pm 0.04 $ $ 4.39 \pm 0.09 $ $ 1.05 \pm 0.54 $ $ 1.28 \pm 0.10 $ $ 6.48 \pm 0.61 $ $ 3.26 \pm 0.05 $ 2.00(20)
95% CI 0.79 - 0.95 4.23 - 4.57 0.43 - 2.53 1.10 - 1.49 5.35 - 7.80 3.17 - 3.36
$\rm Post-flare$ $ 2.00 \pm 0.32 $ $ 4.67 \pm 0.58 $ $ 5.07 $ $ 4.07 \pm 1.10 $ $ 3.31 \pm 1.48 $ $ 3.25 \pm 0.05 $ 0.92(18)
95% CI 1.39 - 2.64 3.59 - 5.82 $\ge0.58$ 2.56 - 7.12 1.48 - 7.42 3.14 - 3.36
$\rm Period~A15$ $ 1.52 \pm 0.12 $ $ 4.33 \pm 0.22 $ $ 20.01 $ $ 4.14 \pm 0.41 $ $ 2.76 \pm 0.53 $ $ 3.39 \pm 0.06 $ 0.68(34)
95% CI 1.28 - 1.78 3.78 - 4.64 $\ge8.97$ 3.50 - 5.14 2.12 - 4.11 3.28 - 3.51
$\rm Period~C15$ $ 1.09 \pm 0.06 $ $ 3.90 \pm 0.10 $ $ 0.68 \pm 0.30 $ $ 10.96\pm 0.60 $ $ 4.82 \pm 0.43 $ $ 3.41 \pm 0.06 $ 1.82(30)
95% CI 0.99 - 1.22 3.71 - 4.08 0.29 - 1.42 9.86 - 12.22 4.00 - 5.71 3.28 - 3.54
$\rm Period~D15$ $ 0.52 \pm 0.05 $ $ 4.34 \pm 0.18 $ $ 0.25 \pm 0.10 $ $ 38.7 \pm 7.3 $ $ 8.30 \pm 1.32 $ $ 3.28 \pm 0.05 $ 1.31(20)
95% CI 0.44 - 0.62 4.01 - 4.70 0.12-0.52 2.68 - 5.50 5.98 - 11.25 3.19 - 3.39
: Mean values and marginalized 95% CI of the model parameters for the SED fittings with the DT photons.[]{data-label="MT_para"}
$B^\prime$ (G) $\delta_{\rm D}$ (10) $\eta_{\rm esc}$ (10) $P_e'\ (10^{42}\rm\ erg/s)$ $\gamma_{\rm min}'\ (10^2)$ $n$ $\chi_{\rm BLR}^2\ (dof)$
------------------ ------------------ ----------------------- ----------------------- ----------------------------- ----------------------------- ------------------- ---------------------------
$\rm Period ~A$ $5.57 \pm 0.15 $ $ 2.04 \pm 0.02 $ $ 5.86 $ $ 2.50 \pm 0.09 $ $ 1.64 \pm 0.08 $ $ 3.18 \pm 0.04 $ 2.10(34)
95% CI 5.28 - 5.88 2.00 - 2.08 $\ge0.20$ 2.36 - 2.76 1.48 - 1.80 3.10 - 3.27
$\rm Period ~B$ $2.11 \pm 0.70 $ $ 2.25 \pm 0.12 $ $ 4.71 $ $ 6.55 \pm 1.78 $ $ 1.92 \pm 0.38 $ $ 2.69 \pm 0.26 $ 1.95(17)
95% CI 1.36 - 3.83 2.10 - 2.52 $\ge0.16$ 3.53 - 9.57 1.02 - 2.50 2.32 - 3.26
$\rm Period ~C$ $7.06 \pm 0.32 $ $ 2.46 \pm 0.02 $ $ 5.62 $ $ 2.58 \pm 0.08 $ $ 2.19 \pm 0.09 $ $ 3.62 \pm 0.06 $ 2.71(36)
95% CI 6.41 - 7.69 2.42 - 2.51 $\ge1.03$ 2.42 - 2.75 2.02 - 2.34 3.49 - 3.74
$\rm Period~D$ $6.13 \pm 0.32 $ $ 2.79 \pm 0.04 $ $ 5.29 $ $ 2.62 \pm 0.12 $ $ 2.35 \pm 0.18 $ $ 4.02 \pm 0.09 $ 2.45(16)
95% CI 5.52 - 6.79 2.72 - 2.86 $\ge0.50$ 2.40 - 2.85 2.02 - 2.72 3.85 - 4.19
$\rm Period~E$ $8.54 \pm 0.66 $ $ 2.26 \pm 0.06 $ $ 5.00 $ $ 2.01 \pm 0.13 $ $ 1.83 \pm 0.09 $ $ 3.61 \pm 0.05 $ 2.91(21)
95% CI 7.37 - 9.84 2.14 - 2.38 $\ge0.37 $ 1.76 - 2.27 1.66 - 2.01 3.52 - 3.71
$\rm Period~F$ $7.93 \pm 0.96 $ $ 1.77 \pm 0.08 $ $ 5.19 $ $ 3.64 \pm 0.34 $ $ 2.54 \pm 0.28 $ $ 3.62 \pm 0.22 $ 0.40(12)
95% CI 6.15 - 9.78 1.59 - 1.93 $\ge0.54$ 3.11 - 4.45 2.08 - 3.18 3.20 - 4.06
$\rm Period~G$ $5.07 \pm 0.50 $ $ 2.36 \pm 0.07 $ $ 5.13 $ $ 3.72 \pm 0.28 $ $ 2.67 \pm 0.25 $ $ 3.56 \pm 0.11 $ 1.18(13)
95% CI 4.16 - 6.16 2.23 - 2.50 $\ge0.37 $ 3.22 - 4.31 2.23 - 3.19 3.35 - 3.77
$\rm Period~H$ $3.34 \pm 1.23 $ $ 1.58 \pm 0.11 $ $ 4.59 $ $ 4.53 \pm 1.42 $ $ 1.91 \pm 0.28 $ $ 3.34 \pm 0.35 $ 0.64(16)
95% CI 1.74 - 6.50 1.41 - 1.82 $\ge0.12 $ 2.56 - 8.06 1.24 - 2.37 2.82 - 4.16
$\rm Flare1$ $6.19 \pm 0.48 $ $ 2.68 \pm 0.05 $ $ 5.21 $ $ 3.11 \pm 0.20 $ $ 2.58 \pm 0.24 $ $ 3.58 \pm 0.05 $ 2.28(19)
95% CI 5.33 - 7.20 2.58 - 2.79 $\ge0.55 $ 2.74 - 3.54 2.13 - 3.11 3.48 - 3.68
$\rm Flare2$ $4.95 \pm 0.23 $ $ 3.14 \pm 0.03 $ $ 5.81 $ $ 3.80 \pm 0.16 $ $ 2.24 \pm 0.11 $ $ 3.74 \pm 0.04 $ 7.33(20)
95% CI 4.51 - 5.40 3.07 - 3.20 $\ge1.04 $ 3.49 - 4.13 2.02 - 2.46 3.66 - 3.82
$\rm Post-flare$ $10.13\pm 1.46 $ $ 2.55 \pm 0.11 $ $ 5.13 $ $ 2.41 \pm 0.20 $ $ 1.92 \pm 0.44 $ $ 3.44 \pm 0.06 $ 0.85(18)
95% CI 7.37 - 13.05 2.32 - 2.76 $\ge0.40 $ 2.12 - 2.89 1.26 - 2.97 3.32 - 3.55
$\rm Period~A15$ $5.61 \pm 0.38 $ $ 2.03 \pm 0.05 $ $ 6.14 $ $ 3.36 \pm 0.12 $ $ 3.10 \pm 0.16 $ $ 3.62 \pm 0.07 $ 1.35(34)
95% CI 4.95 - 6.45 1.92 - 2.13 $\ge1.46 $ 3.15 - 3.59 2.79 - 3.43 3.48 - 3.76
$\rm Period~C15$ $4.49 \pm 0.28 $ $ 2.27 \pm 0.05 $ $ 4.65 $ $ 5.21 \pm 0.22 $ $ 3.47 \pm 0.14 $ $ 3.76 \pm 0.07 $ 2.24(30)
95% CI 3.98 - 5.08 2.16 - 2.37 $\ge0.11 $ 4.81 - 5.70 3.20 - 3.71 3.61 - 3.90
$\rm Period~D15$ $2.86 \pm 0.22 $ $ 3.46 \pm 0.07 $ $ 5.24 $ $ 6.69 \pm 0.51 $ $ 2.81 \pm 0.25 $ $ 3.89 \pm 0.05 $ 3.55(20)
95% CI 2.46 - 3.34 3.32 - 3.60 $\ge 0.54 $ 5.78 - 7.75 2.36 - 3.33 3.80 - 3.98
: Mean values and marginalized 95% CI of the model parameters for the SED fittings with the BLR photons.[]{data-label="BLR_para"}
state $\log_{10}\gamma_c'$ $\log_{10}L_B$ (erg/s) $\log_{10}L_r$ (erg/s)
------------------ ---------------------- ------------------------ ------------------------
$\rm Period~ A$ $ 0.82 \pm 0.16 $ $ 44.19 \pm 0.04 $ $ 44.17 \pm 0.02 $
95% CI $\le$ 1.29 44.11 $-$ 44.29 44.14 $-$ 44.20
$\rm Period~ B$ $ 1.40 \pm 0.39 $ $ 43.97 \pm 0.13 $ $ 44.59 \pm 0.02 $
95% CI $\le$ 2.22 43.71 $-$ 44.25 44.54 $-$ 44.64
$\rm Period~ C$ $ 0.98 \pm 0.16 $ $ 44.84 \pm 0.08 $ $ 44.38 \pm 0.01 $
95% CI 0.74 $-$ 1.36 44.68 $-$ 45.00 44.36 $-$ 44.40
$\rm Period~ D$ $ 0.82 \pm 0.32 $ $ 44.72 \pm 0.08 $ $ 44.45\pm0.02 $
95% CI $\le$ 1.53 44.56 $-$ 44.90 $44.41-44.49 $
$\rm Period~ E$ $ 0.88 \pm 0.24 $ $ 44.56 \pm 0.06 $ $ 44.13 \pm0.06 $
95% CI $\le$ 1.50 44.45 $-$ 44.69 $43.96-44.23 $
$\rm Period~ F$ $ 1.01 \pm 0.27 $ $ 44.33 \pm 0.19 $ $ 44.17 \pm 0.04 $
95% CI $\le$ 1.67 43.95 $-$ 44.69 44.10 $-$ 44.24
$\rm Period~ G$ $ 0.96 \pm 0.25 $ $ 44.15 \pm 0.22 $ $ 44.51 \pm 0.03 $
95% CI $\le$ 1.51 43.71 $-$ 44.60 44.45 $-$ 44.58
$\rm Period~ H$ $ 1.12 \pm 0.29 $ $ 43.94 \pm 0.28 $ $ 44.18 \pm 0.04 $
95% CI $\le$ 1.84 43.41 $-$ 44.50 44.11 $-$ 44.26
$\rm Flare1$ $ 0.94 \pm 0.21 $ $ 44.20 \pm 0.13 $ $ 44.54\pm0.04 $
95% CI $\le$ 1.44 43.94 $-$ 44.46 $44.48-44.62 $
$\rm Flare2\dag$ $ 1.54 \pm 0.20 $ $ 44.32 \pm 0.07 $ $ 44.76 \pm 0.02 $
95% CI 1.10 $-$ 1.88 44.18 $-$ 44.46 44.73 $-$ 44.80
$\rm Post-flare$ $ 0.73 \pm 0.33 $ $ 45.12 \pm 0.35 $ $ 44.30 \pm 0.03 $
95% CI $\le$ 1.57 44.36 $-$ 45.77 44.24 $-$ 44.38
$\rm Period~A15$ $ 0.21 \pm 0.17 $ $ 44.78 \pm 0.16 $ $ 44.27\pm0.02 $
95% CI $\le$0.62 44.42 $-$ 45.02 $44.24-44.32 $
$\rm Period~C15$ $ 1.83 \pm 0.16 $ $ 44.31 \pm 0.06 $ $ 44.55 \pm 0.03 $
95% CI 1.50 $-$ 2.14 44.19 $-$ 44.43 44.48 $-$ 44.60
$\rm Period~D15$ $ 2.20 \pm 0.18 $ $ 43.85 \pm 0.14 $ $ 45.16 \pm 0.04 $
95% CI 1.81 $-$ 2.49 43.59 $-$ 44.13 45.07 $-$ 45.25
: Mean values and marginalized 95% CI of the derived parameters for the SED fittings with the DT photons.[]{data-label="t3"}
Fitting the SEDs
----------------
In the upper panels of Figures \[figure1\]-\[figure4\], we show the best-fitting results for the 14 SEDs. Each SED is fitted with two models: the model with the BLR photons and the model with the DT photons. The corresponding reduced $\rm\chi^2_{DT/BLR}$ is reported in each panel. One can see that all the fittings with the DT photons, except for Period C in [@Hayashida2012], are better than those with the BLR photons. The highest-energy X-ray/$\gamma$-ray data are fitted better by the SSC+EC-DT model. In the SSC+EC-BLR model, the Klein-Nishina (KN) effect becomes important and suppresses the $\gamma$-ray emission, leading to a mismatch between the data and the model (e.g., the first panel in Fig. \[figure1\]). In addition, the high-energy hump in the SSC+EC-BLR model is located at higher energies, which worsens the fit to the X-ray data (e.g., the third panel in Fig. \[figure4\]).
The one-dimensional (1D) probability distributions of the free parameters are shown in Figures \[figure9\]-\[figure13\] in Appendix \[appA\]. The uncertainties on the parameters in the two cases are given in Tables \[MT\_para\] and \[BLR\_para\], respectively. We also report the marginalized 95% confidence intervals (CIs) of the parameters.
One can see that all parameters except $\eta_{\rm esc}$ are well constrained. In the EC-DT model, $\eta_{\rm esc}$ is well constrained in only four states (see Table \[MT\_para\]); in the EC-BLR model, it is poorly constrained in all states (see Table \[BLR\_para\]). The constraint on $\eta_{\rm esc}$ arises from the X-ray data: $\eta_{\rm esc}$ determines the minimum Lorentz factor $\gamma_{\rm c}'$ of the emitting EED through $\gamma'_{\rm c}/|\dot{\gamma'}(\gamma'_{\rm c})|=\eta_{\rm esc}R'/c$, and $\gamma'_{\rm c}$ strongly affects the low-energy part (X-ray band) of the EC component. If the EC emission contributes to the observed X-ray emission, $\gamma'_{\rm c}$, and hence $\eta_{\rm esc}$, can be well constrained. In the EC-BLR model, the EC component peaks at higher energies and the X-ray data are dominated by the SSC emission; therefore, $\eta_{\rm esc}$ is poorly constrained.
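To see how $\eta_{\rm esc}$ fixes $\gamma'_{\rm c}$, one can solve the balance condition analytically if the losses are approximated by a single Thomson-type rate $|\dot{\gamma}'|=b\gamma'^2$ (a simplification of the full synchrotron+IC rates used in the fits); then $\gamma'_{\rm c}=c/(\eta_{\rm esc}R'b)$. The sketch below uses illustrative numbers (in particular, the comoving energy density $u'$ is a guess, not a fitted value):

```python
sigma_T = 6.652e-25   # Thomson cross-section [cm^2]
m_e_c2 = 8.187e-7     # electron rest energy [erg]
c = 2.998e10          # speed of light [cm/s]

def cooling_rate_coeff(u_prime):
    """b in |gamma_dot| = b gamma'^2 for Thomson losses in energy density u'."""
    return (4.0 / 3.0) * sigma_T * c * u_prime / m_e_c2

def gamma_c(eta_esc, R_prime, u_prime):
    """Solve gamma'_c / |gamma_dot(gamma'_c)| = eta_esc R'/c for gamma'_c."""
    b = cooling_rate_coeff(u_prime)
    return c / (eta_esc * R_prime * b)

# illustrative numbers: eta_esc ~ 76.7 (Period A), R' ~ 5.3e15 cm, u' ~ 1 erg/cm^3
gc = gamma_c(eta_esc=76.7, R_prime=5.3e15, u_prime=1.0)
```

With these inputs $\gamma'_{\rm c}$ comes out of order a few, i.e. the same order of magnitude as the $\log_{10}\gamma_c'\sim0.8-2.2$ values in Table \[t3\].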
In addition, it is worth pointing out that our model fails to fit the $\gamma$-ray spectrum in Period C in [@Hayashida2012], likely due to our simplified treatment of the external photon fields. More complex external photon fields [@Cerruti2013] may resolve the discrepancy between the model and the data.
Injected EEDs
-------------
The injected EEDs obtained in the fittings are shown in the lower panels of Figures \[figure1\]-\[figure4\]. It can be seen that the parameters describing the injected EEDs, i.e., $P'_e$, $\gamma_{\rm min}'$ and $n$, are well constrained. In the EC-DT model, $\gamma_{\rm min}'$ ranges from 248 to 877, $P'_e$ varies from $3.7\times10^{41}$ to $3.9\times10^{42}$ erg/s, and $n$ lies in the range $\sim2.5-3.6$. Note that $n$ is larger than 3 except for Periods A and B in [@Hayashida2012].
Looking at the injected EEDs and the emitting EEDs in Figures \[figure1\]-\[figure4\], one can see that the electrons are in the fast-cooling regime, i.e., $\gamma'_{\rm c}<\gamma'_{\rm min}$. In the fast-cooling regime, $\gamma'_{\rm min}$ is the break Lorentz factor of the emitting EED. The spectral index $s$ between $\gamma'_{\rm c}$ and $\gamma'_{\rm min}$ in the steady-state emitting EED depends on the cooling rate of the electrons with $\gamma'>\gamma'_{\rm min}$. In the case of Thomson-scattering or synchrotron energy losses of the form $\dot{\gamma'}\sim\gamma'^2$, $s=2$. If the dominant energy-loss rate does not have the form $\dot{\gamma'}\sim\gamma'^2$ (e.g., IC in the KN regime), $s$ differs from 2 [see @Yan2016b for a detailed investigation of $s$ for different energy-loss processes in 3C 279]. For the electrons with $\gamma'>\gamma_{\rm min}'$, the index of the distribution steepens to $n+1$ when $\dot\gamma'\propto\gamma'^2$ holds. Note that the EC-BLR model requires a larger injected electron power $P'_e$, several times that obtained in the EC-DT model; this is caused by the KN effect.
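The resulting steady-state shape can be written as a broken power law; the sketch below encodes the slopes discussed above for the Thomson-type case (slope 2 between $\gamma'_{\rm c}$ and $\gamma'_{\rm min}$, slope $n+1$ above $\gamma'_{\rm min}$), with an arbitrary normalization $K$ and continuity enforced at the break:

```python
def emitting_eed(gamma, gamma_c, gamma_min, n, K=1.0):
    """Piecewise steady-state EED in the fast-cooling regime (Thomson-type
    losses): N'(gamma) ~ gamma^-2 for gamma_c < gamma < gamma_min and
    ~ gamma^-(n+1) above gamma_min.  K is an arbitrary normalization."""
    if gamma < gamma_c:
        return 0.0
    if gamma < gamma_min:
        return K * gamma ** (-2.0)
    # prefactor gamma_min^(n-1) makes the two branches match at gamma_min
    return K * gamma_min ** (n - 1.0) * gamma ** (-(n + 1.0))
```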
Properties of the $\gamma$-ray emission region
----------------------------------------------
The magnetic field strength $B'$ and the Doppler factor $\delta_{\rm D}$ are two important physical quantities. Figures \[figure9\]-\[figure13\] show that both are very well constrained by the current data. From Table \[MT\_para\], one can see that $B'$ varies in the range 0.5-2.0 G, and $\delta_D$ in the range 34.6-47.5. With these values of $\delta_D$, we find that $R'$ lies in the range $\sim(4.8-6.6)\times10^{15}$ cm. These values are consistent with those derived in previous works [e.g., @Dermer2014; @Yan2016a] where static emitting EEDs were used. The large values of $\delta_D$ are also suggested by VLBI studies of the kinematics of the jet in 3C 279 [@Lister1997; @Jorstad2004].
Correlations between model parameters and observed $\gamma$-rays
----------------------------------------------------------------
The model parameters as a function of the observed $\gamma$-ray flux $F_\gamma$ [@Hayashida2012; @Hayashida2015; @Paliya2015] are shown in Figure \[corr\]. We calculate the Pearson probability of a null correlation, namely the p-value, which is reported in the corresponding panel of Figure \[corr\].
Our results show that the $\gamma$-ray activity is tightly correlated with $P_e'$ and $\gamma'_{\rm min}$, with p-values of $p=8.72\times10^{-5}$ and $p=5.45\times10^{-4}$, respectively. This indicates that the $\gamma$-ray activity is associated with the injection of the accelerated electrons.
In addition, there is a weak correlation between $F_\gamma$ and $\delta_D$, with $p=0.04$. No correlation between $F_\gamma$ and $B'$ is found.
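For reference, the Pearson coefficient underlying these p-values is straightforward to compute; the sketch below uses made-up data, not our fitted values (the p-value itself then follows from the t-distribution, omitted here):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# illustrative, strongly correlated toy data
r = pearson_r([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```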
![Evolutions of the model parameters ($B'$, $\delta_D$, $\gamma_{\rm min}'$, and $P_e'$) as a function of the observed $\gamma$-ray flux $F_\gamma$. The black-dashed line is the linear best-fitting to the data. The red triangles, blue open squares and black filled squares are the results derived by fitting the three SEDs in Hayashida et al.(2015), the three SEDs in Paliya et al.(2015), and the eight SEDs in Hayashida et al.(2012), respectively. []{data-label="corr"}](correlation_all.eps){width="45.00000%"}
Jet powers {#jetpower}
----------
Using the model parameters, we can derive the powers carried by the jet in the form of radiation ($L_r$), magnetic field ($L_B$), electrons ($L_e$) and protons ($L_p$) [e.g., @Celotti1993]. However, the poorly constrained $\gamma'_{\rm c}$ leads to large uncertainties on $L_e$ and $L_p$. Here, we calculate $L_r$ and $L_B$ using our well-constrained parameters (Table \[t3\]). One can see that $L_r\sim L_B$ except for Period D in [@Hayashida2015].
The jet power $L_{\rm kin}$ can also be estimated by the $L_{\rm kin}-L_{151}$ relation obtained by [@Godfrey2013], $$L_{\rm kin} = 3\times10^{44}\left(\frac{L_{151}}{10^{25} \rm W/Hz/sr}\right)^{0.67}~\rm erg/s,$$ where $L_{151}$ is the 151 MHz radio luminosity from the extended jet. The scaling relationship is roughly consistent with the theoretical relation presented in [@Willott1999]. This approach is widely used to estimate the jet kinetic energy in AGNs.
Using the relation $L_{151}=d_L^2F^{151}$, we have $L_{\rm kin}=3\times10^{44}(9.23F^{151})^{0.67}$ erg/s, where $F^{151}$ is in units of Jy. With $F^{151}=22.08$ Jy [@Arshakian2010], we obtain $L_{\rm kin}=1.05\times10^{46}$ erg/s, which is dozens of times larger than $L_r$.
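This estimate is a one-liner (values as quoted above):

```python
F151 = 22.08                            # 151 MHz flux density of 3C 279 [Jy]
L_kin = 3e44 * (9.23 * F151) ** 0.67    # jet kinetic power [erg/s]
# ~1.05e46 erg/s, dozens of times larger than L_r ~ 2e44 erg/s
```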
Discussions {#discussion}
===========
First, we stress that our model implicitly assumes a small acceleration zone that does not contribute significantly to the observed radiation.
On the acceleration mechanism
-----------------------------
Obviously, the values of $n$ significantly depart from the canonical $n\simeq2$ predicted by the non-relativistic shock acceleration, and also differ from $n\simeq2.2$ expected by the classic relativistic shock acceleration [e.g., @Kirk2000; @Baring1999; @Achterberg2001; @Ellison2004]. Although a steeper distribution ($n\simeq2.5$) can be produced considering the modification of shock by the back-reaction of the accelerated particles [e.g., @Kirk1996], it still fails to account for the large values of $n$ we obtained.
Note that the above discussions are given in the frame of (quasi-)parallel shocks. Relativistic oblique shocks could produce much softer injection EED with $n>2.5$ [@Ellison2004; @Niemiec2004; @Sironi2009; @Summerlin2012]. In relativistic shocks, @Summerlin2012 showed that the spectral index $n$ varies dramatically from 1 to $>3$ with the changes of obliquity and magnetic turbulence. Steep electron distribution with $n\sim3$ can be produced in relativistic shocks with large obliquity and low turbulence. Therefore, our results indicate that the relativistic shocks with large obliquity and low turbulence may be responsible for the acceleration of electrons in 3C 279.
In relativistic shocks, the minimum Lorentz factor of the distribution is [e.g., @Sari1998; @Piran1999] $$\gamma_{\rm min}'\simeq\frac{m_p}{m_e}\frac{n-2}{n-1}\epsilon_e\Gamma_{\rm sh},~n>2$$ where $\Gamma_{\rm sh}$ is the bulk Lorentz factor across the shock front, and $\epsilon_e$ is the fraction of the shock energy that goes into the electrons.[^2] From our results, taking $n=3.4$ and $\gamma'_{\rm min}=300$, and assuming $\Gamma_{\rm sh}=10$ [@Ushio2010], we obtain $\epsilon_e\simeq0.03$. This indicates that the acceleration is of low efficiency, consistent with the numerical simulation results of @Sironi2009.
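Inverting the expression above for $\epsilon_e$ with the quoted numbers:

```python
m_p_over_m_e = 1836.15                 # proton-to-electron mass ratio
n, gamma_min, Gamma_sh = 3.4, 300.0, 10.0

# invert gamma'_min ~ (m_p/m_e) * (n-2)/(n-1) * eps_e * Gamma_sh
eps_e = gamma_min * (n - 1.0) / ((n - 2.0) * m_p_over_m_e * Gamma_sh)
# ~0.028, i.e. the eps_e ~ 0.03 quoted in the text
```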
The strongest evidence for an oblique shock can be found in the polarization maps of the jet emission [@Lind1985; @Cawthorne1990; @Cawthorne2006; @Nalewajko2009; @Nalewajko2012]. [@Abdo2010] discovered a dramatic change in the optical polarization associated with the $\gamma$-ray flare in Period D in [@Hayashida2012]. They suggested that the observed polarization behavior may be the result of jet bending. Jet bending has been observed in a number of AGNs [e.g., @Graham2014]. In this scenario, an oblique shock could be formed through the interaction of the jet with the external medium. [@Denn2000] carried out extensive VLBI monitoring and revealed the existence of oblique shocks in the knots through the observed linear-polarization behavior. [@Lister2005] have shown that the distribution of the electric vector position angle (EVPA) offsets is similar to that predicted by an ensemble of oblique shocks with random orientations [@Lister1998]. [@Dulwich2009] have shown that the high-resolution data from the Very Large Array, Hubble Space Telescope and Chandra observatories support the presence of an oblique shock in the kiloparsec-scale jet of the powerful radio galaxy 3C 346. Very recently, some authors proposed that radio-to-$\gamma$-ray variabilities may be caused by oblique shocks in AGN jets [@Hughes2011; @Hovatta2014; @Aller2014; @Hughes2015]. In particular, using a relativistic oblique-shock acceleration plus radiation-transfer model, @Bottcher2019 successfully explained the SEDs and variabilities of 3C 279 during the flaring activity in the period December 2013 - April 2014 reported in @Hayashida2015.
Our results show that the $\gamma$-ray activities are strongly correlated with the injection of electrons. This indicates that the $\gamma$-ray activities could be caused by the acceleration of electrons at a relativistic oblique shock.
On the Magnetization and Radiative Efficiency
---------------------------------------------
The most promising scenario for launching the powerful jets of blazars involves the central accumulation of a large magnetic flux and the formation of magnetically arrested/choked accretion flows (MACF) [@Narayan2003; @Igumenshchev2008; @Tchekhovskoy2009; @Tchekhovskoy2011; @McKinney2012; @Chen2018]. In this scenario, the jet is powered by the Blandford-Znajek (BZ) mechanism, which extracts BH rotational energy, and the jet production efficiency for maximal BH spin is estimated as $\eta_j\simeq1.9(\phi_{\rm BH}/50)^2$, where $\phi_{\rm BH}$ is the dimensionless magnetic flux threading the BH [@Blandford1977; @Tchekhovskoy2010; @Sikora2013; @Sikora2013b]. The value of $\phi_{\rm BH}$ is typically of the order of 50 according to the numerical simulations by [@McKinney2012], although it depends on the details of the model.
Assuming $L_{\rm jet}=L_{\rm kin}$, one finds $\eta_j\equiv \epsilon L_{\rm jet}/L_d\simeq1.6$ for $\epsilon=0.3$[^3] [@Thorne1974] and $L_d\sim2\times10^{45}$ erg/s [@Pian1999]. This is in good agreement with $\eta_j=1.9$ expected in the MACF scenario for a typical value $\phi_{\rm BH}=50$. Therefore, our result supports the BZ mechanism for jet launching.
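Numerically, with the values quoted above (a back-of-the-envelope check):

```python
L_kin = 1.05e46    # jet kinetic power from the L_kin-L_151 relation [erg/s]
L_d = 2e45         # accretion-disk luminosity [erg/s]
eps = 0.3          # disk radiative efficiency for a maximally rotating BH

eta_j = eps * L_kin / L_d              # ~1.6
eta_j_BZ = 1.9 * (50.0 / 50.0) ** 2    # MACF prediction for phi_BH = 50
```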
The magnetization and radiative efficiency are usually considered to be indicators of the acceleration mechanism operating in blazar jets. We derive the magnetization parameter $\sigma_B$ and the radiative efficiency $\eta_r$ [@Kang2014; @Sikora2016; @Fan2018], $$\begin{aligned}
\sigma_B &=& L_B/(L_{\rm kin}-L_B), \\
\eta_r &=& L_r/(L_{\rm kin}+L_r) .\end{aligned}$$
Since $L_{\rm kin}$ is the time-averaged kinetic power of a source with the radio flux $F^{151}$, we use the average values of $L_B$ and $L_r$ and get $\sigma_B\simeq\eta_r\simeq0.02$. @Baring2017 showed that electrons would be efficiently accelerated by relativistic shocks in blazar jets with $\sigma_B$ changing from $\sim10^{-4}$ to 0.06.
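As a rough check of these numbers, using $L_{\rm kin}$ from Section \[jetpower\] and representative average values of $L_B$ and $L_r$ read off Table \[t3\] (the averages below are illustrative round numbers, not the exact means used in the text):

```python
L_kin = 1.05e46   # time-averaged jet kinetic power [erg/s]
L_B = 2.2e44      # illustrative average magnetic power [erg/s]
L_r = 2.4e44      # illustrative average radiated power [erg/s]

sigma_B = L_B / (L_kin - L_B)   # magnetization parameter
eta_r = L_r / (L_kin + L_r)     # radiative efficiency
# both come out ~0.02, matching the values quoted in the text
```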
Summary
=======
Using a time-dependent one-zone SSC+EC model and the MCMC fitting technique, we analyzed 14 high-quality SEDs of 3C 279, assuming that the $\gamma$-ray emission region is located either in the BLR or in the DT. The results show that the SEDs are better fitted in the latter case. The injected EED is well constrained in each state. The index of the injected EED is large, ranging from 2.7 to 3.8, which cannot be produced by (quasi-)parallel shocks. We argue that the steep injected EED may result from acceleration at relativistic oblique shocks. Given the correlations of $F_{\gamma}$ with $\gamma_{\rm min}'$ and $P'_e$, the $\gamma$-ray flares are caused by the injection of accelerated electrons.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the referee for helpful suggestions. We acknowledge the National Natural Science Foundation of China (NSFC-11803081, NSFC-U1738124) and the joint foundation of Department of Science and Technology of Yunnan Province and Yunnan University \[2018FY001(-003) and 2018FA004\]. BZD acknowledges funding support from National Key R&D Program of China under grant No. 2018YFA0404204. WH acknowledges funding supports from Key Laboratory of Astroparticle Physics of Yunnan Province (No. 2016DG006) and the Scientific and Technological Research Fund of Jiangxi Provincial Education Department (No. GJJ180584). DHY is also supported by the CAS “Light of West China” Program and Youth Innovation Promotion Association.
[99]{} Abdo A. A., Ackermann M., Ajello M., et al., 2010, Natur, 463, 919 Abdo A. A., Ackermann M., Ajello M., et al., 2011, ApJ, 736, 131 Achterberg A., Gallant Y. A., Kirk J. G., Guthmann A. W., 2001, MNRAS, 328, 393 Aller M. F., Hughes P. A., Aller H. D., Latimer G. E., Hovatta T., 2014, ApJ, 791, 53A Aleksić J. et al., 2015, A&A, 578, 22 Arshakian T. G., Torrealba J., et al., 2010, A&A, 520A, 62A Baring M. G., Ellison D. C., Reynolds S. P., Grenier I. A., Goret P., 1999, ApJ, 513, 311 Baring M. G., Böttcher M., Summerlin E. J., 2017, MNRAS, 464, 4875 Blumenthal G. R. Gould R. J., 1970, RvMP, 42, 237 Blandford R. D., Znajek R. L., 1977, MNRAS, 179, 433 Blazejowski M., Sikora M., Moderski R., Madejski G. M., 2000, ApJ, 545, 107 Böttcher M. Chiang J., 2002, ApJ, 581, 127 Böttcher M., Basu, S. Joshi, M. et al., 2007, ApJ, 670, 968 Böttcher M., Reimer A., Sweeney K., Prakash A., 2013, ApJ, 768, 54 Böttcher M., Baring M. G., 2019, ApJ, 887, 133. Cawthorne T. V. Cobb W. K., 1990, ApJ, 350, 536C Cawthorne T. V., 2006, MNRAS, 367, 851C Celotti A., Fabian A. C., 1993, MNRAS, 264, 228 Cerruti M., Dermer C. D., Lott B., Boisson C., Zech A., 2013, ApJL, 771, L4 Chiaberge M. Ghisellini G., 1999, MNRAS, 306, 551 Chen L., 2018, ApJS, 235, 39 Coppi P. S. Blandford R. D., 1990, MNRAS, 245, 453 Collmar W., Böttcher M., Krichbaum T. P., et al., 2010, A&A, 522, A66 Denn G. R., Mutel R. L., Marscher A. P., 2000, ApJS, 129, 61 Dermer C. D., Fink J. D., Krug H., Böttcher M., 2009, ApJ, 692, 32 Dermer C. D., Cerruti M., Lott B., Boisson C., Zech A., 2014, ApJ, 782, 82 Drury L. O’C., 1983, Rep. Prog. Phys., 46, 973 Dulwich F., Worrall D. M., Birkinshaw M., et al., 2009, MNRAS, 398, 1207D Ellison D. C., Jones F. C., Reynolds S. P., 1990, ApJ, 360, 702 Ellison D. C., Double G. P., 2004, Astropart. Phys., 22, 323 Fan Xu-Liang, Wu Qingwen, Liao, Neng-Hui, 2018, ApJ, 861, 97 Finke J. D., Dermer C. D., Böttcher M., 2008, ApJ, 686, 181 Ghisellini G. 
Tavecchio F., 2008, MNRAS, 387, 1669 Ghisellini G., Tavecchio F., 2009, MNRAS, 397, 985 Ghisellini G., Tavecchio F., Foschini L., et al., 2010, MNRAS, 402, 497 Godfrey L. E. H., Shabala S. S., 2013, ApJ, 767, 12 Graff P. B., Georganopoulos M., et al., 2008, ApJ, 689, 68G Graham P. J., Tingay S. J., 2014, ApJ, 784, 159 Guo F., Liu Y. H., Daughton W., Li H., 2015, ApJ, 806, 167 Hartman R. C., et al., 1996, ApJ, 461, 698 Hayashida M., Madejski G. M., Nalewajko K. et al., 2012, ApJ, 754, 114 Hayashida M., Nalewajko K., Madejski G. et al., 2015, ApJ, 807, 79 Hovatta Talvikki, Aller Margo F., et al., 2014, AJ, 147, 143H Hu W., Fan Z. H., Dai, B. Z., 2015, RAA, 15, 1455 Hu W., Zeng W., Dai B. Z., 2017b, arXiv1711.05494 Hughes Philip A., Aller Margo F., Aller Hugh D., 2011, ApJ, 735, 81H Hughes Philip A., Aller Margo F., Aller Hugh D., 2015, ApJ, 799, 207H Igumenshchev I. V., 2008, ApJ, 677, 317 Jones F. C., 1968, PhRv, 167, 1159J Jones F. C., Ellison D. C., 1991, Space Sci. Rev., 58, 259 Jorstad S. G., Marscher A. P., Lister M. L., et al., 2004, AJ, 127, 3115 Kang S. J., Chen L., Wu Q., 2014, ApJS, 215, 5 Kirk J. G., Heavens A. F., 1989, MNRAS, 239, 995 Kirk J. G., Duffy P., Gallant Y. A., 1996, A&A, 314, 1010 Kirk J. G., Guthmann A. W., Gallant Y. A., Achterberg A., 2000, ApJ, 542, 235 Lewis A., Bridle, S., 2002, Phys. Rev. D, 66, 103511 Lind K. R., Blandford R. D., 1985, ApJ, 295, 358 Lister M. L., Marscher A. P., 1997, ApJ, 476, 572 Lister M. L., Marscher A. P., Gear W. K., 1998, ApJ, 504, 702 Lister M. L., Homan D. C., 2005, AJ, 130, 1389L Liu J., Yuan Q., Bi X. J., Li H., Zhang X.M., 2012, Phys. Rev. D, 85, d3507 Liu H. T., Bai J. M., 2006, ApJ, 653, 1089L Massaro E., Tramacere A., Perri M., Giommi P., Tosti G., 2006, A&A, 448, 861 McKinney J. C., Tchekhovskoy A., Blandford R. D., 2012, MNRAS, 423, 3083 Narayan R., Igumenshchev I. V., Abramowicz M. 
A., 2003, PASJ, 55, L69 Nalewajko K., 2009, MNRAS, 395, 524N Nalewajko K., Sikora M., 2012, A&A, 543A, 115N Niemiec J., Ostrowski M., 2004, ApJ, 610, 851 Pacciani L., Tavecchio F., Donnarumma I., et al., 2014, ApJ, 790, 45 Paliya V. S., Sahayanathan S., Stalin C. S., 2015, ApJ, 803, 15 Peng Ya-ping, Yan Da-hai, Zhang Li, 2014, MNRAS, 442, 2357 Pian E., Urry C. M., Maraschi L., et al., 1999, ApJ, 521, 112 Piran T., 1999, PhR, 314, 575 Poole T. S., Breeveld A. A., Page M. J., et al., 2008, MNRAS, 383, 627P Sari R., Piran T., Narayan R., 1998, ApJL, 497, L17 Sikora M., Begelman M. C., Rees M. J., 1994, ApJ, 421, 153 Sikora M., Begelman M. C., 2013, ApJL, 764, L24 Sikora M., Stasińska G., Kozie[ł]{}-Wierzbowska D., Madejski G. M., Asari N. V., 2013, ApJ, 765, 62 Sikora M., 2016, Galax, 4, 12 Sironi L., Spitkovsky A., 2009, ApJ, 698, 1523 Sironi L., Keshet U., Lemoine M., 2015, SSRv, 191, 519 Stickel M., Padovani P., Urry C. M., Fried J. W., Kuehr H., 1991, ApJ, 374, 431 Stocke J. T., Morris S. L., Gioia I. M., et al., 1991, ApJS, 76, 813 Summerlin E. J., Baring M. G., 2012, ApJ, 745, 63 Tavecchio F., Ghisellini G., 2008, MNRAS, 386, 945T Tchekhovskoy A., McKinney J. C., Narayan R., 2009, ApJ, 699, 1789 Tchekhovskoy A., Narayan R., McKinney J. C., 2010, ApJ, 711, 50 Tchekhovskoy A., Narayan R., McKinney J. C., 2011, MNRAS, 418, L79 Thorne K. S., 1974, ApJ, 191, 507 Tramacere A., Massaro E., Taylor A. M., 2011, ApJ, 739, 66 Ulrich M. H., Maraschi L., Urry C. M., 1997, ARA&A, 35, 445 Urry C. M., Padovani P., 1995, PASP, 107, 803 Ushio M., Stawarz [Ł]{}., Takahashi T., et al., 2010, ApJ, 724, 1509 Wehrle A. E., et al., 1998, ApJ, 497, 178 Willott C. J., Rawlings S., Blundell K. M., Lacy M., 1999, MNRAS, 309, 1017 Wu Lin-hui, Wu Qingwen, Yan Da-hai, et al., 2018, ApJ, 852, 45 Yan D. H., Zhang L., Yuan Q., Fan Z. H., Zeng H. D., 2013, ApJ, 765, 122 Yan D. H., Zhang L., Zhang S. N., 2015, MNRAS, 454, 1310 Yan D. H., He J. J., Liao J. Y. et al., 2016a, MNRAS, 456, 2173 Yan D. 
H., Zhang L. Zhang S. N., 2016b, MNRAS, 459, 3175 Yuan Q., Liu S., Fan Z., Bi X., Fryer C., 2011, ApJ, 735, 120 Zhang J., Liang E. W., Zhang S. N., Bai J. M., 2012, ApJ, 752, 157 Zhou Y., Yan D. H., Dai B. Z., Zhang, L., 2014, PASJ, 66, 12
One-dimensional probability distributions of the free model parameters {#appA}
======================================================================
One-dimensional probability distributions of the derived parameters obtained from SED fittings with the DT photons {#appB}
==================================================================================================================
[^1]: E-mail: [email protected]
[^2]: It should be pointed out that the formula mentioned above is suitable for parallel shocks. For a relativistic oblique shock, an exact calculation of the minimum injection energy could only be made through kinetic particle-in-cell simulations, which are beyond the scope of this work. However, the formula should be valid for the purpose of our analysis if we assume that, for a relativistic oblique shock, the minimum injection energy is larger than that expected for a parallel shock [@Niemiec2004; @Summerlin2012].
[^3]: $\epsilon\equiv L_d/\dot{M}c^2$ is the radiation efficiency of an accretion disk with $\dot{M}$ denoting the mass accretion rate.
---
abstract: 'Metal organic framework (MOF) materials have attracted a lot of attention due to their numerous applications in fields such as hydrogen storage, carbon capture, and gas sequestration. In all these applications, van der Waals forces dominate the interaction between the small guest molecules and the walls of the MOFs. In this review article, we describe how a combined theoretical and experimental approach can successfully be used to study those weak interactions and elucidate the adsorption mechanisms important for various applications. On the theory side, we show that, while standard density functional theory is not capable of correctly describing van der Waals interactions, functionals especially designed to include van der Waals forces exist, yielding results in remarkable agreement with experiment. From the experimental point of view, we show examples in which IR adsorption and Raman spectroscopy are essential to study molecule/MOF interactions. Importantly, we emphasize throughout this review that a combination of theory and experiment is crucial to effectively gain further understanding. In particular, we review such combined studies for the adsorption mechanism of small molecules in MOFs, the chemical stability of MOFs under humid conditions, water cluster formation inside MOFs, and the diffusion of small molecules into MOFs. The understanding of these phenomena is critical for the rational design of new MOFs with desired properties.'
address:
- 'Department of Physics, Wake Forest University, Winston-Salem, NC 27109, USA.'
- 'Department of Physics, Wake Forest University, Winston-Salem, NC 27109, USA.'
- 'Department of Materials Science and Engineering, University of Texas at Dallas, TX 75080, USA.'
- 'Department of Materials Science and Engineering, University of Texas at Dallas, TX 75080, USA.'
- 'Department of Physics, Wake Forest University, Winston-Salem, NC 27109, USA.'
author:
- Sebastian Zuluaga
- Pieremanuele Canepa
- Kui Tan
- 'Yves J. Chabal'
- Timo Thonhauser
bibliography:
- 'biblio.bib'
title: Study of van der Waals bonding and interactions in metal organic framework materials
---
Introduction
============
Metal organic framework (MOF) materials are nano-porous materials composed of metal centers linked by organic ligands. Over the past decade, MOFs have attracted a surge of attention due to their extraordinary properties, useful for hydrogen storage [@h_storage; @h_storage2; @h_storage3; @h_storage4], CO$_2$ capture [@CO2_capt; @CO2_capt2; @CO2_capt3; @CO2_capt4; @Pera_2013], catalysis [@Farha_MOFcatal2; @Lee_MOFcatal3; @Shultz_MOFcatal], and sensing [@Kreno12], among others [@Stroppa13; @Stroppa11]. Part of the success of MOFs also has to do with their often simple synthesis, i.e., by combining the organic ligands and the metallic salt in a solvothermal reaction [@mof_app; @mof_prep]. Most practical applications of MOFs rely on a specific interaction of the MOF with small molecules. This interaction—typically of a weak van der Waals type—has thus been at the center of many experimental and theoretical studies. It is exactly the understanding of this interaction that will allow us to interpret the properties of current MOFs better and design new and improved MOFs with desired properties. For example, we know that, in general, the MOF's surface area and the binding strength to the metal centers are the two main factors controlling the uptake of small molecules. However, the exact correlation between those properties is unclear [@h_storage5]. Another example concerns ongoing research addressing the problem of low stability of MOFs under humid conditions. While some progress has been made [@Yang_2013; @Taylor_2013; @Li_2013; @Han_2010; @Demessence_2009; @wikipaper55], the newly found water-resistant MOFs often lack the desired specific molecular uptakes that are needed. Overall, progress has been slow to address such questions due to a lack of appropriate methods to study the molecular interactions inside MOFs.
In the present review article we highlight a strategy, combining experiment and theory, that overcomes these problems and has been particularly successful in unraveling van der Waals interactions in MOFs.
The experimental study of those interactions often relies on powerful vibrational spectroscopy such as infrared (IR) absorption and Raman scattering, which indirectly provide information about the molecular adsorption process taking place in the MOF. For the theoretical description with *ab initio* methods, the typical size of MOF unit cells and their extended nature rule out most highly accurate quantum-chemistry approaches and leave density functional theory (DFT) as the only viable option. Historically, however, standard exchange-correlation functionals within DFT such as LDA and GGA only poorly capture van der Waals interactions. We will show here that the specially designed functional vdW-DF [@VdW1; @VdW2; @VdW3] is in fact capable of describing van der Waals interactions reliably and gives results in remarkable agreement with experiment.
This review article aims to showcase the importance of IR and Raman spectroscopy techniques combined with *ab initio* simulations at the DFT level (utilizing vdW-DF) as a promising way to study and rationally design complex systems where van der Waals bonding plays a major role. To this end, this work is divided into several sections. In Section \[sec:IR\_Raman\], we give a description of the success and failures of vibrational spectroscopic techniques to study van der Waals interactions. Then, in Section \[sec:comp\], we present a description of the computer simulations used to describe and interpret complex spectroscopic experiments. In Section \[sec:cases\], we present several relevant examples where the combination of experiment and theory explains the behavior of various MOF systems and provides much needed understanding. We conclude with a short summary and outlook in Section \[sec:summary\].
Success and Failure of Vibrational Spectroscopies to study van der Waals Interactions {#sec:IR_Raman}
=====================================================================================
IR and Raman spectroscopy of small molecule adsorption in MOFs
--------------------------------------------------------------
IR and Raman spectroscopy provide complementary information about bonding configurations through their vibrational spectra. IR spectra reflect photon absorption during transitions from ground- to first-excited vibrational levels ($\nu =0 \to 1$) in the electronic ground state, requiring a dynamic dipole moment (associated with a change in the dipole moment during the vibrational motion) [@Nakamoto_2009_UTD1]. In contrast, Raman spectroscopy is based on photon scattering by molecules and has its origin in the electronic polarization caused by monochromatic visible radiation [@Nakamoto_2009_UTD1; @Ferraro_2003_UTD2]. Therefore, a vibrational mode is Raman active if the polarizability is modulated during the vibration [@Nakamoto_2009_UTD1; @Ferraro_2003_UTD2]. Strict selection rules exist for both spectroscopies, sometimes leading to complementary detection [@Ferraro_2003_UTD2]. For example, the vibration of the homopolar diatomic molecule H$_2$ is not IR active (due to the absence of a fluctuating dipole associated with the symmetric stretching), but strongly Raman active. However, once the molecule interacts with the MOF, it undergoes a perturbation that slightly polarizes the originally symmetric molecule and makes it weakly IR active. This perturbation is usually accompanied by a red-shift of the H–H stretching modes, located at 4161.1 cm$^{-1}$ and 4155 cm$^{-1}$ for para and ortho H$_2$, respectively [@Welsh_1969_UTD3]. For the linear molecule CO$_2$, the symmetric stretch mode ($\nu _{1}$) is Raman active but not IR active, whereas the bending and antisymmetric stretch modes ($\nu _{2}$ and $\nu _{3}$) are IR active [@Ferraro_2003_UTD2].
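The selection-rule argument above can be illustrated with a toy point-charge model of CO$_2$. The partial charges below are illustrative assumptions, not *ab initio* values; the point is only that the dipole derivative $d\mu/dQ$ vanishes along the symmetric stretch coordinate but not along the antisymmetric one:

```python
# Toy point-charge model of linear CO2 along x: O(-q) -- C(+2q) -- O(-q).
# IR activity requires a nonzero dipole derivative d(mu)/dQ along the mode.
q = 0.3          # assumed partial charge on each oxygen (units of e) -- illustrative
r0 = 1.16        # approximate equilibrium C=O bond length (angstrom)
dQ = 1e-4        # small finite displacement along the mode coordinate

def dipole(x_o1, x_c, x_o2):
    """Dipole moment (e*angstrom) of the three point charges along x."""
    return -q * x_o1 + 2 * q * x_c - q * x_o2

mu0 = dipole(-r0, 0.0, r0)                    # equilibrium dipole: zero by symmetry

# Symmetric stretch: both O atoms move outward, C fixed -> dipole stays zero.
mu_sym = dipole(-(r0 + dQ), 0.0, r0 + dQ)
# Antisymmetric stretch: both O atoms shift the same direction (C held fixed
# here for simplicity) -> a net dipole appears.
mu_asym = dipole(-(r0 - dQ), 0.0, r0 + dQ)

dmu_sym = (mu_sym - mu0) / dQ     # ~0      -> IR inactive, Raman active
dmu_asym = (mu_asym - mu0) / dQ   # nonzero -> IR active
print(dmu_sym, dmu_asym)
```

The same finite-difference logic, applied to the polarizability instead of the dipole, gives the Raman selection rule.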
Based on these principles, IR and Raman spectroscopy can be very useful tools to characterize the nature of host/guest interactions [@Lamberti_2010_UTD4; @Vimont_2007_UTD5; @Gascon_2009_UTD6; @Stavitski_2011_UTD7] in MOFs. Particularly valuable information can be gained by identifying perturbations of the IR active modes. For example, the first spectroscopic evidence for the formation of an electron-donor acceptor (EDA) complex between CO$_2$ and functional groups of MOFs was observed in a MOF of type MIL–53 and reported in later studies of adsorption of CO$_2$ in amino-based MOFs [@Vimont_2007_UTD5; @Gascon_2009_UTD6]. The adsorption of CO$_2$ molecules in MIL–53 leads to a modest red-shift from $-10$ cm$^{-1}$ to $-15$ cm$^{-1}$ of the stretching mode $\nu _{3}$ and to a splitting of the bending mode $\nu _{2}$ due to the removal of degeneracy of the in-plane and out-of-plane bends [@Vimont_2007_UTD5]. A similar splitting of $\nu _{2}$ modes is common in many electron-donor acceptor complexes of CO$_2$ with organic solvents or polymers possessing electron-donating functional groups—e.g., carbonyl groups—due to the interaction of the carbon atom of CO$_2$ as the electron acceptor [@Dobrowolski_1992_UTD8; @Kazarian_1996_UTD9]. Moreover, significant perturbations of both $\nu$(OH) and $\sigma$(OH) bands of hydroxyl groups ($\nu$(OH) = 19 cm$^{-1}$ and $\sigma$(OH) = 30 cm$^{-1}$) suggest that oxygen atoms of the framework hydroxyl group act as the electron donor [@Vimont_2007_UTD5].
As evident from these examples (and many others), it is clear that IR and Raman spectroscopy, by themselves and even without the aid of theoretical calculations, can often provide insight into the interactions between guest molecules and the MOF. However, as we will see in the next section, in other cases the “blind” application of these spectroscopic techniques can lead to a significant misinterpretation of the experimental data obtained. This can happen when IR and Raman spectroscopy are used as *indirect* probes—i.e. deducing other physical properties of the system from a simple red- or blue-shift in the spectrum. In such cases, theory and computer simulations are essential to derive a complete understanding, as they provide *direct* access to many properties of the system, and often provide interpretations that are unexpected from simple correlations in the experimental data.
Difficulty of IR and Raman spectroscopy to describe small molecule adsorption in MOFs
-------------------------------------------------------------------------------------
Despite the high sensitivity of spectroscopy to molecular interactions with the MOF, attention must be paid when interpreting the data to extract information about the interaction from vibrational frequency shifts, intensities, and line-widths [@Nijem_2010_UTD10; @Nijem_2010_UTD11]. For example, it is commonly accepted that the magnitude of the IR shift of small adsorbed molecules in MOFs is directly related to their adsorption energy, and thus the IR shift can be used indirectly to estimate the relative adsorption energies. However, in our recent IR spectroscopy study of molecular hydrogen in a number of different MOF compounds [@Nijem_2010_UTD10], we find that there is no clear correlation between H$_2$ adsorption energies (determined by isotherm measurements) and the magnitude of the H$_2$ stretch shift. In fact, metal-formate M$_3$(HCOO)$_6$ \[M = Co, Ni and Mn\] compounds with the highest adsorption energy have the lowest hydrogen IR shift. In this case, we find that the IR shift is dominated by the environment (organic ligand, metal center, and structure) [@Nijem_2010_UTD10], rather than by the adsorption energy to the metal.
Similarly, integrated areas for the specific IR bands were long considered to be directly correlated with the amount (loading) of adsorbed molecules, assuming that the dipole moment of the adsorbed species is not affected by loading or site geometry [@Bordiga_2007_UTD12; @Vitillo_2008_UTD13; @Garrone_2005_UTD14]. Based on this assumption, variable temperature IR was used to measure the absorbance of IR bands (including that of H$_2$ molecules) and estimate the adsorption energy [@Bordiga_2007_UTD12; @Garrone_2005_UTD14]. However, our theoretical and experimental findings for H$_2$ molecules in MOF74 with unsaturated metal centers indicate that large variations in the induced dipole moment take place as a function of loading, due to the interaction among adsorbed molecules [@Nijem_2010_UTD11]. In the case of Mg-MOF74, the effective charge of H$_2$ at the metal sites weakens from 0.021 $e$ to 0.015 $e$ as the loading increases from 1 to 12 H$_2$/primitive cell, i.e., as the neighboring sites are occupied. Thus, the IR intensity is reduced by 50$\%$, since it is proportional to the square of the effective charge or the dynamic dipole moment [@Nijem_2010_UTD11]. These findings suggest that the integrated areas of IR bands do not always correlate with the amount of H$_2$ adsorbed and possible variations in dynamic dipole moments have to be taken into account.
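The quoted $\sim$50% intensity reduction follows directly from the proportionality between IR intensity and the square of the effective charge; a quick check with the numbers above:

```python
# IR intensity I is proportional to q_eff^2, so the intensity ratio between
# full and low loading follows from the two quoted effective charges.
q_low_loading = 0.021    # effective charge (e) at 1 H2/primitive cell
q_full_loading = 0.015   # effective charge (e) at 12 H2/primitive cell

ratio = (q_full_loading / q_low_loading) ** 2   # I_full / I_low
print(f"intensity drops to {ratio:.0%} of its low-loading value")
# -> about 51%, i.e. the ~50% reduction quoted in the text.
```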
In summary, IR and Raman spectroscopy can be very helpful tools when studying small molecule adsorption in MOFs. However, extreme caution is necessary when utilizing those methods to make assumptions about adsorption energies or loadings, as illustrated in the examples given above. Under these circumstances, theoretical input using first-principles calculations—specifically capable of dealing with van der Waals interactions—is critical to interpret experimental observations correctly.
Experimentation
---------------
Zecchina and coworkers first used transmission IR spectroscopy to study the fundamental aspects of the interaction between H$_2$ and MOFs, mainly in the low temperature ($<$300 K) and pressure regime. By means of the variable temperature infrared (VTIR) spectroscopy method, the adsorption enthalpy was derived by measuring the intensity of absorption bands as a function of temperature [@Bordiga_2005; @Vitillo_2008_UTD13]. However, caution must be taken when using the VTIR method since the dipole moment might change as a function of loading, as pointed out above. More recent work [@Nijem_2010_UTD10; @Nijem_2010_UTD11; @wikipaper55; @wikipaper45] has investigated a series of small molecules (H$_2$, CO$_2$, CH$_4$, SO$_2$, H$_2$O, etc.) using [*in situ*]{} IR absorption spectroscopy to quantify the effect of their interaction with different types of MOFs in a wide range of pressures (from 50 mTorr to 55 bar) and temperatures (10 K to 423 K). In order to perform the IR measurements at and above room temperature, a portion ($\sim$10 mg) of MOF was lightly pressed onto a KBr support, mounted into a high-temperature high-pressure cell (Specac product number P/N 5850c), and further heated in vacuum for activation. During the annealing, the removal of solvent molecules was monitored by [*in situ*]{} IR spectroscopy. Then, the activated sample was cooled to specific temperatures in order to perform the measurements during gas exposures at specific pressures. Measurements were performed in transmission using a cooled InSb/MCT detector. Similar measurements were also performed in a Janis PTSHI series closed-cycle refrigerator (CCR) system for low temperature studies ($<$298 K). In addition to transmission IR, diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) was employed to investigate the dynamics of H$_2$ molecules adsorbed within the MOF74 compounds [@Gascon_2009_UTD6; @Stavitski_2011_UTD7].
Furthermore, DRIFTS has also been used to study the interactions between CO$_2$ and functional groups on the organic ligands of some MOFs under the controlled [*in situ*]{} cell environment [@FitzGerald_2011; @FitzGerald_2010; @Windisch_2009].
Most recently, [*in situ*]{} Raman spectroscopy was also used to study the structural response mechanism of the flexible metal organic framework Zn$_2$(bpdc)$_2$bpee \[bpdc = 4,4'-biphenyl dicarboxylate and bpee = 1,2-bis(4-pyridyl)ethylene\] upon adsorption of CO$_2$, N$_2$, and hydrocarbon molecules [@CO2_capt8; @wikipaper47]. In this case, Raman spectroscopy is more suitable because the phonon modes of the MOFs do not overwhelm the spectra as they do in the case of IR spectra, which include a large number of combination and overtone bands. By integrating a Linkam FTIR600 cooling/heating stage, the activated sample was measured under a controlled temperature and gas environment. The changes in specific bonds in the MOF structure, monitored by Raman spectroscopy, were correlated to the MOF structural changes and the guest-host interactions.
Computer Simulations as a Tool to Interpret Complex Spectroscopic Experiments {#sec:comp}
=============================================================================
Ab initio modeling of materials
-------------------------------
Very successful classical modeling techniques exist, such as force-field simulations, which are suitable for studying very large systems; however, they are not capable of describing the electronic structure of materials and the intricate role it plays in many processes. In the case of MOF materials, we are most interested in electronic-structure changes during the adsorption and desorption of small molecules in their cavities, as well as a number of catalytic processes. As such, unless cost prohibitive, *ab initio* modeling techniques are the methods of choice. For an overview of widely-used materials-modeling techniques, ranging from classical approaches to high-level quantum-chemistry methods, see Ref. [@Kolb2012a].
Modeling MOF materials with *ab initio* methods presents a particular challenge. The adsorption/desorption of small molecules in MOFs is often governed by physisorption, i.e. weak van der Waals forces, which are difficult to capture correctly with *ab initio* methods. Correlated high-level quantum-chemistry approaches, such as M[ø]{}ller-Plesset perturbation theory and coupled-cluster methods [@Szabo_96], can describe van der Waals interactions, but their computational cost limits them to small systems ($\sim$100 and $\sim$30 atoms, respectively [@Head-Gordon]) and their application to large periodic systems, such as the MOFs of interest here, is impractical [@Marsman_2009; @Booth_2012; @Gruneis_2010; @Usvya_2011; @Ayala_2001; @Maschio_2007].
Density functional theory (DFT) [@Kohn_64], on the other hand, has a much more favorable computational cost and can be used for systems with up to 1000 atoms—in linear-scaling DFT even up to 1,000,000 atoms [@Head-Gordon]. It is also easily implemented with periodic plane-wave basis sets, such that treating periodic systems becomes trivial. Unfortunately, with standard exchange-correlation functionals, DFT cannot reliably describe van der Waals interactions [@Shevlin_09; @Hobza_08; @Sponer_08; @Cerny_07], a phenomenon where charge fluctuations in one part of the system correlate with fluctuations in another, resulting in an attractive force that is a *truly nonlocal correlation effect* [@Langreth_09]. It follows that standard local and semi-local functionals, such as LDA and GGA, cannot reliably account for these nonlocal effects and yield qualitatively erroneous predictions [@French_10; @Kristyan_94; @Perez-Jorda_94; @Meijer_96]. While very promising extensions exist [@French_10], most notably DFT-D [@Grimme1; @Grimme2], DFT-SAPT [@Hesselmann_03; @Misquitta_02; @Williams_01], and $C_6$-based methods [@Tkatchenko_09; @Tkatchenko_09_2], they are semi-empirical, perturbative, and not seamlessly self-consistent density functionals.
vdW-DF: a good compromise between cost and accuracy
---------------------------------------------------
We have overcome this problem and include van der Waals forces self-consistently in DFT [@VdW2] in the form of a van der Waals density functional (vdW-DF). Its accuracy is comparable to high-level quantum-chemistry approaches [@Cooper_08; @Copper_08_2; @Li_08]. vdW-DF goes beyond standard DFT to include a *truly nonlocal correlation* $E^{\rm nl}_c$ in the exchange-correlation energy $E_{xc}$, $$\begin{aligned}
\label{equ:functional}
E_{xc}[n] &=& E^{\rm revPBE}_x[n] + E^{\rm LDA}_c[n] +
E^{\rm nl}_c[n]\;,\\[2ex]
E_c^{\rm nl}[n] &=& \frac{1}{2}\int d^3r\,d^3r'\,\,
n(\vec{r}\,)\phi(\vec{r},\vec{r}\,')n(\vec{r}\,')\;,\end{aligned}$$ where $n$ is the electron density and revPBE [@revPBE] and LDA [@LDA] are standard functionals. $E_c^{\rm nl}$ is determined by the kernel $\phi$, which is a complicated function of the densities and their gradients at $\vec{r}$ and $\vec{r}\,'$, first developed by Langreth et al. [@VdW1]. With the corresponding exchange-correlation potential $v^{\rm nl}_c(\vec{r}\,) = \delta E^{\rm nl}_c[n]/\delta n(\vec{r}\,)$ [@VdW2], the method becomes self-consistent and permits calculation of atomic forces—essential for structural optimization and molecular-dynamics simulations. Like high-level quantum-chemical methods, vdW-DF precludes bias through a full, self-consistent solution of the coupled Schrödinger equations for all valence electrons. Calculations automatically include direct and induced electrostatic effects, charge transfer effects, and effects due to the non-nuclear centricity of the dispersion interaction as well as its deviations from the inverse sixth power at smaller than asymptotic separations.
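The structure of the double integral defining $E_c^{\rm nl}$ can be sketched numerically. The snippet below discretizes $E_c^{\rm nl} = \frac{1}{2}\iint n(\vec r\,)\phi(\vec r, \vec r\,')n(\vec r\,')\,d^3r\,d^3r'$ on a coarse real-space grid; note that `phi_toy` is a hypothetical stand-in kernel depending only on $|\vec r - \vec r\,'|$, not the actual Langreth et al. kernel, which also depends on the densities and their gradients at the two points:

```python
import numpy as np

# Discretized nonlocal correlation energy:
#   E_c^nl = (1/2) sum_ij  w_i n_i  phi(|r_i - r_j|)  n_j w_j
# CAVEAT: phi_toy is an assumed toy kernel, NOT the real vdW-DF kernel.
def phi_toy(d, d0=1.0):
    """Toy kernel: attractive, ~ -1/d^6 at large separation, finite at d = 0."""
    return -1.0 / (d**6 + d0**6)

def e_c_nl(points, n_vals, weights):
    """Double sum over all pairs of grid points (the discretized integral)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w_n = weights * n_vals              # combine quadrature weight and density
    return 0.5 * w_n @ phi_toy(dist) @ w_n

# Two Gaussian "density blobs" on a coarse cubic grid:
g = np.linspace(-3.0, 3.0, 9)
pts = np.array([(x, y, z) for x in g for y in g for z in g])
dv = (g[1] - g[0]) ** 3                 # volume element per grid point
n = (np.exp(-np.sum((pts - [1.5, 0, 0]) ** 2, axis=1))
     + np.exp(-np.sum((pts + [1.5, 0, 0]) ** 2, axis=1)))
e_nl = e_c_nl(pts, n, np.full(len(pts), dv))
print("toy E_c^nl =", e_nl)             # negative, i.e. attractive
```

The brute-force double sum above scales quadratically with the number of grid points; as discussed next, the actual implementation avoids this by recasting the integral as convolutions evaluated with FFTs.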
vdW-DF can be implemented in standard plane-wave electronic-structure codes exploiting an efficient Fast Fourier Transform algorithm [@Soler_09]. This algorithm scales like standard DFT, and for large systems compute times are negligibly longer than for GGA calculations. We routinely use our own implementation to study hydrogen-storage materials with 300 to 400 atoms per unit cell [@Bil_11; @our_wiki_40]. In particular, we have successfully used vdW-DF to study a variety of phenomena in MOF materials, achieving remarkable agreement with experiment [@wikipaper43; @wikipaper45; @wikipaper47; @wikipaper48; @wikipaper50; @Piero_PRL; @wikipaper51; @wikipaper53; @wikipaper55; @Tan_2013; @Canepa2013].
Comparison between vdW-DF simulations and experiment
----------------------------------------------------
In MOF adsorption studies, there are ample opportunities for theory and experiment to interact. It is relatively straightforward to compare vdW-DF optimized structures with diffraction experiments [@Walker_2010; @Kleis_2007; @Ziambaras_2007]. Of more interest is the comparison of calculated adsorption energies with measured heats of adsorption [@Lee_2012; @Poloni_2012; @Chakarov_2006; @Kong_2009; @Zhou_2008]. As pointed out above, IR spectroscopy can be a very powerful method to study the loading of MOFs, but caution is necessary. From the theoretical side, while the full calculation of IR spectra is possible [@PhysRevB.83.121402], it is much easier and typically sufficient to calculate the IR peak positions—this has been done in a number of studies and shows very good agreement with experiment [@wikipaper50; @Kong_2009; @wikipaper43]. Comparison with IR experiments has also been made for vdW-DF calculations of small molecule diffusion [@Piero_PRL]. vdW-DF calculations for an exhaustive list of elastic and transport properties of MOFs have also been compared with experiment [@Kleis_2007; @Langreth_2005; @Londero_2010].
Examples of Successful Combined Experimental/Theoretical studies {#sec:cases}
================================================================
Studying adsorption mechanisms of small molecules in MOFs
---------------------------------------------------------
It has been shown that MOFs with unsaturated metal centers, such as MOF74 and HKUST-1, exhibit a fast and specific CO$_2$ adsorption, which is a desirable property for capture applications [@CO2_capt5; @CO2_capt6; @CO2_capt7; @CO2_capt8]. Therefore, understanding their adsorption mechanisms is critical for the rational design of improved MOFs. In this subsection we discuss and analyze the CO$_2$ adsorption in MOF74. We will show that the vdW-DF approach is critical in order to understand and correctly explain the corresponding experimental results. As an example, we review the CO$_2$ adsorption in Zn-MOF74 and Mg-MOF74 and show how the frequencies of the bending and asymmetric stretch modes are modified upon adsorption [@wikipaper43].
![\[CO2\_IR\_shift\_paper43\_fig2\] IR absorption spectra of CO$_2$ absorbed into Zn-MOF74 (top) and in Mg-MOF74 (bottom) at changing CO$_2$ pressure (1$-$6 Torr). (Reprinted with permission from Ref. [@wikipaper43]. © 2012 American Physical Society).](Fig2_a_PRB_85_064302 "fig:"){width="2.7in"}\
![\[CO2\_IR\_shift\_paper43\_fig2\] IR absorption spectra of CO$_2$ absorbed into Zn-MOF74 (top) and in Mg-MOF74 (bottom) at changing CO$_2$ pressure (1$-$6 Torr). (Reprinted with permission from Ref. [@wikipaper43]. © 2012 American Physical Society).](Fig2_b_PRB_85_064302 "fig:"){width="2.7in"}
The experimental IR absorption spectra results in Figure \[CO2\_IR\_shift\_paper43\_fig2\] show that the unperturbed asymmetric stretch mode of CO$_2$ (2349 cm$^{-1}$) undergoes a shift of $-11$ cm$^{-1}$ and $+3$ cm$^{-1}$ upon adsorption on Zn-MOF74 and Mg-MOF74, respectively. But, what causes this shift? To answer this question, *ab initio* calculations were performed utilizing vdW-DF, finding three factors contributing to this shift, i.e. (i) the change in the CO$_2$ molecule length, (ii) the asymmetric distortion of the CO$_2$ molecule, and (iii) the direct influence of the metal center.
In Table \[paper43T2\], we compare the IR spectroscopy data with results from frozen-phonon vdW-DF calculations, where the CO$_2$ molecule was adsorbed at the metal site of MOF74. In particular, the frozen-phonon calculations for the bending mode of CO$_2$ give a change in frequency of approximately $-9$ cm$^{-1}$ during adsorption on either metal, in excellent agreement with the experimental results. Furthermore, the calculations show that the asymmetric stretch mode of the CO$_2$ molecule exhibits a red-shift of $-0.5$ cm$^{-1}$ and $-8.1$ cm$^{-1}$ when adsorbed on Mg-MOF74 and Zn-MOF74, respectively, in reasonable agreement with the change of $+3$ cm$^{-1}$ and $-11$ cm$^{-1}$ measured in experiment.
          System        Bending mode ($\nu_2$)   Asym. stretch mode ($\nu_3$)
  ------- ------------- ------------------------ ------------------------------
  exp.    Free CO$_2$          667                       2349
          Mg-MOF74             658                       2352
          Zn-MOF74             658                       2338
  calc.   Free CO$_2$          646.6                     2288.5
          Mg-MOF74             636.6                     2288.0
          Zn-MOF74             637.6                     2280.4
  : \[paper43T2\]Vibrational frequencies (cm$^{-1}$) of CO$_2$ physisorbed in MOF74. Data taken from Ref. [@wikipaper43].
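The shifts quoted in the surrounding text follow directly from the tabulated calculated frequencies (adsorbed-phase value minus free-CO$_2$ value); a minimal check:

```python
# Frequency shifts (cm^-1) implied by the frozen-phonon values in the table:
# shift = frequency of adsorbed CO2 minus frequency of free CO2.
free = {"nu2": 646.6, "nu3": 2288.5}                 # calculated free CO2
adsorbed = {"Mg-MOF74": {"nu2": 636.6, "nu3": 2288.0},
            "Zn-MOF74": {"nu2": 637.6, "nu3": 2280.4}}

shifts = {mof: {mode: round(v - free[mode], 1) for mode, v in modes.items()}
          for mof, modes in adsorbed.items()}
print(shifts)
# Bending (nu2) shifts of -10.0 (Mg) and -9.0 (Zn) cm^-1, and asymmetric-
# stretch (nu3) shifts of -0.5 (Mg) and -8.1 (Zn) cm^-1, as quoted in the text.
```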
According to vdW-DF calculations [@wikipaper43], the CO$_2$ molecule binds stronger to Mg-MOF74 than to Zn-MOF74, in agreement with experimental findings. Furthermore, the distance between the metal center and the CO$_2$ molecule is smaller in Mg-MOF74 than in Zn-MOF74. Also, the CO$_2$ molecule experiences a larger distortion upon adsorption in Mg-MOF74, see Table 1 in Ref. [@wikipaper43]. Therefore, it is surprising that the frequency shift of the asymmetric stretching mode (see $\nu_3$ in Table \[paper43T2\]) for CO$_2$ in Mg-MOF74 is smaller compared with that in Zn-MOF74, and a deeper investigation of what causes this peculiar result is warranted. As mentioned above, this result can be explained with the help of theory.
We will start with the change in the molecule length: in order to analyze this effect, phonon calculations of the free CO$_2$ molecule were performed, where its length was set to the value when adsorbed in the MOF, keeping the carbon atom centered. Using this approach, frequency shifts of $-1.6$ cm$^{-1}$ and $-3.7$ cm$^{-1}$ were obtained for the case of Mg- and Zn-MOF74, respectively. It is interesting to see that in the case of Mg-MOF74, the molecule experiences a marginal elongation of 0.0003 Å, while in the case of Zn-MOF74 an elongation of 0.0009 Å takes place. That is, the molecule that experiences the larger elongation exhibits the larger red-shift, as one would intuitively expect.
The effect corresponding to the molecule’s asymmetric distortion was studied by placing the CO$_2$ molecule exactly at the same geometry as when adsorbed in the MOF, but removing the surrounding MOF. By doing this, the only contributions to the change in frequency come from the elongation of the CO$_2$ molecule and the asymmetric distortion of the carbon atom. The former has been reported in the paragraph above, so that the latter can easily be calculated. In this way, we find the shift corresponding to the induced asymmetry of the CO$_2$ molecule to be $1.1$ cm$^{-1}$ and $0.7$ cm$^{-1}$ for Mg-MOF74 and Zn-MOF74, respectively.
Finally, the effect of the metal center was studied by placing the free, undistorted CO$_2$ molecule at the metal adsorption site with the same position and angle of the adsorbed system. By doing this, the change in frequency has its highest contribution coming from the oxygen–metal interaction. Using this configuration, the results show a frequency shift of the asymmetric stretching mode of the CO$_2$ molecule of $-5$ cm$^{-1}$ for the Zn-MOF74 system. On the other hand, for the Mg-MOF74 the frequency shift has a negligible value of $-0.6$ cm$^{-1}$. This is a striking result, since Mg and Zn have a very similar valence structure with $3s$ and $4s$ electrons as the outermost valence states. This result shows that the fully occupied semi-core $3d$ electrons in Zn have an important effect on the interaction with the adsorbed CO$_2$ molecules. Similar results are found in Co- and Ni-MOF74 structures. To shed more light on this situation, a charge-density analysis was performed, finding a depletion of electrons around the Zn atom upon adsorption of the CO$_2$ molecule, while this depletion was not present for Mg-MOF74. Thus, the depletion of charge is an effect of the Zn $d$ orbitals, which, in turn, also influences the charge distribution in the adsorbed CO$_2$ molecule. Via this mechanism, the Zn $d$ orbitals indirectly affect the IR frequency shift of the adsorbed CO$_2$ molecule—explaining the differences between Mg-MOF74 and Zn-MOF74.
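If the three contributions identified above are assumed to be approximately additive, they should roughly reproduce the full frozen-phonon shifts of the asymmetric stretch mode ($-0.5$ cm$^{-1}$ for Mg-MOF74 and $-8.1$ cm$^{-1}$ for Zn-MOF74); a quick check with the numbers quoted in the text:

```python
# Contributions (cm^-1) to the asymmetric-stretch shift, in the order:
# (bond elongation, asymmetric distortion, metal center), as quoted above.
contrib = {
    "Mg-MOF74": (-1.6, +1.1, -0.6),
    "Zn-MOF74": (-3.7, +0.7, -5.0),
}
full = {"Mg-MOF74": -0.5, "Zn-MOF74": -8.1}   # full frozen-phonon shifts

for mof, parts in contrib.items():
    total = sum(parts)
    print(f"{mof}: sum of parts = {total:+.1f} cm^-1, "
          f"full calculation = {full[mof]:+.1f} cm^-1")
# Zn adds up almost exactly (-8.0 vs -8.1); the small residual for Mg
# (-1.1 vs -0.5) indicates the decomposition is only approximate.
```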
In summary, this van der Waals study of small molecule adsorption on MOFs is driven by experimental IR data. But, it is clear that the reasons for the observed IR frequency shifts are not necessarily intuitive and can only be explained with the help of detailed first-principles simulations.
Studying the chemical stability of MOFs under humid conditions
--------------------------------------------------------------
The stability of MOFs under humid conditions [@Yang_2011; @Wu_2010; @Greathouse_2006; @Ma_2011; @Low_2009; @Saha_2010] is of great importance for the implementation of these systems in various applications and devices. For example, the MOF5 structure is very sensitive to water and its hydrogen uptake properties become compromised when it is exposed to humidity in ambient air. So, how can we design new MOFs that keep their desired properties while being water resistant? In the case of MOF5, Yang et al. [@Yang_2011] reported the synthesis of methyl- and 2,5-dimethyl-modified versions. By introducing methyl groups into the linkers, the structure becomes less reactive to water and retains the same hydrogen uptake properties as MOF5 for up to 4 days after being exposed to ambient air. While this is a specific case, resting on the particular interaction of water and H$_2$ with the methyl-modified linkers, the lesson generalizes: it is again the interaction of small molecules, in this case water, with the MOF that is the focus of much ongoing research.
![\[wiki\_paper45\_fig2\] Scheme adopted for water insertion in M(bdc)(ted)$_{0.5}$ \[M = Zn, Ni\], where the ted group has been substituted by two water molecules. (Reprinted with permission from Ref. [@wikipaper45]. © 2012 American Chemical Society).](Fig14_Chem_Mat_24_3153){width="5.15in"}
In this subsection, we review efforts to understand the MOF–water interaction, using as an example the prototypical metal organic framework M(bdc)(ted)$_{0.5}$ \[M = Cu, Zn, Ni, Co; bdc = 1,4-benzenedicarboxylate; ted = triethylenediamine\]. This MOF has shown promising properties towards the adsorption of gases such as H$_2$, CO$_2$, and CH$_4$ [@Chen_2010; @Lee_2007; @Dybtsev_2004]. M(bdc)(ted)$_{0.5}$ exhibits thermal stability up to 282$^{\circ}$C, is highly porous, shows exceptionally high H$_2$ adsorption, and can also adsorb large amounts of hydrocarbons. This system was first synthesized and reported by Dybtsev et al. in Ref. [@Dybtsev_2004]; we review here its water stability, as originally studied in Ref. [@wikipaper45]. A characteristic building block of this particular MOF is the incorporated “paddle wheel” building block ted (triethylenediamine), which acts as a linker. In the presence of water, this ted pillar can be extracted from the framework and replaced by water molecules, forming M-MOF2 (we will refer to it as MOF2), as can be seen in Figure \[wiki\_paper45\_fig2\]. Obviously, with its normal linker missing, the M(bdc)(ted)$_{0.5}$ structure loses stability and, in most cases, undergoes an irreversible phase transition.
Figure \[Tan\_2012\_fig8\] shows the powder X-ray diffraction (XRD) patterns of four hydrated M(bdc)(ted)$_{0.5}$ systems \[M = Cu, Zn, Co, and Ni\] after exposure to 9.5 Torr of D$_2$O vapor, together with those of the corresponding activated (pristine) M(bdc)(ted)$_{0.5}$ samples. Concerning the Cu(bdc)(ted)$_{0.5}$ system, the XRD pattern confirms that the system is stable after exposure to D$_2$O gas up to a pressure of 6 Torr, see Figure S10 in the supporting information of Ref. [@wikipaper45]. However, the top left panel of Figure \[Tan\_2012\_fig8\] shows that all the X-ray peaks are shifted to higher angles, except for the \[001\] peak, indicating that the Cu(bdc)(ted)$_{0.5}$ system is partially hydrolyzed by the D$_2$O molecules. Even though the structure is only partially hydrolyzed, the original Cu(bdc)(ted)$_{0.5}$ structure cannot be recovered after evacuation of water at a temperature of 150$^\circ$C. In contrast, the bottom left panel of Figure \[Tan\_2012\_fig8\] clearly indicates that the Zn(bdc)(ted)$_{0.5}$ system transforms into MOF2 after hydration. This transformation starts with the detachment of the ted group and the subsequent bonding of the D$_2$O molecules to the Zn$^{2+}$ apical sites of the paddle-wheel building units through their oxygen atoms. Concerning the Ni(bdc)(ted)$_{0.5}$ and Co(bdc)(ted)$_{0.5}$ systems under humid conditions, the bottom right and top right panels of Figure \[Tan\_2012\_fig8\] indicate that Ni(bdc)(ted)$_{0.5}$ maintains its structure after being exposed to 9.5 Torr of D$_2$O vapor, while Co(bdc)(ted)$_{0.5}$ is completely destroyed after exposure. Furthermore, the Co(bdc)(ted)$_{0.5}$ structure cannot be recovered after annealing in vacuum up to 150$^\circ$C, see Figure S13 in the supplemental material of Ref. [@wikipaper45].
![\[Tan\_2012\_fig8\]Powder X-ray patterns of activated (pristine) and hydrated M(bdc)(ted)$_{0.5}$ \[M = Cu, Zn, Co and Ni\] after exposure to 9.5 Torr of D$_2$O vapor. (Reprinted with permission from Ref. [@wikipaper45]. © 2012 American Chemical Society).](fig8_chemmat_24_3153){width="5.15in"}
In order to explain the previous experimental results and give a clear explanation of how water interacts with the M(bdc)(ted)$_{0.5}$, we review computational results obtained in Ref. [@wikipaper45] concerning the Ni(bdc)(ted)$_{0.5}$ and Zn(bdc)(ted)$_{0.5}$ systems. The energy $\Delta E$ needed to extract the paddle wheel and replace it with water molecules was calculated using the vdW-DF formalism as $$\begin{aligned}
\label{wikipaper45_Eq_1}
\Delta E_{\mathrm{M(bdc)(ted)}_{0.5}} &=& E[\mathrm{M(bdc)(ted)}_{0.5}]+\,E[n\,\mathrm{H}_{2}\mathrm{O}] \nonumber \\
&-& E[\mathrm{MOF2}+n\,\mathrm{H}_{2}\mathrm{O}] -1/2\,E[(\mathrm{ted})]\;,\end{aligned}$$ where $n$ is the number of water molecules in the MOF, $E[\mathrm{M(bdc)(ted)}_{0.5}]$ is the energy of the MOF with no water molecules in it (as seen in the left panel of Figure \[wiki\_paper45\_fig2\]), $E[n\,\mathrm{H}_{2}\mathrm{O}]$ is the energy of $n$ water molecules, $E[\mathrm{MOF2}+n\,\mathrm{H}_{2}\mathrm{O}]$ is the energy of the M(bdc)(ted)$_{0.5}$ where the “exiting” ted has been replaced with $n$ water molecules (right panel of Figure \[wiki\_paper45\_fig2\]), and $E[(\mathrm{ted})]$ is the energy of the ted. Table \[wiki\_paper45T2\] shows the energies required to substitute the ted in the Zn(bdc)(ted)$_{0.5}$ and Ni(bdc)(ted)$_{0.5}$ structures with 2, 4, 6, 8, and 10 water molecules. Note that negative $\Delta E$ values indicate that the replacement is energetically favorable. The table shows that Ni(bdc)(ted)$_{0.5}$ is more resistant to water than Zn(bdc)(ted)$_{0.5}$, as found in the spectra in Figure \[Tan\_2012\_fig8\], and that the hydration of the latter is a spontaneous process. This is due to the strong H bonds between the water molecules, which stabilize the coordination of the Zn metal centers. On the other hand, in the case of Ni(bdc)(ted)$_{0.5}$, $\Delta E_{\mathrm{M(bdc)(ted)}_{0.5}}$ becomes negative only when the number of water molecules is 6 or greater.
Alternatively, one can calculate the energy $\Delta E_{\mbox{\scriptsize M-MOF2}}$ required for hydration of the MOF2 structure with $n$ water molecules, using: $$\begin{aligned}
\label{wikipaper45_Eq_2}
\Delta E_{\mbox{\scriptsize M-MOF2}} &=& E[\mathrm{MOF2}]+\,E[n\,\mathrm{H}_{2}\mathrm{O}] \nonumber \\
&-& E[\mathrm{MOF2}+n\,\mathrm{H}_{2}\mathrm{O}]\;.\end{aligned}$$ Here, $E[\mathrm{MOF2}]$ is the energy of the M(bdc)(ted)$_{0.5}$, where the ted has been replaced by two molecules of water; the other terms in the equation have been previously defined, see Equation \[wikipaper45\_Eq\_1\]. The right hand side of Table \[wiki\_paper45T2\] shows that for MOF2 the hydration of the Zn and Ni systems is a spontaneous process with an energy gain of approximately $-55$ kJ mol$^{-1}$ cell$^{-1}$ for higher loadings. This trend is almost independent of the metal.
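The two definitions above are plain total-energy differences; a minimal sketch of the bookkeeping (function names and all input values are ours, standing in as hypothetical placeholders for vdW-DF total energies):

```python
# Hedged sketch of Eqs. (1) and (2); inputs are hypothetical placeholders
# for computed vdW-DF total energies. Following the text's convention,
# negative values indicate that the substitution/hydration is favorable.

def delta_e_mof(e_mof, e_n_water, e_mof2_n_water, e_ted):
    """Eq. (1): replace half a ted linker by n water molecules."""
    return e_mof + e_n_water - e_mof2_n_water - 0.5 * e_ted

def delta_e_mof2(e_mof2, e_n_water, e_mof2_n_water):
    """Eq. (2): hydrate the MOF2 structure with n additional water molecules."""
    return e_mof2 + e_n_water - e_mof2_n_water

# Illustrative numbers only:
print(delta_e_mof(10.0, 2.0, 13.0, 2.0))   # prints -2.0
```

Feeding in the actual vdW-DF totals for increasing $n$ reproduces the loading dependence collected in Table \[wiki\_paper45T2\].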
In conclusion, the computational results explain the experimental findings in Ref. [@wikipaper45], indicating that the structural stability of the system depends on the amount of water present in the MOF. At lower loadings the system is stable, while at higher loadings the interaction of water with the paddle wheel leads to the irreversible decomposition of the structure.
----------------------- ------- ------- ------------ -- ------- ------- ------------
H$_2$O/cell Zn Ni $\Delta$ Zn Ni $\Delta$
2 43.1 85.5 42.4 — — —
4 –5.3 4.2 9.5 –53.6 –77.1 –23.5
6 –21.4 –17.1 4.3 –53.7 –68.4 –14.7
8 –31.3 –24.0 7.3 –56.1 –52.4 3.7
10 –35.0 –45.2 –10.2 –54.5 –55.3 –0.8
----------------------- ------- ------- ------------ -- ------- ------- ------------
: \[wiki\_paper45T2\]Computed $\Delta E_{\mathrm{M(bdc)(ted)}_{0.5}}$ (left three data columns) and $\Delta E_{\mbox{\scriptsize M-MOF2}}$ (right three data columns) in kJ mol$^{-1}$ cell$^{-1}$ as a function of the number of water molecules per cell. Note that the basic MOF2 structure already contains two water molecules, hence no $\Delta E_{\mbox{\scriptsize M-MOF2}}$ entries for two waters per cell. Data taken from Ref. [@wikipaper45].
Studying the formation of water clusters in fluorinated MOFs
------------------------------------------------------------
The large internal surface area of MOFs makes them ideal for catalysis and fuel cell applications, which have attracted a surge of interest [@Kitagawa_FC; @Shultz_MOFcatal; @Farha_MOFcatal2; @Lee_MOFcatal3; @mof_app; @Xamena_MOFphotoCatal]. While some progress has been made—for example, Hurd et al. [@PEM_MOF] show intriguing results for $\beta$-PCMOF2 (proton conducting metal organic framework 2), capable of proton transport under anhydrous conditions at $150^{\circ}\mathrm{C}$—in general, the low hydrothermal and chemical stability of MOFs prevents their implementation in catalytic and fuel-cell systems. Concerted efforts have thus recently focused on increasing the hydrothermal and chemical stability of MOFs [@Ma_2008; @sun_2005; @Low_2009].
A promising approach to increase the chemical and hydrothermal stability is fluorinated MOFs (FMOFs), where the H atoms have been replaced by F atoms [@Yang_FMOF; @Yang_FMOF2; @Yan_2011]. Yang et al. report interesting results for FMOF1, showing that the hydrogen-desorption isotherm does not follow the path of the adsorption isotherm [@Yang_FMOF]; in fact, it shows an abrupt drop in the adsorption density at 14 bar. The authors highlight the fact that this behavior would allow FMOF1 to adsorb H$_2$ at high pressures and store it at low pressures.
In general, the walls of FMOF systems are hydrophobic, leading to an interesting side effect: the weak interaction of water molecules with the FMOF enhances the creation of water clusters inside its pores. In this subsection, we review the formation and behavior of water clusters inside FMOF1, as reported in Ref. [@wikipaper55]. As in previous sections, an understanding of the weak molecular interactions inside this system was gained by a combination of vdW-DF calculations and IR absorption spectra of water-exposed FMOF1 as a function of pressure. Note that the interaction between water molecules has a significant van der Waals component, which is well captured with vdW-DF [@Kolb2011], while the electrostatic interaction is suppressed by the wall hydrophobicity of FMOF1.
Experimental isotherm measurements of FMOF1 show that the adsorption of water is negligible compared to the water adsorption in other systems [@Yan_2011]. Furthermore, at low water pressures (800 mTorr to 3 Torr), the IR absorption measurements of H$_2$O adsorbed on FMOF1 show two peaks corresponding to red ($-13$ cm$^{-1}$) and blue ($+9$ cm$^{-1}$) shifts of the unperturbed scissor vibration mode (1621 cm$^{-1}$) of the water molecule, as can be seen in Figure \[Figure\_2\_wikipaper55\]. On the other hand, as the pressure is increased to 9 Torr, new peaks associated with scissor vibration modes appear at 1639 cm$^{-1}$ and 1676 cm$^{-1}$, as can be seen in the top panel of Figure \[Figure\_3\_wikipaper55\].
![\[Figure\_2\_wikipaper55\] IR absorption spectra of water exposure in FMOF1 as a function of pressure. Absorption spectra are referenced to dehydrated FMOF1 in vacuum. The top panel shows exposure at 15 Torr, while the bottom part shows exposure at lower pressures (800 mTorr to 3 Torr). (Reprinted with permission from Ref. [@wikipaper55]. © 2013 American Chemical Society).](Fig2_JACS_135_12615){width="3.5in"}
![\[Figure\_3\_wikipaper55\]IR absorption spectra of H$_2$O adsorbed in FMOF1 showing the bending modes of adsorbed water as a function of pressure. Top part shows IR absorption spectrum at 9 Torr. (Reprinted with permission from Ref. [@wikipaper55]. ©2013 American Chemical Society).](Fig3_JACS_135_12615){width="3.5in"}
In order to elucidate the appearance and nature of these peaks, vdW-DF vibration calculations were performed for various water clusters, i.e. the water dimer, trimer, tetramer, pentamer, and ice; the results are shown in Figure \[Figure\_6\_wikipaper55\]. Figure \[Figure\_6\_wikipaper55\]a) shows the calculated modes convoluted by Gaussian functions of 20 cm$^{-1}$ bandwidth, while panel b) shows single frequency values represented by peaks of 1 cm$^{-1}$ width. As expected, the figure shows that the larger the water cluster, the larger the number of scissor modes. It is also important to note that for pressures under 3 Torr, the scissor vibrational modes in Figure \[Figure\_2\_wikipaper55\] span from 1600 cm$^{-1}$ to 1650 cm$^{-1}$. This matches the theoretical frequency windows of both the tetramer and pentamer, as seen in the top panel of Figure \[Figure\_6\_wikipaper55\]. It follows that the water clusters formed inside FMOF1 under low pressures ($<3$ Torr) comprise no more than five water molecules. This conclusion is also supported by the water adsorption energies on the FMOF1, see Table 2 in Ref. [@wikipaper55]. Note that, in principle, up to 61 water molecules can be accommodated inside the pores of FMOF1. On the other hand, the experimentally observed peak located at 1676 cm$^{-1}$ in Figure \[Figure\_3\_wikipaper55\] can be associated with hydrogen-bonded water molecules or water clusters larger than five water molecules—see the orange line in the top panel of Figure \[Figure\_6\_wikipaper55\]. It is important to note that this peak is only visible at high pressures.
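The broadening procedure in panel a) is a plain Gaussian convolution of discrete mode frequencies; a minimal sketch follows (the mode frequencies are hypothetical placeholders, and we take the quoted 20 cm$^{-1}$ bandwidth as the Gaussian width $\sigma$, which is one possible reading of "bandwidth"):

```python
import numpy as np

def broaden(mode_freqs, grid, sigma=20.0):
    """Sum of unit-area Gaussians of width sigma (cm^-1) centered at each mode."""
    centers = np.asarray(mode_freqs)[:, None]                  # (n_modes, 1)
    g = np.exp(-0.5 * ((grid[None, :] - centers) / sigma) ** 2)
    return (g / (sigma * np.sqrt(2.0 * np.pi))).sum(axis=0)

grid = np.linspace(1550.0, 1750.0, 2001)                       # cm^-1
spectrum = broaden([1610.0, 1634.0, 1642.0], grid)             # hypothetical modes
```

With more modes per cluster, the convolved envelope broadens and fills in, which is what makes the 1600–1650 cm$^{-1}$ window diagnostic of the cluster size.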
In summary, while the IR spectroscopy data of water-exposed FMOF1 showed the appearance of new peaks, it was only with the help of vdW-DF calculations that a clear assignment to particular water clusters could be made. Note that this finding is likely to have a tremendous impact on atmospheric sciences, which seek to study the existence and properties of such clusters. In the normal atmosphere, water cluster concentration decays exponentially with the aggregate size, making clusters larger than the trimer often difficult to observe. FMOF1 solves this problem and provides a simple environment to create and confine even larger clusters.
![\[Figure\_6\_wikipaper55\]a) Gaussian convolution (with bandwidth of 20 cm$^{-1}$) of bending mode frequencies for various cluster sizes. b) Single frequency values represented by peaks of 1 cm$^{-1}$ width, as reported by previous vdW-DF calculations on gas-phase water clusters [@Kolb2011]. (Reprinted with permission from Ref. [@wikipaper55]. © 2013 American Chemical Society).](Fig6_JACS_135_12615){width="4in"}
Studying small molecule diffusion in MOFs
-----------------------------------------
MOFs have attracted a lot of attention due to their promising properties concerning the storage of hydrogen and capture of CO$_2$ [@Pera_2013], among others. However, for the effectiveness of all such applications, it is necessary to get guest molecules deep into the bulk of the MOF, or vice versa, have them diffuse out. As such, the diffusivity of the guest molecule through the porous material plays a major role in these processes and is critical for the understanding and rational design of new MOFs. The topic of small molecule diffusion in MOFs has thus been the target of many theoretical studies [@Skoulidas_2005; @Haldoupis_2012; @Yang_2005; @Amirjalayer_2007; @Skoulidas_2004; @Haldoupis_2010; @Liu_2008]. For example, in Ref. [@Haldoupis_2010] Haldoupis et al. identified key elements in the MOF’s pore structure and via molecular dynamics simulations they were able to predict the Henry constant and the activation energy for several guest molecules. In particular, the authors were able to identify several materials with promising properties towards the separation of gases, such as H$_2$, CO$_2$, and CH$_4$. However, in their study, the authors assume that the MOFs are rigid structures, which can be a serious limitation, as we know that some MOFs experience a significant change in their structure upon adsorption of the guest molecules or other external stimuli due to their high flexibility.
In this subsection we review a combined [*in situ*]{} IR/vdW-DF study of small molecule diffusion in Mg-MOF74, as described in [@wikipaper51]. MOF74 was chosen for this study due to its unsaturated metal centers, which make it highly reactive towards the adsorption of small molecules. Furthermore, Mg-MOF74 has shown promising properties towards the adsorption of CO$_2$ compared to other MOFs.
We start by showing results concerning the adsorption energies of H$_2$, CO$_2$, and H$_2$O in the Mg-MOF74 structure, see Table \[PRL\_110\_026102\_T1\]. This table shows that for low to moderate loadings the interaction between adsorbate molecules is negligible, except for H$_2$O adsorption, where the repulsion between the H atoms of the water molecules slightly weakens the H$_2$O binding to the MOF. The adsorption energies of H$_2$ and CO$_2$, obtained using the vdW-DF approach, are in excellent agreement with the experimental values of $-0.11 \pm 0.003$ eV [@Zhou_2008] and $-0.49 \pm 0.010$ eV [@CO2_capt7], respectively. Although not the focus of that particular study, Table \[PRL\_110\_026102\_T1\] also reveals a common problem of many MOFs: the adsorption energy of water (due to its large dipole moment) is typically significantly higher than that of e.g. H$_2$ and CO$_2$. Thus, the presence of even small traces of water is a serious impediment to possible applications and devices, as anticipated in the previous section. Details about this problem are discussed in Ref. [@Canepa2013]. In addition to adsorption energies, calculations of the vibrational spectra show a frequency change after adsorption of $\Delta \nu_{\rm H_2} = -30$ cm$^{-1}$, $\Delta \nu_{\rm CO_2} = -13$ cm$^{-1}$, and $\Delta \nu_{\rm H_2O} = -103$ cm$^{-1}$, in remarkable agreement with the IR spectroscopy measurements of $\Delta \nu_{\rm H_2} = -36$ cm$^{-1}$, $\Delta \nu_{\rm CO_2} = -8$ cm$^{-1}$, and $\Delta \nu_{\rm H_2O} = -99$ cm$^{-1}$. Experimentally, there is also a small difference in frequency change between low and high loading, resulting in red-shifts of $-3$ cm$^{-1}$ and $-15$ cm$^{-1}$ for the asymmetric stretch modes of CO$_2$ and H$_2$O, respectively (see Supplemental Material of Ref. [@wikipaper51]). Computationally, we find $\Delta\nu_{\rm CO_2} = -1$ cm$^{-1}$ and $\Delta\nu_{\rm H_2O} = -18$ cm$^{-1}$, again in excellent agreement with experiment.
Molecule Loading $\Delta E$ $\Delta E_{\rm ZPE}$ $H_{298}$
---------- --------- ------------ ---------------------- -----------
H$_2$ 1 –0.15 –0.15 –0.15
6 –0.16 –0.16 –0.16
CO$_2$ 1 –0.50 –0.49 –0.50
6 –0.50 –0.49 –0.50
H$_2$O 1 –0.79 –0.76 –0.76
6 –0.76 –0.73 –0.73
: \[PRL\_110\_026102\_T1\]Adsorption energies $\Delta E$ of molecules in Mg-MOF74 in eV. Two different loadings are considered, i.e. one molecule per unit cell (low loading) and six molecules per unit cell (high loading). In addition, adsorption energies corrected for the zero-point energy ($\Delta E_{\rm ZPE}$) and for thermal contributions at 298 K ($H_{298}$) are given in eV. Data taken from Ref. [@wikipaper51].
The diffusion of small molecules (H$_2$, CO$_2$, and H$_2$O) through the MOF is a complex process. An appropriate description of such processes typically requires computationally expensive first-principles molecular dynamics simulations. However, here we were able to avoid the use of molecular dynamics by identifying four different diffusion paths that capture the important molecular transport mechanisms responsible for the macroscopic diffusion of H$_2$, CO$_2$, and H$_2$O in the MOF structure. These four paths are: a) The guest molecule, adsorbed on one metal center, travels circularly from one metal center to the next. Note that this mechanism is not responsible for molecular transport into the MOF, but it is nevertheless an important process for redistributing the molecular load. b) The guest molecule, adsorbed on the metal center, diffuses along the $c$-axis to the next metal center. c) The guest molecule travels through the center of the MOF channel, while all the metal centers are occupied by the same type of guest molecule. And, d) the guest molecule, adsorbed on one of the metal centers, travels along the $c$-axis through a barrier made by adsorbed molecules and is adsorbed at the equivalent metal center two unit cells further down. See Figure \[Fig1\_PRL\_110\_026102\] for a graphical representation of these four diffusion paths. For these paths, diffusion barriers were then calculated utilizing vdW-DF combined with the climbing-image nudged elastic band (NEB) approach.
![\[Fig1\_PRL\_110\_026102\]Graphical representation of the diffusion mechanisms considered in this study, shown for the case of CO$_2$. a) and a’) are views directly along the $c$-axis of the hexagonal Mg-MOF74 cell, where one (low loading) and six CO$_2$ (high loading) are adsorbed. b), c), and d) are views perpendicular to the $c$-axis. In panel b) the guest molecule, adsorbed on a metal center, diffuses along the $c$-axis to the next metal center. In panel c) the guest molecule travels along the center of the MOF channel, while all the metal centers are occupied by the same type of guest molecule. In panel d) the guest molecule, adsorbed on one of the metal centers, travels along the $c$-axis through a barrier made of other adsorbed molecules and is adsorbed again at the equivalent metal center two unit cells further down. Dashed lines indicate the diffusion paths. (Reprinted with permission from Ref. [@wikipaper51]. © 2012 American Physical Society).](Fig1_PRL_110_026102){width="2.25in"}
The energy barriers of the four diffusion paths are plotted in Figure \[Fig2\_PRL\_110\_026102\]. Note that diffusion barriers corrected for the zero-point energy were also calculated, but are not reproduced here. From the figure it can be seen that water has the highest energy barrier for diffusion. Again, the presence of water inside the MOF is a serious issue, as the barrier for it to diffuse out is rather large. As expected, the energy barriers are comparable to the adsorption energies. In panel a), it can be seen that a local minimum is located at 58$\%$ of the path for CO$_2$ diffusion. This local minimum has its origin in the presence of a secondary adsorption site in the MOF. Due to its low depth (5 meV), the secondary adsorption site is only occupied at high loadings. This secondary adsorption site for the CO$_2$ molecule was first reported by Queen et al. in Ref. [@Queen_2011], where the authors conducted neutron powder diffraction experiments on Mg-MOF74 as a function of the CO$_2$ loading. Paths b), c), and d) aim to simulate the diffusion of the guest molecule into the MOF. Note that the diffusion barriers in path c) are ten times lower than the ones obtained in paths a), b), and d). This indicates that the interaction between the guest molecules in the middle of the channel and the ones adsorbed at the metal sites is small. Furthermore, it is important to highlight that the diffusion energy barrier of CO$_2$ in Figure \[Fig2\_PRL\_110\_026102\]c), i.e. 0.04 eV, becomes 0.03 eV when corrected for the zero-point energy. This value is in excellent agreement with the 0.03 eV energy barrier measured experimentally by Bao et al. in Ref. [@Bao_2011].
![\[Fig2\_PRL\_110\_026102\]Diffusion profiles (in eV) for the diffusion processes of H$_2$, CO$_2$, and H$_2$O in Mg-MOF74 according to the mechanisms in Figure \[Fig1\_PRL\_110\_026102\]. (Reprinted with permission from Ref. [@wikipaper51]. © 2012 American Physical Society).](Fig2_PRL_110_026102){width="3.5in"}
In addition to vdW-DF calculations, [*in situ*]{} time-resolved IR spectroscopy measurements of the diffusion of CO$_2$ and H$_2$O in Mg-MOF74 were performed. When the experiments were first performed, the results were difficult to understand. In the case of CO$_2$, we first observed a red-shift in the vibration frequency (asymmetric mode) of the guest molecule. As time passes, the IR measurements show a second, opposite shift (a blue-shift) in the vibration frequencies of the guest molecules, leading back to the original IR spectrum. Analogous behavior is observed in the corresponding experiment with H$_2$O. With the help of theory, we were able to construct a model that explains these effects. At first, molecules entering the MOF mostly adsorb in the pores close to the surface and “clog” them. This causes the first experimentally observed red-shift. Those pores become highly loaded, which we were able to deduce from the calculated difference in frequency shift between the low- and high-loading situations. Then, after some time, molecules start to diffuse deeper into the MOF via mechanism c), diffusing from pores with a high concentration of guest molecules to pores with a lower concentration. This results in the second observed shift, i.e. the blue shift back to the original spectrum.
To test our model, we compare the experimental timescales for the CO$_2$ and H$_2$O cases. The experiments show that it takes approximately two hours for the H$_2$O molecules and 22 minutes for the CO$_2$ molecules to go from the high-loading regime to the low-loading regime. The ratio between these two times is 5.45. On the other hand, having calculated the corresponding diffusion barriers (and calculating the pre-exponential factor in the Arrhenius equation with the help of transition-state theory), we can compute the same ratio and find, purely based on our *ab initio* calculations, a value of 5.43, validating our theoretical accuracy and transport model.
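A sketch of the arithmetic behind this comparison (the two experimental times are from the text; any barrier or attempt frequency fed into `diffusion_time` would be an assumed input, since the H$_2$O barrier and the prefactors are not quoted here):

```python
import math

# Experimental escape times quoted in the text.
t_h2o_min = 120.0                     # ~ two hours for H2O
t_co2_min = 22.0                      # 22 minutes for CO2
ratio_exp = t_h2o_min / t_co2_min     # ≈ 5.45

# Transition-state-theory-style estimate: t ∝ (1/nu) * exp(E_a / k_B T).
# Barrier and attempt-frequency arguments are hypothetical placeholders.
def diffusion_time(barrier_ev, attempt_hz, temperature_k=298.0):
    k_b = 8.617333e-5                 # Boltzmann constant in eV/K
    return math.exp(barrier_ev / (k_b * temperature_k)) / attempt_hz
```

Taking the ratio of two such `diffusion_time` values cancels much of the prefactor uncertainty, which is why the *ab initio* ratio (5.43) can be compared so directly with the experimental one (5.45).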
In summary, as in the previous subsections, only the combination of experiment and theory was able to present a complete picture of small molecule diffusion in MOF74. The theoretical atomistic model for the molecular transport explains experimental IR macroscopic evidence. The simulations also clarify the two-state mechanism, observed experimentally, which controls the macroscopic diffusion of these molecules in MOF74.
Summary and Outlook {#sec:summary}
===================
In this work, we have shown several examples of how the synergy of IR and Raman spectroscopy techniques, together with *ab initio* calculations at the DFT level utilizing vdW-DF, allows us to give a complete description of the van der Waals binding and interaction between guest molecules and the MOF. While originally many studies of MOFs focused on the adsorption of H$_2$ and CO$_2$, at the moment we see a vast expansion of this field, including many other molecules of interest, such as SO$_2$ and NO$_2$ [@Tan_2013; @Ebrahim_2013; @Yu_2012; @Yu_2013; @Ding_2012]. Interesting effects are also being studied, such as the pressure-dependent gate opening of MOFs [@wikipaper47; @Coudert_2009; @Canan_2010; @Pera_2012] and the response of MOFs to a variety of external stimuli. Due to the versatile building-block nature of MOFs, an almost innumerable variety of MOFs might exist, but more fundamental research is necessary to understand their properties and tailor them according to our needs. Nonetheless, at this point we have probably only seen a glimpse of their applicability in future applications and devices.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was entirely supported by the Department of Energy Grant No. DE-FG02-08ER46491.
References {#references .unnumbered}
==========
---
abstract: 'Neutrino fast pairwise conversions have been postulated to occur in the dense core of a core-collapse supernova (SN), possibly having dramatic consequences on the SN mechanism and the observable neutrino signal. One crucial condition favoring pairwise conversions is the presence of crossings between the electron neutrino and antineutrino angular distributions (i.e., electron neutrino lepton number crossings, ELN crossings). A stationary and spherically symmetric SN toy-model is constructed to reproduce the development of the neutrino angular distributions in the dense SN core in the absence of perturbations induced by hydrodynamical instabilities. By iteratively solving the neutrino Boltzmann equations including the collisional term, our model predicts that ELN crossings can develop only in the proximity of the decoupling region and for a shallow radial evolution of the baryon density, when the electron neutrino and antineutrino number densities are comparable. Such conditions are likely to occur only in the late SN stages. Interestingly, flavor instabilities induced by spatial or temporal perturbations are unlikely to generate ELN crossings dynamically.'
author:
- Shashank Shalgar and Irene Tamborra
bibliography:
- 'crossings.bib'
title: Criteria for the occurrence of Crossings Between the Angular Distributions of Electron Neutrinos and Antineutrinos in the Supernova Core
---
Introduction
============
Core-collapse supernovae (SNe) are among the most intense sources of neutrinos [@Mirizzi:2015eza; @Janka:2012wk]. According to our current understanding, neutrinos are emitted as matter accretes onto the proto-neutron star; they transport energy and finally revive the stalled shock wave, powering the SN explosion [@Janka:2017vcp]. However, a detailed picture of the role of neutrinos remains unclear.
Understanding the impact of neutrino flavor conversions on the inner workings of the SN is one of the crucial unsolved problems. In fact, neutrino flavor conversions have traditionally been neglected in SN transport, since they were expected to occur at radii larger than the shock radius and were assumed to have a negligible impact on the shock revival, see e.g. [@Dasgupta:2011jf].
Recent work suggests that the large density of neutrinos in the proximity of the neutrino decoupling region may favor the development of pairwise conversions [@Sawyer:2015dsa; @Sawyer:2008zs; @Izaguirre:2016gsx]. Contrary to our intuition, neutrino pairwise conversions could occur even in the absence of a hierarchy among the neutrino mass eigenstates and are independent of the neutrino energy; however, fast pairwise conversions crucially depend on the exact shape of the neutrino angular distributions and on the local neutrino density. The scale ruling this phenomenon is determined by $(G_F |n_{\nu_e}-n_{\bar{\nu}_e}|)^{-1} \simeq \mathcal{O}(10)$ cm, with $G_F$ being the Fermi constant and $n_{\nu_e(\bar{\nu}_e)}$ the local (anti)neutrino number density. As a direct consequence of the “fast” interaction rate, neutrino pairwise conversions could possibly lead to flavor decoherence [@Abbar:2018beu; @Dasgupta:2016dbv; @Capozzi:2018clo; @Richers:2019grc], affecting the neutrino heating within the gain radius as well as the SN nucleosynthesis, and the neutrino signal observable at Earth.
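A back-of-the-envelope check of the quoted length scale, in natural units with $\hbar c$ as the conversion factor (the lepton-number density below is an assumed SN-core value, not a number from this paper):

```python
# Order-of-magnitude check of the fast-conversion scale (G_F |n_nue - n_nuebar|)^-1.
G_F   = 1.1663787e-23   # Fermi constant in eV^-2
HBARC = 1.9732697e-5    # hbar*c in eV*cm

delta_n_cm3 = 1.0e32                    # assumed |n_nue - n_nuebar| in cm^-3
delta_n_ev3 = delta_n_cm3 * HBARC**3    # same density expressed in eV^3
scale_cm = HBARC / (G_F * delta_n_ev3)  # (G_F * delta_n)^-1 converted to cm

print(f"{scale_cm:.1f} cm")             # prints "2.2 cm" for this assumed density
```

For lepton-number densities in the $10^{31}$–$10^{32}$ cm$^{-3}$ range this lands in the quoted $\mathcal{O}(1$–$10)$ cm window, many orders of magnitude shorter than the scale of slow, vacuum-driven conversions.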
The criteria leading to the development of fast pairwise conversions were investigated in Refs. [@Izaguirre:2016gsx; @Capozzi:2018rzl; @Capozzi:2017gqd]. A non-negligible flux of neutrinos streaming backwards towards the proto-neutron star core and the presence of a crossing between the angular distributions of electron neutrinos and antineutrinos (hereafter dubbed electron neutrino lepton number crossings, ELN crossings) have been pinpointed as crucial ingredients to trigger fast instabilities in the neutrino flavor space.
While ELN crossings are bound to occur in the case of compact binary mergers because of the particular merger geometry and the natural overall excess of electron antineutrinos with respect to electron neutrinos [@Wu:2017qpc; @Wu:2017drk], the situation seems to be more complex in the SN context. In fact, self-consistent spherically symmetric SN simulations in 1D did not find any evidence for ELN crossings over a range of progenitor masses [@Tamborra:2017ubu]. On the other hand, multi-D hydrodynamical simulations have reported mixed results [@Abbar:2018shq; @Azari:2019jvr] depending on the different degree of approximation adopted in the implementation of the neutrino transport. In particular, the occurrence of LESA, the lepton-emission self-sustained asymmetry [@Tamborra:2014aua; @Tamborra:2014hga] has been recently confirmed by a larger set of independent simulations [@Janka:2016fox; @Walk:2019ier; @Vartanyan:2018iah; @OConnor:2018tuw; @Glas:2018vcs]. LESA has been also invoked as a possible ingredient favoring the development of the ELN crossings, see e.g. [@Izaguirre:2016gsx; @Dasgupta:2016dbv].
The recent developments described above hint at the need for a full implementation of the neutrino quantum kinetic equations, including collisions and flavor conversions [@Volpe:2013jgr; @Stirner:2018ojk; @Sigl:1992fn; @Vlasenko:2013fja; @Blaschke:2016xxt; @Cirigliano:2017hmk]. However, given the major numerical complications involved in the problem, this has not yet been tackled numerically in a self-consistent manner; see [@Richers:2019grc] for a recent simplified attempt in this direction.
Hydrodynamical simulations of the core collapse do not include the physics of neutrino flavor conversions. Even so, simulations tracking the evolution of the neutrino angular distributions, see e.g. Refs. [@Tamborra:2017ubu; @Sumiyoshi:2014qua; @Nagakura:2017mnp], are extremely expensive in terms of computational resources. As a consequence, it is challenging to explore the conditions leading to the development of ELN crossings and to track the behavior of the neutrino angular distributions in the later stages of the core collapse.
In this work, by assuming spherical symmetry and that a stationary solution is reached, we develop a simplified model to mimic the SN core microphysics and iteratively solve the neutrino Boltzmann equations with the inclusion of the collisional term. Our goal is to estimate the neutrino angular distributions of $\nu_e$ and $\bar{\nu}_e$ in the neutrino trapping region and follow their evolution until neutrinos are fully decoupled in the different SN phases. By neglecting complications coming from the SN hydrodynamics, we aim at identifying the microphysical ingredients leading to the development of ELN crossings and discuss their implications for the physics of flavor conversions.
The outline of the paper is as follows. In Sec. \[sec:collisions\], we present our stationary and spherically symmetric SN model and the iterative method employed to solve the neutrino Boltzmann equations by taking into account the collision term. In Sec. \[sec:crossings\], we investigate the occurrence of ELN crossings by introducing a simple evaluation criterion; we first test the latter for a simplified scenario and then discuss its implications through a more realistic framework based on inputs from a 1D hydrodynamical SN model. Section \[sec:oscillations\] focuses on exploring whether ELN crossings can be generated dynamically by the occurrence of neutrino flavor conversions because of spatial or temporal perturbations. A discussion on our findings and conclusions are reported in Sec. \[sec:conclusions\].
Neutrino Equations of Motion in the dense proto-neutron star {#sec:collisions}
============================================================
In this Section, we first introduce the neutrino kinetic equations and then describe the method employed to solve them in spherical symmetry and derive the neutrino angular distributions.
Neutrino Kinetic Equations
--------------------------
In the interior of a SN, the neutrino number density as well as the baryon density are very large, and neither neutrino-neutrino self-interactions (coherent forward scattering) nor neutrino-matter scatterings (incoherent scattering) can be ignored. The coherent and incoherent parts of the neutrino evolution can have a significant feedback on each other. In fact, the neutrino-matter cross-section is flavor dependent, while the non-linear neutrino flavor evolution depends on the spatial distribution of neutrinos, which is in turn determined by the neutrino-matter scattering cross-section. However, the interplay between the non-linearity arising from the neutrino self-interaction and scatterings with the nuclear matter remains poorly understood.
Throughout this paper, for the sake of simplicity, we assume that ($\nu_e, \nu_x$) describes the neutrino flavor basis with $\nu_x$ being a mixture between $\nu_\mu$ and $\nu_\tau$ and similarly for antineutrinos. The equation of motion describing the neutrino flavor evolution, including coherent and incoherent components, can be conveniently expressed in terms of a $2\times 2$ density matrix, $\rho$ (and $\bar{\rho}$ for antineutrinos).
The neutrino kinetic equation, for each neutrino momentum mode $\vec{p}$, is given by $$\begin{aligned}
\label{geneom1}
& & i\frac{\partial \rho(\vec{x},\vec{p},t)}{\partial t} + \vec{v}\cdot \nabla \rho(\vec{x},\vec{p},t)
+ \vec{F} \cdot \nabla_{p} \rho(\vec{x},\vec{p},t) \nonumber\\
&=& [H,\rho(\vec{x},\vec{p},t)] + i \mathcal{C}[\rho(\vec{x},\vec{p},t)]\ ,\end{aligned}$$ where $\vec{x}$ and $t$ describe the spatial and temporal coordinates, and $\vec{v}$ is the neutrino velocity. The Hamiltonian $H$ contains three components, a term embedding the neutrino mixing in vacuum, a term describing the interactions of neutrinos with the SN matter background, and a term describing the neutrino-neutrino interactions, see e.g. [@Mirizzi:2015eza]. Since we are interested in exploring the development of the ELN crossings, we will neglect neutrino flavor conversions in the following, unless otherwise stated.
The term $\mathcal{C}[\rho]$ describes the scattering of neutrinos with the medium, while $\vec{F} \cdot \nabla_{p} \rho$ tracks the effect of external forces on the neutrino field; we will assume that the latter gives a negligible contribution in the following. Hereafter we will also use $\hbar = c = 1$.
The density matrix is normalized such that the terms on the diagonal give the total number density of neutrinos for fixed $(\vec{x},t)$. Notably, deep in the SN core the electron (anti-)neutrinos are highly degenerate and are in thermal equilibrium with the SN medium. Therefore, the (anti-)neutrino number densities are well described by a Fermi-Dirac distribution: $$\label{eq:F-D}
\frac{dn_{\nu}}{dE} = \frac{1}{2 \pi^2} \frac{E^2}{e^{(E-\mu_{\nu})/T} + 1}\ ,$$ where $\mu_{\nu}$ is the neutrino chemical potential, such that $\mu_{\nu_e} = - \mu_{\bar{\nu}_e}$ and $\mu_{\nu_x} = \mu_{\bar{\nu}_x} = 0$, and $T$ is the temperature of the SN matter. The off-diagonal terms of the density matrix, instead, are related to the probability of having a flavor transition.
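As a cross-check of the thermodynamic inputs, the number density implied by Eq. \[eq:F-D\] can be obtained by a simple numerical quadrature. The sketch below (Python; function name and grid choices are ours, not from the text) integrates the distribution and illustrates the $\mu_{\nu_e} = -\mu_{\bar{\nu}_e}$ asymmetry between $\nu_e$ and $\bar{\nu}_e$:

```python
import numpy as np

def fermi_dirac_density(T, mu, n_grid=4000):
    """Numerically integrate dn/dE = E^2 / (2 pi^2 (exp((E-mu)/T) + 1))
    over E (natural units), using the trapezoidal rule on a uniform grid."""
    E = np.linspace(1e-6, 40.0 * T + 10.0 * abs(mu), n_grid)
    dn_dE = E**2 / (np.exp((E - mu) / T) + 1.0) / (2.0 * np.pi**2)
    return float(np.sum(0.5 * (dn_dE[1:] + dn_dE[:-1]) * np.diff(E)))

# mu_{nu_e} = -mu_{nubar_e}: a degenerate nu_e gas outnumbers nubar_e
# at the same temperature.
T = 10.0  # MeV, illustrative value
n_nue = fermi_dirac_density(T, 10.0)
n_nuebar = fermi_dirac_density(T, -10.0)
```

For $\mu_{\nu} = 0$ the integral reduces to the known value $3\zeta(3)T^3/(4\pi^2)$, which provides a convenient check of the quadrature.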
Solving Eq. \[geneom1\] may not appear to be a daunting task at first glance. However, it should be noted that the collision term $\mathcal{C}[\rho]$ (and $\bar{\mathcal{C}}[\bar\rho]$) entails a six-dimensional integral for each $(\vec{x}, \vec{p}, t)$. The partial differential equation cannot be solved using standard discretization techniques in a 7D space due to the limitations of available computational power. Imposing symmetries on the system is not straightforward either, due to the spontaneous breaking of symmetries in momentum and real space [@Mirizzi:2015eza].
The left-hand side of Eq. \[geneom1\] contains the convective term, which reduces to \[$\cos\theta {\partial \rho(\vec{x},\vec{p},t)}/{\partial r}$\] under the assumption of spherical symmetry and stationarity. It should be noted that, due to the non-linearity of the neutrino-neutrino term, the convective term may lead to spontaneous breaking of spherical symmetry [@Raffelt:2013rqa; @Duan:2014gfa]. For the sake of simplicity, in the following, we will assume that spherical symmetry is preserved and neglect such symmetry-breaking effects.
The exact shape of the neutrino angular distributions in the SN core depends on the interactions described by the collision term. The collision term, $\mathcal{C}[\rho]$, contains two components: the loss and gain terms. The loss term accounts for the reduction of the neutrino number flux for a given direction and momentum due to scattering to another direction and momentum; the gain term, instead, accounts for scattering of neutrinos into a given direction and momentum state.
It should be noted that the equations of motion discussed in this Section do not fall in the category of initial value problem. In the following Section, we outline the technique we use to tackle this problem numerically.
Stationary and Spherically Symmetric Supernova Model {#sec:SNmodel}
----------------------------------------------------
Collisions of neutrinos with matter determine the spatial distribution of neutrinos in the SN interior. However, there is a feedback between the flavor-dependent neutrino cross-section and the neutrino flavor evolution. As we will see in the following, the radial profile of the baryon density strongly affects the development of ELN crossings because of the neutrino-matter interactions. To first order, we can ignore the flavor evolution of the neutrinos and focus on scatterings to investigate the conditions under which ELN crossings develop. We make the following assumptions:
1. [The number density of neutrinos is locally conserved.]{}
2. [Each scattering is locally isotropic.]{}
3. [Energy-averaged, flavor-dependent neutrino distributions are adopted.]{}
4. [Only two neutrino flavor eigenstates are considered.]{}
![Schematic representation of the stationary and spherically symmetric SN model. The boundary conditions are set at the innermost radius ${r_{\textrm{min}}}$ and at the outermost radius ${r_{\textrm{max}}}$. Each point $P$ is characterized by a global set of coordinates ($r$, $\theta_0$) with $\theta_0$ defined with respect to the outermost spherical surface. For each point $P$, we also introduce a local system of coordinates through the angle $\theta$. The angles $\theta$ and $\theta_0$ are related through Eq. \[eq:angles\].[]{data-label="cartoon"}](plots/Fig1.pdf){width="49.00000%"}
As discussed in the previous Section, Eq. \[geneom1\] requires the definition of boundary conditions rather than initial conditions. In order to obtain a stationary solution in the presence of collisions, we adopt an iterative method. Note that we assume that our model does not depend on $t$ or on the azimuthal angle $\phi$.
Our spherically symmetric model is sketched in Fig. \[cartoon\]. We assume that neutrinos are radiated by an inner surface of radius ${r_{\textrm{min}}}$ and propagate until they reach an outermost surface of radius ${r_{\textrm{max}}}$. Each point $P$ in the SN sphere is characterized by the radius $r$ and the angle $\theta_0$, where $\theta_{0}$ has been defined with respect to the outermost surface. For each $P$, we further introduce a set of coordinates $\theta$ to characterize the local angular distribution of neutrinos. Moreover, given that the energy-dependent quantities entering the neutrino equation of motion are averaged over the neutrino energy distribution, we will effectively solve Boltzmann equations depending on the neutrino scattering angle, but not on their momentum.
In the first step of our iteration method, we ignore the backward flux of neutrinos (i.e., assume a null neutrino flux for $\cos\theta<0$) and evolve the following equation of motion $$\begin{aligned}
\cos\theta_i \frac{d\rho(r)^{\uparrow}_{i}}{dr} &=& \sum_{j}\left(-{\mathcal{C}^{\textrm{loss}}}\rho(r)^{\uparrow}_{i} + {\mathcal{C}^{\textrm{gain}}}\rho^{\uparrow}_{j}(r)\right)\Delta \cos\theta_{j}
\label{iter1}\end{aligned}$$ where the subscripts ($i$, $j$) denote the indices of the angular bins. $\Delta\cos\theta_{j}$ is the width of the $j^{\textrm{th}}$ angular bin, which depends on the radius and has to be calculated at each radial step, as we will see in the following.
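To make the discretization concrete, a minimal sketch of one explicit Euler step of Eq. \[iter1\] with ${\mathcal{C}^{\textrm{loss}}} = {\mathcal{C}^{\textrm{gain}}} = C$ is given below (Python; a single species with a constant, angle-independent $C$ is our simplification, not the production scheme). It also illustrates why equating the loss and gain coefficients conserves the discretized radial number flux $\sum_i \rho_i \cos\theta_i \, \Delta\cos\theta_i$ exactly at each step:

```python
import numpy as np

def euler_step(rho, cos_t, d_cos, C, dr):
    """One explicit Euler step of Eq. (iter1) with C_loss = C_gain = C.
    rho[i]: occupation of angular bin i; cos_t[i]: bin-center cos(theta);
    d_cos[i]: bin width Delta cos(theta)_i."""
    gain = C * np.sum(rho * d_cos)      # scalar: sum_j C rho_j Dcos_j
    loss = C * rho * np.sum(d_cos)      # per bin: sum_j C rho_i Dcos_j
    return rho + dr * (gain - loss) / cos_t

rng = np.random.default_rng(1)
n = 50
cos_t = np.linspace(0.05, 1.0, n)       # forward hemisphere only
d_cos = np.full(n, cos_t[1] - cos_t[0])
rho = rng.uniform(0.5, 1.5, n)

flux0 = np.sum(rho * cos_t * d_cos)     # discretized radial flux
rho = euler_step(rho, cos_t, d_cos, C=2.0, dr=0.01)
flux1 = np.sum(rho * cos_t * d_cos)     # unchanged up to rounding
```

Because gain and loss enter with the same coefficient, the step redistributes neutrinos isotropically in angle without creating or destroying them, in line with assumptions 1 and 2 of the model.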
Equation \[iter1\] is solved from ${r_{\textrm{min}}}$, arbitrarily set to 5 km, up to ${r_{\textrm{max}}}$, fixed at 60 km. Note that the choice of ${r_{\textrm{min}}}$ guarantees that the collisional rate is large enough to re-distribute the neutrinos in $\theta$ homogeneously, while ${r_{\textrm{max}}}$ has been fixed outside of the neutrino trapping region and is large enough to not affect the neutrino angular distributions at decoupling. During the first iteration, the resulting flux of $(e,x)$ neutrinos and antineutrinos in the forward direction is stored at regular radial intervals.
In order to locally conserve the neutrino number densities, we set the loss coefficient, ${\mathcal{C}^{\textrm{loss}}}$, equal to the gain coefficient, ${\mathcal{C}^{\textrm{gain}}}$. The loss and gain coefficients are proportional to the product of the number density of scatterers and the Pauli-blocked cross-section for each interaction channel, i.e. ${\mathcal{C}^{\textrm{loss}}}, {\mathcal{C}^{\textrm{gain}}}\propto n_{\mathrm{targets}} \langle F \sigma \rangle$, with $F$ being the Pauli blocking factor, $\sigma$ the cross section, and the average taken over the neutrino energy distribution.
The cross-sections entering ${\mathcal{C}^{\textrm{loss}}}$ and ${\mathcal{C}^{\textrm{gain}}}$ have been calculated for the following reactions: $$\begin{aligned}
\nonumber
n + \nu &\leftrightarrow& n + \nu\ ,\\ \nonumber
p + \nu &\leftrightarrow& p + \nu\ ,\\ \nonumber
\nu(\bar{\nu}) + e^\pm &\leftrightarrow& \nu(\bar{\nu}) + e^\pm\ ,\\ \nonumber
n + e^+ &\leftrightarrow& p + \bar{\nu}_e\ ,\\ \nonumber
p + e^- &\leftrightarrow& n + \nu_e\ .\\ \nonumber\end{aligned}$$ Each cross-section has been implemented as prescribed in [@bowers:1982]. We also employed flavor-dependent neutrino distributions defined along the lines of Eq. \[eq:F-D\], and the respective mass-dependent Fermi-Dirac distributions have been adopted for the nucleons. In particular, the local density of nucleons has been defined through the baryon density $\rho_B$ and the electron fraction $Y_e$ \[$n_n \simeq \rho_B (1-Y_e)$ for neutrons and $n_p \simeq \rho_B Y_e$ for protons\]. For each reaction, the Pauli blocking factor $F$ has been computed following Refs. [@Raffelt:1996wa; @Bruenn:1985en] and included in the collision term. Note that, although neutrinos and antineutrinos undergo the same kind of neutral current interactions, a small difference appears in their cross sections. As we will discuss afterwards, this will also factor into the eventual development of ELN crossings.
The first term on the right hand side of Eq. \[iter1\] is the loss term, which depends on the number of neutrinos in the $i^{\textrm{th}}$ bin and on the phase space. Although it could be evaluated analytically, we perform the integration numerically to ensure that the discretization error of the gain term is canceled by that of the loss term. The $\cos\theta$-term on the left hand side of Eq. \[iter1\] takes into account the dependence of the path length on the zenith angle.
We bin the neutrino trajectories with respect to the global angle defined at ${r_{\textrm{max}}}$ denoted by $\theta_{0}$. In particular, in the numerical runs, the angular grid has been chosen to be uniform in $\cos^{2}\theta_{0}$. For all the results presented in this paper we used 200 angular bins in the global coordinate system. As the ratio ${r_{\textrm{max}}}/{r_{\textrm{min}}}$ increases, finer angular resolution is required to ensure there is a sufficient number of angular bins that intersect the ${r_{\textrm{min}}}$ sphere.
The angle in the global coordinate system $\theta_{0}$ is related to the local angle $\theta$ by the following relation, $$\begin{aligned}
\label{eq:angles}
\cos\theta = \sqrt{1-\left(\frac{{r_{\textrm{max}}}}{r}\right)^{2}\left(1-\cos^{2}\theta_{0}\right)}\ .\end{aligned}$$ For fixed $P$, if $\mathrm{Im}(\cos\theta) \neq 0$, then the neutrino trajectory is discarded.
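The mapping of Eq. \[eq:angles\] and the discarding of non-intersecting trajectories can be sketched as follows (Python; the NaN convention for discarded trajectories is our own bookkeeping choice):

```python
import numpy as np

def local_cos_theta(r, cos_theta0, r_max=60.0):
    """Map the global angle theta_0 (defined at r_max) to the local
    angle theta at radius r via Eq. (eq:angles). Trajectories that do
    not reach radius r (the Im(cos theta) != 0 case) are flagged as NaN."""
    arg = 1.0 - (r_max / r) ** 2 * (1.0 - cos_theta0 ** 2)
    return np.where(arg >= 0.0, np.sqrt(np.clip(arg, 0.0, None)), np.nan)
```

At $r = {r_{\textrm{max}}}$ the two angles coincide; moving inward, $\cos\theta$ decreases until the trajectory's turning point, below which the trajectory is discarded.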
After the completion of this first step of our iteration method, we evolve the equations of motion for the backward flux from ${r_{\textrm{max}}}$ to ${r_{\textrm{min}}}$ $$\begin{aligned}
\cos\theta_i\frac{d\rho(r)^{\downarrow}_{i}}{dr} &=& \sum_{j,j^{\prime}}\left(-{\mathcal{C}^{\textrm{loss}}}\rho(r)^{\downarrow}_{i} + {\mathcal{C}^{\textrm{gain}}}\rho^{\downarrow}_{j}(r)\right.
\nonumber\\
&+& \left. {\mathcal{C}^{\textrm{gain}}}\rho^{\textrm{pr}\uparrow}_{j^{\prime}}(r)\right) \Delta\cos\theta_{j}\ .
\label{iter1back}\end{aligned}$$ Here, the superscript ‘pr’ is used to denote that the values for the components of the density matrix are interpolated using the solutions of Eq. \[iter1\]. The width of the angular bins is the same for forward and backward going neutrinos ($\Delta\cos\theta_{j} = \Delta\cos\theta_{j^{\prime}}$). Notably, during this iteration, the gain term in the equation of motion receives a contribution from the forward flux stored during the previous iteration.
Upon completion of the first iteration (forward and backward), we obtain the initial conditions of the equations of motion for the next iteration round. Since ${r_{\textrm{min}}}$ is chosen to be in a region of extremely large matter density, the angular distribution is essentially uniform and we can set the forward flux at ${r_{\textrm{min}}}$ equal to the backward flux for all angular bins. After each backward iteration, we rescale the normalization of the backward flux by an amount that is proportional to the flux at ${r_{\textrm{min}}}$ to compensate for the loss of neutrinos by diffusion at $r_{\textrm{max}}$, and achieve a steady state solution. The relative normalization between $\nu_{e}$ and $\bar{\nu}_{e}$ is thus determined by the dynamics of collisions in our model. This is a crucial aspect, as the occurrence of ELN crossings (or lack thereof) is determined in large part by the relative normalization of the $\nu_{e}$ and $\bar{\nu}_{e}$ angular distributions.
Using the initial conditions obtained by the former backward iteration, the equations of motion are then evolved in the forward direction, while using the interpolated values from the solution of Eq. \[iter1back\]: $$\begin{aligned}
\cos\theta_i\frac{d\rho(r)^{\uparrow}_{i}}{dr} &=& \left(-{\mathcal{C}^{\textrm{loss}}}\rho(r)^{\uparrow}_{i} + \sum_{j}{\mathcal{C}^{\textrm{gain}}}\rho^{\uparrow}_{j}(r)\right. \nonumber\\
&+& \left.\sum_{j^{\prime}}{\mathcal{C}^{\textrm{gain}}}\rho^{\textrm{pr}\downarrow}_{j^{\prime}}(r)\right) \Delta\cos\theta_{j}\ .
\label{iter2}\end{aligned}$$
We repeatedly solve Eqs. \[iter1back\] and \[iter2\] for neutrinos and antineutrinos using interpolated values for the components of the density matrix that have the superscript ‘pr’. We find that around 15 iterations are sufficient to achieve numerical convergence of the results. Notably, this procedure guarantees that the steady state solution is independent of the initial conditions used in the first iteration and is determined by self-consistency alone.
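Schematically, the iteration is a fixed-point search; a stripped-down skeleton (Python; the toy `sweep` below is a stand-in contraction, not the actual Boltzmann sweeps) shows the convergence logic and the independence from the starting guess:

```python
import numpy as np

def iterate_to_steady_state(sweep, rho0, tol=1e-8, max_iter=50):
    """Repeat full (forward + backward) passes until the relative
    change of the solution drops below tol."""
    rho = rho0
    for it in range(1, max_iter + 1):
        rho_new = sweep(rho)
        change = np.max(np.abs(rho_new - rho)) / np.max(np.abs(rho_new))
        rho = rho_new
        if change < tol:
            return rho, it
    return rho, max_iter

# Toy contraction standing in for the Boltzmann sweeps; fixed point = 1.
fixed = np.ones(8)
rho, n_it = iterate_to_steady_state(lambda x: 0.5 * (x + fixed), np.zeros(8))
```

Starting the same loop from a different initial guess converges to the same steady state, mirroring the self-consistency property noted above.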
In order to be certain that our simple SN model reproduces flavor-dependent neutrino angular distributions in agreement with the literature, we adopted the inputs of the SN models employed in Ref. [@Tamborra:2017ubu] and computed the expected angular distributions. Our results are in agreement with the ones presented in [@Tamborra:2017ubu].
Figure \[radial\_evol\] shows an illustrative example of the resultant $\nu_e$ angular distribution in arbitrary units. The angular distribution has been plotted at different radii and has been obtained by using the inputs from a 1D hydrodynamical model of a $18.6\,M_\odot$ SN with SFHo nuclear equation of state [@Bollig2016] for $t_{\mathrm{p.b.}} = 0.25$ s, which we will use in Sec. \[sec:snmodel\]. One can see that the neutrino angular distribution is almost isotropic in $\cos\theta$ when neutrinos are trapped and becomes forward peaked as neutrinos approach the free streaming regime. In particular, the $\nu_e$ angular distribution becomes more peaked than the $\bar\nu_e$ one, as expected from the different interaction rates of electron neutrinos and antineutrinos. We refer the reader to Refs. [@Tamborra:2017ubu; @Ott:2008jb; @Sarikas:2012vb] for more details on the radial evolution of the flavor-dependent neutrino angular distributions.
![Illustrative example of the $\nu_e$ angular distribution as a function of $\cos\theta$ in arbitrary units and for different radii ($r$) before and after decoupling. The neutrino angular distribution is almost isotropic in $\cos\theta$ in the trapping regime in the SN core; as $r$ increases and neutrinos approach the free-streaming regime, the neutrino angular distribution becomes forward peaked.[]{data-label="radial_evol"}](plots/Fig2.pdf "fig:"){width="49.00000%"}\
Criteria for the appearance of crossings in the electron neutrino lepton number distribution {#sec:crossings}
============================================================================================
In this Section, we explore the microphysical conditions leading to the occurrence of ELN crossings. We first focus on a simple toy model in order to pinpoint the main ingredients favoring the development of ELN crossings and present a criterion to predict the crossing occurrence in the absence of SN hydrodynamical instabilities. The development of ELN crossings in a more realistic SN setup is then investigated by employing inputs from a 1D hydrodynamical simulation of the core collapse.
Toy-model example
-----------------
In order to explore the conditions leading to the occurrence of ELN crossings, we perform simulations by employing the iterative method described in Sec. \[sec:SNmodel\]. For simplicity, in this Section, we assume a constant matter temperature $T$ and constant chemical potentials for both neutrinos and matter, all set to 10 MeV. In order to characterize the neutrino interaction strength, the total energy-averaged neutrino and antineutrino mean free path is defined as $$\lambda_{\nu_e, \bar{\nu}_e}^{-1} \simeq \sum_l \langle\sigma\, n_{\mathrm{targets}}\rangle_l\ ,$$ where $l$ denotes the various neutrino-matter reaction channels listed in Sec. \[sec:SNmodel\]; the neutrino number densities and cross-sections have been modelled as described in Sec. \[sec:SNmodel\]. One can easily show that the dominant interaction rates setting the difference between the $\nu_e$ and $\bar{\nu}_e$ angular distributions, eventually leading to ELN crossings, are the charged current interactions, with a subleading contribution coming from the neutral current interactions. Hence, $\lambda_{\nu_e}/\lambda_{\bar{\nu}_e}$ and the resultant local number densities of $\nu_e$ and $\bar\nu_e$ are the crucial quantities in setting the relative ratio between the angular distributions of $\nu_e$ and $\bar{\nu}_e$. In our toy-model, we assume $\lambda_{\nu_e}/\lambda_{\bar{\nu}_e} \simeq 0.3$.
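The combination of channels into a total mean free path follows the usual rule $\lambda^{-1} = \sum_l \lambda_l^{-1}$; a small numerical illustration (Python; the per-channel rates below are hypothetical placeholders, chosen only to yield a ratio close to the adopted $\lambda_{\nu_e}/\lambda_{\bar{\nu}_e} \simeq 0.3$):

```python
def combined_mfp(inv_mfps):
    """Total mean free path from per-channel inverse mean free paths
    lambda_l^{-1} = <sigma n_targets>_l: lambda^{-1} = sum_l lambda_l^{-1}."""
    return 1.0 / sum(inv_mfps)

# Hypothetical per-channel rates (1/km): neutral-current channels are
# shared, while the charged-current absorption channel drives the
# nu_e vs antinu_e difference.
nc = [0.10, 0.05, 0.02]                 # n, p, e+- scattering
lam_nue = combined_mfp(nc + [0.50])     # + p e- -> n nu_e
lam_nuebar = combined_mfp(nc + [0.05])  # + n e+ -> p antinu_e
ratio = lam_nue / lam_nuebar
```

With these placeholder rates the ratio comes out near 0.33, i.e. the electron neutrino is reabsorbed on a scale roughly three times shorter than its antiparticle.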
We assume a baryon density profile that falls exponentially with radius and distinguish between two cases. The first case involves a shallow baryon density profile (“case A”), $$\begin{aligned}
\label{rho1}
\rho_{B, \mathrm{case A}}(r) = 10^{14} \exp\left(0.25(5-r)\right)\ \mathrm{g/cm^{3}}\ ,\end{aligned}$$ and the second a steeply falling baryon density profile (“case B”), $$\begin{aligned}
\rho_{B, \mathrm{case B}}(r) = 10^{14}\exp\left(0.5(5-r)\right)\ \mathrm{g/cm^{3}}\ ,\end{aligned}$$ with $r$ in km.
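The two profiles correspond to density e-folding lengths of 4 km (“case A”) and 2 km (“case B”); as a quick sketch (Python; radii in km, following the model setup):

```python
import numpy as np

def rho_B(r, slope):
    """Toy-model exponential baryon density (g/cm^3), r in km:
    slope = 0.25/km for 'case A', 0.5/km for 'case B'."""
    return 1e14 * np.exp(slope * (5.0 - r))
```

Both profiles are anchored to $10^{14}\ \mathrm{g/cm^3}$ at ${r_{\textrm{min}}} = 5$ km; case B drops by an extra e-folding every 2 km.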
![[*Top*]{}: Angular distributions for toy model “case A” (see text for details) for $\nu_e$ in red and $\bar{\nu}_e$ in green as a function of $\cos\theta$ in arbitrary units. The angular distributions have been extracted after neutrino decoupling. The shallow baryon density profile ensures that the decoupling radius of $\nu_{e}$ is significantly larger than the one of $\bar{\nu}_{e}$ given the difference in their interaction strength. [*Bottom*]{}: Angular distributions for toy model “case B.” The rapidly falling baryon density profile implies the close vicinity of the neutrino decoupling radii and the formation of the ELN crossing. The bottom panels for “case A” and “case B” show the ELN distribution as a function of $\cos\theta$.[]{data-label="toy1"}](plots/Fig3a.pdf "fig:"){width="49.00000%"}\
![[*Top*]{}: Angular distributions for toy model “case A” (see text for details) for $\nu_e$ in red and $\bar{\nu}_e$ in green as a function of $\cos\theta$ in arbitrary units. The angular distributions have been extracted after neutrino decoupling. The shallow baryon density profile ensures that the decoupling radius of $\nu_{e}$ is significantly larger than the one of $\bar{\nu}_{e}$ given the difference in their interaction strength. [*Bottom*]{}: Angular distributions for toy model “case B.” The rapidly falling baryon density profile implies the close vicinity of the neutrino decoupling radii and the formation of the ELN crossing. The bottom panels for “case A” and “case B” show the ELN distribution as a function of $\cos\theta$.[]{data-label="toy1"}](plots/Fig3b.pdf "fig:"){width="49.00000%"}
![Radial evolution of the ELN$(\cos\theta=1)$ for toy model “case A” in violet and “case B” in blue. The radius of onset of free streaming is marked by two vertical lines for each case to guide the eye; the dashed line is for $\nu_e$ and the dotted one for $\bar{\nu}_e$. Neutrinos and antineutrinos decouple at smaller radii for “case B” and the onset of free streaming radius of $\nu_e$ is closer to the one of $\bar\nu_e$ for “case B” than for “case A.”[]{data-label="toy1_rad"}](plots/Fig4.pdf){width="49.00000%"}
Figure \[toy1\] shows the stationary solution obtained for the $\nu_e$ (in red) and $\bar{\nu}_e$ (in green) angular distributions as a function of $\cos\theta$, for “case A” on the top and for “case B” on the bottom, in arbitrary units. Note that the angular distributions of $\nu_x$ and their antineutrinos are also computed, but we do not show them here for simplicity. The plotted angular distributions have been extracted at $r \simeq 40$ km and $r \simeq 30$ km, respectively; i.e., after the neutrino decoupling. One can see that while “case A” does not lead to the formation of an ELN crossing, “case B” does. For completeness, the bottom panels of “case A” and “case B” in Fig. \[toy1\] show the resultant ELN distribution ($\nu_e-\bar{\nu}_e$) in arbitrary units and as a function of $\cos\theta$.
Figure \[toy1\_rad\] shows the radial evolution of ELN$(\cos\theta=1)$ for “case A” (in violet) and “case B” (in blue). For each case, the onset of the free-streaming regime of $\nu_e$ and $\bar\nu_e$ is marked by two vertical lines to guide the eye; since we only need an approximate estimate of the region where the neutrinos start to stream freely and are fully decoupled from matter, we define the radius of onset of the free-streaming regime as the radius where the forward flux ($\cos\theta>0$) is 10 times larger than the backward flux ($\cos\theta<0$). One can see that the free-streaming regime is reached at larger $r$ for “case A.” Moreover, the onset radius of free-streaming for $\nu_e$ is always larger than the one of $\bar\nu_e$, given the difference in the interaction strengths. The two radii are separated by a larger distance in “case A.”
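The onset criterion used here amounts to a simple scan over the radial grid; a minimal sketch (Python; the synthetic flux profiles are illustrative, with an assumed 4 km attenuation length for the backward flux):

```python
import numpy as np

def free_streaming_radius(r, flux_fwd, flux_bwd, ratio=10.0):
    """First radius where the forward flux (cos theta > 0) exceeds the
    backward flux (cos theta < 0) by the given factor."""
    mask = flux_fwd > ratio * flux_bwd
    return r[np.argmax(mask)] if mask.any() else None

# Synthetic profiles: backward flux dies off exponentially with radius.
r = np.linspace(5.0, 60.0, 500)
fwd = np.ones_like(r)
bwd = np.exp(-(r - 5.0) / 4.0)
r_fs = free_streaming_radius(r, fwd, bwd)   # ~ 5 + 4 ln(10) ~ 14.2 km
```

In the trapping region, where forward and backward fluxes are comparable, the criterion is never met and no onset radius is returned.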
By looking at Figs. \[toy1\] and \[toy1\_rad\], one can see that ELN crossings originate in the proximity of the neutrino free-streaming region in “case B.” Moreover, because of the different baryon profiles employed in “case A” and “case B,” the decoupling regions of $\nu_e$ and $\bar{\nu}_e$ occur in very different spatial regions for “case A” while they are close to each other for “case B.” This implies that the $\nu_e$ and $\bar{\nu}_e$ angular distributions are more similar to each other in the proximity of the decoupling region in “case B” than in “case A.” This is also supported by the fact that the total local number densities of $\nu_e$ and $\bar{\nu}_e$ are more similar to each other in the proximity of the decoupling region in “case B,” as shown in Table \[table1\].
$n_{\nu_e}/n_{\bar{\nu}_e}$
-------- -----------------------------
case A $2.11$
case B $1.24$
: Ratio of the electron neutrino and antineutrino number densities ($n_{\nu_e}/n_{\bar{\nu}_e}$) at the radius of the onset of free streaming of $\nu_e$ for “case A” and “case B,” see text for details.
\[table1\]
This toy-model example shows that ELN crossings can develop only in the proximity of the neutrino decoupling region. Moreover, the decoupling of $\nu_{e}$ should occur in a radial region close to that of $\bar{\nu}_{e}$, and their local number densities should be comparable. If those conditions are fulfilled, ELN crossings are likely to develop.
Supernova model example {#sec:snmodel}
-----------------------
We now extend our findings to a more complex case involving the radial dependence of the main SN quantities. We base our estimations on the inputs from a 1D hydrodynamical model of a $18.6\,M_\odot$ SN with SFHo nuclear equation of state and gravitational mass of $1.4\,M_\odot$ [@Bollig2016] that we adopt as benchmark case, and select post-bounce time snapshots representative of the different SN phases. Note that the hydrodynamical simulation does not provide the neutrino angular distributions that are instead estimated iteratively through our stationary and spherically symmetric model.
The top panel of Fig. \[density\] shows the baryon density as a function of the radius for three different post-bounce times $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, in violet, blue and cyan respectively. The radii of the onset of free-streaming of $\nu_e$ and $\bar\nu_e$ are marked through the vertical lines to guide the eye. One can see that, as $t_{\mathrm{p.b.}}$ increases, neutrinos start to free-stream at smaller radii, closer to the SN core. Moreover, as time progresses, the baryon density profile becomes steeper in the proximity of the region of the onset of free streaming. Another interesting aspect is that, for fixed $t_{\mathrm{p.b.}}$, the free-streaming radii of $\nu_e$ and $\bar{\nu}_e$ differ from each other at earlier post-bounce times and become similar to each other at later times. For earlier times, the electron fraction $Y_e$ is larger at radii smaller than the free-streaming one as shown in the middle panel of Fig. \[density\]. The bottom panel of Fig. \[density\] shows the ratio of the mean free paths $\lambda_{\nu_e}/\lambda_{\bar\nu_e}$ as a function of the radius; in the proximity of the free-streaming radius, $\lambda_{\nu_e}/\lambda_{\bar\nu_e} \le 1$. The ratio between the $\nu_e$ and $\bar\nu_e$ number densities at the radius of onset of free streaming is reported in Table \[table2\] for the three studied $t_{\mathrm{p.b.}}$. One can see that $n_{\nu_e}/n_{\bar{\nu}_e} \rightarrow 1$ as $t_{\mathrm{p.b.}}$ increases.
$n_{\nu_e}/n_{\bar{\nu}_e}$
------------------------------ -----------------------------
$t_{\mathrm{p.b.}} = 0.25$ s $1.31$
$t_{\mathrm{p.b.}} = 0.5$ s $0.96$
$t_{\mathrm{p.b.}} = 1$ s $1.08$
: Ratio of the electron neutrino and antineutrino number densities ($n_{\nu_e}/n_{\bar{\nu}_e}$) at the radius of the onset of free streaming of $\nu_e$ for three different post-bounce times $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s of our benchmark SN model.
\[table2\]
![[*Top*]{}: Baryon density as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, in violet, blue and cyan respectively, for the $18.6\,M_\odot$ SN model adopted as benchmark case in this work. The radii of onset of free streaming are marked by two vertical lines, the dashed one is for $\nu_e$ and the dotted one for $\bar{\nu}_e$, to guide the eye. In the region of decoupling the baryon density falls off much more rapidly as $t_{\mathrm{p.b.}}$ increases. [*Middle*]{}: Electron abundance as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s. $Y_e$ is smaller in the proximity of the decoupling radius for larger $t_{\mathrm{p.b.}}$. [*Bottom*]{}: Ratio of mean free paths of $\nu_{e}$ and $\bar{\nu}_{e}$. It should be noted that the mean free path of $\bar{\nu}_{e}$ is larger than that of ${\nu}_{e}$ before decoupling.[]{data-label="density"}](plots/Fig5a.pdf "fig:"){width="49.00000%"}\
![[*Top*]{}: Baryon density as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, in violet, blue and cyan respectively, for the $18.6\,M_\odot$ SN model adopted as benchmark case in this work. The radii of onset of free streaming are marked by two vertical lines, the dashed one is for $\nu_e$ and the dotted one for $\bar{\nu}_e$, to guide the eye. In the region of decoupling the baryon density falls off much more rapidly as $t_{\mathrm{p.b.}}$ increases. [*Middle*]{}: Electron abundance as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s. $Y_e$ is smaller in the proximity of the decoupling radius for larger $t_{\mathrm{p.b.}}$. [*Bottom*]{}: Ratio of mean free paths of $\nu_{e}$ and $\bar{\nu}_{e}$. It should be noted that the mean free path of $\bar{\nu}_{e}$ is larger than that of ${\nu}_{e}$ before decoupling.[]{data-label="density"}](plots/Fig5b.pdf "fig:"){width="49.50000%"}\
![[*Top*]{}: Baryon density as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, in violet, blue and cyan respectively, for the $18.6\,M_\odot$ SN model adopted as benchmark case in this work. The radii of onset of free streaming are marked by two vertical lines, the dashed one is for $\nu_e$ and the dotted one for $\bar{\nu}_e$, to guide the eye. In the region of decoupling the baryon density falls off much more rapidly as $t_{\mathrm{p.b.}}$ increases. [*Middle*]{}: Electron abundance as a function of radius for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s. $Y_e$ is smaller in the proximity of the decoupling radius for larger $t_{\mathrm{p.b.}}$. [*Bottom*]{}: Ratio of mean free paths of $\nu_{e}$ and $\bar{\nu}_{e}$. It should be noted that the mean free path of $\bar{\nu}_{e}$ is larger than that of ${\nu}_{e}$ before decoupling.[]{data-label="density"}](plots/Fig5c.pdf "fig:"){width="49.50000%"}
Figure \[real1\] shows the resulting angular distributions in arbitrary units for $\nu_e$ and $\bar{\nu}_e$ as a function of $\cos\theta$ for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, from top to bottom respectively. These angular distributions have been obtained with our iterative method by employing the SN inputs shown in Fig. \[density\], together with the radial profiles of the chemical potentials of neutrinos and nucleons and the medium temperature extracted from the hydrodynamical simulation. We plot the angular distributions at an arbitrary radius of $40$ km, i.e., after neutrino decoupling has occurred. For $t_{\mathrm{p.b.}} = 0.25$ s, no ELN crossing is found, as the number density of $\nu_{e}$ remains larger than that of $\bar{\nu}_{e}$ due to the shallow baryon density profile. At late times ($t_{\mathrm{p.b.}} \gtrsim 0.5$ s), the baryon density profile becomes steeper; as a result, the free-streaming radii of $\nu_{e}$ and $\bar{\nu}_{e}$ become closer and the number densities of $\nu_e$ and $\bar{\nu}_e$ become comparable. Hence, ELN crossings are favored.
![[*Top*]{}: Angular distributions of $\nu_{e}$ in red and $\bar{\nu}_{e}$ in green in arbitrary units, for $t = 0.25$ s as a function of $\cos\theta$ for our benchmark SN model. The angular distributions have been extracted at 40 km, i.e. after decoupling. The lower panel shows the angular distribution of the ELN. The excess in the total number of $\nu_{e}$ prevents an ELN crossing. [*Middle*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 0.5$ s. An ELN crossing occurs as the decoupling regions of $\nu_e$ and $\bar{\nu}_e$ become closer to each other and their number densities are comparable. [*Bottom*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 1$ s. The ELN becomes negative in the forward direction implying that a crossing occurs. []{data-label="real1"}](plots/Fig6a.pdf "fig:"){width="49.00000%"}\
![[*Top*]{}: Angular distributions of $\nu_{e}$ in red and $\bar{\nu}_{e}$ in green in arbitrary units, for $t = 0.25$ s as a function of $\cos\theta$ for our benchmark SN model. The angular distributions have been extracted at 40 km, i.e. after decoupling. The lower panel shows the angular distribution of the ELN. The excess in the total number of $\nu_{e}$ prevents an ELN crossing. [*Middle*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 0.5$ s. An ELN crossing occurs as the decoupling regions of $\nu_e$ and $\bar{\nu}_e$ become closer to each other and their number densities are comparable. [*Bottom*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 1$ s. The ELN becomes negative in the forward direction implying that a crossing occurs. []{data-label="real1"}](plots/Fig6b.pdf "fig:"){width="49.00000%"}\
![[*Top*]{}: Angular distributions of $\nu_{e}$ in red and $\bar{\nu}_{e}$ in green in arbitrary units, for $t = 0.25$ s as a function of $\cos\theta$ for our benchmark SN model. The angular distributions have been extracted at 40 km, i.e. after decoupling. The lower panel shows the angular distribution of the ELN. The excess in the total number of $\nu_{e}$ prevents an ELN crossing. [*Middle*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 0.5$ s. An ELN crossing occurs as the decoupling regions of $\nu_e$ and $\bar{\nu}_e$ become closer to each other and their number densities are comparable. [*Bottom*]{}: Angular distributions of $\nu_{e}$ and $\bar{\nu}_{e}$ for $t = 1$ s. The ELN becomes negative in the forward direction implying that a crossing occurs. []{data-label="real1"}](plots/Fig6c.pdf "fig:"){width="49.00000%"}
Figure \[real1\_rad\] shows the radial evolution of ELN$(\cos\theta = 1)$ for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s. As discussed for the toy model, ELN crossings only occur in the proximity of the neutrino free-streaming region.
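In code, the crossing check employed above amounts to looking for a sign change of the ELN along the angular grid. A minimal sketch follows; the linear distribution shapes are illustrative toy assumptions, not the simulation data:

```python
import numpy as np

def has_eln_crossing(cos_theta, f_nue, f_nuebar):
    """True if ELN = f_nue - f_nuebar changes sign over the angular grid."""
    eln = np.asarray(f_nue) - np.asarray(f_nuebar)
    signs = np.sign(eln[np.abs(eln) > 1e-12])   # ignore exact zeros
    return signs.size > 1 and bool(np.any(signs[1:] != signs[:-1]))

cos_theta = np.linspace(-1.0, 1.0, 201)
# Toy late-time shapes: nubar_e more forward-peaked than nu_e,
# so the ELN turns negative in the forward direction.
f_nue_late = 1.0 + 0.5 * cos_theta
f_nuebar_late = 0.8 + 0.9 * cos_theta
# Toy early-time shapes: large nu_e excess everywhere, no crossing.
f_nue_early = 1.4 + 0.5 * cos_theta
f_nuebar_early = 0.8 + 0.5 * cos_theta
```

For these coefficients the late-time pair crosses at $\cos\theta = 0.5$, while the early-time pair, with its large $\nu_e$ excess, never does.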
![Radial evolution of ELN$(\cos\theta = 1)$ as a function of the radius in arbitrary units, for $t_{\mathrm{p.b.}} = 0.25, 0.5$ and $1$ s, in violet, blue, and cyan respectively. For $t_{\mathrm{p.b.}} = 0.5$ and $1$ s, ELN crossings appear in the proximity of the neutrino free-streaming region, i.e., ELN$(\cos\theta = 1)$ changes sign.[]{data-label="real1_rad"}](plots/Fig7.pdf){width="49.00000%"}
In conclusion, our stationary and spherically symmetric SN model strongly suggests that ELN crossings can only occur within the spatial regions where neutrinos and antineutrinos decouple and start to free stream. Moreover, a steep drop of the baryon density profile, as is typical of the late SN stages, together with comparable number densities of $\nu_e$ and $\bar{\nu}_e$ and $\lambda_{\nu_{e}}/\lambda_{\bar{\nu}_{e}} \le 1$ in the decoupling region, favors the development of angular distributions that are similar to each other; as a consequence, ELN crossings develop because of the different interaction rates of $\nu_e$ and $\bar{\nu}_e$. Our results confirm the findings of Refs. [@Tamborra:2017ubu; @Azari:2019jvr], where only the early SN stages were analyzed and no crossing was found. Note that, although we only show the results for three selected snapshots here, we have estimated the angular distributions for $t_{\mathrm{p.b.}} \in [0.25, 0.5]$ s and found that $t_{\mathrm{p.b.}} = 0.5$ s is the first snapshot for which an ELN crossing develops in the adopted SN model.
Our stationary and spherically symmetric SN model does not include any macroscopic asymmetry possibly induced by hydrodynamical instabilities. The occurrence of the LESA instability has been pinpointed as a possible favorable condition leading to ELN crossings [@Izaguirre:2016gsx; @Dasgupta:2016dbv]. Our stationary SN model hints that, excluding the transition regions where the ELN changes sign in the presence of LESA, in the SN angular regions with a net $\bar{\nu}_e$ excess the $\bar{\nu}_e$ and $\nu_e$ number densities would roughly “swap” and our criteria should still hold. Hence, we expect that ELN crossings may develop only if the LESA instability is sustained until the late SN phases; otherwise, the $\bar{\nu}_e$ excess should not be a sufficient condition for crossings to arise.
Our model cannot fully test the conditions leading to crossings in the presence of the LESA instability self-consistently, since this would require breaking the spherical symmetry. Therefore, our conjectures remain to be tested through self-consistent 3D hydrodynamical simulations and will be the subject of further work.
Dynamical generation of crossings in the electron neutrino lepton number distribution {#sec:oscillations}
=====================================================================================
The presence of ELN crossings, or lack thereof, is determined by the radial profile of $\lambda_{\nu_{e}}/\lambda_{\bar{\nu}_{e}}$, the baryon density profile, and $n_{\nu_e}/n_{\bar{\nu}_e}$ in the trapping region, which is related to the former two. However, as discussed in Sec. \[sec:crossings\], the presence of ELN crossings is not only determined by local conditions; it also indirectly feels the effects of distant regions of the SN through collisions. If neutrino conversions occur in the SN core due to a local fluctuation triggering instabilities in flavor space (independently of the existence of ELN crossings), then our analysis should be modified. In this Section, we analyze whether the existence of flavor conversions may affect the conditions leading to the development of ELN crossings and therefore dynamically modify the neutrino angular distributions.
It should be noted that the neutrino interaction rate in matter is dominated by the neutrino-nucleon cross section, and the ratio $\lambda_{\nu_{e}}/\lambda_{\bar{\nu}_{e}}$ cannot be changed significantly by neutrino conversions on a global scale. In the deep interior of a SN, the effective average energy of $\nu_{e}$ is larger than that of $\bar{\nu}_{e}$ due to the non-null chemical potential. This contributes to a larger $\nu_{e}$ interaction rate. Neutrino conversions could in principle reduce the average energy of $\nu_{e}$ and bring it closer to that of $\bar{\nu}_{e}$.
We take the inputs of the SN snapshot at $t_{\mathrm{p.b.}} = 0.25$ s used in Sec. \[sec:snmodel\] as our benchmark case. This case did not exhibit ELN crossings in the absence of pre-existing flavor conversions. We then impose that, at a certain radius $r_\star$, flavor conversions are triggered, possibly leading to flavor decoherence; the latter is one of the most extreme scenarios that one could expect. This effect can be mimicked by assuming that $\lambda_{\nu_{e}}/\lambda_{\bar{\nu}_{e}} \rightarrow 1$ for $r \ge r_\star$. We then vary $r_\star$ in order to test whether the radius of the onset of flavor conversions affects the development of ELN crossings. In none of the studied cases do we find a significant modification of the ELN evolution, and ELN crossings do not develop.
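The decoherence prescription described above can be sketched as follows. Since the text only requires the ratio of mean free paths to approach unity beyond $r_\star$, replacing both mean free paths by their average there is our own illustrative choice of implementation:

```python
import numpy as np

def apply_decoherence(r, lam_nue, lam_nuebar, r_star):
    """Force lambda_nue / lambda_nuebar -> 1 for r >= r_star by replacing
    both mean free paths with their average there (the averaging choice is
    an assumption; the text only requires the ratio to become unity)."""
    lam_nue = np.array(lam_nue, dtype=float)
    lam_nuebar = np.array(lam_nuebar, dtype=float)
    outside = np.asarray(r) >= r_star
    mean = 0.5 * (lam_nue[outside] + lam_nuebar[outside])
    lam_nue[outside] = mean
    lam_nuebar[outside] = mean
    return lam_nue, lam_nuebar
```

The profiles inside $r_\star$ are left untouched, so the trapping region still feels the original interaction-rate asymmetry.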
Although a more thorough analysis of the problem is required, we suspect that the non-locality of the conditions required for an ELN crossing is responsible for making the dynamical generation of ELN crossings unfeasible in a spherically symmetric geometry.
Discussion and Conclusions {#sec:conclusions}
==========================
The development of crossings between the angular distributions of $\nu_e$ and $\bar{\nu}_e$ (ELN crossings) is relevant because it can possibly lead to fast pairwise conversions of neutrinos deep in the SN core, with major consequences for the SN physics. It is vital to gain a qualitative understanding of this phenomenon. In this paper, using a simple yet insightful technique, we have qualitatively addressed this question, focusing on the microphysics of neutrino-matter collisions in the SN core.
The highly non-linear nature of the neutrino flavor evolution, along with the feedback on the flavor dynamics coming from neutrino-matter collisions, makes a general self-consistent analysis impossible within current means. However, we have aimed to provide a rule of thumb for the occurrence of ELN crossings. To that purpose, we have constructed a simplified stationary and spherically symmetric SN model that takes into account the physics of collisions through an iterative approach, but neglects any asymmetry and further complications coming from the SN hydrodynamical instabilities. It should be noted that our assumption of spherical symmetry may be substantially broken, either due to the SN hydrodynamics [@Tamborra:2014hga; @Janka:2016fox] or because of the nature of the neutrino flavor evolution [@Duan:2014gfa; @Mirizzi:2015fva; @Abbar:2015mca].
We have shown that the conditions affecting the development of ELN crossings are not local in nature. In particular, the appearance of ELN crossings is determined by the slope of the baryon density profile together with the requirement that the $\nu_e$ and $\bar\nu_e$ number densities are comparable in the proximity of the decoupling region. Our simple spherically symmetric SN model hints that ELN crossings can only occur in the late stages of the accretion phase and in the cooling phase, under the assumption that a stationary configuration is reached. In fact, at earlier post-bounce times, a large excess of the $\nu_{e}$ number density over the $\bar{\nu}_{e}$ one prevents ELN crossings from occurring. The latter effect is determined by a baryon density profile that varies slowly with the radius, preventing the $\nu_e$ and $\bar\nu_e$ distributions from becoming similar. However, in the late accretion phase and in the cooling phase, the distributions of $\nu_e$ and $\bar\nu_e$ naturally become more similar to each other, $\nu_e$ and $\bar\nu_e$ decouple in closer spatial regions, and favorable conditions for ELN crossings arise.
Due to the numerical challenges involved in solving the equations of motion that include neutrino-neutrino interactions, most of the focus has been on the linear stability analysis of the conditions under which instabilities in flavor space can occur. However, if flavor instabilities are triggered in a small localized region of space, it is not clear whether and under which conditions the flavor instability would spread, see e.g. [@Capozzi:2017gqd; @Yi:2019hrp]. One aspect of the question is whether the flavor evolution changes the neutrino interaction rates, thereby leading to a dynamical generation of ELN crossings. Our stationary and spherically symmetric SN model suggests that ELN crossings cannot be generated dynamically, unless favorable conditions already exist in the SN core.
Our model neglects perturbations coming from global asymmetries induced by the hydrodynamic instabilities occurring in SNe. However, it still provides good insight into the generation of ELN crossings under stationary conditions.
A concrete list of necessary and sufficient conditions under which fast pairwise conversions of neutrinos can occur in the SN core remains unsettled. Our work provides new insight into the solution of this intriguing jigsaw.
We acknowledge insightful discussions with Thomas Janka, Georg Raffelt and Anna Suliga, and are grateful to Robert Bollig for granting access to the data of the $18.6\,M_\odot$ SN model adopted in this work. SS and IT acknowledge support from the Villum Foundation (Project No. 13164). The work of IT has also been supported by the Knud Højgaard Foundation and the Deutsche Forschungsgemeinschaft through the Sonderforschungsbereich SFB 1258 “Neutrinos and Dark Matter in Astro- and Particle Physics (NDM)”.
|
---
abstract: 'The flow diagram of $(\sigma_{xx}, \sigma_{xy})$ in the finite-frequency ($\omega$) regime is numerically studied for the graphene quantum Hall effect (QHE) system. The ac flow diagrams turn out to show qualitatively similar behavior to the dc flow diagrams, which can be understood from the fact that the dynamical length scale determined by the frequency poses a relevant cutoff for the renormalization flow. The two-parameter flow is then discussed in terms of the dynamical scaling theory. We also discuss the larger-$\omega$ regime, which exhibits classical flows driven by the raw frequency $\omega$.'
author:
- Takahiro Morimoto
- Hideo Aoki
title: 'Two parameter flow of $\sigma_{xx}(\omega) - \sigma_{xy}(\omega) $ for the graphene quantum Hall system in ac regime'
---
Introduction
============
In the quantum Hall effect (QHE), one standard and graphically clear way to grasp the physics involving the localization effect is the $\sigma_{xx} - \sigma_{xy}$ diagram, in which we look at the scaling flow (trajectories when the sample size is varied) of the longitudinal and Hall conductivities $(\sigma_{xx}, \sigma_{xy})$. The scaling property of the static QHE system, especially the quantization of the Hall conductivity into multiples of $e^2/h$ and the vanishing longitudinal conductivity, is beautifully captured with the $\sigma_{xx}-\sigma_{xy}$ diagram, as originally discussed by Pruisken and Khmelnitskii in terms of the non-linear sigma model [@pruisken-flow; @pruisken-instanton; @khmelnitskii]. For the conventional two-dimensional electron gas (2DEG), there exist (i) stable fixed points at $(\sigma_{xx},\sigma_{xy})=(0,n)$ ($n$: integer), along with (ii) unstable fixed points characterizing delocalization at $(\sigma_{xx},\sigma_{xy})=(\sigma_{xx}^c,n+1/2)$. The attraction to the former, quantum-Hall fixed points accounts for the Hall insulating states with quantized values of the Hall conductivity, while the latter, unstable fixed points account for the delocalized states at the center of each Landau level (LL) and dominate the behavior of the plateau-to-plateau transition in $\sigma_{xy}$.
Scaling properties of the Anderson transition have attracted both theoretical and experimental interest, since they should be universal and depend only on the symmetry class of the system [@huckestein]. The critical exponent has been numerically studied for the lowest LL [@huckestein-kramer], later with the Chalker-Coddington network model [@slevin-ohtsuki], and experimentally confirmed by Li et al. [@Li-scaling-exp] The universal values of the longitudinal conductance at the LL centers have been intensively discussed with a tight-binding lattice model [@schweitzer-markos]. Thus the scaling behavior at the plateau-to-plateau transition has been established.
On the other hand, rapid advances in terahertz (THz) spectroscopy have made the optical responses of the quantum Hall system, such as cyclotron resonances and Faraday rotations, experimentally accessible [@hangyo; @ikebe2008cds]. Specifically, the Faraday rotation is proportional to the optical Hall conductivity $\sigma_{xy}(\omega)$, and we have the intriguing problem of how the static Hall conductivity, which may be regarded as a topological quantity [@tknn], evolves into the optical Hall conductivity, especially in the relevant (cyclotron) energy scale, which falls in the THz regime [@mikhailov85; @gusynin-sxy; @morimoto-opthall; @Fialkovsky09]. For the optical Hall conductivity, we have recently shown numerically that the plateau structure in $\sigma_{xy}(\omega)$ is unexpectedly retained in the ac (THz) regime in both 2DEG and graphene, although the plateau height deviates from the quantized values in ac [@morimoto-opthall]. Graphene is particularly interesting, since a massless Dirac system is realized as its low-energy physics, in which a novel Dirac QHE is observed [@Nov05; @Kim-gr], and the scaling theory of QHE in graphene has been formulated in terms of the non-linear sigma model [@ostrovsky08]. Experimentally, the ac plateau has been observed in a GHz Faraday rotation measurement for a 2DEG system [@hohls-kuchar02] and recently in the THz regime [@ikebe-THz]. In the graphene QHE system, we expect plateau structures in the tail (small-frequency) region, while the Faraday rotation was measured in the region around the cyclotron resonances [@crassee2010giant].
Considering these advances in spectroscopies of QHE systems, it is important to study the systematic behavior of the plateau structure in the ac and optical regimes. The robustness of the ac plateau structure against disorder, as revealed in the numerical results, can be understood if we consider the effect of localization, which dominates the physics of electrons around the centers of Landau levels in disordered QHE systems, in the finite-frequency regime, following the scaling theory of the Anderson transition. Namely, a finite frequency puts an effective cutoff (the dynamical length scale $L_\omega$) on the system, and the plateau in the ac Hall conductivity should be retained in the region where the localization length ($\xi \sim |\epsilon-\epsilon_c|^{-\nu}$), diverging toward the LL center, is smaller than the dynamically posed cutoff $L_\omega$. In the two-parameter flow picture, this can be viewed as the dynamical length scale $L_\omega$ determining the scale at which the renormalization is stopped.
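The plateau-survival criterion just described, $\xi < L_\omega$, can be sketched as a one-line comparison. The prefactors (set to unity) are illustrative assumptions, $z = 2$ follows the non-interacting case discussed later, and $\nu \approx 2.6$ is used as a representative quantum-Hall value:

```python
import numpy as np

def plateau_survives(eps, omega, eps_c=0.0, nu=2.6, z=2.0):
    """Criterion from the text: the ac plateau is retained where the
    localization length xi ~ |eps - eps_c|**(-nu) is still shorter than
    the dynamical cutoff L_omega ~ omega**(-1/z).  All prefactors are
    set to unity, which is a simplifying assumption."""
    xi = np.abs(eps - eps_c) ** (-nu)
    L_omega = omega ** (-1.0 / z)
    return xi < L_omega
```

For instance, at $\omega = 0.01$ the condition holds at $\epsilon = 0.5$ but fails at $\epsilon = 0.05$, mirroring the divergence of $\xi$ toward the LL center.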
The dynamical scaling behavior has been studied for the ac longitudinal conductivity [@gammel-brenig] and for the optical Hall conductivity $\sigma_{xy}(\omega)$ [@morimoto-ac-scaling]. It is now interesting to combine the optical longitudinal and Hall conductivities and numerically map out the two-parameter ($\sigma_{xx} - \sigma_{xy}$) flow diagram in the ac regime.
In the present work we have calculated both the optical longitudinal and Hall conductivities $(\sigma_{xx}, \sigma_{xy})$ for the graphene QHE system with potential disorder, and combined them to numerically examine the two-parameter $\sigma_{xx} - \sigma_{xy}$ flow diagram in the ac regime. We study the $n=0$ LL of the graphene QHE system with an exact diagonalization method to treat the disorder effects. One particular point of interest is the behavior around the fixed points in the $\sigma_{xx}(\omega) - \sigma_{xy}(\omega)$ diagram. There, we have focused on the $n=0$ Dirac Landau level, where the peculiarity of graphene appears in the property that $n=0$ is an electron-hole symmetric point. In the small-$\omega$ regime, we obtain numerical results consistent with the above picture that the $\sigma_{xx}(\omega) - \sigma_{xy}(\omega)$ flow obeys Pruisken’s two-parameter flow, with $L_\omega$ as the relevant cutoff for the system in the ac region, where the flows are between $\sigma_{xy} = \pm 2 e^2/h$, reflecting the graphene QHE including the valley and spin degeneracies. We also discuss a large-$\omega$ regime, where the frequency $\omega$ is comparable with the cyclotron frequency $\omega_c$ and exhibits classical flows driven by the raw frequency.
Formalism
=========
For the graphene QHE system, we employ the two-dimensional effective Dirac model, $$H=v_F \tau_z {\boldsymbol \sigma} \cdot {\boldsymbol \pi} + V({\mbox{\boldmath$r$}}),$$ where $v_F$ is the Fermi velocity, ${\boldsymbol \sigma}=(\sigma_x,\sigma_y)$ and $\tau_z$ are the Pauli matrices acting on the space of the two sublattices (A, B) and the two valleys (K, K’), ${\boldsymbol \pi}={\mbox{\boldmath$p$}}+e{\mbox{\boldmath$A$}}$ with ${\mbox{\boldmath$p$}}=(p_x,p_y)$ the momentum, and ${\mbox{\boldmath$A$}}$ the vector potential. Disorder is introduced by a random potential, $$V({\mbox{\boldmath$r$}})=\sum_{i,j} u_{i,j} \exp(-|{\mbox{\boldmath$r$}}- {\mbox{\boldmath$R$}}_{i,j}|^2/2d^2)/(2\pi d^2),$$ composed of Gaussian scattering centers of range $d$, where $u_{i,j}$ takes a random value in $(-u,u)$. Here we take $d=0.7\ell$, where $\ell=\sqrt{\hbar/eB}$ is the magnetic length. For numerical facility, the impurity sites ${\mbox{\boldmath$R$}}_{i,j}$ are periodically placed at ${\mbox{\boldmath$R$}}_{i,j}= (2\pi\ell^2/L)(i,j)$, with $L$ being the linear dimension of the sample. A measure of disorder is given by the Landau level broadening [@ando], $\Gamma = 2 u [N_{imp}/2\pi(\ell^2+2d^2) L^2]^{1/2}$, with $N_{imp}$ the number of impurity sites. We assume that the potential disorder is smooth on the length scale of the underlying lattice structure, so that inter-valley scattering can be neglected. The cyclotron energy is, for a Dirac particle, given by $\omega_c = \sqrt{2}v_F/\ell$.
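The broadening formula can be transcribed directly. A sketch in the paper's units ($\ell = 1$), with $d = 0.7\ell$ as in the text:

```python
import numpy as np

def level_broadening(u, n_imp, L, d=0.7, ell=1.0):
    """Landau-level broadening
    Gamma = 2 u [N_imp / (2 pi (l^2 + 2 d^2) L^2)]^{1/2},
    with lengths measured in units of the magnetic length l."""
    return 2.0 * u * np.sqrt(n_imp / (2.0 * np.pi * (ell**2 + 2.0 * d**2) * L**2))
```

As the formula shows, $\Gamma$ is linear in the disorder amplitude $u$ and grows as the square root of the impurity density $N_{imp}/L^2$.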
Since we are interested in the dynamical $\sigma_{xx}(\omega) - \sigma_{xy}(\omega)$, which should be related to the localization physics, we obtain the eigenstates of the Hamiltonian by exact diagonalization, performed in a subspace spanned by a finite number of Landau levels (LL’s) around the $n=0$ LL, for $L\times L$ systems with $L/\ell$ varied over $20, 30, 40$. Here we retain 5 LLs ($n= -2 \sim 2$), which poses an ultraviolet cutoff. [^1] In the Landau gauge ${\mbox{\boldmath$A$}}=(0, Bx)$, the basis function is $\psi_{n,k}=e^{-iky} \phi_n(x-\ell^2 k)$, where $\phi_n$ is the Dirac-Landau function in the $n$-th Landau level [@ZhengAndo], and the wavenumber $k$ takes integer multiples of $2\pi/L$ with a periodic boundary condition in the $y$-direction. The number of discrete wavenumbers $N_k$ is related to $L$ by $N_k=L^2/2\pi \ell^2$ in a finite system. From the eigenfunctions $\psi_a$ and eigenenergies $\epsilon_a$ obtained with the exact diagonalization, the optical Hall conductivity [@morimoto-opthall] is evaluated from the Kubo formula [@kubo1965] as $$\sigma_{xy}(\omega) =
\frac{\hbar}{iL^2} \sum_{ab} j_x^{ab} j_y^{ba}
\frac{f(\epsilon_b) - f(\epsilon_a)}{\epsilon_b-\epsilon_a}
\frac{1}{\epsilon_b-\epsilon_a-\hbar\omega-i\eta},$$ where $f(\varepsilon)$ is the Fermi distribution, and $\eta$ a low-energy cutoff. The current matrix element, $j_x^{ab}$, has a selection rule peculiar to Dirac model ($n \leftrightarrow \pm n \pm 1$ with $n$ the Landau index), which is distinct from that ($n \leftrightarrow n\pm 1$) for 2DEG as $$\begin{aligned}
j_{x}^{n,n'}=e v_F C_{n} C_{n'}
\left[{\rm sgn}(n)\delta_{|n|-1,|n'|}+{\rm sgn}(n')\delta_{|n|+1,|n'|}\right],\\
j_{y} ^{n,n'}=i e v_F C_{n} C_{n'}\left[{\rm sgn}(n)\delta_{|n|-1,|n'|}-{\rm sgn}(n')\delta_{|n|+1,|n'|}\right],
\label{matrixelement}\end{aligned}$$ where $C_n= 1 (n=0)$ or $1/\sqrt{2}$ (otherwise) [@shon-ando; @ZhengAndo].
The longitudinal conductivity, on the other hand, is given by $$\mbox{Re} \sigma_{xx}(\omega)
=
\frac{\hbar}{L^2}
\sum_{\varepsilon_a, \varepsilon_b}
\frac{f(\varepsilon_b)-f(\varepsilon_a)}{\varepsilon_b-\varepsilon_a}
\frac{|j_{x}^{ab}|^2 \eta}{(\varepsilon_b-\varepsilon_a-\hbar \omega)^2+\eta^2}.$$ We note that the low-energy cutoff $\eta$, which affects the $\omega \sim 0$ behavior of $\sigma_{xx}(\omega)$, should be chosen close to the Thouless energy, which is typically of the order of the energy level spacing $\sim 1/L^2$.[@thouless-kirkpatrick; @nomura-ryu-koshino; @nomura2008qhe] The temperature in the Fermi distribution function $f(\epsilon)$ is set small enough that the low-frequency behavior of $\sigma_{xx}(\omega)$ is numerically stable, which is achieved when the temperature is of the order of the level spacing $\sim 1/L^2$. For the scaling analysis the calculation is repeated for varied sample size $L$, Fermi energy $\varepsilon_F$ and frequency $\omega$. Throughout the paper the length, energy and frequency are in units of $\ell$, $\hbar\omega_c$ and $\omega_c$, respectively.
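As a sketch, the two Kubo formulas above can be evaluated directly once the eigenenergies and the current matrices in the eigenbasis are at hand. We set $\hbar = 1$, work at $T = 0$, drop degenerate pairs, and keep the occupation-factor convention exactly as written in the text (sign conventions for this factor vary between references); the two-level toy data at the bottom are purely illustrative:

```python
import numpy as np

def kubo_conductivities(eps, jx, jy, omega, eta, ef, L):
    """Evaluate the two Kubo formulas above at T = 0 (hbar = 1).
    eps: eigenenergies; jx, jy: current matrices in the eigenbasis.
    Returns (sigma_xy(omega), Re sigma_xx(omega)) up to overall
    convention-dependent prefactors."""
    f = (eps < ef).astype(float)              # T = 0 occupations
    de = eps[None, :] - eps[:, None]          # eps_b - eps_a (a = row, b = col)
    df = f[None, :] - f[:, None]              # f(eps_b) - f(eps_a)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(np.abs(de) > 1e-12, df / de, 0.0)   # drop degenerate pairs
    sxy = (1.0 / (1j * L**2)) * np.sum(jx * jy.T * ratio / (de - omega - 1j * eta))
    sxx = (1.0 / L**2) * np.sum(np.abs(jx)**2 * ratio * eta / ((de - omega)**2 + eta**2))
    return sxy, sxx

# Illustrative two-level toy system (not the Dirac-Landau problem).
eps = np.array([0.0, 1.0])
jx = np.array([[0, 1], [1, 0]], dtype=complex)
jy = np.array([[0, -1j], [1j, 0]], dtype=complex)
```

With the lower level filled this toy system gives a finite, quantized-like $\sigma_{xy}(\omega=0)$, while an empty band gives identically zero, since $f(\epsilon_b)-f(\epsilon_a)$ vanishes for every pair.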
$\sigma_{xx}-\sigma_{xy}$ diagram in ac regime
==============================================
First we discuss the behaviors of $\sigma_{xx}(\omega)$ and $\sigma_{xy}(\omega)$ separately. $\sigma_{xx}(\epsilon_F, \omega)$ in Fig.\[bare-sigma\](a) shows a ridge structure along the $\omega$ axis when the Fermi energy $\epsilon_F$ is around the delocalized region of each LL. When we increase $L$ in Fig.\[bare-sigma\](b), the width of $\sigma_{xx}(\omega=0)$ becomes narrower, while the peak height stays almost constant, as expected from the universal longitudinal conductance [@kivelson-global-phase-daigram; @schweitzer-markos; @wong-sxxc]. On the other hand, $\sigma_{xy}(\epsilon_F, \omega)$ plotted against $\epsilon_F$ in Fig.\[bare-sigma\](c) shows a transition from the $\sigma_{xy}=-2$ plateau to the $\sigma_{xy}=2$ plateau (in units of $e^2/h$) around the $n=0$ LL with its twofold valley and twofold spin degeneracies, where the transition width of $\sigma_{xy}(\omega=0)$ sharpens with increasing $L$.
![ (a) $\sigma_{xx}(\epsilon_F,\omega)$ plotted against the Fermi energy $\epsilon_F$ and the frequency $\omega$ for $L=30$. Lower panels depict $\sigma_{xx}(\omega=0)$ (b) and $\sigma_{xy}(\omega=0)$ (c) for various sample sizes with disorder strength $\Gamma/\hbar \omega_c=0.4$. []{data-label="bare-sigma"}](Fig1.eps){width="0.7\linewidth"}
![ The flow of $(\sigma_{xx}(\omega)-\sigma_{xy}(\omega))$ in the graphene quantum Hall system with a disorder strength $\Gamma/\hbar\omega_c=0.4$. (a) The flow in dc regime ($\omega= 0$) for various Fermi energy $\varepsilon_F$ and system size $L/\ell=20,30,40$. (b) The flow in ac regime for various values of Fermi energy $\varepsilon_F$ and frequency $\omega$ with a fixed system size $L/\ell=30$ []{data-label="smallw"}](Fig-smallw.eps){width="0.95\linewidth"}
![ (a) The flow of $(\sigma_{xx}(\omega)-\sigma_{xy}(\omega))$ in the graphene quantum Hall system for renormalized frequency $\omega L^z= 3$ with $z=2$ and $L=20,30,40$ with a disorder strength $\Gamma/\hbar\omega_c=0.4$. (b) Flows when the sample size is varied as $L=20\rightarrow 40$ for a fixed of $\omega L^z$. For each value of $\omega L^z$ we plot the flows corresponding to various values of Fermi energy $\epsilon_F$. The value of $\epsilon_F$ is indicated by different symbols (circles, squares, etc) that mark the smallest sample size ($L=20$). The results for various values (color-coded) of $\omega L^z=3-6$ are superposed. []{data-label="flow-scaling"}](Fig-scaling.eps){width="0.95\linewidth"}
Now we are in a position to examine the $\sigma_{xx}-\sigma_{xy}$ diagram in Fig.\[smallw\]. First, the diagram for $\omega=0$ is depicted in Fig.\[smallw\](a), where almost all the points are attracted to the points $(\sigma_{xx},\sigma_{xy})=(0, \pm 2)e^2/h$ as the sample size $L$ is increased, while the point sitting at $\sigma_{xy} = 0$ only exhibits a tiny, upward flow. This numerically calculated dc flow diagram is clearly understood in terms of Pruisken’s two-parameter flow picture [@pruisken-flow; @pruisken-instanton; @khmelnitskii]. We can interpret the static result as follows: the flows starting from $\sigma_{xy} \neq 0$ correspond to those flowing into the stable fixed points at $(\sigma_{xx}, \sigma_{xy})=(0, \pm 2)e^2/h$, which describe the Hall plateaus for graphene, while the tiny upward flow around $\sigma_{xy} = 0$ corresponds to a flow that starts from a point with $\sigma_{xx}$ smaller than $\sigma_{xx}^c$ (rather than the larger $\sigma_{xx}$ expected from SCBA values) and is renormalized toward the unstable fixed point at $(\sigma_{xx}, \sigma_{xy})=(\sigma_{xx}^c, 0)$. So we are seeing the region below the unstable fixed point. This upward flow toward the unstable fixed point should reflect the existence of the delocalized state at the LL center, since it percolates through the sample and has a metallic nature. The longitudinal conductivity then increases with the sample size and converges to the universal conductance at the LL center. This dc result for the graphene $n=0$ LL is consistent with Nomura et al., who discuss the Thouless number and the Hall conductivity for the dc flow diagram.[@nomura2008qhe]
We now turn to the ac result, plotted for a fixed system size $L/\ell=30$ with the frequency varied over $\omega/\omega_c=0.0025 \sim 0.015$ in Fig.\[smallw\](b), where a behavior quite similar to the dc data is found. Namely, almost all the points away from the LL center are attracted to the QHE fixed points, while the point at $\sigma_{xy}= 0$ at the center of the LL shifts only slightly upwards. This behavior can be understood as follows: in this small-frequency regime, the relevant cutoff length scale for the critical behavior of the localization length $\xi$, or for the renormalization equation of the two-parameter flow, is posed by the frequency through the dynamical length scale $L_\omega$ instead of the sample size $L$ as in the dc regime, so that the overall behavior is determined by the same two-parameter flow, where the effective cutoff alone changes systematically with the frequency $\omega$ as $L_\omega \sim \omega^{-1/z}$.
Dynamical scaling analysis
==========================
Now let us describe the dynamical scaling of $\sigma_{xx}(\varepsilon_F,\omega)$ and $\sigma_{xy}(\varepsilon_F,\omega)$. The scaling argument starts from the ansatz that the optical conductivity depends on the Fermi energy $\varepsilon_F$ and the frequency $\omega$ only through the ratios $L/\xi$ and $L_\omega/\xi$. The physical quantities should then be described in terms of universal scaling functions of the ratios $L/\xi$ and $L_\omega/\xi$. Here $\xi$ is the localization length with a critical behavior $\xi \sim 1/|\varepsilon_F - \varepsilon_c|^\nu$, where $\varepsilon_c$ is the critical energy, which coincides with the center of the LL ($\varepsilon_c=0$ for $n=0$), and $\nu$ the localization critical exponent. The dynamical length scale, which is the distance over which an electron travels during one cycle, $1/\omega$, of the ac field, is assumed to behave as $L_\omega \sim 1/\omega^{1/z}$, where $z$ is the dynamical critical exponent; we assume $z= 2$ in this paper since we treat non-interacting electrons [@huckestein-z], while $z=1$ is established for the case with electron-electron interaction [@shklovskii]. For these critical behaviors the dynamical scaling ansatz for the longitudinal and transverse conductivities amounts to [@chalker-scaling-ansatz] $$\begin{aligned}
\sigma_{xx}(\varepsilon_F,\omega,L)=\frac{e^2}{h}
F_{xx}(\delta\varepsilon_F L^{1/\nu},\omega L^z),
\nonumber\\
\sigma_{xy}(\varepsilon_F,\omega,L)=\frac{e^2}{h}
F_{xy}(\delta\varepsilon_F L^{1/\nu},\omega L^z),
\label{scaling-eq}\end{aligned}$$ where $F_{xx},F_{xy}$ are universal scaling functions, and $\delta\varepsilon_F \equiv \varepsilon_F-\varepsilon_c$.
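As a minimal illustration of what the ansatz in eqn.(\[scaling-eq\]) implies operationally — data taken at different $(\varepsilon_F,\omega,L)$ must collapse whenever the two rescaled combinations coincide — the following Python sketch uses a purely illustrative scaling function and assumed exponent values ($\nu\simeq 2.59$, $z=2$); none of it comes from the paper's actual data:

```python
import numpy as np

# Illustrative (not the paper's) scaling function; the ansatz only
# requires that sigma depend on the two rescaled combinations.
def F_xx(x, y):
    return 0.5 * np.exp(-x**2) / (1.0 + 0.1 * y**2)

nu, z = 2.59, 2.0   # nu: assumed localization exponent; z = 2 (non-interacting)

def sigma_xx(eps_F, omega, L, eps_c=0.0):
    """Conductivity in units of e^2/h, built to satisfy the scaling ansatz."""
    return F_xx((eps_F - eps_c) * L**(1.0 / nu), omega * L**z)

# Two data points with identical rescaled variables collapse onto
# the same value -- the content of eqn. (scaling-eq).
s1 = sigma_xx(eps_F=0.02, omega=3.0 / 20**z, L=20)
s2 = sigma_xx(eps_F=0.02 * (20 / 40)**(1.0 / nu), omega=3.0 / 40**z, L=40)
print(abs(s1 - s2) < 1e-12)   # -> True
```

Any data set violating this collapse (as in the large-$\omega$ region of Sec.5) is, by construction, outside the scaling regime.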
This ansatz is valid only in the critical region, where the deviation of the Fermi energy from the LL center, $\delta\varepsilon_F$, is assumed to be small and the frequency $\omega$ is also small. In Sec.3 and Sec.4 we consider precisely this region of small $\delta\varepsilon_F$ and $\omega$, while in Sec.5, where we treat a large-frequency region, the ansatz is no longer applicable. The ansatz indicates that the flow of $(\sigma_{xx}(\omega),\sigma_{xy}(\omega))$ in the ac region depends on the frequency $\omega$ only through the frequency rescaled with the system size, $\omega L^z$.
We now interpret the behavior of the $\sigma_{xx}-\sigma_{xy}$ diagram in terms of the dynamical scaling eqn.(\[scaling-eq\]), as shown in Fig.\[flow-scaling\]. When we consider dynamical scaling behavior, it is convenient to work with the rescaled frequency $\omega L^z$, since the frequency enters eqn.(\[scaling-eq\]) only in the combination $\omega L^z$. In Fig.\[flow-scaling\](a) we show a result for a fixed rescaled frequency $\omega L^z=3$; the flow of ac conductivities can then be discussed in terms of varying $L$, as in the dc case. The dynamical scaling hypothesis (eqn.(\[scaling-eq\])) predicts that $(\sigma_{xx},\sigma_{xy})$ right at $\varepsilon_F=0$, with $\delta\varepsilon_F L^{1/\nu}=0$ for all $L$, should depend only on $\omega L^z$, i.e., should not flow, while at the center of Fig.\[flow-scaling\](a) a slight upward flow is seen, showing the metallic behavior discussed above. Away from $\varepsilon_F=0$, on the other hand, the flow should depend only on $\delta\varepsilon_F L^{1/\nu}$ from eqn.(\[scaling-eq\]) for a fixed value of the rescaled frequency $\omega L^z$, which implies that the flows starting from various $\varepsilon_F \neq 0$ should reside on a single curve, as seen in Fig.\[flow-scaling\](a).
In order to examine the dependence of the two-parameter flow on the rescaled frequency $\omega L^z$, we superpose the results for $\omega L^z=3-6$ in Fig.\[flow-scaling\](b). There, we show flows as the sample size is varied from $L=20$ to $L=40$ for fixed values of $\omega L^z$; for each value of $\omega L^z$ we plot the flows corresponding to various values of the Fermi energy $\varepsilon_F$, with the value of $\varepsilon_F$ indicated by different symbols marking the smallest sample size ($L=20$). In this summary plot we can see that the $(\sigma_{xx},\sigma_{xy})$ flows for different values of $\omega L^z$ tend to coalesce into a single curve in the region away from $\sigma_{xy}=0$. This is a consequence of the same two-parameter flow being seen with different cutoff length scales set by the different rescaled frequencies. Close to the unstable fixed point at $(\sigma_{xx},\sigma_{xy}) = (\sigma_{xx}^c,0)$ the flow shows a metallic behavior (i.e., $\sigma_{xx}$ increasing with $L$), renormalizing into the unstable fixed point with increasing $\sigma_{xx}$, and slightly deviates from a single curve. The role of increasing $\omega L^z$ appears as a shift of the initial position of the flow in $(\sigma_{xx}(\omega), \sigma_{xy}(\omega))$ for each $\varepsilon_F$ toward the direction opposite to the flow.
More precisely, for a larger $\omega L^z$ we have a broader peak structure in $\sigma_{xx}$ and a broader transition width in $\sigma_{xy}$, so that the initial point (smallest-$L$ data) of the flow for each $\varepsilon_F$ shifts closer to the unstable fixed point (the center of the flow at $(\sigma_{xx}, \sigma_{xy}) = (\sigma_{xx}^c, 0)$), which we can observe as a shift of the initial values in Fig.\[flow-scaling\](b). This behavior arises from the fact that the relevant cutoff for the two-parameter flow in the ac regime is determined by the dynamical length scale $L_\omega$: a larger frequency $\omega$ gives a smaller cutoff length scale $L_\omega \sim \omega^{-1/z}$, leading to an overall shift toward the direction opposite to that of the flow (in this case, toward the unstable fixed point) and to a broader width of the plateau-to-plateau transition.
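The cutoff hierarchy invoked here is simple arithmetic. A short sketch (frequencies in units of $\omega_c$, lengths in the corresponding natural unit — both choices illustrative, with $z=2$ as in the text) shows how the dynamical length scale shrinks monotonically as $\omega$ grows:

```python
# L_omega ~ omega^(-1/z): a larger frequency gives a smaller cutoff length.
z = 2.0  # dynamical exponent for non-interacting electrons
for omega in (0.0025, 0.005, 0.01, 0.015):   # in units of omega_c
    L_omega = omega ** (-1.0 / z)
    print(f"omega/omega_c = {omega:.4f}  ->  L_omega ~ {L_omega:.1f}")
```

For the frequency window of Fig.\[smallw\](b) this cutoff stays well above the magnetic length, which is why the same two-parameter flow governs the whole window.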
Flow diagrams for larger $\omega$
=================================
![ (a) $\sigma_{xx}(\epsilon_F=0,\omega)$ plotted for frequency up to $\omega=0.1$, where the data for various system sizes almost coincide with each other for large $\omega$. (b) $\sigma_{xx}(\omega)-\sigma_{xy}(\omega)$ diagram for raw frequency $\omega$ with $L=30$. []{data-label="large-w"}](Fig4.eps){width="0.9\linewidth"}
So far we have discussed the behavior in the small-frequency region, where the dynamical length scale $L_\omega \sim \omega^{-1/z}$ acts as an infrared cutoff for the critical phenomena at the Anderson transition and the dynamical scaling arguments hold quite well. It is worthwhile to ask what the behavior in the larger-frequency region, typically for $\omega$ up to $0.1\omega_c$, looks like in a similar two-parameter $\sigma_{xx}-\sigma_{xy}$ plot. Naturally, in the large-frequency region the ac conductivities are expected to be no longer dominated by the criticality and to show a qualitatively different, rather classical behavior. In this region we should adopt the raw $\omega$ instead of $\omega L^z$, because the system should be out of the critical (i.e., dynamical-scaling) region. In Fig.\[large-w\](a) we look at the large-$\omega$ behavior of the longitudinal conductivity $\sigma_{xx}(\omega,\varepsilon_F=0)$ against the frequency $\omega$, from $\omega = 0.01\omega_c$ to $\omega = 0.1\omega_c$. At large frequency, the ac longitudinal conductivity at the center of the LL shows a monotonic decrease of $\sigma_{xx}(\varepsilon_F=0)$ with $\omega$, consistent with Ref.[@gammel-brenig]. This clearly signals a deviation from the dynamical scaling ansatz, which assumes an $\omega L^z$-dependent conductivity as in eqn.(\[scaling-eq\]).
The $\sigma_{xx}(\omega)-\sigma_{xy}(\omega)$ diagram for the large-$\omega$ region, where the flows are traced by varying the frequency $\omega$ for various values of $\varepsilon_F$, is shown in Fig.\[large-w\](b). The $\omega$-driven flows show a pattern different from the flows in the small-$\omega$ region in the previous section, although they are attracted with decreasing $\omega$ to the static-QHE fixed points $(\sigma_{xx},\sigma_{xy})=(0, \pm 2)e^2/h$, just as the temperature-driven flows are [@aoki1986-T-flow]. This behavior is understood as follows: in the large-frequency region, where the frequency $\omega$ becomes comparable to the cyclotron frequency $\omega_c$, the frequency sets a small length scale comparable to the magnetic length, which naturally induces deviations from the critical region and a pattern of the two conductivities different from that in the small-frequency region. The very existence of flows around $\omega/\omega_c \sim 1$ implies that the system is not fully dominated by a Drude-like behavior, which is consistent with the observation of the ac plateau structure in this frequency region, i.e., $\omega \sim 0.1\omega_c$, which corresponds to the THz region.
Summary
=======
We have numerically obtained the $\sigma_{xx}(\omega)-\sigma_{xy}(\omega)$ diagram for the graphene QHE system and have examined the flow in two regimes. In the small-$\omega$ regime, the flows are governed by the dynamical length scale posed by the frequency, which acts as the relevant cutoff length scale for the criticality around the Anderson transition. We have also discussed a metallic behavior around the unstable fixed point, reflecting the delocalized state at the LL center. The larger-$\omega$ regime exhibits rather classical flows driven by the bare frequency, due to the small dynamical length scale comparable to the magnetic length.
We wish to thank Mikito Koshino, Kentaro Nomura and Akira Furusaki for illuminating discussions. This work has been supported in part by Grants-in-Aid for Scientific Research, Nos.20340098, 23340112 from JSPS. TM has been supported by JSPS.
[36]{}
, , , , , , , , , Nat. Phys. 7, 48 (2011).
K. Nomura and N. Nagaosa, Phys. Rev. Lett. [**106**]{}, 166802 (2011).
[^1]: For each valley, this choice of high-energy cutoff (retaining the $n=-N_{max} \sim N_{max}$ LLs) makes the Hall conductivity coincide with half of the total Hall conductivity contributed by the two valleys (K, K'), for which a cancellation of an ultraviolet divergence occurs [@ostrovsky08]. We can therefore concentrate on one of the decoupled valleys in the numerical calculation, although the well-defined conductivities are the sum of the contributions from the two valleys.
---
abstract: 'A particular initial state for the construction of the perturbative expansion of QCD is investigated. It is formed as a coherent superposition of zero-momentum gluon pairs and shows Lorentz as well as global $SU(3)$ symmetry. It follows that the gluon and ghost propagators determined by it coincide with the ones used in an alternative to the usual perturbation theory proposed in a previous work. Therefore, the ability of such a procedure to produce a finite gluon condensation parameter already in the first orders of perturbation theory is naturally explained. It also follows that this state satisfies the physicality condition of the BRST procedure in its Kugo and Ojima formulation. The BRST quantization is done for the value $\alpha=1$ of the gauge parameter, where the procedure is greatly simplified. Therefore, after assuming that the adiabatic connection of the interaction does not take the state out of the interacting physical space, the predictions of the perturbation expansion at the value $\alpha=1$ for the physical quantities should have meaning. The validity of this conclusion resolves the gauge-dependence indeterminacy remaining in the proposed perturbation expansion.'
author:
- |
[**Marcos Rigol Madrazo**]{}\
[*Centro de Estudios Aplicados al Desarrollo Nuclear*]{}\
[*Calle 30, N. 502 e/ 5ta y 7ma, Miramar, La Habana, Cuba* ]{}\
\
[*Abdus Salam International Centre for Theoretical Physics*]{}\
[*Strada Costiera 11, 34014, Trieste, Italy*]{}\
[*and* ]{}\
[*Instituto de Cibernética Matemática y Física*]{}\
[*Calle E, N. 309, Esq. a 15, Vedado, La Habana, Cuba*]{}\
(Published in Phys. Rev. D62, 074018, 2000)\
(Final Version)
date: 'September, 2001'
title: '[**Modified initial state for perturbative QCD**]{}'
---
Introduction.
=============
Quantum Chromodynamics (QCD) was discovered in the seventies and up to this time is considered the fundamental theory of the strong interactions; as a consequence, it has been deeply investigated [@Yang].
In one limit, the smallness of the coupling constant at high momentum (asymptotic freedom) made possible the theoretical investigation of the so-called hard processes using the familiar perturbative language. This so-called Perturbative QCD (PQCD) was satisfactorily developed. However, relevant phenomena associated with the strong interactions cannot be described by the standard perturbative methods, and the development of non-perturbative QCD is at the moment one of the challenges of this theory.
One of the most peculiar characteristics of the strong interactions is color confinement. According to this philosophy, colored objects, like quarks and gluons, cannot be observed as free particles, in contrast with hadrons, which are colorless composite states and are effectively detected. The physical nature of this phenomenon remains unclear. Numerous attempts to explain this property have been made, for example explicit calculations in which the theory is regularized on a spatial lattice [@Creutz], and also the construction of phenomenological models. In the so-called MIT Bag Model [@Chodos], it is assumed that a bag or bubble is formed around the color-carrying objects in such a way that they cannot escape from it, because their effective mass is small inside the bag volume and very high outside. The so-called String Model [@Gervais] is based on the assumption that the interaction force between quarks and antiquarks grows as the distance increases, in such a way that the energy increases linearly with the string length, $E(L)=kL$.
A fundamental problem in QCD is the nature of the ground state [@Shuryak1; @ShuryakTex; @Shuryak2]. This state is imagined as a very dense state of matter, composed of gluons and quarks interacting in a complicated way. Its properties are not easily accessible in experiments, because quark and gluon fields cannot be directly observed. Furthermore, the interactions between quarks cannot be directly determined.
It is already accepted that in QCD the zero-point oscillations of the coupled modes produce a finite energy density, which is determined phenomenologically. Its numerical estimate is
$$E_{vac} \simeq -f \langle 0\mid (gG)^2\mid 0\rangle \simeq
0.5GeV/fm^3,$$ where the so-called non-perturbative gluon condensate $\langle 0\mid(gG)^2 \mid 0\rangle $ was introduced and phenomenologically evaluated by Shifman, Vainshtein and Zakharov [@Zakharov]. The negative sign of $E_{vac}$ means that the non-perturbative vacuum energy is lower than the one associated with the perturbative vacuum.
For a long time, one particular kind of model has been able to predict similar properties: the chromomagnetic vacuum approaches, in which it is assumed that a vacuum magnetic field exists at every point [@Savv2]. Concretely, a constant abelian magnetic field $H$ is assumed. A one-loop calculation gives the following energy density
$$E\left(H\right) =\frac{H^2}2\left(1+\frac{bg^2}{16\pi ^2} \ln
\left(\frac H{\Lambda ^2}\right) \right).$$
This formula predicts negative values of the energy at small fields, so the usual perturbative ground state with $H=0$ is unstable with respect to the formation of a state with a non-vanishing field intensity at which the energy $E\left(H\right)$ has a minimum [@Savv2]. Many physical problems related to hadron structure, the confinement problem, etc. have been investigated using the Savvidy model. Nevertheless, after some time its intense study was abandoned. The main reasons were: (1) The perturbative relation giving $E_{vac}$ would only be valid if the second order in the expansion in powers of the Planck constant is relatively small. (2) The specific spatial direction and color direction of the magnetic field break the now seemingly indispensable Lorentz and $SU(3)$ invariance of the ground state. (3) The magnetic moment of the vector particle (gluon) is such that its energy in the presence of the field has a negative eigenvalue, which also makes the homogeneous magnetic field $H$ unstable.
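The instability argument above hinges on $E(H)$ having its minimum at $H\neq 0$ with $E<0$ there. Writing $c=bg^2/16\pi^2$, stationarity $dE/dH = H\left(1+\tfrac{c}{2}+c\ln\tfrac{H}{\Lambda^2}\right)=0$ gives $H_{min}=\Lambda^2 e^{-1/c-1/2}$. A minimal numerical sketch, with purely illustrative parameter values (the text fixes none of $b$, $g$, $\Lambda$; $b=11$ is the pure-gauge $SU(3)$ coefficient), cross-checks this:

```python
import numpy as np

# Illustrative parameters (not fixed by the text): b = 11 for pure-gauge
# SU(3); we set g = 1 and the scale Lambda^2 = 1.
b, g, Lam2 = 11.0, 1.0, 1.0
c = b * g**2 / (16 * np.pi**2)

def E(H):
    """One-loop energy density E(H) = (H^2/2) * (1 + c*ln(H/Lambda^2))."""
    return 0.5 * H**2 * (1 + c * np.log(H / Lam2))

# Analytic stationary point from dE/dH = H*(1 + c/2 + c*ln(H/Lam2)) = 0
H_min = Lam2 * np.exp(-1.0 / c - 0.5)

# Numerical cross-check on a log-spaced grid
H = np.logspace(-9, 0, 100001)
H_num = H[np.argmin(E(H))]
print(H_min, H_num, E(H_min) < 0)   # energy at the minimum is negative
```

The minimum sits exponentially far below the cutoff scale, and $E(H_{min})=-\tfrac{c}{4}H_{min}^2<0$, which is the instability of the $H=0$ perturbative vacuum quoted above.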
Before presenting the objectives of the present work, it should be stressed that the perturbative quantization of QCD is realized in the same way as in QED. The quadratic field terms of the QCD Lagrangian have the same form as the ones corresponding to the electrons and photons in QED. In connection with the interaction, however, there appears a substantial difference due to the coupling of the gluon to itself. In addition, it is a general fact that a perturbative expansion has some freedom depending on the initial conditions at $t\rightarrow \pm \infty $, or, what is the same, on the states on which the expansion is based. Moreover, as was expressed before, the exact ground state has a non-trivial structure associated with a gluon condensate.
Given the above remarks, it is then not unreasonable to expect that the true vacuum state could be well described through a modified Feynman expansion perturbatively describing a gluon condensate. Such a perturbative condensate could generate all the low-energy physical observables, which in the standard expansion could require an infinite number of terms of the series.
In a previous work of one of the authors (A.C.) [@Cabo], following the above idea, a modified perturbation theory for QCD was proposed. This expansion retains the main invariances of the theory (the Lorentz and $SU(3)$ ones), and is also able to reproduce the main physical predictions of the chromomagnetic field models. It seems possible to us that this procedure could produce a reasonable, if not good, description of the low-energy physics. If that is the case, then the low- and high-energy descriptions of QCD would be unified in a common perturbative framework. In particular, in [@Cabo] the results had the interesting outcome of producing a non-vanishing mean value for the relevant quantity $\langle G^2\rangle $. In addition, the effective potential for the condensation parameter in the first-order approximation shows a minimum at non-vanishing values of that parameter. Therefore, the procedure is able to reproduce at least some central predictions of the chromomagnetic models and of general QCD analyses.
The main objective of the present work consists in investigating the foundations of the mentioned perturbation theory. The concrete aim is to find a state in the Fock space of the non interacting theory being able to generate that expansion by also satisfying the physicality condition of the BRST quantization approach.
It follows that it is possible to find the sought-after state, and it turns out to be an exponential of products of pairs of gluon and ghost creation operators. That is, it can be interpreted as a coherent superposition of states with many zero-momentum gluon and ghost pairs. Therefore, this structure explains the ability of the expansion under investigation to produce non-zero values of the gluon condensate in the first orders of perturbation theory. The fact that the effective action also shows a minimum for non-vanishing values of the condensation parameter further supports the idea that the considered state improves the perturbative expansion. It is also shown that the state satisfies the linear condition defining the physical subspace in the BRST quantization for the value $\alpha =1$ of the gauge parameter. Thus, the indefiniteness about the appropriate value of this parameter, which remained in the former work, is resolved, opening the way for the study of the predictions of the proposed expansion.
It should be mentioned that an idea similar to the one advanced in [@Cabo] was afterwards considered in [@hoyer]. In that work, gluon and quark condensation in a range of momenta of the order of $\Lambda _{qcd}$ was considered, with the similar aim of constructing an alternative perturbation theory for QCD. However, in our view, that construction should break Lorentz invariance, because the condensates are expected to show a non-vanishing energy density. If this limitation is not at work, their proposed mechanism could be worth considering as an alternative possibility.
The exposition is organized as follows. In Section 2, the BRST operator quantization method for gauge fields developed by Kugo and Ojima is reviewed. Starting from it, in Section 3 the ansatz for the Fock-space state which generates the desired form of the perturbative expansion is introduced; the proof that the state satisfies the physical-state condition is also given in that section. Then, in Section 4 it is shown that the proposed state can generate the wanted modification of the propagator by a proper selection of the parameters at hand; the modification of the propagator for the ghost particles is also considered there. Finally, the evaluation of the gluon condensation parameter done in the previous work [@Cabo] is reviewed in order to illustrate the ability of the procedure to predict a main property of the real QCD ground state.
Review of the K-O Quantization procedure
========================================
In the present section the operator formalism developed by T. Kugo and I. Ojima [@Kugo1; @Kugo2; @Kugo3; @Kugo4] is reviewed and afterwards specialized to the non-interacting limit of gluodynamics (GD). This formulation takes into account the invariance of the Lagrangian under a global symmetry operation called the BRST transformation [@BRST]. We will consider the construction of a relativistically invariant initial state in the non-interacting limit of QCD. The BRST physical-state condition in the non-interacting limit will also be imposed. As explained before, the motivation is that we think this state has the possibility of furnishing the gluodynamics ground state under the adiabatic connection of the interaction. In what follows we work in Minkowski space with the conventions defined below.
Let $G$ be a compact group and $\Lambda$ any matrix in the adjoint representation of its associated Lie algebra. The matrix $\Lambda$ can be represented as a linear combination of the form
$$\Lambda =\Lambda ^aT^a,$$
where the $T^a$ $(a=1,\ldots,\mathrm{Dim}\,G=n)$ are the generators, which are chosen Hermitian and satisfy
$$\left[ T^a,T^b\right] =if^{abc}T^c.$$
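These commutation relations can be checked explicitly for $SU(3)$ with the standard Gell-Mann matrices (not given in the text; $T^a=\lambda^a/2$, normalized so that $\mathrm{Tr}\,T^aT^b=\delta^{ab}/2$), extracting the structure constants as $f^{abc}=-2i\,\mathrm{Tr}\left([T^a,T^b]\,T^c\right)$:

```python
import numpy as np

# Gell-Mann matrices; generators T^a = lambda^a / 2, Tr(T^a T^b) = delta^{ab}/2.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0,1] = l[0][1,0] = 1
l[1][0,1], l[1][1,0] = -1j, 1j
l[2][0,0], l[2][1,1] = 1, -1
l[3][0,2] = l[3][2,0] = 1
l[4][0,2], l[4][2,0] = -1j, 1j
l[5][1,2] = l[5][2,1] = 1
l[6][1,2], l[6][2,1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

def f(a, b, c):
    """Structure constant f^{abc} = -2i Tr([T^a, T^b] T^c)."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (-2j * np.trace(comm @ T[c])).real

print(f(0, 1, 2))   # f^{123} = 1 (zero-based indices)
```

The same routine reproduces the other standard values, e.g. $f^{147}=1/2$, and its total antisymmetry in the three indices.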
The variations of the fields under infinitesimal gauge transformations are given by
$$\begin{aligned}
\delta _\Lambda A_\mu ^a\left(x\right) &=&\partial _\mu \Lambda
^a\left(x\right) +gf^{acb}A_\mu ^c\left(x\right) \Lambda
^b\left(x\right) =D_\mu ^{ab}\left(x\right) \Lambda ^b, \\ D_\mu
^{ab}\left(x\right) &=&\partial _\mu \delta ^{ab}+gf^{acb}A_\mu
^c\left(x\right).\end{aligned}$$
The metric $g_{\mu \nu }$ is diagonal and taken in the convention
$$g_{00}=-g_{ii}=1 \qquad \text{for}\ \ \, i=1,2,3.$$
The complete GD Lagrangian to be considered is the one employed in the operator quantization approach of [@OjimaTex]. Its explicit form is given by
$$\begin{aligned}
{\cal L} &=&{\cal L}_{YM}+{\cal L}_{GF}+{\cal L}_{FP} \label{Lag}
\\ {\cal L}_{YM} &=&-\frac 14F_{\mu \nu }^a\left(x\right) F^{\mu
\nu,a}\left(x\right), \label{YM} \\ {\cal L}_{GF} &=&-\partial
^\mu B^a\left(x\right) A_\mu ^a\left(x\right) + \frac \alpha
2B^a\left(x\right) B^a\left(x\right), \label{GF} \\ {\cal L}_{FP}
&=&-i\partial ^\mu \overline{c}^a\left(x\right) D_\mu
^{ab}\left(x\right) c^b\left(x\right). \label{FP}\end{aligned}$$
The field strength is
$$F_{\mu \nu }^a\left(x\right) =\partial _\mu A_\nu ^a\left(x\right)
-\partial _\nu A_\mu ^a\left(x\right) +gf^{abc}A_\mu
^b\left(x\right) A_\nu ^c\left(x\right) .$$
Relation (\[YM\]) defines the Yang-Mills standard Lagrangian, (\[GF\]) is the gauge fixing term and (\[FP\]) is the Lagrangian which describes the dynamics of the nonphysical Faddeev-Popov ghost fields.
The physical state conditions in the BRST procedure [@OjimaTex; @Govaerts] are given by
$$\begin{aligned}
&&Q_B\mid \Phi \rangle =0, \\
&&Q_C\mid \Phi \rangle =0.\end{aligned}$$
where
$$\begin{aligned}
&&Q_B=\int d^3x\left[ B^a\left(x\right) \nabla _0
c^a\left(x\right) +gB^a\left(x\right) f^{abc}A_0^b\left(x\right)
c^c\left(x\right) \right. \nonumber \\ &&\hspace{6cm}\left. +\frac
i2g\partial _0\left(\overline{c}^a\right) f^{abc}c^b\left(x\right)
c^c\left(x\right) \right],\end{aligned}$$
with
$$f\left(x\right) \nabla_0 g\left(x\right) \equiv f\left(x\right)
\partial _0g\left(x\right) -\partial _0\left(f\left(x\right)
\right) g\left(x\right).$$
The BRST charge is conserved as a consequence of the BRST symmetry of Lagrangian (\[Lag\]). The charge $Q_C$, also conserved, is given by
$$Q_C=i\int d^3x\left[ \overline{c}^a\left(x\right) \nabla_0 c^a
\left(x\right) +g\overline{c}^a\left(x\right)
f^{abc}A_0^b\left(x\right) c^c\left(x\right) \right],$$
whose conservation follows from the Noether theorem, due to the invariance of Lagrangian (\[Lag\]) under the phase transformation $ c\rightarrow e^\theta c,\ \overline{c}\rightarrow
e^{-\theta }\overline{c}$. This charge defines the so-called “ghost number” as the difference between the number of $c$ and $\overline{c}$ ghosts.
Our interest here is centered on the Yang-Mills theory without spontaneous breaking of the gauge symmetry, in the limit of no interaction. The quantization of the theory defined by the Lagrangian (\[Lag\]), considered in the interaction-free limit $g\rightarrow 0$, leads to the following commutation relations for the free fields
$$\begin{aligned}
\left[ A_\mu ^a\left(x\right),A_\nu ^b\left(y\right) \right]
&=&\delta ^{ab}\left(-ig_{\mu \nu }D\left(x-y\right)
+i\left(1-\alpha \right)
\partial _\mu \partial _\nu E\left(x-y\right) \right), \nonumber \\
\left[ A_\mu ^a\left(x\right),B^b\left(y\right) \right] &=&\delta
^{ab}\left(-i\partial _\mu D\left(x-y\right) \right), \nonumber
\\ \left[ B^a\left(x\right) ,B^b\left(y\right) \right]
&=&\left\{ \overline{c}
^a\left(x\right),\overline{c}^b\left(y\right) \right\} =\left\{
c^a\left(x\right),c^b\left(y\right) \right\} =0, \nonumber \\
\left\{ c^a\left(x\right),\overline{c}^b\left(y\right) \right\}
&=&-D\left(x-y\right). \label{com}\end{aligned}$$
The equations of motion for the non-interacting fields take the simple form
$$\Box A_\mu ^a\left(x\right) -\left(1-\alpha \right) \partial _\mu
B^a\left(x\right) =0,$$
$$\partial ^\mu A_\mu ^a\left(x\right) +\alpha B^a\left(x\right)
=0,
\label{liga1}$$
$$\Box B^a\left(x\right) =\Box c^a\left(x\right) =\Box
\overline{c}^a\left(x\right) =0.$$
These equations can be solved for arbitrary values of $\alpha$. However, as said before, the discussion will be restricted to the case $\alpha=1$, which corresponds to the situation in which all the gluon components satisfy the D'Alembert equation. This selection, within the framework of the usual perturbative expansion, implies that one is not able to check the $\alpha $ independence of the physical quantities. However, in the present discussion the aim is to construct a perturbative state that satisfies the BRST physicality condition, in order to connect the interaction adiabatically. Then the physical character of all the predictions will follow whenever the former assumption, that the adiabatic connection does not take the state out of the physical subspace at any intermediate stage, is valid. Clearly, the consideration of different values of $\alpha $ would also be a convenient resource for checking the $\alpha $ independence of the perturbative expansion. However, at this stage it is preferred to delay the more technical issue of implementing the BRST quantization for arbitrary $\alpha$ to future work.
In that way the field equations in the $\alpha =1$ gauge will be
$$\Box A_\mu ^a\left(x\right) =0, \label{movi1}$$
$$\partial ^\mu A_\mu ^a\left(x\right)+ B^a\left(x\right) =0, \label{movi3}$$
$$\Box B^a\left(x\right) =\Box c^a\left(x\right) =\Box
\overline{c}^a\left(x\right) =0. \label{movi2}$$
The solutions of the set (\[movi1\])-(\[movi2\]) can be written as
$$\begin{aligned}
A_\mu ^a\left(x\right) &=&\sum\limits_{\vec{k},\sigma
}\left(A_{\vec{k} ,\sigma }^af_{k,\mu }^\sigma \left(x\right)
+A_{\vec{k},\sigma }^{a+}f_{k,\mu }^{\sigma *}\left(x\right)
\right), \nonumber \\ B^a\left(x\right)
&=&\sum\limits_{\vec{k}}\left(B_{\vec{k}}^ag_k\left(x\right)
+B_{\vec{k}}^{a+}g_k^{*}\left(x\right) \right), \nonumber \\
c^a\left(x\right)
&=&\sum\limits_{\vec{k}}\left(c_{\vec{k}}^ag_k\left(x\right)
+c_{\vec{k}}^{a+}g_k^{*}\left(x\right) \right), \nonumber \\
\overline{c}^a\left(x\right)
&=&\sum\limits_{\vec{k}}\left(\overline{c}_{
\vec{k}}^ag_k\left(x\right)
+\overline{c}_{\vec{k}}^{a+}g_k^{*}\left(x\right) \right).\end{aligned}$$
The wave packets for non-massive scalar and vector fields are taken as
$$\begin{aligned}
g_k\left(x\right) &=&\frac 1{\sqrt{2Vk_0}}\exp \left(-ikx\right) ,
\nonumber \\ f_{k,\sigma }^\mu \left(x\right) &=&\frac
1{\sqrt{2Vk_0}}\epsilon _\sigma ^\mu \left(k\right) \exp
\left(-ikx\right). \label{pol}\end{aligned}$$
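As a quick consistency check of these wave packets (normalization factor and polarization vector omitted; metric $g_{00}=-g_{ii}=1$ as above), one can verify symbolically that the plane wave solves the D'Alembert equation once the massless dispersion $k_0=|\vec{k}|$ is imposed:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
k0 = sp.sqrt(k1**2 + k2**2 + k3**2)          # massless dispersion, k^2 = 0

# kx = k0*t - k.x with metric (+,-,-,-); plane wave exp(-i k x)
g = sp.exp(-sp.I * (k0*t - k1*x - k2*y - k3*z))

# Box g = (d^2/dt^2 - nabla^2) g
box_g = sp.diff(g, t, 2) - sp.diff(g, x, 2) - sp.diff(g, y, 2) - sp.diff(g, z, 2)
print(sp.simplify(box_g))   # -> 0
```

The same check applies verbatim to the vector packets $f_{k,\sigma}^\mu$, since the constant polarization vector factors out of $\Box$.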
As can be seen from (\[movi3\]), the five modes $A_{\vec{k},\sigma }^a$ and $B_{\vec{k}}^a$ are not all independent. Indeed, it follows from (\[movi3\]) that
$$B_{\vec{k}}^a=A_{\vec{k}}^{S,a}=A_{\vec{k},L}^a.$$
Then, the expansion of the free Heisenberg fields takes the form
$$A_\mu ^a\left(x\right)
=\sum\limits_{\vec{k}}\left(\sum\limits_{\sigma
=1,2}A_{\vec{k},\sigma }^af_{k,\mu }^\sigma \left(x\right)
+A_{\vec{k} }^{L,a}f_{k,L,\mu }\left(x\right)
+B_{\vec{k}}^af_{k,S,\mu }\left(x\right) \right)+h.c.,$$
where $h.c.$ represents the Hermitian conjugate of the first term. In order to satisfy the commutation relations (\[com\]), the creation and annihilation operators associated with the Fourier components of the fields should obey
$$\begin{aligned}
\left[ A_{\vec{k},\sigma }^a,A_{\vec{k}^{\prime },\sigma ^{\prime
}}^{a^{\prime }+}\right] &=&-\delta ^{aa^{\prime }}\delta
_{\vec{k}\vec{k} ^{\prime }}\tilde{\eta}_{\sigma \sigma ^{\prime
}}, \nonumber \\ \left\{
c_{\vec{k}}^a,\overline{c}_{\vec{k}^{\prime }}^{a^{\prime
}+}\right\} &=&i\delta ^{aa^{\prime }}\delta
_{\vec{k}\vec{k}^{\prime }}, \nonumber \\ \left\{
\overline{c}_{\vec{k}}^a,c_{\vec{k}^{\prime }}^{a^{\prime
}+}\right\} &=&-i\delta ^{aa^{\prime }}\delta
_{\vec{k}\vec{k}^{\prime }}\end{aligned}$$
and all the others vanish, $\tilde{\eta}_{\sigma \sigma ^{\prime
}}=\epsilon _\sigma ^\mu \left(k\right) \epsilon _{\sigma ^{\prime
},\mu }^{*}\left(k\right) $, for $\sigma,\sigma ^{\prime
}=1,2,L,S$.
The above commutation rules and equations of motion define the quantized non-interacting limit of GD. It is now possible to start defining the alternative interaction-free ground state to be considered for the adiabatic connection of the interaction. As discussed before, the expectation is that the perturbation theory being investigated is able to furnish a helpful description of low-energy physical effects.
The alternative initial state
=============================
After beginning to work in the K.O. formalism, indications were found that the appropriate state vector obeying the physical-state condition in this procedure could have the general structure
$$|\phi \rangle =\exp \sum\limits_a\left(C_1\left(\left|
\vec{p}\right| \right)
A_{\vec{p},1}^{a+}A_{\vec{p},1}^{a+}+C_2\left(\left|
\vec{p}\right| \right)
A_{\vec{p},2}^{a+}A_{\vec{p},2}^{a+}+C_3\left(\left|
\vec{p}\right| \right)
\left(B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}+i\overline{c}_{\vec{p}
}^{a+}c_{\vec{p}}^{a+}\right) \right) \mid 0\rangle \label{Vacuum}$$
where $\vec{p}$ is an auxiliary momentum, chosen as one of the few smallest values of the spatial momentum for the quantized theory in a finite volume $V$. This value will later be taken in the limit $V\rightarrow \infty $ for recovering Lorentz invariance. From here on, the sum over the color index $a$ will be explicit. The parameters $C_i\left(\left| \vec{p}\right| \right)$ will be fixed below from the condition that the free propagator associated with a state satisfying the BRST physical-state condition coincides with the one proposed in the previous work [@Cabo]. The solution of this problem would then give foundation to the physical implications of the discussion in that work.
It should also be noticed that the state defined by (\[Vacuum\]) has some similarity with coherent states [@Itzykson]. However, in the present case the creation operators appear in squares; thus, the argument of the exponential creates pairs of physical and non-physical particles. An important property of this construction in terms of pairs of creation operators is that the mean value of an odd number of field operators vanishes. This is at variance with the standard coherent state, for which the mean values of the fields are non-zero. The vanishing of the mean field is a property shared with the standard perturbative vacuum, whose Lorentz invariance would be broken by any non-zero expectation value of the 4-vector gauge field. It should also be stressed that this state, formed by a superposition of states of gluon pairs, suggests a connection with some recent works in the literature that consider the formation of gluon pairs due to color interactions.
Let us argue below that the state (\[Vacuum\]) satisfies the BRST physical state conditions
$$\begin{aligned}
&&Q_B\mid \Phi \rangle =0, \\
&&Q_C\mid \Phi \rangle =0.\end{aligned}$$
The expressions for the charges in the interaction-free limit [@OjimaTex] are
$$\begin{aligned}
Q_B
&=&i\sum\limits_{\vec{k},a}\left(c_{\vec{k}}^{a+}B_{\vec{k}}^a-B_{\vec{
k }}^{a+\ }c_{\vec{k}}^a\right), \\ Q_C
&=&i\sum\limits_{\vec{k},a}\left(\overline{c}_{\vec{k}}^{a+}c_{\vec{k}
}^a+c_{\vec{k}}^{a+}\overline{c}_{\vec{k}}^a\right).\end{aligned}$$
Consider first the action of $Q_B$:
[ $$\begin{aligned}
&&Q_B\mid \Phi \rangle =i\exp \left\{ \sum\limits_{\sigma
,a}C_\sigma \left(\left| \vec{p}\right| \right) A_{\vec{p},\sigma
}^{a+}A_{\vec{p},\sigma }^{a+}\right\} \times \nonumber \\
&&\times \left(\exp \left\{ \sum\limits_aC_3\left(\left|
\vec{p}\right| \right)
i\overline{c}_{\vec{p}}^{a+}c_{\vec{p}}^{a+}\right\} \sum\limits_{
\vec{k},b}c_{\vec{k}}^{b+}B_{\vec{k}}^b\exp \left\{
\sum\limits_aC_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\} \right. \\ &&-\left.
\exp \left\{ \sum\limits_aC_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\}
\sum\limits_{\vec{k},b}B_{\vec{k} }^{b+}c_{\vec{k}}^b\exp \left\{
\sum\limits_aC_3\left(\left| \vec{p}\right| \right)
i\overline{c}_{\vec{p}}^{a+}c_{\vec{p}}^{a+}\right\} \right) \mid
0\rangle =0, \nonumber\end{aligned}$$ ]{}
in which the following identity was used
$$\left[ B_{\vec{k}}^b,\exp \sum\limits_aC_3\left(\left|
\vec{p}\right| \right) B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right]
=-C_3\left(\left| \vec{p} \right| \right) B_{\vec{p}}^{b+}\delta
_{\vec{k},\vec{p}}\exp \sum\limits_aC_3\left(\left| \vec{p}\right|
\right) B_{\vec{p}}^{a+}A_{\vec{ p}}^{L,a+}. \label{ident1}$$
For the action of $Q_C$ on the considered state we have
[ $$\begin{aligned}
&&Q_C\mid \Phi \rangle =i\exp \left\{ \sum\limits_{\sigma
,a}C_\sigma \left(\left| \vec{p}\right| \right) A_{\vec{p},\sigma
}^{a+}A_{\vec{p},\sigma }^{a+}+\sum\limits_aC_3\left(\left|
\vec{p}\right| \right) B_{\vec{p} }^{a+}A_{\vec{p}}^{L,a+}\right\}
\\ &&\times \left[ \sum\limits_{\vec{k},b}
\overline{c}_{\vec{k}}^{b+}c_{\vec{k}
}^b\left(1+\sum\limits_aiC_3\left(\left| \vec{p}\right| \right)
\overline{ c }_{\vec{p}}^{a+}c_{\vec{p}}^{a+}\right)
+\sum\limits_{\vec{k},b}c_{\vec{k}
}^{b+}\overline{c}_{\vec{k}}^b\left(1+\sum\limits_aiC_3\left(\left|
\vec{p} \right| \right)
\overline{c}_{\vec{p}}^{a+}c_{\vec{p}}^{a+}\right) \right] \mid
0\rangle =0 \nonumber\end{aligned}$$ ]{}
which vanishes due to the commutation rules of the ghost operators.
Next, the evaluation of the norm of the proposed state is considered. Due to the commutation properties of the operators, it can be written as
[ $$\begin{aligned}
\langle \Phi \mid \Phi \rangle =\prod\limits_{a=1,..,8} &\left\{
\prod\limits_{\sigma =1,2}\langle 0\mid \exp \left\{ C_\sigma
^{*}\left(\left| \vec{p}\right| \right) A_{\vec{p},\sigma
}^aA_{\vec{p},\sigma }^a\right\} \exp \left\{ C_\sigma
\left(\left| \vec{p}\right| \right) A_{ \vec{p},\sigma
}^{a+}A_{\vec{p},\sigma }^{a+}\right\} \mid 0\rangle \right. &
\nonumber \\ &\times \langle 0\mid \exp \left\{
C_3^{*}\left(\left| \vec{p}\right| \right)
A_{\vec{p}}^{L,a}B_{\vec{p}}^a\right\} \exp \left\{
C_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\} \mid 0\rangle &
\nonumber \\ &\left. \times \langle 0\mid
\left(1-iC_3^{*}\left(\left| \vec{p}\right| \right)
c_{\vec{p}}^a\overline{c}_{\vec{p}}^a\right)
\left(1+iC_3\left(\left| \vec{p}\right| \right)
\overline{c}_{\vec{p}}^{a+}c_{\vec{p} }^{a+}\right) \mid 0\rangle
\right\}.&\end{aligned}$$ ]{}
For the product of the factors associated with the transverse modes and the eight values of the color index, expanding the exponentials in series gives
$$\begin{aligned}
&&\left[ \langle 0\mid \exp \left\{ C_\sigma ^{*}\left(\left|
\vec{p} \right| \right) A_{\vec{p},\sigma }^aA_{\vec{p},\sigma
}^a\right\} \exp \left\{ C_\sigma \left(\left| \vec{p}\right|
\right) A_{\vec{p},\sigma }^{a+}A_{\vec{p},\sigma }^{a+}\right\}
\mid 0\rangle \right] ^8 \nonumber \\ &&=\left[ \langle 0\mid
\sum\limits_{m=0}^\infty \left| C_\sigma \left(\left|
\vec{p}\right| \right) \right| ^{2m}\frac{\left(A_{\vec{p},\sigma
}^a\right) ^{2m}\left(A_{\vec{p},\sigma }^{a+}\right)
^{2m}}{\left(m!\right) ^2}\mid 0\rangle \right] ^8 \nonumber \\
&&=\left[ \sum\limits_{m=0}^\infty \left| C_\sigma \left(\left|
\vec{p} \right| \right) \right| ^{2m}\frac{\left(2m\right)
!}{\left(m!\right) ^2} \right] ^8, \label{normT}\end{aligned}$$
where we used the identity
$$\langle 0\mid \left(A_{\vec{p},\sigma }^a\right)
^{2m}\left(A_{\vec{p} ,\sigma }^{a+}\right) ^{2m}\mid 0\rangle
=\left(2m\right) !.$$
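This identity can be checked numerically by letting a single bosonic mode act on Fock states stored as `{occupation: coefficient}` maps. This is a minimal sketch; the helper names below are ours, not part of the paper's formalism:

```python
import math

def apply_adag(state):
    """a^dag |n> = sqrt(n+1) |n+1> on a state {n: coeff}."""
    return {n + 1: c * math.sqrt(n + 1) for n, c in state.items()}

def apply_a(state):
    """a |n> = sqrt(n) |n-1>."""
    return {n - 1: c * math.sqrt(n) for n, c in state.items() if n > 0}

def amplitude(m):
    """<0| a^(2m) (a^dag)^(2m) |0>."""
    state = {0: 1.0}
    for _ in range(2 * m):
        state = apply_adag(state)
    for _ in range(2 * m):
        state = apply_a(state)
    return state.get(0, 0.0)

# <0| a^(2m) (a^dag)^(2m) |0> = (2m)! for the first few m
for m in range(1, 6):
    assert abs(amplitude(m) - math.factorial(2 * m)) < 1e-6 * math.factorial(2 * m)
```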
The factors related with the scalar and longitudinal modes can be transformed as follows
$$\begin{aligned}
&&\left[ \langle 0\mid \exp \left\{ C_3^{*}\left(\left|
\vec{p}\right| \right)
A_{\vec{p}}^{L,a}B_{\vec{p}}^a\right\} \exp \left\{
C_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\} \mid 0\rangle \right]
^8 \nonumber \\ &&=\left[ \langle 0\mid \sum\limits_{m=0}^\infty
\left| C_3\left(\left| \vec{p}\right| \right) \right|
^{2m}\frac{\left(A_{\vec{p}}^{L,a}B_{\vec{p} }^a\right)
^m\left(B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right)
^m}{\left(m!\right) ^2}\mid 0\rangle \right] ^8 \nonumber \\
&&=\left[ \sum\limits_{m=0}^\infty \left| C_3\left(\left|
\vec{p}\right| \right) \right| ^{2m}\right] ^8=\left[ \frac
1{\left(1-\left| C_3\left(\left| \vec{p}\right| \right) \right|
^2\right) }\right] ^8\qquad \text{for} \quad \left|
C_3\left(\left| \vec{p}\right| \right) \right| <1, \label{normLS}\end{aligned}$$
in which we considered the relation
$$\langle 0\mid \left(A_{\vec{p}}^{L,a}B_{\vec{p}}^a\right)
^m\left(B_{\vec{ p }}^{a+}A_{\vec{p}}^{L,a+}\right) ^m\mid
0\rangle =\left(m!\right) ^2.$$
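This relation, and the geometric series it produces in (\[normLS\]), can be checked with a small two-mode Fock-space computation (a sketch; the helper names are ours):

```python
import math

def pair_dag(state):
    """B^dag A^dag acting on a two-mode state {(nA, nB): coeff}."""
    return {(na + 1, nb + 1): c * math.sqrt((na + 1) * (nb + 1))
            for (na, nb), c in state.items()}

def pair(state):
    """A B (annihilation pair)."""
    return {(na - 1, nb - 1): c * math.sqrt(na * nb)
            for (na, nb), c in state.items() if na > 0 and nb > 0}

def norm_term(m):
    """<0| (A^L B)^m (B^dag A^{L,dag})^m |0>."""
    state = {(0, 0): 1.0}
    for _ in range(m):
        state = pair_dag(state)
    for _ in range(m):
        state = pair(state)
    return state.get((0, 0), 0.0)

for m in range(1, 6):
    assert abs(norm_term(m) - math.factorial(m) ** 2) < 1e-6 * math.factorial(m) ** 2

# With these matrix elements the (m!)^2 factors cancel and the norm factor
# collapses to the geometric series sum_m |C3|^(2m) = 1/(1 - |C3|^2):
c3 = 0.4
assert abs(sum(c3 ** (2 * m) for m in range(200)) - 1 / (1 - c3 ** 2)) < 1e-12
```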
Finally the factor connected with the ghost fields can be evaluated to be
$$\begin{aligned}
&&\left[ \langle 0\mid \left(1-iC_3^{*}\left(\left| \vec{p}\right|
\right) c_{\vec{p}}^a\overline{c}_{\vec{p}}^a\right)
\left(1+iC_3\left(\left| \vec{ p}\right| \right)
\overline{c}_{\vec{p}}^{a+}c_{\vec{p}}^{a+}\right) \mid 0\rangle
\right] ^8 \nonumber \\ &&=\left[ 1+\left| C_3\left(\left|
\vec{p}\right| \right) \right| ^2\langle 0\mid
c_{\vec{p}}^a\overline{c}_{\vec{p}}^a\overline{c}_{\vec{p}}^{a+}c_{
\vec{p}}^{a+}\mid 0\rangle \right] =\left[ 1-\left|
C_3\left(\left| \vec{p} \right| \right) \right| ^2\right] ^8.
\label{normG}\end{aligned}$$
After substituting all the calculated factors, the norm of the state can be written as
$$N=\langle \Phi \mid \Phi \rangle =\prod\limits_{\sigma =1,2}\left[
\sum\limits_{m=0}^\infty \left| C_\sigma \left(\left|
\vec{p}\right| \right) \right| ^{2m}\frac{\left(2m\right)
!}{\left(m!\right) ^2}\right] ^8.$$
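Each bracketed series is the central-binomial series $\sum_m \binom{2m}{m}\left| C_\sigma \right| ^{2m}=\left(1-4\left| C_\sigma \right| ^2\right) ^{-1/2}$, finite precisely when $\left| C_\sigma \right| <\frac 12$, consistent with the bound imposed on $C_\sigma $ later on. A quick numerical check (sketch; the helper name is ours):

```python
import math

def transverse_factor(c, terms=300):
    """Partial sum of sum_m |C|^(2m) (2m)!/(m!)^2 appearing in the norm."""
    return sum(c ** (2 * m) * math.comb(2 * m, m) for m in range(terms))

# Central-binomial generating function: the series sums to 1/sqrt(1 - 4|C|^2),
# so each transverse factor (and hence the norm) is finite only for |C| < 1/2.
for c in (0.1, 0.3, 0.45):
    assert abs(transverse_factor(c) - 1 / math.sqrt(1 - 4 * c * c)) < 1e-6
```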
Therefore, it is possible to define the normalized state
$$\mid \widetilde{\Phi }\rangle =\frac 1{\sqrt{N}}\mid \Phi \rangle,$$
$$\langle \widetilde{\Phi }\mid \widetilde{\Phi }\rangle =1.$$
Note that, as should be expected, the norm does not depend on the $C_3\left(\left| \vec{p}\right| \right) $ parameter, which defines the non-physical particle operators entering the definition of the state.
Gluon and Ghost modified propagators.
=====================================
Let us determine the form of the main elements of perturbation theory, that is, the free particle propagators. It will be seen that the propagators associated with the considered state have the same form as proposed in \[11\] under a proper selection of the parameters. Consider the generating functional of the free particle Green’s functions, given by
$$Z\left(J\right) \equiv \langle \widetilde{\Phi }\mid T\left(\exp
\left\{ i\int d^4x \sum \limits_{a=1,..,8} J^{\mu,a}\left(x\right)
A_\mu ^{a}\left(x\right) \right\} \right) \mid \widetilde{\Phi
}\rangle.$$
As a consequence of Wick’s theorem, the generating functional can be written in the form [@Gasiorowicz]
[ $$\begin{aligned}
Z\left(J\right) &\equiv &\langle \widetilde{\Phi }\mid \exp
\left\{ i\int d^4x\sum \limits_{a=1,..,8}J^{\mu,a}\left(x\right)
A_\mu ^{a-}\left(x\right) \right\} \exp \left\{ i\int d^4x\sum
\limits_{a=1,..,8}J^{\mu,a}\left(x\right) A_\mu ^{a+} \left(x\right) \right\}
\mid \widetilde{\Phi }\rangle \nonumber \\
&&\times\exp \left\{ \frac i2\sum\limits_{a,b=1,..8}\int d^4xd^4y
J^{\mu ,a}\left(x\right) D_{\mu \nu
}^{ab}(x-y)J^{\nu,b}\left(y\right) \right\},\end{aligned}$$ ]{}where $D_{\mu \nu }^{ab}(x-y)$ is the usual gluon propagator.
Therefore, the sought-for modification to the free propagator is completely determined by the term
$$\prod\limits_{a=1,..,8}\langle \widetilde{\Phi }\mid \exp \left\{
i\int d^4xJ^{\mu,a}\left(x\right) A_\mu ^{a-}\left(x\right)
\right\} \exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right) A_\mu
^{a+} \left(x\right) \right\} \mid \widetilde{\Phi }\rangle,
\label{Mod}$$
where all the color dependent operators are decoupled thanks to the commutation relations.
The annihilation and creation parts of the field operators in this expression are given by
$$\begin{aligned}
A_\mu ^{a+}\left(x\right)
&=&\sum\limits_{\vec{k}}\left(\sum\limits_{\sigma
=1,2}A_{\vec{k},\sigma }^af_{k,\mu }^\sigma \left(x\right)
+A_{\vec{k}}^{L,a}f_{k,L,\mu }\left(x\right) +B_{\vec{k}
}^af_{k,S,\mu }\left(x\right) \right), \\ A_\mu
^{a-}\left(x\right)
&=&\sum\limits_{\vec{k}}\left(\sum\limits_{\sigma
=1,2}A_{\vec{k},\sigma }^{a+}f_{k,\mu }^{\sigma *}\left(x\right)
+A_{\vec{k}}^{L,a+}f_{k,L,\mu }^{*}\left(x\right) +B_{\vec{k}
}^{a+}f_{k,S,\mu }^{*}\left(x\right) \right).\end{aligned}$$
For each color the following terms should be calculated
[ $$\begin{aligned}
&\exp &\left\{ i\int d^4xJ^{\mu,a}\left(x\right) A_\mu
^{a+}\left(x\right) \right\} \mid \Phi \rangle \label{expd} \\
&=&\exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right)
\sum\limits_{\vec{k} }\left(\sum\limits_{\sigma
=1,2}A_{\vec{k},\sigma }^af_{k,\mu }^\sigma \left(x\right)
+A_{\vec{k}}^{L,a}f_{k,L,\mu }\left(x\right) +B_{\vec{k}
}^af_{k,S,\mu }\left(x\right) \right) \right\} \nonumber \\
&&\times \exp \left\{ C_1\left(\left| \vec{p}\right| \right)
A_{\vec{p} ,1}^{a+}A_{\vec{p},1}^{a+}+C_2\left(\left|
\vec{p}\right| \right) A_{\vec{p}
,2}^{a+}A_{\vec{p},2}^{a+}+C_3\left(\left| \vec{p}\right| \right)
\left(B_{ \vec{p}}^{a+}A_{\vec{p}}^{L,a+}+
i\overline{c}_{\vec{p}}^{a+}c_{\vec{p} }^{a+}\right) \right\} \mid
0\rangle. \nonumber\end{aligned}$$ ]{}
After a systematic use of the commutation relations among the annihilation and creation operators, the exponential operators can be decomposed into products of exponentials for each space-time mode. This allows the calculation to be performed for each kind of wave independently. Considering first the transverse components, one obtains
$$\begin{aligned}
&&\exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right)
\sum\limits_{\vec{k}}A_{ \vec{k},\sigma }^af_{k,\mu }^\sigma
\left(x\right) \right\} \exp \left\{ C_\sigma \left(\left|
\vec{p}\right| \right) A_{\vec{p},\sigma }^{a+}A_{ \vec{p},\sigma
}^{a+}\right\} \mid 0\rangle \quad \text{for}\ \sigma =1,2
\nonumber
\\ &&=\exp \left\{ C_\sigma \left(\left| \vec{p}\right| \right)
\left(A_{\vec{ p},\sigma }^{a+}+i\int d^4xJ^{\mu,a}\left(x\right)
f_{p,\mu }^\sigma \left(x\right) \right) ^2\right\} \mid 0\rangle.
\label{52}\end{aligned}$$
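Equation (\[52\]) is an instance of the shift identity $e^{\lambda a}f(a^{+})\mid 0\rangle =f(a^{+}+\lambda )\mid 0\rangle $, here with $\lambda =i\int d^4xJ^{\mu ,a}\left(x\right) f_{p,\mu }^\sigma \left(x\right) $. The sketch below verifies it for one mode with truncated operator exponentials (real $\lambda $ and all helper names are our own choices):

```python
import math

def add(s1, s2):
    out = dict(s1)
    for n, c in s2.items():
        out[n] = out.get(n, 0.0) + c
    return out

def scale(s, x):
    return {n: c * x for n, c in s.items()}

def a(s):      # a |n> = sqrt(n) |n-1>
    return {n - 1: c * math.sqrt(n) for n, c in s.items() if n > 0}

def adag(s):   # a^dag |n> = sqrt(n+1) |n+1>
    return {n + 1: c * math.sqrt(n + 1) for n, c in s.items()}

def op_exp(op, s, order=40):
    """exp(op)|s> by a truncated power series."""
    total, term = dict(s), dict(s)
    for k in range(1, order):
        term = scale(op(term), 1.0 / k)
        total = add(total, term)
    return total

lam, C = 0.3, 0.2  # |C| < 1/2, as required below for convergence

# Left-hand side: exp(lam*a) exp(C*(a^dag)^2) |0>
lhs = op_exp(lambda s: scale(a(s), lam),
             op_exp(lambda s: scale(adag(adag(s)), C), {0: 1.0}))

# Right-hand side: exp(C*(a^dag + lam)^2) |0>
shifted = lambda s: add(adag(s), scale(s, lam))   # (a^dag + lam)|s>
sq = lambda s: scale(shifted(shifted(s)), C)      # C*(a^dag + lam)^2 |s>
rhs = op_exp(sq, {0: 1.0})

for n in range(8):
    assert abs(lhs.get(n, 0.0) - rhs.get(n, 0.0)) < 1e-6
```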
The expressions for the longitudinal and scalar modes can be evaluated in a similar way and are found to be
$$\begin{aligned}
&&\exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right)
\sum\limits_{\vec{k}}B_{ \vec{k}}^af_{k,S,\mu }\left(x\right)
\right\} \exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right)
\sum\limits_{\vec{k}}A_{\vec{k} }^{L,a}f_{k,L,\mu }\left(x\right)
\right\} \nonumber \\ &&\hspace{8.8cm}\times \exp \left\{
C_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\} \mid 0\rangle \nonumber
\\ &&=\exp \left\{ C_3\left(\left| \vec{p}\right| \right)
\left(B_{\vec{p} }^{a+}-i\int d^4xJ^{\mu,a}\left(x\right) f_{p,L,\mu
}\left(x\right) \right) \right. \label{53}
\\ &&\hspace{5.5cm}\left. \times \left(A_{\vec{p}}^{L,a+}-i\int
d^4xJ^{\mu,a}\left(x\right) f_{p,S,\mu }\left(x\right) \right)
\right\} \mid 0\rangle. \nonumber\end{aligned}$$
Here it should be noticed that the sign difference is produced by the commutation relations.
For the calculation of the total modification (\[Mod\]) one needs to evaluate
$$\langle \Phi \mid \exp \left\{ i\int d^4xJ^{\mu,a}\left(x\right)
A_\mu ^{a-}\left(x\right) \right\} =\left(\exp \left\{ -i\int
d^4xJ^{\mu,a}\left(x\right) A_\mu ^{a+}\left(x\right) \right\}
\mid \Phi \rangle \right) ^{\dagger }. \label{iz}$$
which may be easily obtained from the results for the r.h.s. of (\[52\]) and (\[53\]).
In what follows, the notation below will be employed:
$$J_{p,i}^a=\int \frac{d^4x}{\sqrt{2Vp_0}}J^{\mu,a}\left(x\right)
\epsilon _{i,\mu }\left(p\right).$$
After using the relations (\[52\]), (\[53\]) and the result of (\[iz\]) the following factors are obtained for each color value in the expression (\[Mod\])
[ $$\begin{aligned}
&\langle 0\mid &\prod\limits_{\sigma =1,2}\exp \left\{ C_\sigma
^{*}\left(\left| \vec{p}\right| \right) \left(A_{\vec{p},\sigma
}^a-iJ_{p,\sigma }^a\right) ^2\right\} \exp \left\{ C_\sigma
\left(\left| \vec{p}\right| \right) \left(A_{\vec{p},\sigma
}^{a+}-iJ_{p,\sigma }^a\right) ^2\right\} \mid 0\rangle \nonumber
\\ &\times &\langle 0\mid \exp \left\{ C_3^{*}\left(\left|
\vec{p}\right| \right) \left(A_{\vec{p}}^{L,a}+iJ_{p,S}^a\right)
\left(B_{\vec{p} }^a+iJ_{p,L}^a\right) \right\} \nonumber \\
&&\qquad \times \exp \left\{ C_3\left(\left| \vec{p}\right|
\right) \left(B_{\vec{p}}^{a+}-iJ_{p,L}^a\right)
\left(A_{\vec{p}}^{L,a+}-iJ_{p,S}^a \right) \right\} \mid 0\rangle
\nonumber \\ &\times &\langle 0\mid \exp \left\{
C_3^{*}\left(\left| \vec{p}\right| \right)
\left(-ic_{\vec{p}}^a\overline{c}_{\vec{p}}^a\right) \right\} \exp
\left\{ C_3\left(\left| \vec{p}\right| \right)
\left(i\overline{c}_{\vec{p} }^{a+}c_{\vec{p}}^{a+}\right)
\right\} \mid 0\rangle. \label{just3}\end{aligned}$$ ]{}
where the parts of the expression associated with each space-time mode are also decoupled.
The idea for evaluating these matrix elements is the following. First, the exponential operators on the left of the scalar products in (\[just3\]) are expanded by factorizing out the exponential operator whose exponent is linear in the sources “$J$”. Since the inverse of this linear operator leaves the vacuum invariant, its net effect is to shift the creation fields entering the exponential operators on the right of the scalar products in (\[just3\]) by a constant linear in the sources “$J$”. The same procedure can then be applied to the exponential factor extracted from the new exponential operator acting on the vacuum on the right: its action on the operators on the left of (\[just3\]) again reduces to a constant shift of the annihilation fields defining that operator. In this way a recurrence relation is obtained, which can be proven by mathematical induction.
The recurrence relation obtained after $n$ steps in the case of the transverse modes takes the form
[ $$\begin{aligned}
&\exp &\left\{ -\left(J_{p,\sigma }^a\right) ^2\left[ C_\sigma
^{*}\left(\left| \vec{p}\right| \right) +4C_\sigma \left(\left|
\vec{p}\right| \right) \left(C_\sigma ^{*}\left(\left|
\vec{p}\right| \right) +\frac 12 \right) ^2\right. \right. \times
\nonumber \\ &&\hspace{4cm}\times \left. \left.
\sum\limits_{m=0}^n\left(4^{2m}\left(\left| C_\sigma \left(\left|
\vec{p}\right| \right) \right| ^2\right)
^{2m}+4^{2m+1}\left(\left| C_\sigma \left(\left| \vec{p}\right|
\right) \right| ^2\right) ^{2m+1}\right) \right] \right\}
\nonumber \\ &\times &\langle 0\mid \exp \left\{ C_\sigma
^{*}\left(\left| \vec{p} \right| \right) \left(A_{\vec{p},\sigma
}^a\right) ^2\right\} \times \nonumber \\ &&\qquad \times \exp
\left\{ -i2^{3+2n}J_{p,\sigma }^aA_{\vec{p},\sigma
}^a\left(C_\sigma ^{*}\left(\left| \vec{p}\right| \right) \right)
^{n+1}\left(C_\sigma \left(\left| \vec{p}\right| \right) \right)
^{n+1}\left(C_\sigma ^{*}\left(\left| \vec{p}\right| \right)
+\frac 12 \right) \right\} \nonumber \\ &&\qquad \qquad \times
\exp \left\{ C_\sigma \left(\left| \vec{p}\right| \right)
\left(A_{\vec{p},\sigma }^{a+}\right) ^2\right\} \mid 0\rangle.
\label{65}\end{aligned}$$ ]{}
After restricting the possible values of $C_\sigma $ to satisfy $\left| C_\sigma \left(\left| \vec{p}\right| \right) \right|
<\frac 12$, the linear part of the operators in the exponent is multiplied by a quantity tending to zero in the limit $n\rightarrow \infty $, and it can be omitted in that limit. Using also the formula for the geometric series, the following expression is obtained for (\[65\])
[ $$\begin{aligned}
&\exp &\left\{ -\left(J_{p,\sigma }^a\right) ^2\left(C_\sigma
^{*}\left(\left| \vec{p}\right| \right) +4C_\sigma \left(\left|
\vec{p}\right| \right) \left(C_\sigma ^{*}\left(\left|
\vec{p}\right| \right) +\frac 12 \right) ^2\frac
1{\left(1-\left(2\left| C_\sigma \left(\left| \vec{p} \right|
\right) \right| \right) ^2\right) }\right) \right\} \nonumber \\
&\times &\langle 0\mid \exp \left\{ C_\sigma ^{*}\left(\left|
\vec{p} \right| \right) \left(A_{\vec{p},\sigma }^a\right)
^2\right\} \exp \left\{ C_\sigma \left(\left| \vec{p}\right|
\right) \left(A_{\vec{p},\sigma }^{a+}\right) ^2\right\} \mid
0\rangle. \label{just4}\end{aligned}$$ ]{}
In a similar way, for the factors in (\[just3\]) corresponding to the longitudinal and scalar modes one obtains
[ $$\begin{aligned}
&\exp &\left\{ -J_{p,S}^aJ_{p,L}^a\left(C_3^{*}\left(\left|
\vec{p}\right| \right) +C_3\left(\left| \vec{p}\right| \right)
\left(C_3^{*}\left(\left| \vec{p}\right| \right) +1\right) ^2\frac
1{\left(1-\left| C_3\left(\left| \vec{p}\right| \right) \right|
^2\right) }\right) \right\} \nonumber \\ &\times &\langle 0\mid
\exp \left\{ C_3^{*}\left(\left| \vec{p}\right| \right)
A_{\vec{p}}^{L,a}B_{\vec{p}}^a\right\} \exp \left\{
C_3\left(\left| \vec{p}\right| \right)
B_{\vec{p}}^{a+}A_{\vec{p}}^{L,a+}\right\} \mid 0\rangle.\end{aligned}$$ ]{}
Therefore, after collecting the contributions of all the modes and substituting $J_{p,i}^a$, by also assuming $2C_1\left(\left|
\vec{p}\right| \right) =2C_2\left(\left| \vec{p}\right| \right)
=C_3\left(\left| \vec{p} \right| \right) $ (which is necessary in order to obtain Lorentz invariance) and using the properties of the defined vector basis, the modification to the propagator becomes
$$\begin{aligned}
&&\exp \left\{ \frac 12\int
\frac{d^4xd^4y}{2p_0V}J^{\mu,a}\left(x\right)
J^{\nu,a}\left(y\right) g_{\mu \nu }\right. \nonumber
\\ &&\hspace{2cm}\times \left. \left[ C_3^{*}\left(\left|
\vec{p}\right| \right) +C_3\left(\left| \vec{p}\right| \right)
\left(C_3^{*}\left(\left| \vec{p}\right| \right) +1\right) ^2\frac
1{\left(1-\left| C_3\left(\left| \vec{p}\right| \right) \right|
^2\right) }\right] \right\}. \label{M0d2}\end{aligned}$$
Now it is possible to perform the limit $\vec{p}\rightarrow 0$. In taking this limit, each component of the linear momentum $\vec{p}$ is considered to be related to the quantization volume by
$$p_x\sim \frac 1a,\ p_y\sim \frac 1b,\ p_z\sim \frac 1c,\ V=abc\sim
\frac 1{ \left| \vec{p}\right| ^3}.$$ Since $\left| C_3\left(\left| \vec{p}\right| \right) \right| <1$, it follows that
$$\lim_{\vec{p}\rightarrow 0}\frac{C_3^{*}\left(\left|
\vec{p}\right| \right) }{4p_0V}\sim \lim_{\vec{p}\rightarrow
0}\frac{C_3^{*}\left(\left| \vec{p} \right| \right) \left|
\vec{p}\right| ^3}{4p_0}=0.$$
For the other term, the limit to be evaluated is
$$\lim_{\vec{p}\rightarrow 0}\frac{C_3\left(\left| \vec{p}\right|
\right) \left(C_3^{*}\left(\left| \vec{p}\right| \right) +1\right)
^2\frac 1{ \left(1-\left| C_3\left(\left| \vec{p}\right| \right)
\right| ^2\right) } }{4p_0V}, \label{Lim1}$$
Then, after fixing the dependence of the arbitrary constant $C_3$ to be of the form $\left| C_3\left(\left| \vec{p}\right| \right) \right| \sim 1-\kappa \left| \vec{p}\right| ^2$ with $\kappa >0$ and $C_3\left(0\right) \neq -1$, the limit (\[Lim1\]) becomes
$$\lim_{\vec{p}\rightarrow 0}\frac{C_3\left(\left| \vec{p}\right|
\right) \left(C_3^{*}\left(\left| \vec{p}\right| \right) +1\right)
^2\left| \vec{p} \right| ^3\frac 1{\left(1-\left(1-\kappa \left|
\vec{p}\right| ^2\right) ^2\right) }}{4p_0}=\frac C{2\left(2\pi
\right) ^4} \label{const}$$
where $C$ is an arbitrary constant determined by the equally arbitrary factor $\kappa$. An analysis of its properties shows that $C$ can take only real and non-negative values.
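The finiteness of the limit in (\[const\]) is easy to check numerically. The sketch below assumes a massless dispersion $p_0=\left| \vec{p}\right| $ and a real $C_3$ with $C_3\left(0\right) =1$ (assumptions of this check only), for which the ratio tends to $1/\left(2\kappa \right) $:

```python
def modification_limit(kappa, p):
    """The ratio in (const) for real C3(|p|) = 1 - kappa*p**2 and p0 = |p|."""
    c3 = 1 - kappa * p ** 2
    return c3 * (c3 + 1) ** 2 * p ** 3 / (4 * p * (1 - (1 - kappa * p ** 2) ** 2))

kappa = 0.7
# As p -> 0 the denominator behaves as 8*kappa*p**3, so the ratio approaches
# the finite constant 1/(2*kappa); C is then fixed by the arbitrary kappa.
for p in (1e-2, 1e-3, 1e-4):
    assert abs(modification_limit(kappa, p) - 1 / (2 * kappa)) < 1e-2
```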
Therefore, the total modification to the propagator including all color values turns out to be
[ $$\begin{aligned}
&\prod\limits_{a=1,..,8}&\langle \widetilde{\Phi }\mid \exp
\left\{ i\int d^4xJ^{\mu,a}\left(x\right) A_\mu
^{a-}\left(x\right) \right\} \exp \left\{ i\int
d^4xJ^{\mu,a}\left(x\right) A_\mu ^{a+}\left(x\right) \right\}
\mid \widetilde{\Phi }\rangle \nonumber \\ &=&\exp \left\{
\sum\limits_{a=1,..8}\int d^4xd^4yJ^{\mu,a}\left(x\right)
J^{\nu,a}\left(y\right) g_{\mu \nu }\frac C{2\left(2\pi \right)
^4} \right\}.\end{aligned}$$ ]{}
Also, the generating functional associated to the proposed initial state can be written in the form
$$Z[J]=\exp \left\{ \frac i2\sum\limits_{a,b=1,..8}\int d^4xd^4y
J^{\mu ,a}\left(x\right) \widetilde{D}_{\mu \nu
}^{ab}(x-y)J^{\nu,b}\left(y\right) \right\},$$
where
$$\widetilde{D}_{\mu \nu }^{ab}(x-y)=\int \frac{d^4k}{\left(2\pi
\right) ^4} \delta ^{ab}g_{\mu \nu }\left[ \frac 1{k^2}-iC\delta
\left(k\right) \right] \exp \left\{ -ik\left(x-y\right) \right\}
\label{propag}$$
which shows that the gluon propagator has the same form as proposed in [@Cabo] for the selected gauge parameter value $\alpha =1$ (which corresponds to $\alpha=-1$ in that reference).
Finally, let us consider the possible modifications of the ghost propagator, which can be produced by the new initial state. It is needed to evaluate the expression
[ $$\begin{aligned}
\prod\limits_{a=1,..,8}\langle \widetilde{\Phi }\mid &\exp&
\left\{ i\int d^4x\left(\overline{\xi }^a\left(x\right)
c^{a-}\left(x\right) +\overline{ c}^{a-}\left(x\right) \xi
^a\left(x\right) \right) \right\} \nonumber \\ \times &\exp&
\left\{ i\int d^4x\left(\overline{\xi }^a\left(x\right)
c^{a+}\left(x\right) +\overline{c}^{a+}\left(x\right) \xi
^a\left(x\right) \right) \right\} \mid \widetilde{\Phi }\rangle,
\label{ini}\end{aligned}$$ ]{}
In this case the calculation is easier, because the fermionic character of the ghosts implies that only two non-vanishing terms exist in the series expansion of the exponentials. Therefore, it is unnecessary to employ recurrence relations here. The following result for (\[ini\]) is obtained $$\exp \left\{ -\sum\limits_{a=1,..8}i\int d^4xd^4y\overline{\xi }^a
\left(x\right) \xi ^a\left(y\right) \frac{C_G}{\left(2\pi \right)
^4}\right\}$$
where $C_G$ is an arbitrary non-negative real constant. It vanishes if $C_3\left(0\right)$ is taken to be real. This choice makes the result coincide with the one in reference [@Cabo], where the ghost propagator was not modified.
The expression of the generating functional for the ghost particles takes the form
$$Z_G[\overline{\xi },\xi ]=\exp \left\{
i\sum\limits_{a,b=1,..8}\int d^4xd^4y \overline{\xi
}^a\left(x\right) \widetilde{D}_G^{ab}(x-y)\xi ^b\left(y\right)
\right\},$$
where $$\widetilde{D}_G^{ab}(x-y)=\int \frac{d^4k}{\left(2\pi \right) ^4}
\delta ^{ab}\left[ \frac{\left(-i\right) }{k^2}-C_G\delta
\left(k\right) \right] \exp \left\{ -ik\left(x-y\right) \right\}.$$
Finally, in order to illustrate one of the main properties of the proposed modified perturbation expansion, let us review here a previous calculation [@Cabo] of the gluon condensation parameter $G^2$ in the ground state. In the simplest approximation, that is, the mean value of $G^2$ in the interaction-free initial state, one has to evaluate
[ $$\begin{aligned}
\langle 0\mid S_{g}\left[ A\right] \mid 0\rangle &=&\left\{ \left[
\frac{1}{ 2i^2}S_{ij}^{g}\frac{\delta ^2}{\delta j_{i}\delta
j_{j}}+\frac{1}{3!i^3} S_{ijk}^{g} \frac{\delta ^{3}}{\delta
j_{i}\delta j_{j}\delta j_{k}}+\frac{1 }{4!i^4}
S_{ijkl}^{g}\frac{\delta ^{4}}{\delta j_{i}\delta j_{j}\delta
j_{k} \delta j_{l}}\right] \right\} \nonumber \\ &&\times \exp
(\frac{i}{2}j_{i}\widetilde{D}_{ij}j_{j}),\end{aligned}$$ ]{}where, using the DeWitt notation, the symbol $S_{ij...l}^g$ represents the functional derivative of the action $S_{g}$ with respect to a number of source arguments $j_{\mu_i}^{a_i}(x_i), j_{\mu_j}^{a_j}(x_j), \ldots, j_{\mu_l}^{a_l}(x_l)$. As usual in this convention, the repetition of two compact indices $i,j,\ldots,l$ means a sum over the color and Lorentz indices and the subsequent integration over the spacetime coordinates. The symbol $\widetilde{D}_{ij}$ is just the kernel of the gluon propagator (\[propag\]). The first and second terms in the square brackets give zero contribution when evaluated in dimensional regularization at zero value of the sources. On the other hand, the last term, corresponding to the four-gluon self-interaction, gives a non-vanishing addition to the gluon condensation parameter, precisely due to the condensate term in the propagator.
The contribution can be evaluated to be
$$\langle 0\mid S_g\left[ A\right] \mid 0\rangle =-\frac{72\ g^2C^2}{(2\pi)^8}\int dx,$$
which corresponds to a gluon condensation parameter given by $$G^2\equiv \langle 0\mid G_{\mu \nu }^aG_{\mu \nu }^a\mid 0\rangle =\frac{288\ g^2C^2}{(2\pi)^8}.$$
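The two expressions above are mutually consistent. Assuming the standard normalization $S_g=-\frac 14\int d^4x\,G_{\mu \nu }^aG_{\mu \nu }^a$ (an assumption of this check), a constant condensate gives $\langle S_g\rangle =-\frac 14G^2\int dx$, so the coefficient $288$ is just $4\times 72$:

```python
from fractions import Fraction

# Coefficient of g^2 C^2/(2 pi)^8 in <0|S_g|0> per unit spacetime volume.
action_coeff = Fraction(-72)

# With S_g = -(1/4) Int d^4x G^2 (standard normalization; our assumption),
# <S_g> = -(1/4) G^2 Int dx, hence G^2 = -4 <S_g> / Int dx.
condensate_coeff = action_coeff / Fraction(-1, 4)
assert condensate_coeff == 288
```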
Therefore, it turns out that the procedure is able to predict gluon condensation already in the simplest approximation.
Summary
=======
By using the operational formulation of quantum gauge field theory developed by Kugo and Ojima, a particular state vector of QCD in the non-interacting limit, which obeys the BRST physical state condition, was constructed. The general motivation for seeking this wave-function is the search for a reasonably good description of low energy QCD properties, by giving a foundation to the perturbative expansion proposed in [@Cabo]. The high energy QCD description should not be affected by the modified perturbative initial state. In addition, it can be expected that the adiabatic connection of the color interaction, starting from this state as an initial condition, will generate the true interacting QCD ground state. If these properties hold, the analysis would allow the real vacuum to be understood as a superposition of an infinite number of soft gluon pairs.
It has been checked that, by properly fixing the free parameters in the constructed state, the perturbation expansion proposed in the former work is reproduced for the special value $\alpha =1$ of the gauge constant. Therefore, the appropriate gauge is determined for which the expansion introduced previously [@Cabo] is produced by an initial state satisfying the physical state condition of the BRST quantization procedure. The fact that the non-interacting initial state is a physical one leads one to expect that the final wave-function, after the adiabatic connection of the interaction, will also satisfy the physical state condition of the interacting theory. If this assumption is correct, the results of calculations of transition amplitudes and of the values of physical quantities should also be physically meaningful. In the future, a quantization procedure for arbitrary values of $\alpha $ will also be considered. It is expected that with its help the gauge parameter independence of the physical quantities could be implemented. It seems possible that this generalization will lead to $\alpha $-dependent polarizations for gluons and ghosts and their respective propagators, which could nevertheless produce $\alpha $-independent results for the physical quantities. This more involved discussion is, however, left for future consideration.
It is important to mention a result obtained during the calculation of the modification to the gluon propagator, in the chosen regularization: the arbitrary constant $C$ was determined here to be real and positive. This outcome restricts an arbitrariness present in the discussion given in the previous work. Since the quantity $C$ also determines whether the square of the generated gluon mass is positive or negative, real or imaginary, it is satisfying to arrive at a definite prediction of $C$ as real and positive.
The modification to the free ghost propagator introduced by the considered vacuum state was also calculated. Moreover, after taking the free parameter in the proposed trial state to be real, which seems the most natural assumption, the ghost propagator is not modified, as was assumed in [@Cabo].
Some tasks which can be addressed in future works are: the study of the applicability of the Gell-Mann and Low theorem with respect to the adiabatic connection of the interactions, starting from the initial state proposed here; the development of a quantization of zero modes, that is, of gluon states with exactly vanishing four-momentum, whose successful treatment would allow a formally cleaner definition of the proposed state by excluding the auxiliary momentum $\vec{p}$ used in the construction carried out here; and, finally, the application of the proposed perturbation theory to the study of problems related to confinement and hadron structure.
[**Acknowledgments**]{}
The authors would like to acknowledge the helpful comments and suggestions of A. Gonzalez, F. Guzman, P. Fileviez, D. Bessis, G. Japaridze, C. Handy, A. Mueller, E. Weinberg and J. Lowenstein. One of the authors (A.C.M.) is indebted to the Abdus Salam ICTP for its general support during the stay (August to September 1999) in which this work was prepared. The support of the Center of Theoretical Studies of Physical Systems of the Clark Atlanta University and the Christopher Reynolds Foundation, which allowed the visit to the U.S.A. during which the results were discussed with various colleagues, is also gratefully acknowledged.
C. N. Yang and R. Mills, Phys. Rev. 96, 191 (1954).
M. Creutz, Phys. Rev. D21, 2308 (1980).
A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. Weisskopf, Phys. Rev. D9, 3471 (1974).
J. L. Gervais and A. Neveu, Phys. Lett. B80, 255 (1979).
E. U. Shuryak, Phys. Rep. 115, 151 (1984).
E. U. Shuryak, The QCD Vacuum, Hadrons and the Superdense Matter, World Scientific, Singapore, 1988.
T. Schäfer and E. V. Shuryak, Rev. Mod. Phys. 70, 323 (1998).
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B147, 385 (1979); B147, 448 (1979); B147, 519 (1979).
G. K. Savvidi, Phys. Lett. B71, 133 (1977).
A. Cabo, S. Peñaranda and R. Martinez, Mod. Phys. Lett. A10, 2413 (1995).
P. Hoyer, hep-ph/9709444, Talk presented at the APCTP-ICTP Joint International Conference 1997 (AIJIC 97) on Recent Developments in Non-perturbative Methods, Seoul, Korea, May 26 - 30, 1997.
T. Kugo and I. Ojima, Prog. Theor. Phys. 60, 1869 (1978).
T. Kugo and I. Ojima, Prog. Theor. Phys. 61, 294 (1979).
T. Kugo and I. Ojima, Prog. Theor. Phys. 61, 644 (1979).
T. Kugo and I. Ojima, Prog. Theor. Phys. Suppl. 66, 1 (1979).
C. Becchi, A. Rouet and R. Stora, Ann. Phys. 98, 287 (1976).
N. Nakanishi and I. Ojima, Covariant Operator Formalism of Gauge Theories and Quantum Gravity, Singapore, World Scientific, 1990.
J. Govaerts, Hamiltonian Quantization and Constrained Dynamics, Leuven University Press, 1991.
C. Itzykson and J. -B. Zuber, Quantum Field Theory, New York, McGraw-Hill, 1980.
S. Gasiorowicz, Elementary Particle Physics, New York, John Wiley & Sons, 1966.
---
abstract: 'This paper deals with the construction of a [*correlation decay*]{} tree (hypertree) for interacting systems modeled using graphs (hypergraphs) that can be used to compute the marginal probability of any vertex of interest. Local message passing equations have been used for some time to approximate the marginal probabilities in graphs but it is known that these equations are incorrect for graphs with loops. In this paper we construct, for any finite graph and a fixed vertex, a finite tree with appropriately defined boundary conditions so that the marginal probability on the tree at the vertex matches that on the graph. For several interacting systems, we show using our approach that if there is very strong spatial mixing on an infinite regular tree, then one has strong spatial mixing for any given graph with maximum degree bounded by that of the regular tree. Thus we identify the regular tree as the worst case graph, in a weak sense, for the notion of strong spatial mixing.'
address:
- 'Microsoft Research, Redmond, WA 98052 '
- 'School of Mathematics and college of computing, Georgia Institute of Technology, Atlanta, GA 30332'
author:
- Chandra Nair
- Prasad Tetali
bibliography:
- 'mybiblio.bib'
title: 'The correlation decay (CD) tree and strong spatial mixing in multi-spin systems'
---
[^1]
Introduction
============
In this paper we show that the computation of the marginal probability of a vertex in a graphical model can be reduced to the computation of the marginal probability of the vertex in a rooted tree of self-avoiding walks, with appropriately defined boundary conditions. The computation tree approach for graphical models has been used in [@wei06], [@bag06], [@gak07] for the problems of independent sets, colorings and list-colorings. In [@jus06], the work of [@wei06] on computing marginal probabilities was extended to inference problems in general two-spin models. Our work builds on [@wei06; @jus06], and demonstrates how the computation tree can be extended to more than two spins and to more than two-body interactions. This leads to a different tree (the [*correlation decay*]{} tree), which in a sense is more natural than the dynamic-programming-based tree of [@gak07] for the case of multiple spins. Further, this approach also yields a tree for the case of multi-body interactions among multiple spins.
A practical motivation for the creation of a tree structure is the following. The feasible algorithms for computation of marginal probabilities in large interacting systems are constrained to be distributed and local. This requirement has given rise to message passing algorithms (like belief propagation) for systems modeled using graphs. Unfortunately, these algorithms do not necessarily give the correct answer for graphs with many loops, and may not even converge. However, for a tree it is known that the equations are exact and the marginal probability at the root can be computed in a single iteration by starting from the leaves. Thus, if for any graph one can show the existence of a tree, that respects the locality, in which the same marginal probability results, then one can use the exactness of the message passing algorithms on a tree to obtain a convergent, distributed, local algorithm for the computation of marginal probabilities on the original graph.
The caveat with this approach is that the size of the tree can be exponentially large compared to the original graph. So even though the computations are exact, they may not be efficient in practice. However, for certain interesting counting problems [@wei06; @gak07; @bgknt06] approximation algorithms have been designed using the notion of spatial correlation decay, where the influence of the boundary at a root decays as the spatial distance between the boundary and the root increases. Hence pruning the tree to an efficiently computable neighborhood usually yields good and efficient approximations. Thus, to design efficient algorithms it would be useful to show some kind of decay of correlation in the tree structure that is presented here (and hence the name [*correlation decay*]{} tree).
The second part of this paper addresses this issue of spatial correlation decay. We show that, for many systems of interest, if there is “very strong spatial mixing" in the infinite regular tree of degree $D$, then there is also “strong spatial mixing" for any graph with maximum degree $D$. So, in a loose sense, the infinite regular tree is indeed a worst case graph for correlation decay. The fact that some form of strong spatial mixing in the infinite regular tree should imply strong spatial mixing in graphs for a general multi-spin system was conjectured by E. Mossel [@mos07]. (In the case of independent sets and colorings, the infinite tree being the worst case for the onset of multiple Gibbs measures was conjectured by A. Sokal [@sok00].)
In the next section, we prove the generalization of the result in [@wei06] to the case of multiple-spins but still restricting ourselves to two-body (pairwise) interactions.
Preliminaries
=============
Consider a finite spin system with pairwise interactions, and modeled as a graph, $G=(V,E)$. Let the partition function of this spin system be denoted by $$Z_G = \sum_{\vec{x} \in X^n}
\prod_{(i,j) \in G} \Phi_{i,j}(x_i,x_j) \prod_{i \in V}
\phi_i(x_i).$$ Let $\Lambda \subseteq [n]$ be a subset of [*frozen*]{} vertices (i.e. vertices whose spin values are fixed) and let $$Z_G^\Lambda =
\sum_{\vec{x} \notin X_\Lambda} \prod_{(i,j) \in
G}\Phi_{i,j}(x_i,x_j) \prod_{i \in V} \phi_i(x_i).$$ We wish to compute the following marginal probability with respect to the Gibbs measure, $$\label{eq:GibbsMargProb1} P_G(x_1 = \sigma | X_\Lambda ) =
\frac{1}{Z_G^\Lambda} \sum_{\stackrel{x_1 = \sigma,} {\vec{x}
\notin X_\Lambda}} \prod_{(i,j) \in G}\Phi_{i,j}(x_i,x_j) \prod_{i
\in V} \phi_i(x_i).$$ Instead of performing this marginal probability computation in the original graph $G$ we shall create a [*correlation decay*]{} (CD) tree, $T_\Lambda$, on which the same marginal probability results by performing the computation as described in Section \[sse:compsec\].
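For small instances, the marginal in \[eq:GibbsMargProb1\] can be computed by direct enumeration. The following Python sketch (a hypothetical helper for illustration only, not part of the constructions in this paper) does exactly this for a pairwise model:

```python
from itertools import product

def gibbs_marginal(n, edges, Phi, phi, q, v, sigma, frozen):
    """Brute-force P(x_v = sigma | frozen) for the pairwise Gibbs measure
    with weight prod_{(i,j)} Phi(x_i, x_j) * prod_i phi(x_i)."""
    num = Z = 0.0
    for x in product(range(q), repeat=n):
        # skip configurations that violate the frozen assignment
        if any(x[u] != s for u, s in frozen.items()):
            continue
        w = 1.0
        for i, j in edges:
            w *= Phi(x[i], x[j])
        for i in range(n):
            w *= phi(x[i])
        Z += w
        if x[v] == sigma:
            num += w
    return num / Z

# Proper 3-colorings of the path 0-1-2: by symmetry every color is
# equally likely at an endpoint, so p == 1/3.
Phi = lambda a, b: 1.0 if a != b else 0.0
phi = lambda a: 1.0
p = gibbs_marginal(3, [(0, 1), (1, 2)], Phi, phi, q=3, v=0, sigma=0, frozen={})
```

Of course, such enumeration costs $q^n$ time; the point of the CD tree below is to organize this computation recursively.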
The CD Tree {#sse:comptree}
-----------
Similar tree constructions for restricted classes of spin systems can be found in [@wei06; @fis59; @gmp04; @scs05; @bag06; @jus06]; ours is closest to the one in [@wei06]. Our starting point for the tree is the same as in [@wei06]: we begin by labeling the edges of the graph, draw the tree of self-avoiding walks, $T_{saw}$, and include the vertices that close a cycle. In [@wei06], a vertex that closes a cycle was marked occupied or unoccupied according to whether the edge closing the cycle in $T_{saw}$ was numbered higher than the edge beginning the cycle or not.
Our main point of deviation from the construction in [@wei06] is in the treatment of the cycle-closing vertices appended to $T_{saw}$. The vertices that close a cycle with a higher numbered edge than the one beginning the cycle (i.e. those that were marked occupied) are now constrained to take a particular spin value $\sigma_q$. The vertices that close a cycle with a lower numbered edge (i.e. the unoccupied vertices) are constrained to take the same value as the vertex that begins the cycle, i.e. the value of its earlier occurrence in the walk. This constraint is denoted by a [*coupling line*]{} and influences the way the marginal probabilities are computed on the tree. The tree thus obtained is called the CD-tree, $T_{CD}$, associated with the graph $G$.
\[def:coupling\] A [*coupling line*]{} on a rooted tree is a virtual line connecting a vertex $u$ to some vertex $v$ in the subtree below $u$. This line will play a role in the computation of the marginal probabilities, as will be explained in detail later. In brief, when one descends into the subtree of $u$ to compute the marginal probability that $u$ assumes a spin $\sigma_i$, the vertex $v$ becomes frozen to $\sigma_i$, the same as $u$. Thus, the spin to which $v$ is frozen is coupled to the spin of $u$, whose marginal probability is being determined.
\[rem:topofcl\] One can easily make the following observations regarding coupling lines. A vertex can be the top end point of several coupling lines; indeed, the number of coupling lines from any vertex is related to the number of cycles the vertex is part of in a certain subgraph of the original graph. A vertex can be the bottom end point of at most one coupling line, and for every such point there is a unique twin point, corresponding to traversing the cycle in the opposite direction, whose spin is frozen to $\sigma_q$.
Computation of marginal probabilities on the CD tree {#sse:compsec}
----------------------------------------------------
Here we describe the algorithm for computing the marginal probability at the root for a tree with coupling lines. Let $T$ be a rooted tree with frozen vertices $\Lambda$. In the tree presented in the previous section, the set $\Lambda$ is also assumed to contain the vertices frozen to $\sigma_q$. Consider the recursion $$\label{eq:treerec0} R_T^{\sigma_\Lambda}(\sigma_v) =
\frac{\phi_v(\sigma_v)}{\phi_v(\sigma_q)} \prod_{i=1}^d
\frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)}\,.$$ At this step (as proven by the next theorem) we will be computing the ratio of the probability that the root assumes a spin $\sigma_v$ (with respect to the reference spin $\sigma_q$), and therefore the lower end points of the coupling lines joined to the root are frozen to $\sigma_v$. Thus the set of frozen vertices $\Lambda$ gets appended with this subset of vertices; and the subset of this enhanced $\Lambda$ that is in the subtree of the $i$th child is denoted as $\sigma_{\Lambda_i}$. (There is an abuse of notation in that $\sigma_{\Lambda_i}$ depends on the spin $\sigma_v$, as $\Lambda$ gets appended with the new vertices frozen by the coupling lines to $\sigma_v$.) One can use the above recursion to recursively compute the ratios for the correlation decay tree. The validity of this computation forms the basis of the next theorem.
Consider a rooted tree with $D$ denoting the maximum number of children of any vertex. Let $C$ denote the computation time required for one step of the recursion in \[eq:treerec0\]; then it is clear that computing the probability at the root given the marginal probabilities at depth $\ell$ requires $\Theta([(q-1)D]^\ell)$ time. The hidden constants in $\Theta$ depend on $C$ and $q$. Observe that a bound for the computation time, $t_\ell$, at depth $\ell$ can be obtained via the recursion $t_\ell \leq qC + [(q-1) D] t_{\ell - 1}$.
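A minimal Python sketch of this bottom-up computation, under the simplifying assumption of no frozen vertices and no coupling lines (where \[eq:treerec0\] reduces to the exact tree recursion), is:

```python
def tree_ratio(children, Phi, phi, q, root, sigma, ref):
    """Bottom-up evaluation of R_T(sigma) = P(x_root = sigma) / P(x_root = ref)
    on a rooted tree given as a {vertex: [children]} dict (no frozen vertices,
    no coupling lines)."""
    def R(u, s):
        val = phi(s) / phi(ref)
        for c in children.get(u, []):
            num = sum(Phi(s, l) * R(c, l) for l in range(q))
            den = sum(Phi(ref, l) * R(c, l) for l in range(q))
            val *= num / den
        return val
    return R(root, sigma)

# A two-vertex tree with a ferromagnetic-style interaction and a biased
# vertex potential (arbitrary illustrative numbers):
Phi = lambda a, b: 2.0 if a == b else 1.0
phi = lambda s: 2.0 if s == 0 else 1.0
r = tree_ratio({0: [1]}, Phi, phi, q=2, root=0, sigma=0, ref=1)
# Brute force: weights (0,0)->8, (0,1)->2, (1,0)->2, (1,1)->2, so r = 10/4.
```

On this two-vertex tree the recursion can be checked by hand against the four configurations, as in the comment above.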
Note that whenever the tree reaches a frozen vertex, the subtree under it can be pruned, as it does not affect the computation. Similarly, the subtree under a vertex that is the lower end point of a virtual coupling line can be pruned. This leads to a subtree, $T_{CD}^\Lambda$, of $T_{CD}$.
We shall demonstrate this construction and computation using the following example graph, with edges labeled in the usual lexicographic order. We retain the labeling of vertices on $T_{CD}$ to reflect its origin from $G$, but otherwise the labels play no role in spin assignments, and two similarly labeled vertices can in general have arbitrary spin assignments.
![The construction of the CD tree: The light dotted lines in the figure denote the virtual [*coupling lines*]{}.[]{data-label="fig:ct"}](comptree.eps){width="0.5\linewidth"}
Let us assume that we are interested in computing the marginal probability of the vertex $a$ for valid 5-colorings of the graph $G$ using the tree $T_{CD}$ on the right. A coloring is valid if no two adjacent vertices are assigned the same color. For this interacting system $\sigma \in \{1,2,3,4,5\}$ and $$\Phi(\sigma_i,\sigma_j) = \left\{ \begin{array}{ll} 1 &
\mbox{if}~ \sigma_i \neq \sigma_j~\\ 0 & \mbox{if}~ \sigma_i = \sigma_j
\end{array} \right.$$ and the potential function $\phi(\sigma_i) = 1$. Let us assume that the vertices $b,c$ are frozen to spins $2,3$ respectively and the reference spin $\sigma_q = 4$. It is easy to see using symmetry or explicit computation that $a$ takes spins $1,4,5$ with probability $1/3$ each, or in other words the ratios (with respect to color 4), $R_G(1) = R_G(4) = R_G(5) = 1$. The pruned subtree $T_{CD}^\Lambda$ can be drawn as in Figure \[fig:pt\].
![The CD-tree $T_{CD}^\Lambda$ and the subtree $T_d$, pruned by the frozen vertices and coupling lines. The frozen colors are written adjacent to vertices in $\Lambda$. []{data-label="fig:pt"}](pruned.eps){width="0.45\linewidth"}
Equation \[eq:treerec0\] gives $$\label{eq:comp11} R_{T_{CD}}(1) = \frac{R_{T_d}(2) + R_{T_d}(4) +
R_{T_d}(5)}{R_{T_d}(1) + R_{T_d}(2) + R_{T_d}(5)},$$ where $T_d$ represents the subtree of $T_{CD}^\Lambda$ under vertex $d$. The frozen subtrees $T_d$ for the four computations $R_{T_d}(1)$, $R_{T_d}(2)$, $ R_{T_d}(4), R_{T_d}(5)$ are represented in Figure \[fig:pt2\].
![The subtree $T_d(\cdot)$ for the computations $R_{T_d}(1), R_{T_d}(2), R_{T_d}(4), R_{T_d}(5),$ respectively. Note the spin of the new frozen vertex as forced by the coupling line in the four cases.[]{data-label="fig:pt2"}](pruned2.eps){width="0.55\linewidth"}
The resultant subtrees $T_d(\cdot)$ have the usual computation procedure (i.e. they do not have coupling lines); for example, the value $R_{T_d}(1)$ can be computed as $$\begin{split}
R_{T_d}(1) & = \left(\frac{R_{T_e}(2) + R_{T_e}(3) + R_{T_e}(4)
+ R_{T_e}(5)}{R_{T_e}(1) + R_{T_e}(2) + R_{T_e}(3) +
R_{T_e}(5)}\right)\left( \frac{R_{T_f}(2) + R_{T_f}(3) +
R_{T_f}(4) +
R_{T_f}(5)}{R_{T_f}(1) + R_{T_f}(2) + R_{T_f}(3) + R_{T_f}(5)}\right) \\
& = \left(\frac{ \frac34 + \frac34 + \frac34 + \frac34}{1 +
\frac34 + \frac34 + \frac34}\right)\left( \frac{ \frac34 + \frac34
+ 1 + \frac34}{\frac34 + \frac34 + \frac34 + \frac34}\right) = 1.
\end{split}$$ By symmetry with the previous computation, $R_{T_d}(2) =
R_{T_d}(5)=1$, and from the definition, $R_{T_d}(4)=1$. Thus from \[eq:comp11\] one obtains $$R_{T_{CD}}(1) = \frac{ 1 + 1 + 1}{ 1 + 1 + 1} = 1,$$ as desired.
The next theorem and its proof are essentially the same as in [@wei06a]; therefore we use the same notation whenever possible and skip the details of similar arguments.
\[th:comptreepresmargprob\] For every graph $G=(V,E)$, every $\Lambda \subseteq V$, any configuration $ \sigma_\Lambda$, and all $\sigma_v$ $$R_G^{\sigma_\Lambda}(v=\sigma_v) =
\mathbb{R}_{T_{CD}}^{\sigma_\Lambda}(v=\sigma_v),$$ where $\mathbb{R}_{T_{CD}}^{\sigma_\Lambda}(v=\sigma_v)$ stands for the ratio (with respect to the reference spin, say $\sigma_q$) of the probability that the root $v$ of $T_{CD}$ has spin $\sigma_v$ when the computation is performed as described above. The actual probabilities can be computed from the ratios by normalizing them so that the probabilities sum to one.
Let $\sigma_q$ be a fixed spin. Define the ratios $$R_G^{\sigma_\Lambda}(\sigma_v)
\stackrel{\triangle}{=}
\frac{p_G^{\sigma_\Lambda}(v=\sigma_v)}{p_G^{ \sigma_\Lambda }
(v=\sigma_q) }.$$ Let $d$ be the degree of vertex $v$ and let $u_i, 1 \leq i \leq d$, be its neighbors. If the graph $G$ were indeed a tree $T$, then we can see that the following exact recursion $$\label{eq:treerec} R_T^{\sigma_\Lambda}(\sigma_v) =
\frac{\phi_v(\sigma_v)}{\phi_v(\sigma_q)} \prod_{i=1}^d
\frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)},$$ would hold, where $T_i$ is the subtree associated with the neighbor $u_i$ obtained by removing the $i$th edge of $v$, and $\sigma_{\Lambda_i}$ is the restriction of $\sigma_\Lambda$ to $\Lambda \cap T_i$ and appended with the new vertices frozen to $\sigma_v$ corresponding to the lower endpoints of coupling lines originating from $v$.
Fixing the vertex of interest $v$, define $G'$ as the graph obtained by making $d$ copies of the vertex $v$ and each $v_i$ having a single edge to $u_i$. In addition, the vertex potential $\phi_v(\sigma_v)$ is re-defined to $\phi_v^{1/d}(\sigma_v)$. It is easy to see that the following two ratios are equal $$\frac{p_G^{\sigma_\Lambda}(v=\sigma_v)}{p_G^{ \sigma_\Lambda
} (v=\sigma_q)} =
\frac{p_{G'}^{\sigma_\Lambda}(v_1=\sigma_v,...,v_d =
\sigma_v)}{p_{G'}^{ \sigma_\Lambda} (v_1=\sigma_q, ... , v_d =
\sigma_q)}.$$ Defining $$R_{G',v_i}^{\sigma_\Lambda
\tau_i}(\sigma_v) =
\frac{p_{G'}^{\sigma_\Lambda}(v_1=\sigma_v,...,v_i = \sigma_v,
v_{i+1} = \sigma_q, .. , v_d = \sigma_q)}{p_{G'}^{ \sigma_\Lambda}
(v_1=\sigma_v,...,v_{i-1} = \sigma_v, v_i = \sigma_q, .. , v_d =
\sigma_q)}$$ one sees that $$R_G^{\sigma_\Lambda}(\sigma_v) = \prod_{i=1}^d R_{G',v_i}^{\sigma_\Lambda
\tau_i}(\sigma_v).$$ It is easy to see that $R_{G',v_i}^{\sigma_\Lambda \tau_i}(\sigma_v)$ is the ratio of the probability that the vertex $v_i = \sigma_v$ to the probability that $v_i = \sigma_q$, conditioned on $\sigma_\Lambda$ and $\tau_i$, where $\tau_i$ denotes the configuration in which vertices $v_1,...,v_{i-1}$ are frozen to $\sigma_v$ and vertices $v_{i+1},...,v_{d}$ are frozen to $\sigma_q$.
In $G'$, the vertex $v_i$ is only connected to $u_i$; and let $G'\setminus v_i$ denote the connected component of $G'$ that contains $u_i$ after the removal of the edge $(v_i,u_i)$. Therefore $$R_{G',v_i}^{\sigma_\Lambda \tau_i}(\sigma_v) =
\frac{\phi_v^{1/d}(\sigma_v)}{\phi_v^{1/d}(\sigma_q)}
\frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l) R_{G'\setminus
v_i}^{\sigma_{\Lambda} \tau_i}(u_i=\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l) R_{G'\setminus
v_i}^{\sigma_{\Lambda} \tau_i}(u_i=\sigma_l)},$$ and hence $$\label{eq:graphrecur} R_G^{\sigma_\Lambda}(\sigma_v) =
\frac{\phi_v(\sigma_v)}{\phi_v(\sigma_q)} \prod_{i=1}^d
\frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l) R_{G'\setminus
v_i}^{\sigma_{\Lambda} \tau_i}(u_i=\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l) R_{G'\setminus
v_i}^{\sigma_{\Lambda} \tau_i}(u_i=\sigma_l)}.$$ Observe that the recursion terminates since at each step the number of unfixed vertices reduces by one.
Observe that the recursion in \[eq:graphrecur\] is similar to the one for the tree, \[eq:treerec\]. This similarity will help us identify the recursion as exactly the same one in $T_{CD}$ with the condition corresponding to $\sigma_\Lambda$, along with the coupling of the values of vertices that was used in its definition. The key difference between the binary spin model in [@wei06] and this proof also lies here: in the binary spin model one of the spins was always the reference spin and the other was the subject of the recursion, so the coupling of the spin to its parent in $T_{CD}$ was implicit.
From the similarity of \[eq:graphrecur\] and \[eq:treerec\], one can use induction to complete the proof, provided that the graph $G'\setminus v_i$ with the condition corresponding to $\sigma_\Lambda \tau_i$ leads to the same subtree of $T_{CD}$ corresponding to the $i$-th child of the original root with the condition corresponding to $\sigma_{\Lambda_i}$. It is easy to observe that the two trees are the same – both are paths in $G$ starting at $u_i$, and copies of $v$ are set to $\sigma_v$ if reached via a smaller numbered edge and to $\sigma_q$ otherwise. The above observation, along with the fact that the stopping rules coincide for the two recursions, completes the proof of Theorem \[th:comptreepresmargprob\] by induction.
Multi-spin interactions {#sse:mulspiter}
-----------------------
In this section, we extend the results of the previous section from pairwise interactions to multi-spin interactions. The underlying model can be depicted by a hypergraph with the hyperedges denoting the vertices involved in an interaction.
Consider a finite spin system whose interactions can be modeled as a hypergraph, $G=(V,E)$. Let the partition function of this spin system be denoted by $$Z_G = \sum_{\vec{x} \in X^n}
\prod_{e \in E} \Phi_{e}(\vec{x}_e) \prod_{i \in V} \phi_i(x_i).$$ As before, let $\Lambda \subseteq [n]$ be a subset of [*frozen*]{} vertices (i.e. vertices whose spin values are fixed) and let $$Z_G^\Lambda =
\sum_{\vec{x} \notin X_\Lambda} \prod_{e \in E}\Phi_{e
}(\vec{x}_e) \prod_{i \in V} \phi_i(x_i).$$ We wish to compute the following marginal probability with respect to the Gibbs measure, $$\label{eq:GibbsMargProb} P_G(x_1 = \sigma | X_\Lambda ) =
\frac{1}{Z_G^\Lambda} \sum_{\stackrel{x_1 = \sigma,} {\vec{x}
\notin X_\Lambda}} \prod_{e \in E}\Phi_{e}(\vec{x}_e) \prod_{i \in
V} \phi_i(x_i).$$
CD hypertrees on hypergraphs {#sse:comptreehg}
----------------------------
The motivation for the following hypertree essentially comes from the proof of the CD tree in the previous section. Let the $n$ vertices in $V$ be numbered in some fixed order, $\{1,...,n\}$. The tree is constructed in a top-down approach, just as the tree of self-avoiding walks.
The procedure described below is similar to a generalization of the tree of self-avoiding walks for graphs. For ease of exposition we will describe the construction using the following example. Let $V = \{1,2,3,4,5\}$ and let the hyperedges be $\{(1,2,3),(1,2,5),(1,3,4),(2,5,4)\}.$ Let us assume that vertex $1$ is the root. From $G$ construct the graph $G_1$ with vertex $1$ replicated thrice (equal to its degree) to $1_a,1_b,1_c$. Let the resulting hyperedges be $\{(1_a,2,3),(1_b,2,5),(1_c,3,4),(2,5,4)\}$. Observe that, $$\begin{aligned}
\frac{P_G(x_1 = \sigma_1)}{P_G(x_1= \sigma_0)} = ~&
\frac{P_{G_1}(x_{1_a} = \sigma_1,
x_{1_b}=\sigma_1,x_{1_c}=\sigma_1)}{P_{G_1}(x_{1_a} = \sigma_0,
x_{1_b}=\sigma_0,x_{1_c}=\sigma_0)} \\
= ~& \frac{P_{G_1}(x_{1_a} = \sigma_1|
x_{1_b}=\sigma_0,x_{1_c}=\sigma_0)}{P_{G_1}(x_{1_a} = \sigma_0|
x_{1_b}=\sigma_0,x_{1_c}=\sigma_0)} \times \frac{P_{G_1}(x_{1_b} =
\sigma_1| x_{1_a}=\sigma_1,x_{1_c}=\sigma_0)}{P_{G_1}(x_{1_b} =
\sigma_0|
x_{1_a}=\sigma_1,x_{1_c}=\sigma_0)} \\
& \quad \times \frac{P_{G_1}(x_{1_c} = \sigma_1|
x_{1_a}=\sigma_1,x_{1_b}=\sigma_1)}{P_{G_1}(x_{1_c} = \sigma_0|
x_{1_a}=\sigma_1,x_{1_b}=\sigma_1)}.\end{aligned}$$
Now consider a graph $H$ where vertex 1 has degree three and such that the removal of vertex 1 and the three hyperedges, disconnects the graph into 3 disconnected components. The first component, $H_1$, contains the set of vertices $\{2^{(1)},3^{(1)},4^{(1)},5^{(1)},
1_b^{(1)}, 1_c^{(1)}\}$, with the vertices $1_b^{(1)}$ and $1_c^{(1)}$ frozen to have spin $\sigma_0$. The hyperedges that form part of this component (along with the root) are $\{(1,2^{(1)},3^{(1)}),(1_b^{(1)},2^{(1)},5^{(1)}), $ $
(1_c^{(1)},3^{(1)},4^{(1)}),(2^{(1)},5^{(1)},4^{(1)})\}$.
The second component, $H_2$, contains the set of vertices $\{2^{(2)},3^{(2)},4^{(2)},5^{(2)}, 1_a^{(2)}, 1_c^{(2)}\}$, with the vertex $1_a^{(2)}$ frozen to have spin $\sigma_1$ and the vertex $1_c^{(2)}$ frozen to have spin $\sigma_0$; and the hyperedges being $\{(1_a^{(2)},2^{(2)},3^{(2)}),$ $ (1,2^{(2)},5^{(2)}), $ $
(1_c^{(2)},3^{(2)},4^{(2)}), (2^{(2)},5^{(2)},4^{(2)})\}$. Finally, the third component, $H_3$, contains the set of vertices $\{2^{(3)},3^{(3)},4^{(3)},5^{(3)}, 1_a^{(3)}, 1_b^{(3)}\}$; the vertices $1_a^{(3)}$ and $1_b^{(3)}$ frozen to have spin $\sigma_1$; and hyperedges $\{(1_a^{(3)},2^{(3)},3^{(3)}), $ $
(1_b^{(3)},2^{(3)},5^{(3)}), $ $
(1,3^{(3)},4^{(3)}),(2^{(3)},5^{(3)},4^{(3)})\}.$
It is clear that the following holds, $$\begin{aligned}
\frac{P_H(x_1 = \sigma_1)}{P_H(x_1= \sigma_0)} = ~ &
\frac{P_{H_1}(x_1 = \sigma_1)}{P_{H_1}(x_1= \sigma_0)} \times
\frac{P_{H_2}(x_1 = \sigma_1)}{P_{H_2}(x_1= \sigma_0)} \times
\frac{P_{H_3}(x_1 = \sigma_1)}{P_{H_3}(x_1= \sigma_0)} \\
= ~ & \frac{P_{G_1}(x_{1_a} = \sigma_1|
x_{1_b}=\sigma_0,x_{1_c}=\sigma_0)}{P_{G_1}(x_{1_a} = \sigma_0|
x_{1_b}=\sigma_0,x_{1_c}=\sigma_0)} \times \frac{P_{G_1}(x_{1_b} =
\sigma_1| x_{1_a}=\sigma_1,x_{1_c}=\sigma_0)}{P_{G_1}(x_{1_b} =
\sigma_0|
x_{1_a}=\sigma_1,x_{1_c}=\sigma_0)} \\
& \quad \times \frac{P_{G_1}(x_{1_c} = \sigma_1|
x_{1_a}=\sigma_1,x_{1_b}=\sigma_1)}{P_{G_1}(x_{1_c} = \sigma_0|
x_{1_a}=\sigma_1,x_{1_b}=\sigma_1)}, \\
= ~ & \frac{P_G(x_1 = \sigma_1)}{P_G(x_1= \sigma_0)} .\end{aligned}$$
Further, this general procedure for separating the children of the root can now be performed iteratively on each of its children to yield a CD hypertree, $H_{CD}$, in the same way as one generates the CD tree for pairwise interactions. Since at each stage, the number of unfrozen vertices reduces by one, the procedure terminates yielding a hypertree with the degree of every vertex bounded by its degree in the original hypergraph. This leads to the following result for the case of hypergraphs,
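The chain of identities above can be checked numerically on the running example. The following Python sketch (with an arbitrary soft interaction $\Phi_e$ chosen purely for illustration, and vertices $1,\dots,5$ renamed $0,\dots,4$) verifies that the product of the three conditional ratios in $G_1$ telescopes to the ratio in $G$:

```python
from itertools import product
from math import prod

# Hyperedges of the running example; in G1 the copies 1_a, 1_b, 1_c of
# vertex 1 are indices 0, 5, 6.
E  = [(0, 1, 2), (0, 1, 4), (0, 2, 3), (1, 4, 3)]
E1 = [(0, 1, 2), (5, 1, 4), (6, 2, 3), (1, 4, 3)]

def w(vals):
    # arbitrary asymmetric soft interaction, for illustration only
    if len(set(vals)) == 1:
        return 0.5 if vals[0] == 0 else 0.25
    return 1.0

def W(edges, n, fixed):
    """Total Gibbs weight of binary configurations agreeing with `fixed`."""
    return sum(prod(w(tuple(x[v] for v in e)) for e in edges)
               for x in product((0, 1), repeat=n)
               if all(x[v] == s for v, s in fixed.items()))

r_G = W(E, 5, {0: 1}) / W(E, 5, {0: 0})
t1 = W(E1, 7, {0: 1, 5: 0, 6: 0}) / W(E1, 7, {0: 0, 5: 0, 6: 0})
t2 = W(E1, 7, {0: 1, 5: 1, 6: 0}) / W(E1, 7, {0: 1, 5: 0, 6: 0})
t3 = W(E1, 7, {0: 1, 5: 1, 6: 1}) / W(E1, 7, {0: 1, 5: 1, 6: 0})
# t1 * t2 * t3 telescopes to r_G
```

The equality holds exactly because the product of the three conditional ratios telescopes to $P_{G_1}(x_{1_a}=x_{1_b}=x_{1_c}=\sigma_1)/P_{G_1}(x_{1_a}=x_{1_b}=x_{1_c}=\sigma_0)$, and the edge weights in $G_1$ with all copies equal coincide with those in $G$.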
\[th:comptreepresmargprobhypergraphs\] For every hypergraph $G=(V,E)$, every $\Lambda \subseteq V$, any configuration $
\sigma_\Lambda$, and all $\sigma_v$ $$R_G^{\sigma_\Lambda}(v=\sigma_v) =
\mathbb{R}_{H_{CD}}^{\sigma_\Lambda}(v=\sigma_v),$$ where $\mathbb{R}_{H_{CD}}^{\sigma_\Lambda}(v=\sigma_v)$ stands for the ratio (with respect to the reference spin, say $\sigma_0$) of the probability that the root $v$ of $H_{CD}$ has spin $\sigma_v$ when computations are performed as described previously. The actual probabilities can be computed from the ratios by normalizing them so that the probabilities sum to one.
Spatial mixing and Infinite regular trees {#sse:ssm}
=========================================
In this section, we study spatial mixing and demonstrate sufficient conditions for spatial mixing to exist for all graphs $G$ with maximum degree $b+1$ in terms of spatial mixing conditions on the infinite regular tree, $\hat{\mathbb{T}}^b$, of degree $b + 1$. We review the concept of strong spatial mixing that was considered in [@wei06] and prove one of our main results.
\[def:ssm\] Let $\delta: \mathbb{N} \to \mathbb{R}^+$ be a function that decays to zero as $n$ tends to infinity. The distribution over the spin system depicted by $G=(V,E)$ exhibits [*strong spatial mixing*]{} with rate $\delta(\cdot)$ if and only if for every spin $\sigma_1$, every vertex $v \in V$ and $\Lambda
\subseteq V$ and any two spin configurations, $\sigma_\Lambda,
\tau_\Lambda,$ on the frozen spins, we have $$|p(v=\sigma_1| X_\Lambda = \sigma_\Lambda) - p(v=\sigma_1| X_\Lambda =
\tau_\Lambda) | \leq \delta(\mathrm{dist}(v,\Delta)),$$ where $\Delta \subseteq \Lambda$ stands for the subset in which the frozen spins differ.
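As a concrete, if elementary, illustration of this notion, the following brute-force Python sketch (ours, not part of the paper's constructions) measures the influence of a differing frozen spin on the root marginal for proper $3$-colorings of a path; the discrepancy decays geometrically in the distance, as strong spatial mixing requires.

```python
from itertools import product

def endpoint_marginal(n, q, boundary_color):
    """P(x_0 = 0 | x_{n-1} = boundary_color) under the uniform distribution
    over proper q-colorings of the path 0-1-...-(n-1)."""
    num = Z = 0
    for x in product(range(q), repeat=n):
        if x[-1] != boundary_color:
            continue
        if any(x[i] == x[i + 1] for i in range(n - 1)):
            continue
        Z += 1
        num += (x[0] == 0)
    return num / Z

# Discrepancy at the root between two boundary conditions as the boundary
# moves away; for q = 3 it shrinks by a factor of 4 per two extra steps.
diffs = [abs(endpoint_marginal(n, 3, 0) - endpoint_marginal(n, 3, 1))
         for n in (3, 5, 7)]
# diffs == [1/4, 1/16, 1/64]
```

The $1/4$ decay rate per two steps matches the transfer-matrix eigenvalue ratio $(1/2)^2$ for the path with $q=3$.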
Let $T$ denote a rooted tree. We say that a collection $L$ of virtual edges is a set of [*valid coupling lines*]{}, if they satisfy the following constraints: a coupling line joins a vertex to some vertex in the subtree under it; the lower endpoints of the coupling lines are unique; no pair of coupling lines form a nested pair or an interleaved pair, i.e. the endpoints do not lie on a single path.
Observe that the pruned CD tree, $T_{CD}^\Lambda$, is a tree with a set of valid coupling lines. The pruned CD tree also has the property that the end points of coupling lines have a corresponding twin leaf that is frozen to $\sigma_q$, but we have not imposed that requirement above. It is possible that enforcing that requirement and thus limiting the set of [*valid coupling lines*]{} may strengthen the results, but we omit it here for ease of exposition.
\[def:vssm\] Let $T$ denote a rooted tree. Let $\delta:
\mathbb{N} \to \mathbb{R}^+$ be a function that decays to zero as $n$ tends to infinity. The distribution over the spin system at the root, $v$, of $T$ exhibits [*very strong spatial mixing*]{} with rate $\delta(\cdot)$ if and only if for every spin $\sigma_1$, every set of [*valid coupling lines*]{}, for every $\Lambda \subseteq V$ and any two spin configurations, $\sigma_\Lambda, \tau_\Lambda,$ on the frozen spins, we have $$\Big| p_T(v=\sigma_1| X_\Lambda = \sigma_\Lambda) - p_T(v=\sigma_1| X_\Lambda =
\tau_\Lambda) \Big| \leq \delta(\mathrm{dist}(v,\Delta)),$$ where $\Delta \subseteq \Lambda$ stands for the subset in which the frozen spins differ. The computations of the marginal probability on this tree with coupling lines is performed as described in Section \[sse:compsec\].
From the recursions observe that the computation tree can be pruned at any frozen vertex or at any lower endpoint of a coupling line.
It is clear that very strong spatial mixing reduces to strong spatial mixing in the absence of coupling lines. Thus very strong spatial mixing on a tree implies strong spatial mixing with the same rate on the tree.
The main result of this section is that very strong spatial mixing on the infinite regular tree of degree $b+1$ implies strong spatial mixing on any graph with maximum degree $b+1$. We will distinguish between two cases of neighboring interactions:
- Spatially invariant interactions $\Phi(\cdot,\cdot) \geq
0$ and potentials $\phi(\cdot)\geq 0$ where the interaction matrix $\Phi(\cdot,\cdot)$ satisfies the positively alignable condition stated below.
- General spatially invariant interactions $\Phi(\cdot,\cdot) \geq 0$ and potentials $\phi (\cdot)\geq 0$ that need not satisfy the positively alignable condition.
\[def:posalign\] A matrix $\Phi(\cdot,\cdot)$ is said to be [*positively alignable*]{} if there exists a non-negative vector $\alpha(\cdot)$ such that the column vectors of the matrix $\Phi$ can be aligned in the $[1 ... 1]^T$ direction, i.e. $\Phi \alpha = [ 1 1 ...
1]^T$. Alternately, the vector $[1 ... 1]^T$ belongs to the convex cone of the column vectors of $\Phi$.
Note that a sufficient condition for $\Phi$ to be positively alignable is the existence of a [*permissive*]{} spin $\sigma_0$ satisfying the following property: $\Phi(\sigma_i,\sigma_0) = c_1>0$ for all spins $\sigma_i$, and $\phi(\sigma_0) = c_2>0$ (e.g. the “unoccupied" spin in independent sets).
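As a concrete illustration of Definition \[def:posalign\], the following Python sketch checks positive alignability for $2\times 2$ interaction matrices by solving the linear system directly (the helper and the example matrices are ours, chosen for illustration; the general case would call for a small linear program):

```python
def alignable_2x2(Phi):
    """Solve Phi @ alpha = [1, 1]^T exactly for a 2x2 interaction matrix and
    return alpha if it is componentwise nonnegative, else None
    (a direct check of Definition [def:posalign]; larger q would need an LP)."""
    (a, b), (c, d) = Phi
    det = a * d - b * c
    if det == 0:
        return None  # degenerate matrix; not handled in this sketch
    a0, a1 = (d - b) / det, (a - c) / det
    return (a0, a1) if min(a0, a1) >= 0 else None

# Hard-core (independent-set) interaction: spin 0 = "unoccupied" is
# permissive, so the matrix is positively alignable with alpha = (1, 0).
hard_core = [[1.0, 1.0], [1.0, 0.0]]
aligned = alignable_2x2(hard_core)

# A matrix that is not positively alignable: solving gives alpha = (-1, 1).
not_aligned = alignable_2x2([[1.0, 2.0], [0.0, 1.0]])
```

The hard-core case reproduces the permissive-spin remark above: the "unoccupied" column lets the all-ones vector lie in the convex cone of the columns of $\Phi$.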
We will state the next theorem for Case $(i)$, and a similar theorem (see Section \[sse: geninter\]) will hold for the other case. The reason for separating the two cases is that in Case $(i)$ one can stay within the same spin space in the infinite tree $\hat{\mathbb{T}}^b$, to verify very strong spatial mixing.
Interactions that are positively alignable
------------------------------------------
\[thm:treesuffices\] For every positive integer $b$ and fixed $\Phi(\cdot, \cdot),\phi(\cdot)$ such that $\Phi$ is positively alignable, if $\hat{\mathbb{T}}^b$ exhibits very strong spatial mixing with rate $\delta$ then every graph with maximum degree $b+1$ and having the same $\Phi(\cdot, \cdot),\phi(\cdot)$ exhibits strong spatial mixing with rate $\delta$.
The proof of this theorem follows in a straightforward manner from Theorem \[th:comptreepresmargprob\]. If $T_\Lambda$ is the tree in Section \[sse:comptree\] rooted at $v$, i.e. $T_{CD}$ adapted to $\Lambda$, then Theorem \[th:comptreepresmargprob\] implies that $$\label{eq:gratree} \Big|p_G(v=\sigma_1| X_\Lambda =
\sigma_\Lambda) - p_G(v=\sigma_1| X_\Lambda = \tau_\Lambda) \Big|
= \Big|p_{T_\Lambda}(v=\sigma_1| X_\Lambda = \sigma_\Lambda) -
p_{T_\Lambda}(v=\sigma_1| X_\Lambda = \tau_\Lambda)\Big|.$$ Further note that for any subset $\Delta$ of vertices of $G$, $\mathrm{dist}(v,\Delta)$ is equal to the distance between the root $v$ and the subset of vertices of $T_\Lambda$ composed of the copies of vertices in $\Delta$, as the paths in $T_\Lambda$ correspond to paths in $G$. To complete the proof we need to move from $T_\Lambda$ to $\hat{\mathbb{T}}^b$.
Note that $\Phi$ being positively alignable is equivalent to the existence of a probability vector $a(\cdot)$ such that $$\label{eq:posalign} \sum_i \Phi(\sigma_l,\sigma_i) \phi(\sigma_i)
a(\sigma_i) = c_1
>0, ~ \forall \sigma_l.$$ As every vertex in $T_\Lambda$ has at most the degree of the corresponding vertex in $G$, one can view $T_\Lambda$ as a subgraph of $\hat{\mathbb{T}}^b$. (As before, $\Lambda$ is also assumed to contain the vertices that are frozen to $\sigma_q$ by the construction.) Let $\partial(T_\Lambda)$ represent the non-fixed boundary vertices, i.e. vertices in $T_\Lambda$ that are not fixed by $\Lambda$, are not the lower end points of a coupling line, and have degree strictly less than $b + 1$. Let $\Lambda_1$ denote the set of vertices in $\hat{\mathbb{T}}^b \setminus T_\Lambda$ that are attached to one of the vertices in $\partial(T_\Lambda)$. Append $\Lambda_1$ to $T_\Lambda$ to yield a subtree $\hat{\mathbb{T}}^b_\Lambda$ of $\hat{\mathbb{T}}^b$. Choose the spins for the vertices in $\Lambda_1$ independently, distributed proportionally to $\phi(\cdot)a(\cdot)$.
We claim that $$p_{T_\Lambda}(v=\sigma_1| X_\Lambda = \sigma_\Lambda) =
p_{\hat{\mathbb{T}}^b_\Lambda}(v=\sigma_1| X_\Lambda =
\sigma_\Lambda).$$ This follows from the observation that for all $u_i$ in $\Lambda_1$ we have $$\begin{split}
& \frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l)
R_{T_i}^{\sigma_{\Lambda_i}}(u_i=\sigma_l)} \\
& \quad = \frac{\sum_{l=1}^q \Phi_{v,u_i}(\sigma_v,\sigma_l)
a(\sigma_l)\phi(\sigma_l)}{\sum_{l=1}^q
\Phi_{v,u_i}(\sigma_q,\sigma_l) a(\sigma_l)\phi(\sigma_l)}
\stackrel{(a)}{=} 1,
\end{split}$$ where $(a)$ follows from Eq. (\[eq:posalign\]). Thus the recursions in $\hat{\mathbb{T}}^b_\Lambda$ become identical to the ones in $T_\Lambda$.
Now from the very strong spatial mixing property that $\hat{\mathbb{T}}^b$ is assumed to possess, we have $$\begin{split}
& \Big| p_{T_\Lambda}(v=\sigma_1| X_\Lambda = \sigma_\Lambda) -
p_{T_\Lambda}(v=\sigma_1| X_\Lambda = \tau_\Lambda) \Big| \\
& \quad= \Big| p_{\hat{\mathbb{T}}^b_\Lambda}(v=\sigma_1|
X_\Lambda = \sigma_\Lambda) -
p_{\hat{\mathbb{T}}^b_\Lambda}(v=\sigma_1| X_\Lambda =
\tau_\Lambda) \Big| \leq \delta(\mathrm{dist}(v,\Delta)).
\end{split}$$ The above bound, together with the earlier relation between the marginals in $G$ and those in $T_\Lambda$, completes the proof.
\[cor:uniqueGM\] Very strong spatial mixing on $\hat{\mathbb{T}}^b$ (with positively alignable $\Phi$) implies a unique Gibbs measure on all graphs with maximum degree $b+1$.
From Theorem \[thm:treesuffices\], very strong spatial mixing on $\hat{\mathbb{T}}^b$ with positively alignable $\Phi$ implies strong spatial mixing on graphs with maximum degree $b+1$. Since strong spatial mixing is a sufficient condition for the existence of a unique Gibbs measure on all graphs with maximum degree $b+1$, the result follows.
General Interactions {#sse: geninter}
--------------------
Consider the scenario of general interactions. Define an extra (permissive) spin $\sigma_0$ that satisfies the following property: $\Phi(\sigma_0,\sigma_l) = c_2 > 0$ for all $\sigma_l$, and $\phi(\sigma_0) = c_3 > 0.$ If there is very strong spatial mixing on the infinite tree with this extra spin $\sigma_0$ then the following analogue of Theorem \[thm:treesuffices\] holds.
\[thm:gentreesuffices\] For every positive integer $b$, if $\hat{\mathbb{T}}^b$ (with the extra spin $\sigma_0$) exhibits very strong spatial mixing with rate $\delta$ then every graph with maximum degree $b+1$ and having the same $\Phi(\cdot,\cdot),\phi(\cdot)$ exhibits very strong spatial mixing with rate $\delta$.
The proof is similar to that of Theorem \[thm:treesuffices\] except for the following changes. Fix the spins of the vertices in $\Lambda_1$ to $\sigma_0$ instead of generating them independently with probability $a(\cdot)$. Condition also on the event that none of the sites in $T_\Lambda$ are assigned the extra spin $\sigma_0$. With these two changes made, the proof of Theorem \[thm:treesuffices\] carries over and hence is not repeated.
On very strong spatial mixing on trees
---------------------------------------
The idea of very strong spatial mixing differs from the standard notions of spatial mixing owing to the introduction of coupling lines. However, it is key to note that these coupling lines behave similarly in the configurations $\sigma_\Lambda$ and $\tau_\Lambda$, and thus conceptually the notion is similar to strong spatial mixing, where vertices close to the root are allowed to be frozen to identical spins in both $\sigma_\Lambda$ and $\tau_\Lambda$. However, the fact that the actual computations involve spins frozen to different values may lead to a strictly stronger condition than strong spatial mixing. In some sense, this condition demands that the difference of marginal probabilities depend only on the spatial locations of the frozen vertices and not on the spins that these vertices assume, reminiscent of uniform convergence in analysis.
One sufficient condition for very strong spatial mixing is the existence of a Lipschitz contraction for probabilities or log-likelihoods, as in [@ban06; @bgknt06]. In general, suppose one can show that some continuous monotone function $f(p^{\sigma_\Lambda}(\sigma_v))$, where $p^{\sigma_\Lambda}(\sigma_v)$ is computed using the recursions from the probabilities of its children $\{
p_i^{\sigma_\Lambda}(\sigma_l) \}$, satisfies $$| f(p^{\sigma_\Lambda}(\sigma_v)) -
f(p^{\tau_\Lambda}(\sigma_v)) | < K \max_{i,l} |
f(p_i^{\sigma_\Lambda}(\sigma_l)) - f(p_i^{\tau_\Lambda}(\sigma_l))
|$$ for some $K < 1$. Then very strong spatial mixing follows, indeed with an exponential rate.
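As a toy illustration of this contraction mechanism (our own, not part of the paper's argument), consider the zero-field Ising model on the $b$-ary tree. With a homogeneous boundary the tree recursion collapses to a scalar map for the ratio $R = p(+)/p(-)$ at each level, and for $b\tanh\beta < 1$ the map contracts, so the boundary's influence on the root marginal decays geometrically with depth, which is exactly the exponential rate above. The parameters $b=2$, $\beta=0.2$ are arbitrary choices inside the contraction regime:

```python
import math

def root_ratio(b, beta, depth, leaf_ratio):
    # scalar recursion for R = p(+)/p(-) at a vertex of the b-ary tree,
    # all children carrying the same ratio (homogeneous boundary condition)
    R = leaf_ratio
    for _ in range(depth):
        g = (math.exp(beta) * R + math.exp(-beta)) / (math.exp(-beta) * R + math.exp(beta))
        R = g ** b
    return R

def marginal(R):
    return R / (1.0 + R)

b, beta = 2, 0.2          # b * tanh(beta) < 1: contraction / uniqueness regime
diffs = []
for depth in range(1, 12):
    p_plus = marginal(root_ratio(b, beta, depth, 1e6))    # all-(+) boundary
    p_minus = marginal(root_ratio(b, beta, depth, 1e-6))  # all-(-) boundary
    diffs.append(abs(p_plus - p_minus))
# each extra level shrinks the boundary influence by roughly b*tanh(beta)
ratios = [diffs[i + 1] / diffs[i] for i in range(len(diffs) - 1)]
```

Every successive ratio stays below one, so the root-marginal discrepancy decays exponentially in the depth, as the Lipschitz condition predicts.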
Algorithmic implications
========================
The idea of strong spatial mixing, combined with an exponential decay of correlation, has been used recently in [@wei06; @gak07; @bgknt06] to derive polynomial time approximation algorithms for counting problems like independent sets, list colorings and matchings. Traditionally these counting problems were approximated using Markov chain Monte Carlo (MCMC) methods yielding randomized approximation algorithms. In contrast the new techniques based on spatial correlation decay yield [*deterministic*]{} approximation algorithms, thus providing a new alternative to MCMC techniques.
\[def:expdecay\] A pairwise interacting system ($\Phi(\cdot,\cdot),\phi(\cdot)$) is said to have an [*exponential strong spatial correlation decay*]{} if the infinite regular tree of degree $D$, rooted at $v$, exhibits very strong spatial mixing with rate $\delta(\mathrm{dist}(v,\Delta)) \leq
e^{-\kappa_D \mathrm{dist}(v,\Delta)}$ for some $\kappa_D > 0$.
From the previous two sections, it follows that the marginal probabilities (and thus the partition function) of any pairwise interacting system with finitely many spins and an exponential strong spatial correlation decay, whose interactions can be modeled as a graph $G$ with bounded degree, can be approximated efficiently.
\[lem:ptasexpdecay\] Consider a graph $G$ of bounded degree, say $D$, denoting the interactions of a pairwise interacting system with exponential strong spatial correlation decay. Then the marginal probability of any vertex $v$ can be approximated to within a factor $(1\pm \epsilon)$, for $\epsilon = n^{-\beta}$, in polynomial time $\Theta(n^{\frac{\beta}{\kappa_D}\log
((q-1)D)})$.
From the definition of the very strong spatial mixing rate it is clear that the marginal probability at the root can be approximated to a $(1+\epsilon)$ factor, provided $\mathrm{dist}(v,\Delta) > -
\frac{\log \epsilon} {\kappa_D} =: \ell$. That is, for any initial assignment of marginal probabilities to the leaf nodes at depth $\ell$ from the root, the recursions would give a $(1+\epsilon)$ approximation to the true marginal probability.
Let $C$ denote the computation time required for one step of the recursion; then computing the probability at the root, given the marginal probabilities at depth $\ell$, requires $\Theta([(q-1)D]^\ell)$ time. The hidden constants in $\Theta$ depend on $C$ and $q$. Observe that a bound for the computation time, $t_\ell$, at depth $\ell$ can be obtained via the recursion $t_\ell \leq qC + (q-1) D t_{\ell - 1}$.
Therefore, if one wishes to obtain an $\epsilon = n^{-\beta}$ approximation, then the computational complexity would be $\Theta(n^{\frac{\beta}{\kappa_D}\log (q-1)D}).$ Thus, the marginal probability as well as the partition function can be approximated in polynomial time.
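The depth and cost bounds above can be unrolled directly. The sketch below is purely illustrative (the values of $q$, $D$, $C$, $\kappa_D$ and $\beta$ are placeholders, not taken from the paper): it computes the truncation depth $\ell$ needed for an $n^{-\beta}$ error and then unrolls $t_\ell \leq qC + (q-1)Dt_{\ell-1}$:

```python
import math

def truncation_depth(n, beta, kappa):
    # smallest depth l with e^{-kappa * l} <= n^{-beta}, i.e. l >= beta*ln(n)/kappa
    return math.ceil(beta * math.log(n) / kappa)

def work_bound(n, beta, kappa, q, D, C=1.0):
    # unroll t_l <= q*C + (q-1)*D*t_{l-1}, starting from t_0 = C at the leaves
    t = C
    for _ in range(truncation_depth(n, beta, kappa)):
        t = q * C + (q - 1) * D * t
    return t

# with q = 2, D = 3, C = 1 the recursion solves to t_l = 2*3^l - 1, so the
# total work scales as n^{(beta/kappa) log((q-1)D)} up to the ceiling in l
```

This mirrors the complexity statement: the work is exponential in the depth $\ell$ but, since $\ell$ is only logarithmic in $n$, polynomial in $n$.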
It is well known that the partition function can be computed as a telescopic product of marginal probabilities (of smaller and smaller systems) and thus an efficient procedure for yielding the marginal probabilities also yields an efficient procedure (usually time gets multiplied by $n$ and the error gets magnified by $n$) for computing the partition function.
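One standard version of the telescoping identity can be made concrete on a toy system: fix any reference configuration $x^*$; then $P(x^*) = w(x^*)/Z$ factors into conditional marginals, so $Z = w(x^*)/\prod_i P(x_i = x_i^* \mid x_{<i} = x^*_{<i})$. A brute-force sketch for a small Ising chain (our own illustration; the length $n=5$ and coupling $J=0.5$ are arbitrary):

```python
import itertools, math

def weight(x, J=0.5):
    # unnormalized weight exp(J * sum_i s_i s_{i+1}) of an Ising chain configuration
    return math.exp(J * sum(a * b for a, b in zip(x, x[1:])))

n, spins = 5, (-1, 1)
Z = sum(weight(x) for x in itertools.product(spins, repeat=n))  # brute force

# telescoping product of conditional marginals at the reference x* = (+1,...,+1)
xstar = (1,) * n
prob = 1.0
for i in range(n):
    num = sum(weight(xstar[:i + 1] + y) for y in itertools.product(spins, repeat=n - i - 1))
    den = sum(weight(xstar[:i] + y) for y in itertools.product(spins, repeat=n - i))
    prob *= num / den           # P(x_i = +1 | x_1..x_{i-1} = +1)
Z_tele = weight(xstar) / prob   # Z = w(x*) / P(x*)
```

Here each conditional marginal is a marginal of a smaller system (the chain with a frozen prefix), which is exactly the quantity the approximation scheme supplies.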
Remarks and conclusion
======================
[*On colorings in graphs:*]{} Consider the anti-ferromagnetic hard-core Potts model with $q$ spins, or equivalently, consider the vertex coloring of a graph $G$ with $q$ colors. It is conjectured that any infinite graph with maximum degree $D$ (and with appropriate vertex transitivity assumptions, so that the notion of Gibbs measures makes sense) has a unique Gibbs measure as long as $q$ is at least $D + 1$. Using the results in the previous sections, if one establishes that the infinite regular tree with degree $D$ has very strong spatial mixing when $q$ is at least $D + 1$, then this will imply that any graph with maximum degree $D$ will also have very strong spatial mixing and thus a unique Gibbs measure.
It is known from [@jon02] that the infinite regular tree with degree $D$ has [*weak*]{} spatial mixing when the number of colors is at least $D+1$. The nature of the correlation decay suggests that very strong spatial mixing should also hold in this instance. However, it is not clear to the authors that the proof can be modified to provide an argument for very strong spatial mixing (or even whether the proof can be extended to show weak spatial mixing for irregular trees with maximum degree $D$). Another possible approach that is yet to be explored completely is whether weak spatial mixing and some monotonicity arguments, like in [@wei06] for independent sets, will directly imply very strong spatial mixing.
[*Conclusion:*]{} We have shown the existence of a computation tree in graphical models that computes the exact marginal probabilities in any graph. Further, we have shown that from the point of view of very strong spatial mixing, a notion of spatial correlation decay, the infinite regular tree is a worst-case graph. So proving results on infinite regular trees would immediately imply similar results for graphs with bounded degree.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank Mohsen Bayati, Christian Borgs, Jennifer Chayes, Marc Mezard, and Andrea Montanari for helpful comments and useful discussions. Special thanks go to Elchanan Mossel for urging several of us, interested in this topic, to work on this problem as well as for useful discussions with the authors. The authors would like to thank Dror Weitz for making useful comments and suggestions and for identifying an error in an earlier version which has led us to redefine the very strong spatial mixing condition on trees.
[^1]: Part of this work was done while Prasad Tetali was visiting Microsoft Research during 2006.
---
abstract: 'Expressions are given for the Casimir operators of the exceptional group $F_4$ in a concise form similar to that used for the classical groups. The chain $B_4\subset F_4\subset D_{13}$ is used to label the generators of $F_4$ in terms of the adjoint and spinor representations of $B_4$ and to express the 26-dimensional representation of $F_4$ in terms of the defining representation of $D_{13}$. Casimir operators of any degree are obtained and it is shown that a basis consists of the operators of degree 2, 6, 8 and 12.'
author:
- 'Adam M. Bincer'
---
Casimir operators of the exceptional group $F_4$: the chain $B_4\subset F_4\subset D_{13}$
Department of Physics, University of Wisconsin–Madison,\
Madison, Wisconsin 53706
INTRODUCTION
============
Although a general formula exists for the quadratic Casimir operator of any group, this is not the case for operators of higher degree. Efficient expressions have been developed over the years for all the Casimir operators of the classical groups, but not for the exceptional groups. Berdjis[@berdjis] gives the desired Casimir operators implicitly. Until recently, explicit results were available only for $G_2$. The degree 6 Casimir of $G_2$ was given in the work of Hughes and Van der Jeugt[@hughes] by an expression involving 29 terms and in the work by Bincer and Riesselmann[@bin_ries] by an expression involving 23 terms. These results were obtained using computers and leave something to be desired.
Quite recently I have developed a different approach and obtained for $G_2$ results very much like those for the classical groups[@classical]. Moreover, it would seem that the same approach should work for the other exceptional groups. In the present work I address the group $F_4$ and leave $E_{6,7,8}$ for a future paper.
This paper is organized as follows. In the next Sec. after explaining the use of the chain $B_4\subset F_4\subset D_{13}$ I obtain concise expressions for the Casimir operators of $F_4$. These require the knowledge of the generators of $D_{13}$ projected into $F_4$. To obtain this projection I describe in the next Sec. the 26-dimensional representation of $F_4$ and then obtain in the following Sec. the desired projection. In the Conclusion I discuss the quadratic Casimir operator of $F_4$ and demonstrate that the independent Casimir operators are of degree 2, 6, 8 and 12 (corresponding to the exponents of $F_4$ being 1, 5, 7 and 11).
The Casimir operators of $D_{13}$ and $F_4$
===========================================
My approach makes use of the chain $B_4\subset F_4\subset D_{13}$. The subgroup $B_4$ of $F_4$ is used to label the generators of $F_4$. $F_4$ is embedded in $D_{13}$ because the smallest-dimensional representation of $F_4$ is 26-dimensional and orthogonal and $D_{13}$ is the orthogonal group in 26 dimensions.
I denote the 36 generators of $B_4$ as $B_{\alpha
}^\beta=-B_{\bar\beta }^{\bar\alpha }$, with indices ranging from $-4$ to $+4$, zero [*included*]{}, $\bar\alpha \equiv-\alpha $. The hermitian property is expressed in this basis as $B_{\alpha
}^{\beta \dagger}=B_{\beta }^{\alpha }$. I denote the generators of $F_4$ as $B_{\alpha }^{\beta }$ and $S^{pqrs}$, corresponding to the decomposition of the [**52**]{} (the adjoint) of $F_4$ into the [**36**]{} and [**16**]{} of $B_4$, where the [**36**]{} is the adjoint, i.e., the $B_{\alpha }^{\beta }$, and the [**16**]{} is the spinor $S^{pqrs}=\left( S^{\overline{pqrs}} \right)^{\dagger}$, $p,q,r,s=\pm$. The $B_4\subset F_4$ relation is exhibited in the extended Dynkin diagram
$\alpha _0$ & $\alpha _1$ & $\alpha _2$ & $\alpha _3$ & $\alpha _4$\
$u_1-u_2$ & $u_2-u_3$ & $u_3-u_4$ & $u_4$ & $-\frac{1}{2}\left( u_1+u_2+u_3+u_4 \right)$
with $B_4$ obtained by omitting $\alpha _4$ and $F_4$ obtained by omitting $\alpha _0$. That is to say: the $\alpha _i,\ 1\leq i\leq
4$, are the simple roots of $F_4$, while the $\alpha _j,\ 0\leq
j\leq 3$ are the simple roots of $B_4$. The information encoded in the Dynkin diagram is made explicit by setting $\alpha
_0=u_1-u_2, \alpha _1=u_2-u_3,\alpha _2=u_3-u_4,\alpha
_3=u_4,\alpha _4=-\frac{1}{2}(u_1+u_2+u_3+u_4)$, where the $u_i$ are orthogonal unit vectors.
I denote the generators of $D_{13}$ as $D_a^b=-D_{\bar b}^{\bar a},\left( D_a^b \right)^\dagger=D_b^a$, zero [*excluded*]{}. The commutation relations of $D_{13}$ in this basis are $$\label{eq1}
\left[ D_{a}^{b},D_{c}^{d} \right]=\delta
_{c}^{b}D_{a}^{d}-\delta _{a}^{d}D_{c}^{b}+\delta _{\bar
b}^dD_{c}^{\bar a}-\delta _{c}^{\bar a}D_{\bar b}^d$$ It follows from Eq. (\[eq1\]) that $$\label{eq2}
\left[ D_{a}^{b},\left( D^k \right)_{c}^{d} \right]=\delta
_{c}^{b}\left( D^k \right)_{a}^{d}-\delta _{a}^{d}\left( D^k
\right)_{c}^{b}+\delta _{\bar b}^{d}\left( D^k \right)_{c}^{\bar
a}-\delta _{c}^{\bar a}\left( D^k \right)_{\bar b}^d$$ where I define the $k$th power, $k\geq 1$, by $$\label{eq3}
\left( D^k \right)_{a}^{b}=\left( D^{k-1} \right)_{a}^{c}D_{c}^{b}=
D_{a}^{c}\left( D^{k-1} \right)_{c}^{b},\qquad \left( D^0
\right)_{a}^{b}=\delta _{a}^{b}$$ (summation convention understood). It now follows that if I define $$\label{eq4}
{\cal C}_k(D_{13})=(D^k)_{a}^{a}$$ then these ${\cal C}_k$ commute with the generators of $D_{13}$ and so are Casimir operators of $D_{13}$ of degree $k$. Equation (\[eq4\]) provides an elegant expression for the Casimir operators of $D_{13}$ and is an example of the type of expressions valid for all the classical groups. All this is well-known and goes back to Perelomov and Popov[@perelomov]. I remark that the 13 independent Casimirs of this type are of degree $k=2s,1\leq
s\leq13$. This is because it follows from the antisymmetry property $D_{a}^b=-D_{\bar b}^{\bar a}$ that the Casimirs for $k=\mbox{odd}$ can be expressed in terms of those for $k=\mbox{even}$, and it follows from the Cayley-Hamilton theorem that Casimirs of degree $k>26$ can be expressed in terms of those for $k\leq26$. I note further that the Casimir operator of degree 26 can be expressed in terms of the square of a Casimir of degree 13 \[which is not of the form given by Eq. (\[eq4\])\] and so the integrity basis for the Casimirs contains those of degree $k=2s,1\leq s\leq 12$, and $k=13$, which agrees with the fact that the degrees $k$ of the Casimirs in the basis should be equal to the exponents of $D_{13}$ plus one.
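The parity argument for the odd-degree Casimirs can be illustrated at the level of ordinary (commuting-entry) matrices: any $M$ obeying $M=-JM^{T}J$, with $J$ the antidiagonal flip implementing $a\rightarrow\bar a$, satisfies $\mathrm{tr}(M^k)=(-1)^k\,\mathrm{tr}(M^k)$, so the odd-power traces vanish outright (for the operator-valued $D_a^b$ they instead reduce to combinations of even ones). A quick numerical sketch of our own (the size $6$ and the seed are arbitrary):

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
J = np.fliplr(np.eye(n))        # flip across the antidiagonal: a -> bar a
A = rng.standard_normal((n, n))
M = A - J @ A.T @ J             # enforces M = -J M^T J, the analogue of D_a^b = -D_{bar b}^{bar a}
assert np.allclose(J @ M.T @ J, -M)
for k in (1, 3, 5, 7):
    # tr(M^k) = (-1)^k tr(M^k), so odd-power traces vanish
    assert abs(np.trace(np.linalg.matrix_power(M, k))) < 1e-8
```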
We next observe that under the restriction of $D_{13}$ to $F_4$ the adjoint representation of $D_{13}$ decomposes thus $$\label{eq5}
\bf 325=52+273$$ where the $\bf 325$ refers to the adjoint of $D_{13}$ and the ${\bf 52}$ to the adjoint of $F_4$. Thus we can express the generators $D_{a}^b$ of $D_{13}$ in terms of the generators of $F_4$ and the components of the $\bf 273$-plet. We now obtain the Casimir operators of $F_4$ by observing that they are given by Eq. (\[eq4\]) in which the $D_{a}^b$ are replaced by their projections into $F_4$, i.e., $$\label{eq6}
{\cal C}_k(F_4)=(\tilde D^k)_{a}^a$$ where $$\label{eq7}
\tilde D_a^b=D_a^b|_{{\bf273}=0}$$ I mean by Eq. (\[eq7\]) that the projected $\tilde D_{a}^b$ are given by expressing the $D_{a}^b $ in terms of the generators of $F_4$ and members of the $\bf 273$-plet and then setting the contribution of the $\bf 273$-plet equal to zero.
The 26-dimensional representation of $F_4$ {#the-26-dimensional-representation-of-f_4 .unnumbered}
==========================================
To obtain the projected $\tilde D$ I need to obtain first explicit formulas for the 26-dimensional representation of $F_4$.
The generators $D_{a}^b$ of $D_{13}$ are given in the defining 26-dimensional representation as the following $26\times26 $ matrices: $$\label{eq8}
D_{a}^b=I_{ab}-I_{\overline{ba}}$$ where $I_{ab}$ is the $26\times26$ matrix with matrix elements $$\label{eq9}
(I_{ab})_{jk}=\delta _{aj}\delta _{bk}$$ with the labels $j,k$ taking on the same values as $a,b$: $-13\leq
j,k\leq13$, zero excluded.
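Equations (\[eq8\]) and (\[eq9\]) make the commutation relation (\[eq1\]) easy to verify numerically. In the sketch below (the index bookkeeping and the random test are ours), the $26\times26$ matrices $D_a^b = I_{ab} - I_{\bar b\bar a}$ are built explicitly and Eq. (\[eq1\]) is checked on random index quadruples:

```python
import numpy as np

N = 13
# map an index a in {-13,...,-1,1,...,13} (zero excluded) to a row/column 0..25
idx = {a: (a + N if a < 0 else a + N - 1) for a in range(-N, N + 1) if a != 0}

def I(a, b):
    # matrix unit of Eq. (9): (I_{ab})_{jk} = delta_{aj} delta_{bk}
    M = np.zeros((2 * N, 2 * N))
    M[idx[a], idx[b]] = 1.0
    return M

def D(a, b):
    # Eq. (8): D_a^b = I_{ab} - I_{bar b, bar a}, with bar a = -a
    return I(a, b) - I(-b, -a)

def delta(a, b):
    return 1.0 if a == b else 0.0

# check Eq. (1) on random index quadruples
rng = np.random.default_rng(0)
nonzero = [a for a in range(-N, N + 1) if a != 0]
for _ in range(20):
    a, b, c, d = rng.choice(nonzero, size=4)
    lhs = D(a, b) @ D(c, d) - D(c, d) @ D(a, b)
    rhs = (delta(c, b) * D(a, d) - delta(a, d) * D(c, b)
           + delta(-b, d) * D(c, -a) - delta(c, -a) * D(-b, d))
    assert np.allclose(lhs, rhs)
```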
The Cartan generators of $F_4$ are given in the 26-dimensional representation by the $26\times26$ matrices as follows: $$\begin{aligned}
\label{eq10}
h_1 &=& D_{5}^5+D_{6}^6-D_7^7+D_{8}^8-D_{9}^9-D_{10}^{10}\\
\label{eq11}
h_2 &=& D_{3}^3+D_{4}^4-D_{5}^5-D_{6}^6+D_{10}^{10}-D_{11}^{11}\\
\label{eq12}
h_3 &=& \frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{2}^2-2D_{3}^3-D_{4}^4+D_{6}^6-D_{8}^8+D_{9}^9-D_{10}^{10}+D_{11}^{11}-D_{12}^{12}
\right)\\
\label{eq13}
h_4 &=& \frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
-2D_{2}^2+D_{3}^3-D_{4}^4+D_{5}^5-D_{6}^6+D_{7}^7-D_{9}^9+D_{12}^{12}-D_{13}^{13}
\right)\end{aligned}$$ These are precisely the same expressions as were obtained by Patera[@patera] and Ekins and Cornwell[@ekins] if I relabel their rows and columns thus: their $1\rightarrow\mbox{mine}-13$, their $2\rightarrow\mbox{mine}-12,\ldots$, their $13\rightarrow\mbox{mine}-1$, their $14\rightarrow\mbox{mine}+1,\ldots$, their $26\rightarrow\mbox{mine}+13$.
Given these explicit matrices for the Cartan generators $h_i$, the associated generators $e_i$ and $f_i=e_i^\dagger$ in the Chevalley basis are found from the equations[@ekins] $$\label{eq14}
\left[ e_j,h_k \right]=A_{kj}e_j,\qquad \left[ f_j,e_k
\right]=\delta _{jk}h_k$$ The summation convention does not apply to Eqs. (\[eq14\]) and $A$ is the Cartan matrix of $F_4$: $$\label{eq15}
A=
\left(
\begin{array}{cccc}
2 & -1 & 0 & 0\\
-1 & 2 & -1 & 0\\
0 & -2 & 2 & -1\\
0 & 0 & -1 & 2
\end{array}
\right)$$ A solution of Eqs. (\[eq14\]) for the simple generators $e_i$ is as follows: $$\begin{aligned}
e_1 &=& D_{7}^5+D_{9}^6+D_{10}^{8}\label{eq16}\\
e_2 &=& D_{5}^3+D_{6}^4+D_{11}^{10}\label{eq17}\\
e_3 &=& 2^{-\frac{1}{2}}\left( D_{3}^{\bar
1}+D_{3}^1+D_{4}^2+D_{8}^6+D_{10}^9+D_{12}^{11}
\right)\label{eq18}\\
e_4 &=& 2^{-\frac{1}{2}}\left(
zD_{2}^{\bar1}+z^*D_{2}^1+D_{4}^3+D_{6}^5+D_{9}^7+D_{13}^{12} \right)\end{aligned}$$ where $$\label{eq20}
z\equiv e^{i\pi /3}$$ Except for the renumbering of rows and columns and a different choice of phases, my expressions for $e_1$ and $e_2$ are precisely the same as those given by Patera[@patera] and Ekins and Cornwell[@ekins]. However my expressions for $e_3$ and $e_4$ differ from the corresponding expressions of those authors. It would seem that they resolved some of the arbitrariness in the solution by demanding that it be real; I require that it display the antisymmetry across the antidiagonal corresponding to the fact that we have an orthogonal representation.
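The Cartan matrix (\[eq15\]) can be reproduced from the simple roots listed under the extended Dynkin diagram. With those roots, the displayed matrix corresponds to the normalization $A_{ij} = 2(\alpha_i,\alpha_j)/(\alpha_i,\alpha_i)$ (each row normalized by its own root). A short check of our own:

```python
import numpy as np

u = np.eye(4)  # orthonormal unit vectors u_1..u_4
alpha = [
    u[1] - u[2],                          # alpha_1 = u_2 - u_3
    u[2] - u[3],                          # alpha_2 = u_3 - u_4
    u[3],                                 # alpha_3 = u_4
    -0.5 * (u[0] + u[1] + u[2] + u[3]),   # alpha_4 = -(u_1+u_2+u_3+u_4)/2
]
# A_{ij} = 2 (alpha_i, alpha_j) / (alpha_i, alpha_i)
A = np.array([[2 * (ai @ aj) / (ai @ ai) for aj in alpha] for ai in alpha])
```

The asymmetric entries $A_{23}=-1$, $A_{32}=-2$ reflect the two root lengths of $F_4$.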
In accordance with my labeling of generators of $F_4$ in the $B_4$ basis in terms of the adjoint and the spinor of $B_4$ I have that the above simple generators $e_i$ should be labeled as follows: $$\label{eq20?}
\begin{array}{r@{=}l@{\rightarrow}l@{=}l}
\alpha _1 & u_2-u_3 & e_1 & B_{2}^3\\
\alpha _2 & u_3-u_4 & e_2 & B_{3}^4\\
\alpha _3 & u_4 & e_3 & B_{4}^0\\
\alpha _4 & -\frac{1}{2}\left( u_1+u_2+u_3+u_4 \right) & e_4 &
S^{++++}
\end{array}$$ Next I form commutators of the simple generators and obtain level one generators $$\begin{aligned}
\alpha _1+\alpha _2 &=& u_2-u_4\rightarrow B_{2}^4=\left[
B_{2}^3,B_{3}^4 \right]=D_{7}^3+D_{9}^4-D_{11}^8\nonumber \\
\alpha _2+\alpha _3 &=& u_3\rightarrow B_{3}^0=\left[
B_{3}^4,B_{4}^0 \right]=2^{-\frac{1}{2}}\left(
D_{5}^{\bar1}+D_{5}^1+D_{6}^2-D_{8}^4+D_{11}^9-D_{12}^{10} \right)\nonumber \\
\alpha _3+\alpha _4 &=& -\frac{1}{2}(u_1+u_2+u_3-u_4)\rightarrow
S^{+++-}=\left[ B_{4}^0,S^{++++}
\right]\sqrt{2}\nonumber \\
&=& -2^{-\frac{1}{2}}\left(
z^*D_{4}^{\bar1}+zD_{4}^1+D_{3}^{\bar2}-D_{8}^5-D_{10}^{7}+D_{13} ^{11}
\right)\label{eq21}\end{aligned}$$ level two generators $$\begin{aligned}
\alpha _1+\alpha _2+\alpha _3 &=& u_2\rightarrow B_{2}^0=\left[
B_{2}^3,B_{3}^0 \right]=2^{-\frac{1}{2}}\left(
D_{7}^{\bar1}+D_{7}^1+D_{9}^2-D_{10}^4-D_{11}^6+D_{12}^8 \right)\nonumber \\
\alpha _2+\alpha _3+\alpha _4 &=& -\frac{1}{2}\left(
u_1+u_2-u_3+u_4 \right)\rightarrow S^{++-+}\nonumber \\
&=& \left[
B_{3}^0,S^{++++} \right]\sqrt{2}=-2^{-\frac{1}{2}}\left(
z^*D_{6}^{\bar1}+zD_{6}^1+D_{5}^{\bar2}+D_{8}^3-D_{11}^7-D_{13}^{10}
\right)\nonumber \\
\alpha _2+2\alpha _3 &=& u_3+u_4\rightarrow B_{3}^{\bar4}=\left[
B_{4}^0,B_{3}^0 \right]=D_{5}^{\bar3}+D_{8}^2+D_{12}^9\label{eq22}\end{aligned}$$ level three generators $$\begin{aligned}
\alpha _1+\alpha _2+\alpha _3+\alpha _4 &=& -\frac{1}{2}\left(
u_1-u_2+u_3+u_4 \right)\rightarrow S^{+-++}=\left[
B_{2}^0,S^{++++} \right]\sqrt{2}\nonumber \\
&=& -2^{-\frac{1}{2}}\left(
z^*D_{9}^{\bar1}+zD_{9}^1+D_{7}^{\bar2}+D_{10}^3+D_{11}^5+D_{13}^8
\right)\nonumber \\
\alpha _1+\alpha _2+2\alpha _3 &=& u_2+u_4\rightarrow
B_{2}^{\bar4}=\left[ B_{4}^0,B_{2}^0
\right]=D_{7}^{\bar3}-D_{12}^6+D_{10}^2\nonumber \\
\alpha _2+2\alpha _3+\alpha _4 &=& -\frac{1}{2}\left(
u_1+u_2-u_3-u_4 \right)\rightarrow S^{++--}=\left[ B_{3}^0,
S^{+++-} \right]\sqrt{2}\nonumber \\
&=& -2^{-\frac{1}{2}}\left(
zD_{8}^{\bar1}+z^*D_{8}^1-D_{6}^{\bar3}-D_{5}^{\bar4}+D_{12}^7-D_{13}^9
\right)\label{eq23}\end{aligned}$$ level four generators $$\begin{aligned}
\alpha _1+\alpha _2+2\alpha _3+\alpha _4 &=&
-\frac{1}{2}\left( u_1-u_2+u_3-u_4 \right)\rightarrow
S^{+-+-}=\left[ B_{2}^0,S^{+++-} \right]\sqrt{2}\nonumber \\
&=& -2^{-\frac{1}{2}}\left(
zD_{10}^{\bar1}+z^*D_{10}^1-D_{7}^{\bar4}-D_{9}^{\bar3}-D_{12}^5+D_{13}^6
\right)\nonumber \\
\alpha _1+2\alpha _2+2\alpha _3 &=& u_2+u_3\rightarrow
B_{2}^{\bar3}=\left[ B_{3}^{\bar4},B_{2}^4
\right]=D_{7}^{\bar5}+D_{12}^4+D_{11}^2\nonumber \\
\alpha _2+2\alpha _3+2\alpha _4 &=& -u_1-u_2\rightarrow
B_{\bar1}^2=\left[ S^{++-+},S^{+++-}
\right]=D_{4}^{\bar6}+D_{8}^{\bar2}+D_{13}^7\label{eq24}\end{aligned}$$ level five generators $$\begin{aligned}
\alpha _1+\alpha _2+2\alpha _3+2\alpha _4 &=& -u_1-u_3\rightarrow
B_{\bar 1}^3=\left[ B_{\bar1}^2,B_{2}^3
\right]=D_{9}^{\bar4}-D_{10}^{\bar2}+D_{13}^5\nonumber \\
\alpha _1+2\alpha _2+2\alpha _3+\alpha _4 &=& -\frac{1}{2}\left(
u_1-u_2-u_3+u_4 \right)\rightarrow S^{+--+}=\left[
B_{3}^0,S^{+-++} \right]\sqrt{2}\nonumber \\
&=& 2^{-\frac{1}{2}}\left(
zD_{11}^{\bar1}+z^*D_{11}^1-D_{7}^{\bar6}-D_{9}^{\bar5}+D_{12}^3-D_{13}^4
\right)\label{eq25}\end{aligned}$$ level six generators $$\begin{aligned}
\alpha _1+2\alpha _2+2\alpha _3+2\alpha _4 &=&
-u_1-u_4\rightarrow B_{\bar1}^4=\left[ B_{\bar1}^2,B_{2}^4
\right]=D_{6}^{\bar9}+D_{11}^{\bar2}+D_{13}^3\nonumber \\
\alpha _1+2\alpha _2+3\alpha _3+\alpha _4 &=& -\frac{1}{2}\left(
u_1-u_2-u_3-u_4 \right)\rightarrow S^{+---}=-\left[ B_{4}^0,S^{+--+}
\right]\sqrt{2}\nonumber \\
&=& 2^{-\frac{1}{2}}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
z^*D_{12}^{\bar1}+zD_{12}^1+D_{7}^{\bar8}+D_{10}^{\bar5}-D_{11}^{\bar3}-D_{13}^2
\right)\label{eq26}\end{aligned}$$ one level seven generator $$\begin{aligned}
\alpha _1+2\alpha _2+3\alpha _3+2\alpha _4 &=& -u_1\rightarrow
B_{\bar1}^0=-\left[ S^{+---},S^{++++} \right]\sqrt{2}\nonumber \\
&=& 2^{-\frac{1}{2}}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{13}^{\bar1}+D_{13}^1+D_{9}^{\bar8}+D_{10}^{\bar6}-D_{11}^{\bar4}-D_{12}^{\bar2}
\right)\label{eq27}\end{aligned}$$ one level eight generator $$\label{eq28}
\alpha _1+2\alpha _2+4\alpha _3+2\alpha _4=-u_1+u_4\rightarrow
B_{4}^1=\left[ B_{\bar1}^0,B_{4}^0
\right]=-D_{10}^{\bar8}+D_{12}^{\bar4}-D_{13}^{\bar3}$$ one level nine generator $$\label{eq29}
\alpha _1+3\alpha _2+4\alpha _3+2\alpha _4=-u_1+u_3\rightarrow
B_{3}^1=\left[ B_{\bar1}^0,B_{3}^0
\right]=-D_{11}^{\bar8}+D_{12}^{\bar6}-D_{13}^{\bar5}$$ and one level ten generator $$\label{eq30}
2\alpha _1+3\alpha _2+4\alpha _3+2\alpha _4=-u_1+u_2\rightarrow
B_{2}^1=\left[ B_{\bar1}^0, B_{2}^0
\right]=D_{10}^{\overline{11}}+D_{12}^{\bar9}-D_{13}^{\bar7}$$ Note that the root corresponding to the highest level, Eq.(\[eq30\]), is precisely the negative of $\alpha _0$, where $\alpha _0$ is the extra root added to the Dynkin diagram of $F_4$ to produce the extended Dynkin diagram.
In addition to the above 24 $e$-type generators, Eqs.(\[eq16\])–(\[eq30\]), I have 24 $f$-type generators obtained by taking the hermitian conjugate of the above. Thus corresponding to the expressions above for the simple (level zero) lowering generators $e_i$ I have $$\begin{aligned}
f_1 &=&
e_1^\dagger=B_{2}^{3^\dagger}=B_{3}^2=D_{5}^7+D_{6}^9+D_{8}^{10}\nonumber \\
f_2 &=&
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
e_{2}^{\dagger}=B_{3}^{4^{\dagger}}=B_{4}^3=D_{3}^5+D_{4}^6+D_{10}^{11}\nonumber \\
f_3 &=&
e_3^\dagger=B_{4}^{0^{\dagger}}=B_{0}^4=2^{-\frac{1}{2}}\left(
D_{\bar1}^3+D_{1}^3+D_{2}^4+D_{6}^8+D_{9}^{10}+D_{11}^{12} \right)\nonumber \\
f_4 &=&
e_{4}^\dagger=S^{++++^\dagger}=S^{----}=2^{-\frac{1}{2}}\left(
z^*D_{\bar1}^2+zD_{1}^2+D_{3}^4+D_{5}^6+D_{7}^9+D_{12}^{13}
\right)\label{eq31}\end{aligned}$$ and so on for the generators in higher levels.
Moreover, for the hermitian Cartan generators I have that the Chevalley and $B_4$ bases are related as follows: $$\begin{aligned}
h_1 &=& \left[ f_1,e_1 \right]=\left[ B_{3}^2,B_{2}^3
\right]=B_{3}^3-B_{2}^2\nonumber \\
h_2 &=& \left[ f_2,e_2 \right]=\left[ B_{4}^3,B_{3}^4
\right]=B_{4}^4-B_{3}^3\nonumber \\
h_3 &=& \left[ f_3,e_3 \right]=\left[ B_{0}^4,B_{4}^0
\right]=-B_{4}^4\nonumber \\
h_4 &=& \left[ f_4,e_4 \right]=\left[ S^{----},S^{++++}
\right]=\frac{1}{2}\left( B_{1}^1+B_{2}^2+B_{3}^3+B_{4}^4
\right)\label{eq32}\end{aligned}$$ or, solving above for the $B_{\alpha }^{\alpha }$ and using Eqs.(\[eq10\])–(\[eq13\]), $$\begin{aligned}
-B_{1}^1 &=& h_1+2h_2+3h_3+2h_4\nonumber \\
&=& \frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{2}^2+D_{4}^4+D_{6}^6+D_{8}^8+D_{9}^9+D_{10}^{10}+D_{11}^{11}+D_{12}^{12}+2D_{13}^{13} \right)\nonumber \\
-B_{2}^2 &=& h_1+h_2+h_3=\frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{2}^2+D_{4}^4+D_{6}^6-2D_{7}^7+D_{8}^8-D_{9}^9-D_{10}^{10}-D_{11}^{11}-D_{12}^{12} \right)\nonumber \\
-B_{3}^3 &=& h_2+h_3=\frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{2}^2+D_{4}^4-2D_{5}^5-D_{6}^6-D_{8}^8+D_{9}^9+D_{10}^{10}-D_{11}^{11}-D_{12}^{12}\right)\nonumber \\
-B_{4}^4 &=& h_3=\frac{1}{2}\left(
%% FOLLOWING LINE CANNOT BE BROKEN BEFORE 80 CHAR
D_{2}^2-2D_{3}^3-D_{4}^4+D_{6}^6-D_{8}^8+D_{9}^9-D_{10}^{10}+D_{11}^{11}-D_{12}^{12}
\right)\label{eq33}\end{aligned}$$
The projected generators $\tilde D_{a}^b$ {#the-projected-generators-tilde-d_ab .unnumbered}
=========================================
Since the $\bf26$ is the defining representation of $D_{13}$, the results above, which express the generators of $F_4$ in the 26-dimensional representation in terms of the generators of $D_{13}$ in the 26-dimensional representation, can be interpreted as giving the generators of $F_4$ in terms of those of $D_{13}$ in any representation. The $\tilde D_{a}^b$, the generators of $D_{13}$ projected into $F_4$, are then given by inverting the above equations.
Thus the 13 Cartan generators of $D_{13}$ projected into $F_4$ are given by inverting Eqs. (\[eq33\]): $$\begin{aligned}
\tilde D_{1}^1 &=& 0\nonumber \\
\tilde D_{2}^2 &=& -\frac{1}{6}\left( B_{1}^1+B_{2}^2+B_{3}^3+B_{4}^4
\right)\nonumber \\
\tilde D_{3}^3 &=& \frac{1}{3}B_{4}^4\nonumber \\
\tilde D_{4}^4 &=& -\frac{1}{6}\left(
B_{1}^1+B_{2}^2+B_{3}^3-B_{4}^4 \right)\nonumber \\
\tilde D_{5}^5 &=& \frac{1}{3}B_{3}^3\nonumber \\
\tilde D_{6}^6 &=& -\frac{1}{6}\left(
B_{1}^1+B_{2}^2-B_{3}^3+B_{4}^4 \right)\nonumber \\
\tilde
D_{7}^7 &=& \frac{1}{3}B_{2}^2\nonumber \\
\tilde D_{8}^8 &=& -\frac{1}{6}\left(
B_{1}^1+B_{2}^2-B_{3}^3-B_{4}^4 \right)\nonumber \\
\tilde D_{9}^9 &=& -\frac{1}{6}\left(
B_{1}^1-B_{2}^2+B_{3}^3+B_{4}^4 \right)\nonumber \\
\tilde D_{10}^{10} &=& -\frac{1}{6}\left(
B_{1}^1-B_{2}^2+B_{3}^3-B_{4}^4 \right)\nonumber \\
\tilde D_{11}^{11} &=& -\frac{1}{6}\left(
B_{1}^1-B_{2}^2-B_{3}^3+B_{4}^4 \right)\nonumber \\
\tilde D_{12}^{12} &=& -\frac{1}{6}\left(
B_{1}^1-B_{2}^2-B_{3}^3-B_{4}^4 \right)\nonumber \\
\tilde D_{13}^{13} &=& -\frac{1}{3}B_{1}^1\label{eq34}\end{aligned}$$ Perhaps an explanation of how Eq. (\[eq34\]) is obtained is in order. Equations (\[eq33\]) are four equations for four $B_{\alpha }^{\alpha }$ in terms of thirteen $D_{a}^a$ (no summations). In addition there are nine more equations for appropriate components of the $\bf 273$-plet involving these same thirteen $D_{a}^a$. This total of 13 equations can be written as follows $$\label{eq35}
b_A=U_{AB}d_B$$ where $1\leq A,B\leq 13$, $d_B\equiv D_{B}^B$ (no summation), $b_A\equiv B_{A}^A$ (no summation) for $A=1,2,3,4$, and $b_A$ for $5\leq A\leq 13$ refers to components of the $\bf
273$-plet. Inversion of Eq. (\[eq35\]) is achieved by $$\label{eq36}
d_A=U^{-1}_{AB}b_B$$ where the inverse of the $13\times13$ matrix $U$ is given by $$\label{eq37}
U^{-1}=\frac{1}{3}U^\dagger$$ where the factor $\frac{1}{3}$ accounts for the difference in the normalization of the $d_A$ and $b_A$. Finally the projected $\tilde d_A$ are obtained by setting in Eq. (\[eq36\]) $b_A=0$ for $5\leq A\leq13$.
By proceeding in the same fashion I obtain the 156 generators $\tilde D_{a}^b$ with $a>b$ by inverting the 24 $e$-type equations with the result:\
for the 24 $\tilde D_{13}^b$ with $13>b$: $$\begin{aligned}
\tilde D_{13}^{\overline{12}} &=& \tilde
D_{13}^{\overline{11}}=\tilde D_{13}^{\overline{10}}=\tilde
D_{13}^{\bar9}=\tilde D_{13}^{\bar8}=\tilde D_{13}^{\bar6}=\tilde
D_{13}^{\bar4}=\tilde D_{13}^{\bar2}=0,\nonumber \\
\tilde D_{13}^{\bar7} &=& -\frac{1}{3}B_{2}^1,\ \tilde
D_{13}^{\bar5}=-\frac{1}{3}B_{3}^1,\ \tilde
D_{13}^{\bar3}=-\frac{1}{3}B_{4}^1,\nonumber \\
\tilde D_{13}^{\bar1} &=&
\tilde D_{13}^1=
-\frac{1}{3\sqrt{2}}B_{0}^1,\ \tilde
D_{13}^2=-\frac{1}{3\sqrt{2}}S^{+---},\nonumber \\
\tilde D_{13}^3 &=& \frac{1}{3}B_{\bar1}^4,\ \tilde
D_{13}^4=-\frac{1}{3\sqrt{2}}S^{+--+},\ \tilde
D_{13}^5=\frac{1}{3}B_{\bar1}^3,\nonumber \\
\tilde D_{13}^6 &=& -\frac{1}{3\sqrt{2}}S^{+-+-},\ \tilde
D_{13}^7=\frac{1}{3}B_{\bar1}^2,\ \tilde
D_{13}^8=-\frac{1}{3\sqrt{2}}S^{+-++},\nonumber \\
\tilde D_{13}^9 &=& \frac{1}{3\sqrt{2}}S^{++--},\ \tilde
D_{13}^{10}=\frac{1}{3\sqrt{2}}S^{++-+},\nonumber \\
\tilde D_{13}^{11} &=& -\frac{1}{3\sqrt{2}}S^{+++-},\ \tilde
D_{13}^{12}=\frac{1}{3\sqrt{2}}S^{++++}\label{eq38}\end{aligned}$$ for the 22 $\tilde D_{12}^b$ with $12>|b|$: $$\begin{aligned}
\tilde D_{12}^{\overline{11}} &=& \tilde
D_{12}^{\overline{10}}=\tilde D_{12}^{\bar8}=\tilde
D_{12}^{\bar7}=\tilde D_{12}^{\bar5}=\tilde D_{12}^{\bar3}=\tilde
D_{12}^{2}=0,\nonumber \\
\tilde D_{12}^{\bar9} &=& \frac{1}{3}B_{2}^1,\ \tilde
D_{12}^{\bar6}=\frac{1}{3}B_{3}^1,\ \tilde
D_{12}^{\bar4}=\frac{1}{3}B_{4}^1\nonumber \\
\tilde D_{12}^{\bar2} &=& \frac{1}{3\sqrt{2}}B_{0}^1,\ \tilde
D_{12}^{\bar1}=\frac{z}{3\sqrt{2}}S^{+---},\ \tilde
D_{12}^1=\frac{z^*}{3\sqrt{2}}S^{+---},\nonumber \\
\tilde D_{12}^3 &=& \frac{1}{3\sqrt{2}}S^{+--+},\ \tilde
D_{12}^4=\frac{1}{3}B_{2}^{\bar3},\ \tilde
D_{12}^5=\frac{1}{3\sqrt{2}}S^{+-+-},\nonumber \\
\tilde D_{12}^6 &=& \frac{1}{3}B_{4}^{\bar2},\ \tilde
D_{12}^7=-\frac{1}{3\sqrt{2}}S^{++--},\ \tilde
D_{12}^8=\frac{1}{3\sqrt{2}}B_{2}^0,\nonumber \\
\tilde D_{12}^9 &=& \frac{1}{3}B_{3}^{\bar4},\ \tilde
D_{12}^{10}=\frac{1}{3\sqrt{2}}B_{0}^{\bar3},\ \tilde
D_{12}^{11}=\frac{1}{3\sqrt{2}}B_{4}^0\label{eq39}\end{aligned}$$ for the 20 $\tilde D_{11}^b$ with $11>|b|$: $$\begin{aligned}
\tilde D_{11}^{\bar9} &=& \tilde D_{11}^{\bar7}=\tilde
D_{11}^{\bar6}=\tilde D_{11}^{\bar5}=\tilde D_{11}^{3}=\tilde
D_{11}^4=0,\nonumber \\
\tilde D_{11}^{\overline{10}} &=& -\frac{1}{3}B_{2}^1,\ \tilde
D_{11}^{\overline8}=-\frac{1}{3}B_{3}^1,\ \tilde
D_{11}^{\overline4}=\frac{1}{3\sqrt{2}}B_{0}^1,\nonumber \\
\tilde D_{11}^{\overline3} &=& -\frac{1}{3\sqrt{2}}S^{+---},\
\tilde D_{11}^{\overline2}=\frac{1}{3}B_{\overline1}^4,\ \tilde
D_{11}^{\overline1}=\frac{z^*}{3\sqrt{2}}S^{+--+},\nonumber \\
\tilde D_{11}^1 &=& \frac{z}{3\sqrt{2}}S^{+--+},\ \tilde
D_{11}^2=\frac{1}{3}B_{2}^{\bar3},\ \tilde
D_{11}^5=-\frac{1}{3\sqrt{2}}S^{+-++},\nonumber \\
\tilde D_{11}^6 &=& \frac{1}{3\sqrt{2}}B_{0}^{\bar2},\ \tilde
D_{11}^7=\frac{1}{3\sqrt{2}}S^{++-+},\ \tilde
D_{11}^8=-\frac{1}{3}B_2^4,\nonumber \\
\tilde D_{11}^9 &=& \frac{1}{3\sqrt{2}}B_2^0,\ \tilde
D_{11}^{10}=\frac{1}{3}B_3^4\label{eq40}\end{aligned}$$ for the 18 $\tilde D_{10}^b$ with $10>|b|$: $$\begin{aligned}
\tilde D_{10}^{\bar9} &=& \tilde D_{10}^{\bar7}=\tilde
D_{10}^{\bar4}=\tilde D_{10}^{\bar3}=\tilde D_{10}^{5}=\tilde
D_{10}^{6}=0,\nonumber \\
\tilde D_{10}^{\bar8} &=& -\frac{1}{3}B_4^1,\ \tilde
D_{10}^{\bar5}=\frac{1}{3\sqrt{2}}S^{+---},\ \tilde
D_{10}^{\bar2}=-\frac{1}{3}B_{\bar1}^3\nonumber \\
\tilde D_{10}^{\bar1} &=& -\frac{z^*}{3\sqrt{2}}S^{+-+-},\ \tilde
D_{10}^1=-\frac{z}{3\sqrt{2}}S^{+-+-},\ \tilde
D_{10}^2=\frac{1}{3}B_2^{\bar4},\nonumber \\
\tilde D_{10}^3 &=& -\frac{1}{3\sqrt{2}}S^{+-++},\ \tilde
D_{10}^4=\frac{1}{3\sqrt{2}}B_{0}^{\bar2},\ \tilde
D_{10}^7=\frac{1}{3\sqrt{2}}S^{+++-},\nonumber \\
\tilde D_{10}^8 &=& \frac{1}{3}B_2^3,\ \tilde
D_{10}^9=\frac{1}{3\sqrt{2}}B_4^0\label{eq41}\end{aligned}$$ for the 16 $\tilde D_{9}^b$ with $9>|b|$: $$\begin{aligned}
\tilde D_9^{\bar7} &=& \tilde D_9^{\bar2}=\tilde D_9^3=\tilde
D_9^5=\tilde D_9^8=0,\nonumber \\
\tilde D_9^{\bar8} &=& -\frac{1}{3\sqrt{2}}B_0^1,\ \tilde
D_9^{\bar6}=\frac{1}{3}B_{\bar4}^1,\ \tilde
D_9^{\bar5}=-\frac{1}{3\sqrt{2}}S^{+--+},\nonumber \\
\tilde D_9^{\bar4} &=& \frac{1}{3}B_{\bar1}^3,\ \tilde
D_9^{\bar3}=\frac{1}{3\sqrt{2}}S^{+-+-},\ \tilde
D_9^{\bar1}=-\frac{z}{3\sqrt{2}}S^{+-++},\nonumber \\
\tilde D_9^1 &=& -\frac{z^*}{3\sqrt{2}}S^{+-++},\ \tilde
D_9^2=\frac{1}{3\sqrt{2}}B_2^0,\ \tilde D_9^4=\frac{1}{3}B_2^4,\nonumber \\
\tilde D_9^6 &=& \frac{1}{3}B_2^3,\ \tilde
D_9^7=\frac{1}{3\sqrt{2}}S^{++++}\label{eq42}\end{aligned}$$ for the 14 $\tilde D_8^b$ with $8>|b|$: $$\begin{aligned}
\tilde D_8^{\bar6} &=& \tilde D_8^{\bar5}=\tilde
D_8^{\bar4}=\tilde D_8^{\bar3}=\tilde D_8^7=0,\nonumber \\
\tilde D_8^{\bar7} &=& -\frac{1}{3\sqrt{2}}S^{+---},\ \tilde
D_8^{\bar2}=\frac{1}{3}B_{\bar1}^2,\ \tilde
D_8^{\bar1}=-\frac{z^*}{3\sqrt{2}}S^{++--},\nonumber \\
\tilde D_8^1 &=& -\frac{z}{3\sqrt{2}}S^{++--},\ \tilde
D_8^2=\frac{1}{3}B_3^{\bar4},\ \tilde
D_8^3=-\frac{1}{3\sqrt{2}}S^{++-+},\nonumber \\
\tilde D_8^4 &=& -\frac{1}{3\sqrt{2}}B_3^0,\ \tilde
D_8^5=\frac{1}{3\sqrt{2}}S^{+++-},\ \tilde
D_8^6=\frac{1}{3\sqrt{2}}B_4^0\label{eq43}\end{aligned}$$ for the 12 $\tilde D_7^b$ with $7>|b|$: $$\begin{aligned}
\tilde D_7^2 &=& \tilde D_7^4=\tilde D_7^6=0,\ \tilde
D_7^{\bar6}=-\frac{1}{3\sqrt{2}}S^{+--+},\ \tilde
D_7^{\bar5}=\frac{1}{3}B_2^{\bar3},\nonumber \\
\tilde D_7^{\bar4} &=& \frac{1}{3\sqrt{2}}S^{+-+-},\ \tilde
D_7^{\bar3}=\frac{1}{3}B_2^{\bar4},\ \tilde
D_7^{\bar2}=-\frac{1}{3\sqrt{2}}S^{+-++},\nonumber \\
\tilde D_7^{\bar1} &=& \tilde D_7^{1}=\frac{1}{3\sqrt{2}}B_2^0,\
\tilde D_7^3=\frac{1}{3}B_2^4,\ \tilde
D_7^5=\frac{1}{3}B_2^3\label{eq44}\end{aligned}$$ for the 10 $\tilde D_6^b$ with $6>|b|$: $$\begin{aligned}
\tilde D_6^{\bar5} &=& \tilde D_6^{\bar2}=\tilde D_6^3=0,\
\tilde D_6^{\bar4}=-\frac{1}{3}B_{\bar1}^2,\
\tilde D_6^{\bar3}=\frac{1}{3\sqrt{2}}S^{++--},\nonumber \\
\tilde D_6^{\bar1} &=& -\frac{z}{3\sqrt{2}}S^{++-+},\
\tilde D_6^1=-\frac{z^*}{3\sqrt{2}}S^{++-+},\
\tilde D_6^2=\frac{1}{3\sqrt{2}}B_3^0,\nonumber \\
\tilde D_6^4 &=& \frac{1}{3}B_3^4,\
\tilde D_6^5=\frac{1}{3\sqrt{2}}S^{++++}\label{eq45}\end{aligned}$$ for the 8 $\tilde D_5^b$ with $5>|b|$: $$\begin{aligned}
\tilde D_5^2 &=& \tilde D_5^4=0,\ \tilde
D_5^{\bar4}=\frac{1}{3\sqrt{2}}S^{++--},\ \tilde
D_5^{\bar3}=\frac{1}{3}B_3^{\bar4},\nonumber \\
\tilde D_5^{\bar2} &=& -\frac{1}{3\sqrt{2}}S^{++-+},\ \tilde
D_5^{\bar1}=\tilde D_5^1=\frac{1}{3\sqrt{2}}B_3^0,\nonumber \\
\tilde D_5^3 &=& \frac{1}{3}B_3^4\label{eq46}\end{aligned}$$ for the 6 $\tilde D_4^b$ with $4>|b|$: $$\begin{aligned}
\tilde D_4^{\bar3} &=& \tilde D_4^{\bar2}=0,\ \tilde
D_4^{\bar1}=-\frac{z}{3\sqrt{2}}S^{+++-},\ \tilde
D_4^1=-\frac{z^*}{3\sqrt{2}}S^{+++-},\nonumber \\
\tilde D_4^2 &=& \frac{1}{3\sqrt{2}}B_4^0,\ \tilde
D_4^3=\frac{1}{3\sqrt{2}}S^{++++}\label{eq47}\end{aligned}$$ for the 4 $\tilde D_3^b$ with $3>|b|$: $$\label{eq48}
\tilde D_3^2=0,\ \tilde
D_3^{\bar2}=-\frac{1}{3\sqrt{2}}S^{+++-},\ \tilde
D_3^{\bar1}=\tilde D_3^1=\frac{1}{3\sqrt{2}}B_4^0$$ and finally for the two $\tilde D_2^b$ with $2>|b|$: $$\label{eq49}
\tilde D_2^{\bar1}=\frac{z^*}{3\sqrt{2}}S^{++++},\ \tilde
D_2^1=\frac{z}{3\sqrt{2}}S^{++++}$$ Lastly the 156 $\tilde D_a^b$ with $a<b$ are obtained from the results above by hermitian conjugation: $$\label{eq50}
\tilde D_b^a=\tilde D_a^{b^\dagger},\ B_b^a=B_a^{b^\dagger},\
S^{\overline{pqrs}}=S^{pqrs^\dagger}$$ This completes the calculation of the Casimir operators of $F_4$.
Conclusion {#conclusion .unnumbered}
==========
I conclude with two remarks:
1. For $k=2$ the result of inserting the explicit formulas for the projected $\tilde
D$, Eqs. (\[eq34\]), (\[eq38\]–\[eq50\]), into Eq. (\[eq6\]) can be simplified into the following formula for the quadratic Casimir operator of $F_4$: $$\label{eq51}
{\cal C}_2(F_4)=\tilde D_a^b\tilde D_b^a=\frac{1}{3}B_\alpha
^\beta B_\beta ^\alpha +\frac{2}{3}S^{pqrs}S^{\overline{pqrs}}$$ and I remind the reader that the various subscripts are summed over the following range: $-13\leq a,b\leq13$ (zero excluded); $-4\leq \alpha ,\beta \leq4$ (zero included); $p,q,r,s=\pm$.
The general form of this result for the quadratic Casimir of $F_4$ in the $B_4$ basis was to be expected since the two pieces in Eq. (\[eq51\]) are the only quadratic invariants of the subgroup $B_4$ that can be formed out of the adjoint $\bf 36$ and the spinor $\bf16$ of $B_4$. Thus this result can be viewed as a test of the formalism.
2. Recall that according to Eq. (\[eq4\]) the independent Casimirs are of degree $k=2s,\ 1\leq s\leq13$. Now consider the [ *Cartan*]{} part of the Casimirs. If I denote the Cartan part of ${\cal C}_k(F_4)$ by ${\cal K}_k$ then it follows from Eq. (\[eq6\]) that $$\label{eq52}
{\cal K}_k=
\sum_a\left( \tilde D_a^a \right)^k$$ where the sum runs over all $a$, $-13\leq a\leq13$ (zero excluded), and each term is raised to the $k$-th power before summing. Since $\tilde D_{\bar a}^{\bar a}=-\tilde D_a^a$ this is manifestly zero for $k=\mbox{odd}$. For $k=\mbox{even}$ Eq. (\[eq52\]) becomes (where I have set $b_\alpha \equiv B_\alpha
^\alpha $, no summation) $$\begin{aligned}
&&{\cal K}_k=2\sum_{a=1}^{13}\left( \tilde D_a^a \right)^k =
2\cdot6^{-k}\left\{\left( b_1+b_2+b_3+b_4 \right)^k+\left( 2b_4
\right)^k+\left( b_1+b_2+b_3-b_4 \right)^k+\left( 2b_3 \right)^k
\right.\nonumber \\
&& +\left( b_1+b_2-b_3+b_4 \right)^k+\left( 2b_2
\right)^k+\left( b_1+b_2-b_3-b_4 \right)^k+\left(
b_1-b_2+b_3+b_4 \right)^k\nonumber \\
&&\left.+\left( b_1-b_2+b_3-b_4 \right)^k+\left( b_1-b_2-b_3+b_4
\right)^k+\left( b_1-b_2-b_3-b_4 \right)^k+\left( 2b_1
\right)^k\right\}\label{eq53}
\end{aligned}$$ For $k=2$ Eq. (\[eq53\]) gives $$\label{eq54}
{\cal K}_2=\frac{2}{3}\left( b_1^2+b_2^2+b_3^2+b_4^2 \right)$$ while for $k=4$ it gives $$\label{eq55}
{\cal K}_4=3^{-3}\left( b_1^2+b_2^2+b_3^2+b_4^2 \right)^2$$ This proves that the degree 4 Casimir is not functionally independent of the degree 2 Casimir.
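These power sums can be checked mechanically. The sketch below is my own verification code, not part of the original derivation: it implements Eq. (\[eq53\]) with exact rational arithmetic and confirms both ${\cal K}_2=\frac{2}{3}\sum_\alpha b_\alpha^2$ and the functional dependence ${\cal K}_4={\cal K}_2^2/12$ implied by Eqs. (\[eq54\]) and (\[eq55\]).

```python
from fractions import Fraction
from itertools import product

def cartan_casimir(k, b1, b2, b3, b4):
    """Cartan part K_k of the degree-k Casimir of F_4, Eq. (53), for even k."""
    # the 8 terms (b1 +/- b2 +/- b3 +/- b4) ...
    signs = [(1, e2, e3, e4) for e2, e3, e4 in product((1, -1), repeat=3)]
    terms = [sum(e * b for e, b in zip(s, (b1, b2, b3, b4))) for s in signs]
    # ... plus the 4 terms (2 b_i)
    terms += [2 * b for b in (b1, b2, b3, b4)]
    return 2 * Fraction(1, 6 ** k) * sum(t ** k for t in terms)

# K_2 = (2/3) sum(b_i^2) and K_4 = (1/27) (sum(b_i^2))^2 = K_2^2 / 12,
# so the degree-4 Casimir is functionally dependent on the degree-2 one.
for b in [(1, 2, 3, 4), (5, -1, 2, 7), (0, 3, 3, 1)]:
    S = sum(x * x for x in b)
    K2 = cartan_casimir(2, *b)
    K4 = cartan_casimir(4, *b)
    assert K2 == Fraction(2, 3) * S
    assert K4 == Fraction(S * S, 27) == K2 ** 2 / 12
```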
For $k=6$ Eq. (\[eq53\]) gives $$\begin{aligned}
{\cal K}_6 &=& 2^{-2}\cdot 3^{-5}\left\{3\left[
b_1^6+b_2^6+b_3^6+b_4^6 \right]+5\left[ b_1^4\left(
b_2^2+b_3^2+b_4^2 \right)+b_2^4\left( b_1^2+b_3^2+b_4^2 \right)
\right.\right.\nonumber \\
&& \left.+b_3^4\left( b_1^2+b_2^2+b_4^2 \right)+b_4^4\left(
b_1^2+b_2^2+b_3^2 \right) \right]\nonumber \\
&& \left. +30\left[ b_1^2\left( b_2^2b_3^2+b_3^2b_4^2+b_2^2b_4^2
\right)+b_2^2b_3^2b_4^2 \right]\right\}\label{eq56}
\end{aligned}$$ which [*is*]{} functionally independent of the degree 2 Casimir (were it proportional to the cube of the degree 2 Casimir, the coefficients of the first, second and third square brackets would be in the ratio 1:3:6 instead of the 3:5:30 above).
Continuing along these lines I find that the degree 8 is functionally independent of the degree 2 and 6, while the degree 10 is dependent: $$\label{eq57}
{\cal K}_{10}\sim{\cal K}_2\left\{28{\cal K}_2({\cal
K}_2^3-{\cal K}_6)+3{\cal K}_8\right\}$$ and lastly the degree 12 is independent of those of lower degree. Since all the Casimirs are functions of the four quantities $b_\alpha ^2$, $1\leq\alpha \leq4$, I can solve for the $b_\alpha ^2$ in terms of the independent Casimirs of degree 2, 6, 8 and 12, and consequently all Casimirs of higher degree are necessarily dependent. This completes the demonstration that the independent Casimirs are those of degree equal to the exponents of $F_4$ plus one.
Many useful discussions with Charlie Goebel are gratefully acknowledged.
---
abstract: 'The contrast mechanism for ferroelectric domain imaging via piezoresponse force microscopy (PFM) is investigated. A novel analysis of PFM measurements is presented which takes into account the background caused by the experimental setup. This allows, for the first time, a quantitative, frequency independent analysis of the domain contrast which is in good agreement with the expected values for the piezoelectric deformation of the sample and satisfies the generally required features of PFM imaging.'
author:
- Tobias Jungk
- Ákos Hoffmann
- Elisabeth Soergel
title: |
Quantitative analysis of ferroelectric domain imaging\
with piezoresponse force microscopy
---
Domain engineering in ferroelectric crystals is of increasing importance for quasi-phase-matched second-harmonic generation [@Fej92], nonlinear photonic crystals [@Bro00], and ultra-high density data storage devices [@Cho02]. Among the techniques utilized for the visualization of ferroelectric domains [@Soe05] piezoresponse (or piezoelectric) force microscopy has become an established standard tool because of its non-destructive imaging capability with high lateral resolution [@Alexe; @Par05]. This detection technique is based on the deformation of the sample due to the converse piezoelectric effect. The piezoresponse (or piezoelectric) force microscope (PFM) is a standard scanning force microscope (SFM) operated in contact mode with an additional small alternating voltage applied to the tip. In piezoelectric samples this voltage causes thickness changes and therefore vibrations of the surface which lead to oscillations of the cantilever that can be read out with a lock-in amplifier. However, although widely used, the contrast mechanism for domain detection with PFM is still under discussion mainly because of inconsistencies of the measured data that concern the following features:
- Frequency dependence: the domain contrast should be independent of the frequency of the alternating voltage applied to the tip. This applies of course only for frequencies far away from any intrinsic resonance frequencies of the cantilever. As the mechanical resonances of bulk ferroelectric crystals are very high, they are irrelevant for our considerations [@Bur75].

- Vibration amplitude: the vibration amplitude of a $+z$ and a $-z$ domain face must be equal. Its value $\Delta t$ should be in agreement with the theoretical prediction $\Delta t = d \cdot U$ with $d$ being the appropriate piezoelectric constant and $U$ the voltage applied to the tip [@Lin03].

- Phase shift: a phase difference of 180$^{\circ}$ between the piezoelectric response on a $+z$ and on a $-z$ domain face is considered mandatory.

- Cantilever stiffness: the domain contrast should be independent of the stiffness of the cantilever used.
However, frequency scans of the alternating voltage applied to the tip are reported to show a complex spectrum, i.e. the measured domain contrast strongly depends on the frequency. The vibration amplitude measured is not equal on differently orientated domains and the reported values differ by orders of magnitude. The phase difference of 180$^{\circ}$ is not generally obtained. Finally the domain contrast in PFM measurements was observed to be affected by the stiffness of the cantilever. See e.g. Refs. [@Kol95; @Eng98; @Lab00; @Lab01; @Hong02; @Har02; @Har03; @Har04; @Scr05a; @Agr05].
Indeed, because of these basic inconsistencies with the features listed above alternative origins for the domain contrast in PFM measurements have been discussed. For the same experimental setup the term ”dynamic-contact electrostatic force microscopy” (DC-EFM) was introduced and domain contrast was explained by specific electrical properties of the $+z$ and the $-z$ domain faces [@Hon98]. Differences in the work functions were also proposed for causing the domain contrast [@Shv02]. To achieve a deeper insight the electrostatic and the electromechanical contributions of the tip-surface junction were calculated taking into account the field and potential distributions as well as the indentation force of the tip [@Kal02].
Even though numerous approaches for an understanding of the PFM contrast mechanism of ferroelectric domains have been reported, a full (quantitative) analysis is still lacking. Although there is no doubt that PFM imaging is sensitive to ferroelectric domains, the opposite situation (a contrast in PFM imaging unambiguously proving the existence of ferroelectric domains) is not yet established because of the above mentioned inconsistencies. A more detailed understanding of the PFM detection method is therefore needed.
In this letter we present a novel analysis of the data acquired with PFM. This allows for the first time a clear understanding of the contribution of the converse piezoelectric effect which is found to fully satisfy the features listed above.
For the investigations, we used a conventional experimental setup with a commercial scanning force microscope (SMENA, NT-MDT), modified to allow application of voltages to the tip. We utilized four different cantilevers C$_{1}$-C$_{4}$ with Pt/Ir-coated tips (Micromasch) of lengths $100 - 130\,\rm \mu m$, resonance frequencies $160 - 290$ kHz, and stiffness $\rm C_{1}$: $k$ = 5.3 N/m, $\rm C_{2}$: $k$ = 9.8 N/m, $\rm C_{3}$: $k$ = 11.4 N/m and $\rm C_{4}$: $k$ = 26.4 N/m. For PFM operation we applied an alternating voltage (amplitude: 10 V$_{\rm pp}$) to the tip and detected the resulting oscillation of the cantilever with a lock-in amplifier (SRS 830), the phase being set to $0^{\circ}$, the time constant to 3 ms. We simultaneously recorded the in-phase ($\theta = 0^{\circ}$) and the orthogonal ($\theta = 90^{\circ}$) output, $\theta$ denoting the phasing with respect to the alternating voltage applied to the tip. In the following these output signals of the lock-in amplifier will be named PFM signals, $p$ and $n$ being the PFM signal on a $+z$ and a $-z$ domain face respectively. The sample was a periodically poled, $z$-cut, congruently melting lithium niobate crystal ($\rm 8\times 10 \times 0.5$ mm$^3$) with a period length of 8 $\mu$m.
The experimental procedure was as follows: we first recorded a PFM image of the sample in order to subsequently position the tip accurately on a $+z$ or a $-z$ domain face. We then measured the frequency dependence of the amplitude of the cantilever oscillations by scanning the alternating voltage applied to the tip from 10 kHz to 100 kHz. The scan duration was about 10 minutes. The graphs in this letter are averages over three separate frequency scans taken at different positions on the sample surface.
![\[fig:Jungk1\] Frequency dependence of the in-phase PFM signal on a $+z$ domain face of PPLN for four different cantilevers C$_{1}$-C$_{4}$, k: spring constant.](Jungk1)
Figure \[fig:Jungk1\] shows frequency scans of the in-phase PFM signal on a $+z$ domain face for the four different cantilevers used. The frequency spectra appear random, although some specific features recur (for example at $\sim$22 and at $\sim$29 kHz for C$_1$ and C$_3$ and at $\sim 84$ kHz for C$_1$ and C$_2$). The PFM signal reaches values of more than 250 pm whereas only 75 pm are predicted for the surface vibration due to the converse piezoelectric effect in $z$-cut $\rm LiNbO_3$. Moreover, at some specific frequencies no PFM signal is measured, and even negative values are obtained. The PFM signal of the orthogonal output of the lock-in amplifier shows a similar behavior, however, with completely different spectra.
Frequency spectra similar to the ones shown in Fig. \[fig:Jungk1\] have already been reported [@Lab00; @Har04; @Scr05a]. For their explanation, the excitation of resonant modes of the cantilever was proposed, tip and sample surface being in contact with each other [@Lab00]. We observed, however, that frequency scans with no sample in the vicinity of the tip result in similar spectra, admittedly with a smaller amplitude. If the tip is in contact with the sample, the frequency spectrum can be affected e.g. by changing the coupling conditions between the tip and the SFM head. We therefore wrapped the silicon chip (to which the tip is attached) with conductive scotch tape. This led to an altered spectrum with much larger amplitudes. These results indicate a complex mechanical resonance behavior of the whole setup comprising the sample, the tip with cantilever and the SFM head. From the spectra shown in Fig. \[fig:Jungk1\] and the findings described above, it is obvious that only a small part of the PFM signal on $\rm LiNbO_3$ can be attributed to the ferroelectric properties of the sample. The PFM signal is completely dominated by a complex background signal.
![\[fig:Jungk2\] Frequency dependence of (a) the in-phase background PFM signal on a PPLN surface $(b = \frac{p + n}{2})$, (b) the in-phase PFM signal on a glass surface and (c) the difference between these two graphs. The measurements were performed with the cantilever C$_{4}$.](Jungk2)
To determine this background signal, we averaged over the $+z$ and $-z$ domain faces: $(p+n)/2$, therefore eliminating the contributions of the ferroelectric properties of the sample to the PFM signal (Fig. \[fig:Jungk2\](a)). To prove this statement we performed reference measurements with the same cantilever on a standard glass microscope slide (Fig. \[fig:Jungk2\](b)). The difference between these two frequency spectra is shown to be extremely small (Fig. \[fig:Jungk2\](c) the vertical scale being expanded by a factor of ten). The slight decrease towards higher frequencies might be due to a drift of the experimental setup during the scan time of 10minutes. The graphs clearly show a reproducible, frequency dependent PFM signal independent of the kind of sample used. In the following this PFM signal will be denoted as the background PFM signal $b=(p+n)/2$.
![\[fig:Jungk3\] Frequency dependence of the PFM signal on a $+z$ domain face of PPLN: (a) in-phase and (b) orthogonal output. The dotted gray curves $p$ show the measured PFM signal, the black curves $p-b$ the calculated, background-corrected PFM signal. The measurements were performed with the cantilever C$_{4}$.](Jungk3)
In order to extract the contributions of the ferroelectric properties of the $\rm LiNbO_3$ sample from the PFM signal we subtracted the background PFM signal from the measured data. The result is shown in Fig. \[fig:Jungk3\] on a $+z$ domain face for the in-phase (a) and the orthogonal (b) output of the lock-in amplifier. The background-corrected curves ($p-b$, black lines) are plotted together with the measured PFM signals ($p$, gray lines). As can be clearly seen, the part of the PFM signal causing the domain contrast appears only in phase with the applied voltage with a constant amplitude.
For an interpretation of the background-corrected PFM signal we performed a quantitative analysis of the measurements. Figure \[fig:Jungk4\] shows the frequency spectra of the background-corrected in-phase PFM signals for the four cantilevers, the vertical scale being expanded by a factor of ten with respect to Fig. \[fig:Jungk3\]. All cantilevers show a frequency independent spectrum, the averaged values are C$_1$: 62.1pm, C$_2$: 58.8pm, C$_3$: 70.5pm, and C$_4$: 51.8pm. This has to be compared with the theoretically expected value for the converse piezoelectric effect of $\Delta t = \frac{\varepsilon_{333}}{C_{333}} \cdot U =
75$pm with $\varepsilon_{333} = 1.785$C/m$^2$ and $C_{333}=2.357
\times 10^{11}$N/m$^2$ being the appropriate piezoelectric and stiffness tensor elements respectively [@Jaz02].
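As a quick arithmetic cross-check of the quoted 75 pm, the sketch below evaluates $\Delta t = (\varepsilon_{333}/C_{333})\cdot U$ using only the tensor values given above:

```python
# Expected piezoelectric thickness change for z-cut LiNbO3,
# Delta t = (eps_333 / C_333) * U, with the tensor elements quoted above.
eps_333 = 1.785        # C/m^2, piezoelectric tensor element
C_333 = 2.357e11       # N/m^2, stiffness tensor element
U = 10.0               # V, voltage applied to the tip
delta_t = eps_333 / C_333 * U   # thickness change in metres
delta_t_pm = delta_t * 1e12     # ~75 pm
assert 75.0 < delta_t_pm < 76.0
```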
![\[fig:Jungk4\]Frequency dependence of the in-phase, background-corrected PFM signal on a $+z$ domain face of PPLN for four different cantilevers C$_{1}$-C$_{4}$, k: spring constant.](Jungk4)
Although the background-corrected PFM signals are of the right magnitude, they are all smaller than the theoretically expected value, by 5 - 30% [^1]. A possible explanation lies in the mechanical constraints on the deformation. The electrical field at the tip, causing the piezoelectric deformation, spatially decays extremely fast due to the small radius of curvature of the tip ($\sim 30$ nm) [@Kol95]. As a consequence, the thickness changes of the crystal occur in a volume comparable to the tip size. Because of its stiffness, the crystal cannot fully follow the required deformation, which could be the cause for measuring too small values. Using larger tips should result in higher values for the piezoelectric deformation [^2].
An important point here is that the PFM signal was found to be independent of the stiffness of the cantilever. Because we always operate the PFM at the same set-point of the feedback circuit (i.e. the same bending of the cantilever), the graphs in Fig. \[fig:Jungk4\] already indicate that the indentation of the tip has no influence on the PFM signal. We confirmed this statement by using a stiff cantilever and varying the set-point, thus changing the indentation force by two orders of magnitude. The observed frequency spectra remained essentially unchanged. Note that too strong an indentation can trigger a local switching of the polarization of the material [@Alp01].
With the results described above, the contrast mechanism in PFM imaging of ferroelectric domains can be fully explained through the thickness change of the sample due to the converse piezoelectric effect, taking into account the background PFM signal as determined above.
To summarize the situation, a vector diagram illustrates the case for two different frequencies $\omega_1$ and $\omega_2$ of the alternating voltage applied to the tip (Fig. \[fig:Jungk5\]). At a certain frequency $\omega_1$, a background PFM signal ${\bf b}_1$ is present. The ferroelectric domains contribute ${\bf d}_1$ for the $+z$ face and ${\bf -d}_1$ for the $-z$ face to the PFM signal, both of the same amplitude and with a 180$^{\circ}$ phase shift between them. This results in the measurement of ${\bf p}_1 = {\bf b}_1 + {\bf d}_1$ for the $+z$ face and ${\bf n}_1={\bf b}_1 - {\bf d}_1$ for the $-z$ face. It is important to note that the phasing between ${\bf p}_1$ and ${\bf n}_1$ is not $180^{\circ}$, and their amplitudes are unequal and differ from the expected value. The same considerations apply of course for any other frequency $\omega_2$. It is obvious from Fig. \[fig:Jungk5\] that although the domain contrast is the same ($2{\bf d}_1 = 2 {\bf d}_2$) the PFM signals measured at different frequencies differ with respect to amplitude and phase.
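The vector construction can be made concrete with a short numerical sketch (the numbers are synthetic, chosen only for illustration, not measured values): a fixed, in-phase domain response $\pm d$ rides on a complex background $b(\omega)$ of random amplitude and phase, and forming $(p+n)/2$ and $(p-n)/2$ recovers the background and the frequency-independent domain contrast at every frequency.

```python
import cmath
import random

d = 75.0  # pm; true domain response, in phase with the drive -> purely real

random.seed(0)
for _ in range(5):  # five arbitrary frequencies
    # background with random amplitude and phase at this frequency
    b = random.uniform(0.0, 300.0) * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
    p = b + d  # measured PFM signal on a +z domain face
    n = b - d  # measured PFM signal on a -z domain face
    # amplitudes and phases of p and n vary wildly with the background,
    # but the averaged and subtracted signals do not:
    assert abs((p + n) / 2 - b) < 1e-9   # recovered background
    assert abs((p - n) / 2 - d) < 1e-9   # recovered, constant domain contrast
```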
![\[fig:Jungk5\] Vector diagram for the domain contrast in PFM measurements exemplified for two different frequencies $\omega_{1}$ and $\omega_{2}$. The $x$-axis denotes the in-phase output ($\theta = 0^{\circ}$) and the $y$-axis the orthogonal output ($\theta = 90^{\circ}$) of the lock-in amplifier. In the graph ${\bf
b}_{1}$, ${\bf b}_{2}$ denote the background PFM signals and $\phi_{1}$, $\phi_{2}$ their phases, ${\bf p}_{1}$, ${\bf p}_{2}$ and ${\bf n}_{1}$, ${\bf n}_{2}$ are the measured PFM signals on a $+z$ and on a $-z$ face respectively and $2{\bf d}_1 = 2 {\bf d}_2$ is the domain contrast. The background PFM signal rotates randomly with frequency, changing its phase and amplitude which strongly affects the measured PFM signals although the domain contrast is constant.](Jungk5)
In conclusion we have presented a novel analysis of the detection mechanism of ferroelectric domains with piezoresponse force microscopy. Taking into account the background PFM signal caused by the whole experimental setup, basic inconsistencies in PFM measurements concerning frequency dependence, amplitude, phasing and stiffness of the cantilever could be removed. Thus the origin of the domain contrast on PPLN could be explained solely via the converse piezoelectric effect, satisfying the generally required features of PFM imaging. The experimental data were found to be in good agreement with the theoretically expected values. Performing a quantitative analysis of the PFM signal it can thus be determined whether an observed contrast in PFM imaging can be attributed to the converse piezoelectric effect of the sample, therefore unambiguously proving the existence of domains in ferroelectric materials.
We thank R.W. Eason for stimulating discussions. Financial support of the DFG research unit 557 and of the Deutsche Telekom AG is gratefully acknowledged.
[22]{}
M. M. Fejer, G. A. Magel, D. H. Jundt, and R. L. Byer, IEEE J. Quantum Electron. **28**, 2631 (1992).
N. G. R. Broderick, G. W. Ross, H. L. Offerhaus, D. J. Richardson, and D. C. Hanna, Phys. Rev. Lett. **84**, 4345 (2000).
Y. Cho, K. Fujimoto, Y. Hiranaga, Y. Wagatsuma, A. Onoe, K. Terabe, and K. Kitamura, Appl. Phys. Lett. **81**, 4401 (2002).
E. Soergel, Appl. Phys. B to be published (2005).
M. Alexe and A. Gruverman, eds., [*Nanoscale Characterisation of Ferroelectric Materials*]{} (Springer, Berlin; New York, 2004) 1st ed.
P. Paruch, T. Giamarchi, and J.-M. Triscone, Phys. Rev. Lett. **94**, 197601 (2005).
J. W. Burgess, J. Phys. D **8**, 283 (1975).
H.-N. Lin, S.-H. Chen, S.-T. Ho, P.-R. Chen, and I.-N. Lin J. Vac. Sci. Technol. B **21**, 916 (2003).
O. Kolosov, A. Gruverman, J. Hatano, K. Takahashi, and H. Tokumoto, Phys. Rev. Lett. **74**, 4309 (1995).
L. M. Eng, H.-J. G[ü]{}ntherodt, G. Rosenman, A. Skliar, M. Oron, M. Katz, and D. Eger, J. Appl. Phys. **83**, 5973 (1998).
M. Labardi, V. Likodimos, and M. Allegrini, Phys. Rev. B **61**, 14390 (2000).
M. Labardi, V. Likodimos, and M. Allegrini, Appl. Phys. A **72**, S79 (2001).
S. Hong, H. Shin, J. Woo, and K. No, Appl. Phys. Lett. **80**, 1453 (2002).
C. Harnagea, A. Pignolet, M. Alexe, and D. Hesse, Integr. Ferroelectr. **44**, 113 (2002).
C. Harnagea, M. Alexe, D. Hesse, and A. Pignolet, Appl. Phys. Lett. **83**, 338 (2003).
C. Harnagea, A. Pignolet, M. Alexe, and D. Hesse, Integr. Ferroelectr. **60**, 101 (2004).
D. A. Scrymgeour and V. Gopalan, Phys. Rev. B **72**, 024103 (2005).
A. Agronin, M. Molotskii, Y. Rosenwaks, E. Strassburg, A. Boag, S. Mutchnik, and G. Rosenman, J. Appl. Phys. **97**, 084312 (2005).
J. W. Hong, K. H. Noh, S. Park, S. I. Kwun, and Z. G. Khim, Phys. Rev. B **58**, 5078 (1998).
M. Shvebelman, P. Urenski, R. Shikler, G. Rosenman, Y. Rosenwaks, and M. Molotskii, Appl. Phys. Lett. **80**, 1806 (2002).
S. V. Kalinin and D. A. Bonnell, Phys. Rev. B **65**, 125408 (2002).
M. Jazbin[š]{}ek and M. Zgonik, Appl. Phys. B **74**, 407 (2002).
M. Abplanalp, J. Fousek, and P. G[ü]{}nter, Phys. Rev. Lett. **86**, 5799 (2001).
[^1]: This has to be compared to published values that vary from 20pm for KTP ($\rm d_{33}\sim 20\,pm/V$) [@Eng98] to 30nm for GASH [@Kol95] ($\rm d_{33}\sim 2\,pm/V$), both with 10V applied to the tip.
[^2]: Note that although at the very tip, the electric field might be as high as 10$^7$V/m (with 10V applied to the tip), this has no influence on the theoretically expected piezoelectric thickness change which is determined only by the applied voltage [@Lin03].
---
abstract: '**Keywords**: 11B39, 11A41'
author:
- Vladimir Pletser
title: Product of Two Consecutive Fibonacci or Lucas Numbers Divisible by their Prime Sum of Indices
---
European Space Research and Technology Centre, ESA-ESTEC P.O. Box 299, NL-2200 AG Noordwijk, The Netherlands E-mail: [email protected]
Introduction
============
One of the most interesting divisibility properties of the Fibonacci numbers is that for each prime $p$, there is a Fibonacci number $F_{n}$ such that $p$ divides $F_{n}$ (see, e.g. [@key-2]). More specifically, for $p\neq5$, $p$ divides either $F_{p-1}$ if $p\equiv\pm1(mod\,5)$, or $F_{p+1}$ if $p\equiv\pm2(mod\,5)$. For $p=5$, one has of course $p=F_{p}$.
Theorem
=======
Although the following theorem has already been demonstrated differently in [@key-5; @key-6], a new demonstration is proposed in this paper.
If $p$ is prime and $r\in\mathbb{Z}^{+}$,
$$\begin{aligned}
p & = & \left(4r+1\right)\textnormal{ \textit{divides the product} }F_{2r}F_{2r+1},\textnormal{ \textit{except for} }p=5\label{eq:1}\\
p & = & \left(4r+3\right)\textnormal{ \textit{divides the product} }L_{2r+1}L_{2r+2}\label{eq:2}\end{aligned}$$
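Before turning to the proof, the theorem is easy to check numerically. The sketch below (the helper functions are mine; conventions $F_0=0$, $F_1=1$, $L_0=2$, $L_1=1$) verifies both statements for all applicable primes up to about 250:

```python
def fib_lucas(n):
    """Return (F_n, L_n), with F_0 = 0, F_1 = 1, L_0 = 2, L_1 = 1."""
    f0, f1, l0, l1 = 0, 1, 2, 1
    for _ in range(n):
        f0, f1 = f1, f0 + f1
        l0, l1 = l1, l0 + l1
    return f0, l0

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

for r in range(0, 60):
    p = 4 * r + 1
    if is_prime(p) and p != 5:
        # p = 4r+1 divides F_{2r} * F_{2r+1}
        assert (fib_lucas(2 * r)[0] * fib_lucas(2 * r + 1)[0]) % p == 0
    p = 4 * r + 3
    if is_prime(p):
        # p = 4r+3 divides L_{2r+1} * L_{2r+2}
        assert (fib_lucas(2 * r + 1)[1] * fib_lucas(2 * r + 2)[1]) % p == 0
```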
For $p$ prime and $r,s,n,m\in\mathbb{Z}^{+}$, for odd primes $p=2s+1$, one has $$L_{2s+1}-1=L_{2s+1}-L_{1}\label{eq:3}$$ The transformations $$\begin{aligned}
L_{n+m}-\left(-1\right)^{m}L_{n-m} & = & 5F_{m}F_{n}\label{eq:4}\\
L_{n+m}+\left(-1\right)^{m}L_{n-m} & = & L_{m}L_{n}\label{eq:5}\end{aligned}$$ (relations (17 a, b) in [@key-3] and relations (11) and (23) in [@key-4]) can be used.
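Both transformation formulas can be verified by brute force over a range of indices; the short check below (helper names are mine) assumes the standard definitions $F_0=0$, $F_1=1$, $L_0=2$, $L_1=1$:

```python
def fib(n):
    a, b = 0, 1  # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1  # L_0, L_1
    for _ in range(n):
        a, b = b, a + b
    return a

# relations (eq:4) and (eq:5), checked for 1 <= m <= n < 20
for m in range(1, 12):
    for n in range(m, 20):
        assert lucas(n + m) - (-1) ** m * lucas(n - m) == 5 * fib(m) * fib(n)
        assert lucas(n + m) + (-1) ** m * lucas(n - m) == lucas(m) * lucas(n)
```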
\(i) First, let $s$ be even, $s=2r$. Relation (\[eq:3\]) yields respectively from (\[eq:4\]) and (\[eq:5\]), with $m=2r$ and $n=2r+1$, $$\begin{aligned}
L_{4r+1}-1 & = & 5F_{2r}F_{2r+1}\label{eq:6}\\
L_{4r+1}+1 & = & L_{2r}L_{2r+1}\label{eq:7}\end{aligned}$$ If $p=4r+1\neq5$ is prime, then either $p$ divides $F_{4r}$ if $p\equiv\pm1(mod\,5)=29,41,61,\ldots$, or $p$ divides $F_{4r+2}$ if $p\equiv\pm2(mod\,5)=13,17,37,\ldots$
On the other hand, one has (relation (13) in [@key-3]) $$\begin{aligned}
F_{4r} & = & F_{2r}L_{2r}\label{eq:8}\\
F_{4r+2} & = & F_{2r+1}L_{2r+1}\label{eq:9}\end{aligned}$$ Let first $p\equiv\pm1(mod\,5)$, then $p$ divides $F_{4r}$ and therefore from (\[eq:8\]) also either $F_{2r}$ or $L_{2r}$. But $p$ cannot divide $L_{2r}$. Suppose the contrary, i.e., that $p$ divides $L_{2r}$. Since $L_{p}\equiv1\left(mod\, p\right)$ (see e.g. [@key-1], [@key-7]), $p$ divides $\left(L_{4r+1}-1\right)$; from (\[eq:7\]) $p$ would then also divide $\left(L_{4r+1}+1\right)$, hence their difference $2$, which is impossible for an odd prime. Therefore $p$ divides $F_{2r}$ and not $L_{2r}$, and also $\left(L_{4r+1}-1\right)$.
The other case where $p\equiv\pm2(mod\,5)$ divides $F_{4r+2}$ is treated similarly.
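The congruence $L_{p}\equiv1\left(mod\, p\right)$ used in this argument is likewise easy to confirm for small primes (the converse fails for so-called Lucas pseudoprimes, cf. [@key-1]):

```python
def lucas(n):
    """Return the Lucas number L(n), with L(0) = 2, L(1) = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# L(p) = 1 (mod p) for every prime p; composites may also satisfy this
# (Lucas pseudoprimes), so the congruence is necessary but not sufficient.
for p in range(2, 300):
    if is_prime(p):
        assert lucas(p) % p == 1
```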
This means that all primes $p=4r+1\neq5$ divide the product of two consecutive Fibonacci numbers of indices $2r$ and $2r+1$. More precisely, if $p\equiv\pm1(mod\,5)$, i.e., $p=29,41,61,\ldots$, then $p$ divides $F_{2r}$; if $p\equiv\pm2(mod\,5)$, i.e., $p=13,17,37,\ldots$, then $p$ divides $F_{2r+1}$.
\(ii) Second, let $s$ be odd, $s=2r+1$. Relation (\[eq:3\]) yields respectively from (\[eq:4\]) and (\[eq:5\]), with $m=2r+1$ and $n=2r+2$, $$\begin{aligned}
L_{4r+3}+1 & = & 5F_{2r+1}F_{2r+2}\label{eq:10}\\
L_{4r+3}-1 & = & L_{2r+1}L_{2r+2}\label{eq:11}\end{aligned}$$ If $p=4r+3$ is prime, then $p$ divides $F_{4r+2}$ if $p\equiv\pm1(mod\,5)$, i.e., $p=11,19,31,\ldots$; or $p$ divides $F_{4r+4}$ if $p\equiv\pm2(mod\,5)$, i.e., $p=3,7,23,43,\ldots$. One has also
$$\begin{aligned}
F_{4r+2} & = & F_{2r+1}L_{2r+1}\label{eq:12}\\
F_{4r+4} & = & F_{2r+2}L_{2r+2}\label{eq:13}\end{aligned}$$ Like above, let first $p\equiv\pm1(mod\,5)$. Then $p$ divides $F_{4r+2}$ and therefore, from (\[eq:12\]), also either $F_{2r+1}$ or $L_{2r+1}$. But $p$ cannot divide $F_{2r+1}$. Suppose the contrary, i.e., that $p$ divides $F_{2r+1}$. Since $L_{p}\equiv1\left(mod\,p\right)$, $p$ divides $\left(L_{4r+3}-1\right)$; from (\[eq:10\]) $p$ would then also divide $\left(L_{4r+3}+1\right)$, hence their difference $2$, which is impossible for an odd prime. Therefore $p$ divides $L_{2r+1}$ and not $F_{2r+1}$, and also $\left(L_{4r+3}-1\right)$. The other case where $p\equiv\pm2(mod\,5)$ divides $F_{4r+4}$ is also treated similarly.
This means that all primes $p=4r+3$ divide the product of two consecutive Lucas numbers of indices $2r+1$ and $2r+2$. More precisely, if $p\equiv\pm1(mod\,5)$, i.e., $p=11,19,31,\ldots$, then $p$ divides $L_{2r+1}$; if $p\equiv\pm2(mod\,5)$, i.e., $p=3,7,23,43,\ldots$, then $p$ divides $L_{2r+2}$.
On the other hand, for $p=5$, one has obviously $L_{5}=11\equiv1(mod\,5)$.
Acknowledgment
===============
Dr C. Thiel is acknowledged for helpful discussions.
P. S. Bruckman, Lucas Pseudoprimes are Odd, Fibonacci Quarterly 32, 155-157, 1994.
R. A. Dunlap, The Golden Ratio and Fibonacci Numbers, World Scientific Press, 1997.
T. Koshy, Fibonacci and Lucas Numbers with Applications, John Wiley, New York, p. 410, 2001.
J. Seibert, Fibonacci and Lucas Products Modulo A Prime, Solution Problem B-1037, Fibonacci Quarterly, Vol. 46-47, p. 88, 2008-2009.
M. R. Schroeder, Number Theory in Science and Communication, 2nd edition, Springer-Verlag, 1986, 72-73.
S. Vajda, Fibonacci and Lucas Numbers, and The Golden Section: Theory and Applications, Halsted Press, 1989.
H. C. Williams, Edouard Lucas and primality testing, Canadian Math. Soc. Monographs 22, Wiley, New York, 1998.
---
abstract: 'The current flux density is a vector field that can be used to describe theoretically how electrons flow in a system out-of-equilibrium. In this work, we unequivocally demonstrate that the signal obtained from time-resolved X-ray scattering does not only map the time-evolution of the electronic charge distribution, but also encodes information about the associated electronic current flux density. We show how the electronic current flux density qualitatively maps the distribution of electronic momenta and reveals the underlying mechanism of ultrafast charge migration processes, while also providing quantitative information about the timescales of electronic coherences.'
author:
- Gunter Hermann
- Vincent Pohl
- Gopal Dixit
- Jean Christophe Tremblay
title: 'Probing Electronic Fluxes via Time-Resolved X-ray Scattering'
---
Time-resolved imaging of dynamically evolving electronic charge distribution is essential for a complete understanding of complex chemical and biological processes in nature. Imaging of valence electron charge distribution is paramount to understand different instances during chemical reactions such as conformational changes, charge migration, and bond formation and breakage [@lepine2014attosecond; @leone2014will; @remacle; @lenz_jcp]. Following the quantum continuity equation, the flow of electrons is accompanied by associated fluxes [@sakurai1967advanced]. The latter offers a wealth of information and has played a decisive role for understanding chemical reaction mechanisms [@Barth2962; @Barth7043; @nagashima2009electron; @takatsuka2011exploring; @okuyama2012dynamical; @diestler2013computation; @takatsuka2014chemical; @hermann2014electronic; @yamamoto2015electron; @hermann2016multidirectional; @bredtmann2014x; @okuyama2009electron; @diestler2011coupled; @patchkovskii2012electronic]. However, the notion of electronic fluxes has been restricted to theoretical modelling [@barth2006unidirectional; @okuyama2009electron; @diestler2011coupled; @patchkovskii2012electronic; @kazuo2014chemical; @pohl2016adiabatic; @schild2016electronic; @renziehausen2018many; @schaupp2018time; @matsuzaki2019electronic] and there is no general way to probe them directly in experiment. In this work, we theoretically demonstrate real-space and real-time imaging of electronic fluxes associated with charge migration using time-resolved X-ray scattering (TRXS). For this purpose, we consider oriented benzene as a test system in which a pump pulse induces adiabatic charge migration and ultrashort X-ray pulses probe the electronic fluxes accompanying charge migration.
Scattering of X-rays from matter is an invaluable technique to unveil the real-space structure of solids and molecules with atomic-scale resolution [@Nielsen]. Tremendous technological progress has been made to generate tunable ultraintense and ultrashort pulses from X-ray free-electron lasers (XFELs) [@ishikawa2012; @pellegrini2016physics; @emma2]. X-ray pulses with a few femtoseconds pulse duration are routinely generated at various XFELs (LCLS, SACLA, European XFEL). Moreover, a few successful attempts to generate attosecond X-ray pulses have been demonstrated [@tanaka2013; @kumar2018generation; @kumar2016temporally; @shim2018isolated; @hartmann2018attosecond; @bucksbaum2019attoXR]. The availability of these ultrashort X-ray pulses makes it possible to extend X-ray scattering from the static to the time domain with unprecedented temporal resolution [@lindroth2019challenges; @young2018roadmap]. Scattering of ultrashort X-ray pulses from the evolving electronic charge distribution promises to provide stroboscopic snapshots of matter in action with atomic-scale spatial and temporal resolutions [@bucksbaum2007; @peplow2017next]. A direct approach to envision TRXS is a pump-probe experiment, where the pump pulse triggers the ultrafast dynamics and the induced dynamics is imaged by the ultrashort X-ray pulses. These ultrashort X-ray pulses allow not only mapping the motion of atoms in matter on the femtosecond timescale [@peplow2017next; @vrakking2016viewpoint], but also recording movies of electronic motion taking place from few-femtosecond to attosecond timescales [@dixit2012; @vrakking2012].
The availability of ultrashort X-ray pulses has prompted TRXS experiments probing ultrafast processes with atomic-scale spatio-temporal resolutions. Static X-ray scattering from aligned 2,5-diiodobenzonitrile has been performed at LCLS [@kupper2014x]. TRXS experiments allowed imaging ultrafast vibrations in iodine [@glownia2016self]. Frequency-resolved TRXS was used to disentangle bound and dissociative electronic states during ultrafast vibrational dynamics in iodine [@ware2019characterizing]. Photoinduced structural change during the ring-opening electrocyclic chemical reaction in cyclohexadiene [@minitti2015imaging; @minitti2014toward] and cis-trans photochemical structural changes in photoactive yellow protein [@pande2016femtosecond] were captured by TRXS. Anisotropic TRXS measurements have been used to determine transition dipole moments and assign excited electronic states in molecules [@yong2018determining]. Different formalisms have been developed to simulate TRXS from non-equilibrium states of matter [@cao1998ultrafast; @henriksen; @lorenz2010theory; @dixit2012; @dixit2013jcp; @dixit2013prl; @dixit2014theory; @santra2014comment; @bredtmann2014x; @bennett2014time; @dixit2017time]. It was demonstrated that the scattering signal obtained via TRXS from an electronic wavepacket is not associated with the Fourier transform of the instantaneous electron density [@cao1998ultrafast; @henriksen; @dixit2012; @dixit2013jcp; @bennett2014time]. Mukamel and co-workers have proposed that TRXS is capable of probing molecular nonadiabatic dynamics at avoided crossings and conical intersections [@bennett2018monitoring; @kowalewski2017monitoring]. Also, frequency- and wavevector-resolved TRXS has been used to probe the electron dynamics in molecules [@bennett2014time]. Recently, it was shown that TRXS can probe electronic coherences among electronic states [@simmermacher2019electronic], and that TRXS signals from diatomic molecules are not centrosymmetric [@starace2019pra].
The main focus of this work is to illustrate the capability of TRXS for imaging quantum fluxes during non-stationary charge migration in a coherent electronic wavepacket prepared by an ultrashort pump pulse. Quantum fluxes find their origin in the interferences among quantum mechanical phases. The time-resolved response signal that can be extracted from a TRXS experiment contains information about these electronic coherences, and it is therefore suitable for mapping the current flux density. By following the time-evolution of the coherent electronic wavepacket, we demonstrate the relation between the quantum continuity equation for non-stationary charge migration and TRXS.
![Conceptual sketch of the charge migration mechanism. An $x$-polarized pump pulse induces non-stationary charge migration associated with an electronic wavepacket in benzene. The electron density (blue shaded area) migrates from one side of the molecule to the other with a period $\tau = 504$ attoseconds. Black arrows correspond to the electronic flux density associated with this process (for $z=1\,{\rm a_0}$).[]{data-label="fig01"}](mechanism_S0Pix.pdf){width="\linewidth"}
In this work, we investigate charge migration in benzene induced by a linearly $x$-polarized pulse. A resonant pump pulse of 3.57 fs duration (92 meV bandwidth) and 0.6 TW/cm$^{2}$ intensity at 8.2 eV photon energy is used to prepare a coherent electronic superposition of the $A_{1g}$ ground state and a low-lying optically accessible $E_{1u,x}$ electronic state [@jia2017quantum]. The time period of the non-stationary charge migration corresponds to $\tau = 504$ attoseconds (see Fig. \[fig01\]). The timescale of the electronic motion of the wavepacket is much faster than the motion of the nuclei [@mineo2014vibrational; @despre2015attosecond], which are kept frozen. The state-averaged CASSCF(6,6) method implemented in MOLPRO [@werner2012molpro] is used with an aug-cc-pVTZ basis [@dunning1989gaussian] to compute the singlet ground and low-lying electronic excited states of benzene, which is aligned in the $xy$-plane. As in Ref.\[\], Multi-Reference Configuration Interaction with Single and Double excitations is employed to correct excitation energies.
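The quoted period is simply the two-level beat period $\tau = h/\Delta E$ for the 8.2 eV gap; a quick check (CODATA value of $h$ in eV$\cdot$s):

```python
h = 4.135667696e-15  # Planck constant in eV*s (CODATA)
dE = 8.2             # A1g -> E1u,x excitation energy in eV (from the text)
tau = h / dE         # beat period of the two-state superposition, in seconds
print(f"tau = {tau * 1e18:.0f} as")  # prints: tau = 504 as
```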
{width="\textwidth"}
To image charge migration and the associated fluxes in benzene, the time-resolved scattering signal is simulated using an expression for the differential scattering probability (DSP) of the form (in atomic units) [@henriksen; @dixit2012; @dixit2014theory] $$\label{eq1}
\frac{dP}{d\Omega} =
\frac{dP_{e}}{d\Omega} \sum_{j} \left|
\int d\mathbf{r} ~\langle \psi_{j} | \hat{n}(\mathbf{r}) | \Phi({t}) \rangle~ e^{-i \mathbf{Q} \cdot \mathbf{r}}\right|^{2},$$ where $\frac{dP_{e}}{d\Omega}$ is the Thomson scattering cross section of a free electron, $| \psi_{j} \rangle$ is an eigenstate of the unperturbed Hamiltonian, $| \Phi({t}) \rangle$ is an electronic wavepacket with $t$ as pump-probe time delay, $\hat{n}(\mathbf{r})$ is the electron density operator, and $\mathbf{Q}$ is the photon momentum transfer. In previous work, the numerical simulation of TRXS from the electronic wavepacket has been limited to atomic and simple molecular systems [@dixit2012; @dixit2013jcp; @bennett2014time; @simmermacher2017time; @simmermacher2019electronic]. For a general electronic wavepacket, the summation over $j$ in Eq. ([\[eq1\]]{}) runs over a complete set of eigenstates. Simulating scattering signals using a large number of eigenstates is usually not practical due to the associated computational cost. The scattering signal is shown to converge rapidly with respect to the number of eigenstates (see Fig. S2). All results reported in this work are computed using the 7 lowest-lying eigenstates. All transition amplitudes of the density operator between the many-body eigenfunctions, i.e., $\langle \psi_{A_{1g}} | \hat{n}(\mathbf{r}) | \psi_j \rangle$ and $\langle \psi_{E_{1u,x}} | \hat{n}(\mathbf{r}) | \psi_j \rangle$, are simulated using the ORBKIT toolbox [@hermann2016orbkit; @pohl2017open; @hermann2017open]. In the past, the summation over $j$ was restricted to the eigenstates spanning the wavepacket and the simulated scattering signals were used to understand the measured signals [@glownia2016self; @ware2019characterizing; @yong2018determining; @minitti2015imaging; @minitti2014toward]. Historically, it was believed that the DSP is proportional to the instantaneous electron density of the wavepacket. Neglecting the effect of electronic coherences was shown to be incorrect in similar contexts [@cao1998ultrafast; @henriksen; @dixit2012; @dixit2013jcp; @bennett2014time].
Here, we observe that the time-dependence of the momentum-space density also differs from that of the current flux density. The time-evolution of the signal obtained from Eq. correlates with the time-derivative of the density calculated from first principles. The theoretical support for this correspondence is detailed in the Supporting Information (SI), where the time-evolution of the signals is derived for a general superposition state. To confirm these results numerically, we investigate the many-electron dynamics using the instantaneous variation of the one-electron density, $\partial_t\rho(\mathbf{r},t)$, and the associated current flux density, $\mathbf{j}(\mathbf{r},t)$. These are connected via the continuity equation, $\partial_t\rho(\mathbf{r},t) = -\vec{\nabla}\cdot\mathbf{j}(\mathbf{r},t)$. The one-electron density and the current flux density are computed from the time-dependent many-electron wavepacket, as described elsewhere [@pohl2017open; @hermann2017open].
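To illustrate the continuity relation used here, the following sketch checks $\partial_t\rho = -\vec{\nabla}\cdot\mathbf{j}$ numerically for a toy one-dimensional two-state superposition (harmonic-oscillator eigenstates in atomic units; a deliberately simple stand-in for the benzene wavepacket, not the actual calculation):

```python
import numpy as np

# Toy 1D analogue of Eq. (2): equal superposition of the two lowest
# harmonic-oscillator eigenstates (atomic units, hbar = m = 1).
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                     # E0 = 0.5
phi1 = np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)  # E1 = 1.5

def psi(t):
    return (phi0 * np.exp(-0.5j * t) + phi1 * np.exp(-1.5j * t)) / np.sqrt(2.0)

t, dt = 0.3, 1e-5
rho_dot = (np.abs(psi(t + dt))**2 - np.abs(psi(t - dt))**2) / (2 * dt)
j = np.imag(np.conj(psi(t)) * np.gradient(psi(t), dx))  # current flux density
div_j = np.gradient(j, dx)

# Continuity equation: d(rho)/dt + div(j) = 0 (up to discretization error)
assert np.max(np.abs(rho_dot + div_j)) < 1e-3
```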
Time-resolved scattering patterns corresponding to an electronic wavepacket for different pump-probe delay times are presented in Fig. \[fig1\]a. The electronic wavepacket consists of a coherent superposition of two many-body electronic states which evolves according to $$\label{wp}
\Phi(\mathbf{r}^N,t) = c_{A_{1g}}(t)\psi_{A_{1g}}(\mathbf{r}^N) + c_{E_{1u,x}}(t)\psi_{E_{1u,x}}(\mathbf{r}^N)$$ The coefficients $c_j(t)= 2^{-1/2}e^{-i \varepsilon_jt/\hbar}$ are associated with the ground state, $\psi_{A_{1g}}(\mathbf{r}^N)$ at energy $\varepsilon_{A_{1g}}$, and an optically accessible excited state, $\psi_{E_{1u,x}}(\mathbf{r}^N)$ at energy $\varepsilon_{E_{1u,x}}$. Eq. (\[eq1\]) is used to simulate the patterns shown in Fig. \[fig1\]a and presented in the $Q_{x}-Q_{y}$ plane ($Q_{z}= 0$). For representation purposes, the scattering pattern at $t = 0$ is subtracted. The scattering patterns at $t = \tau/4$ and $3\tau/4$ have opposite phase, whereas they are similar at $t = \tau/8$ and $3\tau/8$, and at $ t = 5\tau/8$ and $7\tau/8$. Hence, the scattering patterns are sensitive to delay times, with a $\sin$ behaviour of period $\tau$. The time-derivative of the momentum space electron density, $\partial_t\rho({\mathbf{Q}})$, is shown in the central panels of Fig. \[fig1\]b. As visible from Figs. \[fig1\]a and \[fig1\]b, there is a one-to-one correspondence between the time-evolution of the scattering patterns obtained from Eq. and the time-derivative of the electron density. Although the structure of $\partial_t\rho({\mathbf{Q}})$ extends further in the $Q_{x}-Q_{y}$ plane, it contains the information of the DSP signal and the two quantities have the same period. As shown in the SI, the DSP signal is not exactly the time-derivative of the momentum-space electron density, but rather the convolution of its different contributions. This is the reason why the timescales correlate exactly with the dynamics but the spatial extent is different in both cases. According to Eq.(S12), the DSP signal from Eq. also contains a contribution from the instantaneous density with $\sin^2$ dependency of period $2\tau$, which would lead to an asymmetry in the signal at $t=\tau/4$ and $3\tau/4$. 
This asymmetry is not observed since the associated term is vanishingly small (see Fig.S1). Hence, the time-evolution of the experimental DSP signal yields quantitative information about the timescales involved in the [*time-derivative*]{} of the one-electron density.
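The phase relations quoted above follow from the $\sin(2\pi t/\tau)$ dependence of the interference term and can be read off directly (schematic, with the pattern amplitude taken proportional to this term):

```python
import numpy as np

# Interference contribution to the pattern: proportional to sin(2*pi*t/tau).
s = lambda t: np.sin(2 * np.pi * t)  # t measured in units of tau

assert np.isclose(s(1 / 4), -s(3 / 4))  # opposite phase at tau/4 and 3*tau/4
assert np.isclose(s(1 / 8), s(3 / 8))   # similar at tau/8 and 3*tau/8
assert np.isclose(s(5 / 8), s(7 / 8))   # similar at 5*tau/8 and 7*tau/8
```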
![Comparison of the time-resolved signals from Eq. (blue) and the time-derivative of the momentum-space electron density $\rho(\mathbf{Q})$ (orange) along $Q_{x}$ ($Q_{y}=Q_{z}= 0$) at pump-probe delay times $t = \tau/4$ and $t = 3\tau/4$.[]{data-label="fig1-1"}](benzene_PNAS_left_22_cross.pdf){width="0.9\linewidth"}
A more quantitative comparison can be obtained from Fig.\[fig1-1\], which shows a 1D cut of the DSP and of the time-derivative of the momentum-space electron density. Despite differences at higher momenta, the pictures that emerge at low momenta are in good agreement. Low momenta are the most important, as they were shown to map the dynamics of valence electrons [@bredtmann2014x].
The time-dependent DSP signal encodes information about the time-evolution of the wavepacket in momentum-space. Hence, it contains information related to the velocity distributions. To reveal this information, we first reconstruct the current flux density from the many-electron wavepacket associated with the charge migration in benzene. The current flux density is a vector field in configuration space that maps the displacement of the volume elements of the one-electron density. Fig. \[fig2\] presents the time-derivative of the electron density (colour map) and the current flux density (arrows) at various delay times. These quantities are related via the electronic continuity equation, which describes the many-electron dynamics as the flow of a strongly correlated electronic fluid. The one-electron density is seen to migrate from left (violet/blue) to right (yellow/red) in the first half period of the charge migration process, before coming back. The nodal plane along the $y$-axis, which is a consequence of the pump pulse used to generate this superposition state [@jia2017quantum], is retained at all times.
The mechanistic information of the charge migration is encoded in the scattering patterns. However, it is not easy to determine directly from the patterns where the electrons that move in a certain region are localised. As can be seen from Fig. \[fig2\]a, the direction of the arrows correlates qualitatively with the time-resolved scattering patterns in momentum space depicted in the upper panels of the previous figure (see Fig. \[fig1\]a). The dominant electron flow is along the $x$-direction, with minor components in the $Q_{x}-Q_{y}$ plane at angles corresponding to the C-C bonds of the molecular scaffold. Both pictures are consistent and describe a bond-to-bond electron migration mechanism. On the other hand, the time-derivative of the one-electron density reveals a more intricate nodal structure in the central panels of Fig. \[fig1\]b. As discussed in previous work [@pohl2017open; @hermann2017open], the derivative of the electronic density around the nuclei is sensitive to the choice of atomic basis set. The Fourier transform of the density reveals this sensitivity in momentum space.
{width="\textwidth"}
The velocity field, calculated as $\mathbf{v}(\mathbf{r},t)=\mathbf{j}(\mathbf{r},t)/\rho(\mathbf{r},t)$, offers an alternative representation of the charge migration mechanism. It is shown in Fig. \[fig2\]b (arrows), along with the time-derivative of electron density (colour map). Although it contains mostly the same information as the current flux density, the velocity field is more easily related to the momentum observed in the DSP signal. The time-dependent rescaling through the one-electron density yields a better contrast of the electronic flow, which simplifies the direct comparison with scattering patterns. It can be observed that the electrons flow faster around the central carbon atoms, which contrasts with the picture offered by the current flux density. The latter predicts a homogeneous $\pi$-electron flow along the two C-C-C units of the scaffold. The $\pi$-electron density is lower on the atoms than on the bonds. Rescaling the flux density by the density thus reveals an increased velocity at the central carbon atoms. This phenomenon is analogous to the Venturi effect in classical hydrodynamics, if we assimilate the reduction of the electron density to a reduction of the cross-section through which electrons flow. Since the volumetric flow rate is conserved, the smaller electron density implies an increased velocity and a reduced hydrodynamic pressure at the carbon atoms.
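A minimal sketch of the rescaling $\mathbf{v}=\mathbf{j}/\rho$ (synthetic one-dimensional profiles, not the benzene data): a spatially uniform flux through a density "constriction" yields a locally enhanced velocity, the Venturi-like picture described above.

```python
import numpy as np

# Synthetic 1D density with a "constriction" at x = 0 and a homogeneous flux.
x = np.linspace(-3, 3, 601)
rho = 1.0 - 0.7 * np.exp(-x**2)  # density dip at the origin
j = np.full_like(x, 0.2)         # uniform flux: volumetric rate conserved
v = j / np.maximum(rho, 1e-6)    # velocity field, guarded against rho ~ 0

# The velocity peaks exactly where the density is lowest (Venturi-like).
assert v[np.argmin(rho)] == v.max()
```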
In conclusion, we have shown that ultrafast time-resolved X-ray scattering has the potential to extract mechanistic information about the flow of electrons in a molecule out-of-equilibrium by mapping the electronic current flux density. The latter is related to the time-variation of the momentum-space density. The TRXS signal contains qualitative information about the instantaneous electronic velocity distribution and quantitative information about temporal electronic coherences. Cross-correlation with first-principles simulations can be used to reveal the electronic flux density, which contains the time- and space-resolved mechanistic details of the electron migration process. The experimental realization is limited by the time- and momentum-resolutions of TRXS. While benzene is beyond current experiments, the prediction remains valid for slower processes. This would require including nuclear motion in the theoretical treatment.
Acknowledgements {#acknowledgements .unnumbered}
================
G.D. acknowledges the Ramanujan fellowship (SB/S2/ RJN-152/2015). G.H. and V.P. are grateful for travel funding from the Freie Universität Berlin through the “Indo-German Partnership in Higher Education” program of the DAAD. J.C.T., G.H., and V.P. thank the Deutsche Forschungsgemeinschaft for funding through grant TR1109/2-1.
[10]{}
F. L[é]{}pine, M. Y. Ivanov, and M. J. J. Vrakking, Nature Photonics [**8**]{}, 195 (2014).
S. R. Leone, C. W. McCurdy, J. Burgd[ö]{}rfer, L. S. Cederbaum, Z. Chang, N. Dudovich, J. Feist, C. H. Greene, M. Ivanov, R. Kienberger, U. Keller, M. F. Kling, Z. H. Loh, T. Pfeifer, A. N. Pfeiffer, R. Santra, K. Schafer, A. Stolow, U. Thumm, and M. J. J. Vrakking, Nature Photonics [**8**]{}, 162 (2014).
F. Remacle and R. D. Levine, Proc. Natl. Acad. Sci. U.S.A [**103**]{}, 6793 (2006).
A. D. Dutoi, M. Wormit, and L. S. Cederbaum, J. Chem. Phys. [**134**]{}, 024303 (2011).
J. J. Sakurai, , Pearson Education India, 1967.
I. Barth and J. Manz, Angew. Chem. Int. Ed. [**45**]{}, 2962 (2006).
I. Barth, J. Manz, Y. Shigeta, and K. Yagi, J. Am. Chem. Soc. [**128**]{}, 7043 (2006).
K. Nagashima and K. Takatsuka, J. Phys. Chem. A [**113**]{}, 15240 (2009).
K. Takatsuka and T. Yonehara, Phys. Chem. Chem. Phys. [**13**]{}, 4987 (2011).
M. Okuyama and K. Takatsuka, Bull. Chem. Soc. Jpn. [**85**]{}, 217 (2012).
D. J. Diestler, A. Kenfack, J. Manz, B. Paulus, J. F. P[é]{}rez-Torres, and V. Pohl, J. Phys. Chem. A [**117**]{}, 8519 (2013).
K. Takatsuka, T. Yonehara, K. Hanasaki, and Y. Arasaki, , World Scientific, 2015.
G. Hermann, B. Paulus, J. P[é]{}rez-Torres, and V. Pohl, Phys. Rev. A [**89**]{}, 052504 (2014).
K. Yamamoto and K. Takatsuka, Chem. Phys. Chem. [**16**]{}, 2534 (2015).
G. Hermann, C. M. Liu, J. Manz, B. Paulus, J. F. Pérez-Torres, V. Pohl, and J. C. Tremblay, J. Phys. Chem. A [**120**]{}, 5360 (2016).
T. Bredtmann, M. Ivanov, and G. Dixit, Nature Communications [**5**]{}, 5589 (2014).
M. Okuyama and K. Takatsuka, Chem. Phys. Lett. [**476**]{}, 109 (2009).
D. Diestler, J. Phys. Chem. A [**116**]{}, 2728 (2011).
S. Patchkovskii, J. Chem. Phys. [**137**]{}, 084109 (2012).
I. Barth, J. Manz, Y. Shigeta, and K. Yagi, J. Am. Chem. Soc. [**128**]{}, 7043 (2006).
K. Takatsuka, Y. Arasaki, T. Yonehara, and K. Hanasaki, , World Scientific, 2015.
V. Pohl and J. Tremblay, Phys. Rev. A [**93**]{}, 012504 (2016).
A. Schild, F. Agostini, and E. Gross, J. Phys. Chem. A [**120**]{}, 3316 (2016).
K. Renziehausen and I. Barth, Prog. Theor. Exp. Phys. [**2018**]{}, 013A05 (2018).
T. Schaupp, J. Albert, and V. Engel, Eur. Phys. J. B [**91**]{}, 97 (2018).
R. Matsuzaki and K. Takatsuka, J. Chem. Phys. [**150**]{}, 014103 (2019).
J. Als-Nielsen and D. McMorrow, , Wiley, New York, 2011.
T. Ishikawa, H. Aoyagi, T. Asaka, Y. Asano, N. Azumi, T. Bizen, H. Ego, K. Fukami, T. Fukui, Y. Furukawa, S. Goto, H. Hanaki, T. Hara, T. Hasegawa, T. Hatsui, A. Higashiya, T. Hirono, N. Hosoda, M. Ishii, T. Inagaki, Y. Inubushi, T. Itoga, Y. Joti, M. Kago, T. Kameshima, H. Kimura, Y. Kirihara, A. Kiyomichi, T. Kobayashi, C. Kondo, T. Kudo, H. Maesaka, X. M. Marechal, S. Masuda, T.and Matsubara, T. Matsumoto, T. Matsushita, S. Matsui, M. Nagasono, N. Nariyama, H. Ohashi, T. Ohata, T. Ohshima, S. Ono, Y. Otake, C. Saji, T. Sakurai, T. Sato, K. Sawada, T. Seike, K. Shirasawa, T. Sugimoto, S. Suzuki, S. Takahashi, H. Takebe, K. Takeshita, K. Tamasaku, H. Tanaka, R. Tanaka, T. Tanaka, T. Togashi, K. Togawa, A. Tokuhisa, H. Tomizawa, K. Tono, S. K. Wu, M. Yabashi, M. Yamaga, A. Yamashita, K. Yanagida, C. Zhang, T. Shintake, H. Kitamura, and N. Kumagai, Nature Photonics [**6**]{}, 540 (2012).
C. Pellegrini, A. Marinelli, and S. Reiche, Rev. Mod. Phys. [**88**]{}, 015006 (2016).
P. Emma, R. Akre, J. Arthur, R. Bionta, C. Bostedt, J. Bozek, A. Brachmann, P. Bucksbaum, R. Coffee, F. J. Decker, Y. Ding, D. Dowell, S. Edstrom, A. Fisher, J. Frisch, S. Gilevich, J. Hastings, G. Hays, P. Hering, Z. Huang, R. Iverson, H. Loos, M. Messerschmidt, A. Miahnahri, S. Moeller, H. D. Nuhn, G. Pile, D. Ratner, J. Rzepiela, D. Schultz, T. Smith, P. Stefan, H. Tompkins, J. Turner, J. Welch, W. White, J. Wu, G. Yocky, and J. Galayda, Nature Photonics [**4**]{}, 641 (2010).
T. Tanaka, Phys. Rev. Lett. [**110**]{}, 084801 (2013).
S. Kumar, J. Lee, M. S. Hur, and M. Chung, JOSA B [**35**]{}, A75 (2018).
S. Kumar, Y. W. Parc, A. S. Landsman, and D. E. Kim, Scientific Reports [**6**]{}, 37700 (2016).
C. H. Shim, Y. W. Parc, S. Kumar, I. S. Ko, and D. E. Kim, Scientific Reports [**8**]{}, 7463 (2018).
N. Hartmann et al., Nature Photonics [**12**]{}, 215 (2018).
J. Duris, S. Li, T. Driver, E. G. Champenois, J. P. MacArthur, A. A. Lutman, Z. Zhang, P. Rosenberger, J. W. Aldrich, R. Coffee, G. Coslovich, F.-J. Decker, J. M. Glownia, G. Hartmann, W. Helml, A. Kamalov, J. Knurr, J. Krzywinski, M.-F. Lin, M. Nantel, A. Natan, J. O’Neal, N. Shivaram, P. Walter, A. Wang, J. J. Welch, T. J. A. Wolf, J. Z. Xu, M. F. Kling, P. H. Bucksbaum, A. Zholents, Z. Huang, J. P. Cryan, and A. Marinelli, arXiv preprint arXiv:1906.10649 (2019).
E. Lindroth, F. Calegari, L. Young, M. Harmand, N. Dudovich, N. Berrah, and O. Smirnova, Nature Reviews Physics [**1**]{}, 107 (2019).
L. Young et al., J. Phys. B [**51**]{}, 032003 (2018).
P. H. Bucksbaum, Science [**317**]{}, 766 (2007).
M. Peplow, Nature [**544**]{}, 408 (2017).
M. J. J. Vrakking, Physics [**9**]{}, 112 (2016).
G. Dixit, O. Vendrell, and R. Santra, Proc. Natl. Acad. Sci. U.S.A. [**109**]{}, 11636 (2012).
M. J. J. Vrakking and T. Elsaesser, Nature Photonics [**6**]{}, 645 (2012).
J. K[ü]{}pper, S. Stern, L. Holmegaard, F. Filsinger, A. Rouz[é]{}e, A. Rudenko, P. Johnsson, A. V. Martin, M. Adolph, A. Aquila, S. Bajt, A. Barty, C. Bostedt, J. Bozek, C. Caleman, R. Coffee, N. Coppola, T. Delmas, S. Epp, B. Erk, L. Foucar, T. Gorkhover, L. Gumprecht, A. Hartmann, R. Hartmann, G. Hauser, P. Holl, A. H[ö]{}mke, N. Kimme, F. Krasniqi, K. U. K[ü]{}hnel, J. Maurer, M. Messerschmidt, R. Moshammer, C. Reich, B. Rudek, R. Santra, I. Schlichting, C. Schmidt, S. Schorb, J. Schulz, H. Soltau, J. C. Spence, D. Starodub, L. Str[ü]{}der, J. Thøgersen, M. J. J. Vrakking, G. Weidenspointner, T. A. White, C. Wunderer, G. Meijer, J. Ullrich, H. Stapelfeldt, D. Rolles, and H. N. Chapman, Phys. Rev. Lett. [**112**]{}, 083002 (2014).
---
abstract: 'The security of the source has become an increasingly important issue in quantum cryptography. Within the framework of measurement-device-independent quantum key distribution (MDI-QKD), the source is the only region exploitable by a potential eavesdropper (Eve). Phase randomization is a cornerstone assumption in most discrete-variable (DV-) quantum communication protocols (e.g., QKD, quantum coin tossing, weak-coherent-state blind quantum computing, and so on), and the violation of this assumption is thus fatal to the security of those protocols. In this paper, we demonstrate a simple quantum hacking strategy, using commercial and homemade pulsed lasers, that allows Eve to actively tamper with the source and violate this assumption without leaving a trace afterwards. Furthermore, our attack may also be valid for continuous-variable (CV-) QKD, the other main class of QKD protocol, since parameters beyond the phase, e.g., the intensity, which directly determine the security of CV-QKD, can also be changed.'
author:
- 'Shi-Hai Sun$^1$'
- 'Feihu Xu$^{2,4}$'
- 'Mu-Sheng Jiang$^1$, Xiang-Chun Ma$^1$, Hoi-Kwong Lo$^2$'
- 'Lin-Mei Liang$^{1,3}$'
title: Effect of source tampering in the security of quantum cryptography
---
Introduction
============
Quantum key distribution (QKD) [@BB84] allows two remote parties to share an unconditionally secure secret key, which has been proven in theory [@Lo99; @Shor00; @GLLP04] and demonstrated in experiment [@Wang12]. However, the imperfections of practical devices can compromise the security of QKD systems [@Zhao08; @Xu10; @Gerhardt11; @Lydersen10; @Bugge14; @Sun11; @Jain11; @Weier11; @Ma13]. So far, three main approaches have been proposed to bridge the gap between theory and practice. The first is to close specific device loopholes with security patches [@Yuan10], but this cannot close potential, as-yet-unnoticed loopholes. The second is device-independent (DI-) QKD [@Mayers98; @Acin07; @Pironio09]: by testing Bell's inequality in a loophole-free setting, security can be established without detailed information about the implementation devices. But DI-QKD is impractical because an almost perfect single-photon detector (SPD) is required, and even then the secret key rate is limited [@Curty11; @Gisin10]. The third approach is to remove as many device loopholes and assumptions as possible by either modifying the QKD protocol or refining the security proof. One of the best results of this approach is measurement-device-independent (MDI-) QKD [@Lo12], which removes all detector loopholes. Since the detection system is widely regarded as the Achilles' heel of QKD [@Zhao08; @Gerhardt11; @Lydersen10; @Bugge14], MDI-QKD is of great importance. Indeed, MDI-QKD has recently been demonstrated both in the laboratory and in the field [@MDIExp].
Based on the framework of MDI-QKD, the source becomes the final battlefield between the legitimate parties and Eve. The major flaw of the source is that a semiconductor laser diode (S-LD), which generates weak coherent states, is normally used as the single-photon source in most commercial and research QKD systems [@Wang12; @MDIExp]. The security of MDI-QKD, as well as of BB84, based on an S-LD has been proven with decoy states [@Decoystate]. Hence, it has been widely believed that if the source can be well characterized (for example, source flaws can be taken care of with the loss-tolerant QKD protocol [@Tamaki14]), perfect security can still be obtained.
Generally speaking, there are two main classes of QKD protocols: discrete-variable (DV-) QKD (including BB84, decoy-state BB84, MDI-QKD, Scarani-Acin-Ribordy-Gisin (SARG04) [@Scarani04], differential phase shift (DPS) [@Takesue05], and so on), and continuous-variable (CV-) QKD [@cvQKD]. In most DV quantum communication protocols (e.g., DV-QKD, quantum coin tossing (QCT) [@Pappa14], and weak-coherent-state blind quantum computing (BQC) [@Dunjko12]), phase randomization is a cornerstone assumption. By assuming that the overall phase is uniformly distributed from 0 to $2\pi$ (in fact, discrete randomization with finitely many points, e.g., 10, is sufficient to guarantee QKD security [@Cao14]), a coherent state with intensity $|\alpha|^2$ is reduced to a classical mixture, that is, $\rho=\int_0^{2\pi}\frac{d\theta}{2\pi}|\alpha e^{i\theta}\rangle\langle\alpha e^{i\theta}|=\sum_{n=0}^\infty \frac{e^{-|\alpha|^2}|\alpha|^{2n}}{n!}|n\rangle\langle n|.$ This allows one to apply classical statistics to the security analysis. Note that although the security of QKD with a nonrandom phase has been proven [@Lo07], the performance is very limited in distance and key rate.
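This mixture reduction is easy to verify numerically. The following sketch (our own illustration, with an assumed amplitude $\alpha=1.2$ and a truncated Fock basis) averages coherent-state projectors over equally spaced phases and recovers the diagonal Poissonian mixture:

```python
import numpy as np
from math import factorial

alpha, nmax, K = 1.2, 20, 64   # amplitude, Fock cutoff, number of phases

def coherent(a, nmax):
    """Fock-basis amplitudes of the coherent state |a> up to nmax - 1."""
    return np.array([np.exp(-abs(a)**2 / 2) * a**n / np.sqrt(factorial(n))
                     for n in range(nmax)])

# Average |a e^{i theta}><a e^{i theta}| over K equally spaced phases
rho = np.zeros((nmax, nmax), dtype=complex)
for theta in 2 * np.pi * np.arange(K) / K:
    c = coherent(alpha * np.exp(1j * theta), nmax)
    rho += np.outer(c, c.conj()) / K

poisson = np.array([np.exp(-alpha**2) * alpha**(2 * n) / factorial(n)
                    for n in range(nmax)])
off_diag = np.max(np.abs(rho - np.diag(np.diag(rho))))
diag_err = np.max(np.abs(np.diag(rho).real - poisson))
print(off_diag, diag_err)   # both at machine precision
```

Note that already with $K$ equally spaced phases, every off-diagonal element $\rho_{nm}$ with $|n-m|$ not a multiple of $K$ vanishes exactly, which is the intuition behind discrete phase randomization being sufficient.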
In this paper, however, we demonstrate a simple quantum hacking strategy, with both commercial and homemade S-LD-based pulsed lasers, that allows Eve to actively violate the phase randomization assumption without leaving a trace afterwards. It is thus effective against most DV quantum communication protocols. Our attack may also be effective against CV-QKD, since other parameters of the source (e.g., the intensity) can also be changed; for example, it has been proven that local-oscillator fluctuations compromise the security of CV-QKD [@Ma13]. Since S-LDs are widely used in most quantum information protocols (e.g., DV-QKD, CV-QKD, QCT, BQC, and so on), and the security of these protocols is closely related to the S-LD parameters [@GLLP04], our work constitutes an important step towards secure quantum information processing.
Our attack differs from previous attacks [@Zhao08; @Xu10; @Gerhardt11; @Lydersen10; @Bugge14; @Sun11; @Jain11; @Weier11; @Ma13]. First, in our attack Eve actively violates basic assumptions required by the security proof by tampering with an initially perfect source. Second, unlike the laser damage attack [@Bugge14], in which Eve also actively creates loopholes in a perfect SPD, the loopholes created by our attack are temporary; this makes it impossible for Alice and Bob to detect the attack during the off-time of the QKD system. Third, our attack also differs from the Trojan-horse attack [@Gisin06; @Jain14]: in our attack Eve directly breaks basic assumptions of the QKD protocol, whereas in the Trojan-horse attack back-reflected light is measured to analyze Alice's information. Moreover, to the best of our knowledge, the Trojan-horse attack is invalid against an Alice with multiple lasers [@Schmitt07], but our attack remains applicable to such systems. Fourth and most importantly, our attack targets the source instead of the SPD. This makes it a serious threat for most quantum information protocols (not only QKD, but also QCT and BQC).
Here we emphasize that phase randomization is a cornerstone assumption in the security of many quantum communication protocols, including QKD, QCT, and BQC. It is important not only for weak-coherent-pulse protocols but also, for instance, for protocols based on parametric down conversion [@QWang08]. Continuous or discrete phase randomization is likewise crucial for the loss-tolerant protocol [@Tamaki14]. In fact, without phase randomization, the performance of a quantum communication protocol is dramatically reduced in distance and key rate [@Lo07]. We demonstrate experimentally, in a clear manner, how easy it is for Eve to violate this fundamental assumption in a practical setting. Our work is therefore of great generality for quantum information processing: the attack works for most DV-QKD systems, with various encoding schemes (polarization, phase, and time-bin) and various kinds of lasers (pulsed and continuous-wave (cw)), and it is also a possible serious threat for CV-QKD and other quantum information processing protocols (such as QCT and BQC).
The basic principle of our attack is as follows. In the inter-driven mode, the semiconductor medium of the S-LD is excited from loss to gain by each driving current pulse. A laser pulse is generated from *seed* photons originating from spontaneous emission. The phase of the laser pulse is determined by the seed photons. Since the phase of the seed photons is random, the phase of each laser pulse is random inherently [@Williams10; @Xu12; @Yuan14; @Kobayashi14]. However, if a certain number of photons are injected from an external source into the semiconductor medium, these photons will also be amplified to generate laser pulses. Consequently, the seed photons consist of two parts: one from spontaneous emission and the other part from the external source. Both parts will affect the phase of the resulting laser pulse. If the injected photons greatly outnumber the photons from spontaneous emission, the phase of the output laser pulse is largely determined by the phase of the injected photons. Therefore, Eve can control the phase of Alice’s signal laser by illuminating the S-LD from an external ‘control source’, and successfully violate the phase randomization assumption.
Experiment and main results
===========================
Figure \[fig:scheme\] shows the schematic setup of our experiment. We test four sample S-LDs operating in inter-driven mode: two ID300 pulsed lasers from ID Quantique [@IDQ] (labeled ID300-1 and ID300-2), and two homemade pulsed lasers with S-LDs from Sunstar Communication Technology Co., Ltd. (model: SDLP55HMBIFPN; labeled HM-1 and HM-2). To measure the phase relationship between adjacent pulses, an unbalanced Mach-Zehnder interferometer is used (see Fig. \[fig:scheme\](b)). The repetition rate of the signal laser is set to 206.34 MHz to match the delay of the interferometer. The output light is detected by a photodiode ($D_0$) with a bandwidth of 1 GHz, and the voltage of each pulse is recorded using an oscilloscope with a bandwidth of 33 GHz and a sample rate of 80 GHz (Agilent, model: DSOX93304Q).
Because the central frequency (with a finite linewidth) and the polarization of the signal laser are unstable in the experiment, Eve needs to carefully modulate the frequency and polarization of her control laser to match Alice's signal laser. In our experiment, a tunable laser module (model: 81600B-201, Agilent) is used as Eve's control laser. In the setup of Fig. \[fig:scheme\], Eve's control laser works in cw mode; at the end of this paper, however, we consider the possibility that Eve modulates her control laser into short pulses, which reduces Alice's ability to detect the attack.
In theory, the output voltage after $D_0$ is $V_P \propto [1+\cos(\Delta\phi+\theta_0)]/2$, where $\Delta\phi$ is the phase difference between adjacent pulses and $\theta_0$ is the inherent phase difference between the two paths of the interferometer. By passively controlling the interferometer with a temperature controller and a vibration isolator, we can stabilize it for about 2 minutes. In the test we set the number of pulses to 25791 for each experimental point of Fig. \[fig:pro\] (for each point we collect and store 10 M samples; since the repetition rate of the laser is 206.34 MHz and the sample rate of the oscilloscope is 80 GHz, each pulse cycle contains about (1/206.34 MHz)/(1/80 GHz) $\approx$ 388 samples, so the number of pulses is about $10\,\mathrm{M}/388\approx 25791$). The corresponding time interval, about 0.125 ms ($25791/206.34\,\mathrm{MHz}$), is much shorter than the drift time scale of the interferometer. Thus we can set $V_P^s \propto [1+\sin(\Delta\phi)]/2$ for $\theta_0=\pi/2$.
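The bookkeeping behind these counts can be sketched as follows (all values from the text; the pulse count agrees up to rounding):

```python
rep_rate = 206.34e6      # signal-laser repetition rate, Hz
sample_rate = 80e9       # oscilloscope sample rate, samples/s
stored = 10e6            # samples stored per experimental point

samples_per_pulse = sample_rate / rep_rate   # (1/206.34 MHz)/(1/80 GHz)
n_pulses = stored / samples_per_pulse        # pulses per experimental point
interval = n_pulses / rep_rate               # acquisition time, s
# ~388 samples per pulse, ~2.58e4 pulses, over ~0.125 ms
print(samples_per_pulse, n_pulses, interval)
```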
A uniform distribution of $\Delta\phi$ from 0 to $2\pi$ produces a U-type intensity distribution, owing to the fact that the mapping from phase to intensity, $V_P^s \propto \sin(\Delta\phi)$, is nonlinear. Indeed, when Eve is absent, such distributions (solid lines of Fig. \[fig:pro\]) are obtained in experiments with both the ID300 and the homemade pulsed lasers. However, bright light from Eve can correlate the phases of the pulses and violate the phase randomization assumption (dashed lines of Fig. \[fig:pro\]): when photons are injected into Alice's signal laser, the intensity distribution of $V_P^s$ becomes Gaussian for both the ID300 and the homemade signal lasers. Consequently, various quantum hacking strategies can be applied to spy on the final key [@Sun12]. Figure \[fig:attack\](a) shows a schematic setup to attack a complete QKD system.
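The qualitative signature, a U-type histogram for a free-running laser versus a Gaussian-type histogram under phase locking, can be reproduced with a toy Monte Carlo. The 0.3 rad residual jitter assigned to Eve's locked case below is an assumed illustrative value, not a measured one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25791                            # pulses per experimental point

# No attack: adjacent-pulse phase difference uniform on [0, 2*pi)
v_free = (1 + np.sin(rng.uniform(0, 2 * np.pi, n))) / 2
# Attack: Eve locks the phase; only Gaussian residual jitter remains
v_locked = (1 + np.sin(rng.normal(0.0, 0.3, n))) / 2

h_free, _ = np.histogram(v_free, bins=20, range=(0, 1))
h_locked, _ = np.histogram(v_locked, bins=20, range=(0, 1))
# U-type: edge bins dominate.  Gaussian-type: central bins dominate.
print(h_free[0] > h_free[10], h_locked[10] > h_locked[0])   # True True
```

The U shape is the arcsine law: the sine mapping piles uniformly distributed phases onto the extremes of the intensity range.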
Theoretically speaking, if Eve could perfectly control the phase of Alice's source, the intensity distribution would be a sharp line. However, the measured intensity distribution in Fig. \[fig:pro\] follows a Gaussian distribution, for two main reasons. (1) There is phase noise in Eve's control laser, and it follows a Gaussian distribution; the measured intensity is the interference of adjacent pulses (separated by about 5 ns), so the experimental results depend on the phase noise of Eve's control laser at different times. (2) The interference is imperfect, owing to unequal losses in the two paths of the interferometer, the timing jitter of the optical pulses, and so on. Therefore, a practical Eve cannot perfectly control the phase of Alice's source, and the phase noise determines how much information is leaked to Eve. Furthermore, although the security of the BB84 protocol has been proven both for a phase uniformly random from 0 to $2\pi$ [@GLLP04] and for a nonrandom phase [@Lo07], the key rate (or the mutual information between Alice and Eve) is still unknown when the phase of the source follows a Gaussian or other general probability distribution; this will be studied in future work.
Furthermore, we note that when an LD is operated in inter-driven mode the emitted pulses have random phases, and this phase noise has been used for quantum random number generation by many groups [@Williams10; @Xu12; @Yuan14; @Kobayashi14]. However, Fig. \[fig:pro\](e) does not prove that the phase of each pulse follows a uniform distribution from $0$ to $2\pi$; in fact, if the phase were uniformly distributed from 0 to $\pi$, the same probability distribution would also be obtained. Thus, the phase randomization assumption must be carefully evaluated, particularly for a high-speed QKD system [@Kobayashi14]. Active phase randomization [@Zhao07] is a good countermeasure to guarantee the phase randomization assumption.
Countermeasure
==============
Figure \[fig:attack\](b) shows a possible countermeasure by which Alice can monitor our attack. It includes three main devices: an isolator (Iso.), a filter, and a photodetector. But these devices cannot defeat our attack completely if they are not carefully configured (see Appendix \[appendix\_a\] for details). (1) The isolator cannot entirely stop Eve's photons, owing to its finite isolation (see Fig. \[fig:pro\_isolator\]), and other imperfections of practical isolators have been found in a recent paper [@Jain14]. (2) Since the wavelength of Eve's control laser is the same as that of Alice's signal laser in our attack, an optical frequency filter is also ineffective. (3) Both the optical power meter and the classical photodetector can be foiled by Eve, so that they do not accurately report the power of the light from the channel: a short-pulse control light keeps the average power of Eve's light low, and the finite bandwidth of the monitoring devices worsens the monitoring results. Furthermore, a recent paper shows other imperfections of a practical monitoring photodetector [@Sajeed14].
Active phase randomization (Fig. \[fig:attack\](c)) [@Zhao07], or a cw laser followed by an external intensity modulator and an active phase randomization scheme, is another important choice for practical QKD systems, especially at high repetition rates [@Kobayashi14]; the phase randomization assumption is then automatically guaranteed. But even this countermeasure may not remove our attack entirely, since Eve can tamper with other parameters (e.g., intensity and pulse shape; see Fig. \[fig:waveform\]) to compromise the security of such systems. For example, the key rate of both CV-QKD and DV-QKD depends on the intensity of the signal pulses [@Ma13; @Wang-Peng08; @Mizutani15], and the stability of the S-LD (whether it works in pulsed or cw mode) can be degraded by bright light, making the intensity of Alice's laser unstable. In this sense, therefore, our attack is also effective against a QKD system with a cw laser and an active phase randomization scheme. Another countermeasure is to use a protocol (or security proof) with a nonrandom phase, but the performance of such a protocol is dramatically reduced in distance and key rate [@Lo07].
Discussion
==========
Fig. \[fig:waveform\] shows that the pulse shape is also changed by Eve's bright light, and these changes are likewise useful to Eve. For example, the signal pulse is emitted earlier than it would be without Eve [@comm_timeshift], and the time shift differs for each S-LD. Furthermore, in the absence of an external field, the first oscillation is much stronger than the following ones and only a few oscillations appear [@comment_expfig6], whereas when Eve is present more oscillations are observed, with different oscillation waveforms for different laser diodes. Thus it is possible for Eve to compromise the security of QKD systems with multiple lasers [@Schmitt07] by measuring the characteristics of the signal pulses (e.g., time shift, pulse width, and optical frequency).
Here we remark that, generally speaking, the changes of pulse shape are helpful for both Eve and Alice: more imperfections can be exploited by Eve, but more parameters can be monitored by Alice to discover Eve's presence. In fact, both Eve and Alice must be very careful in this cat-and-mouse game (see Appendix \[appendix\_b\] for details). First, if Alice wants to fully monitor the changes of pulse shape, advanced devices with high speed and bandwidth are required, which may dramatically increase the technological challenge and cost for a practical Alice. Second, Eve can carefully configure her attack so that it does not increase the error rate and the changes of pulse shape are not discovered by Alice. Third, the changed shape may actually benefit Eve more than Alice and Bob: Eve could well be a spy or a national security agency such as the NSA, with a much larger budget than Alice and Bob, and is thus probably in a better position to exploit the imperfections that she has introduced in the quantum signal. Furthermore, note that even a tiny violation of the phase randomization assumption, or of other source parameters, undermines the very foundation of the security proofs of QKD, and it is then no longer fair for Alice and Bob to claim unconditional security.
Finally, in addition to using a laser, Eve can also attack the QKD system by using temperature, microwave radiation, and so on. At the same time, although most quantum hackers focus on the optical devices of the legitimate parties, Eve can also exploit imperfections in the electrical devices of the QKD system. For example, if the electromagnetic shielding of devices of Alice and Bob is imperfect, Eve could use microwave radiation from outside to control the parameters of these devices. These are the subjects of future investigations.
Conclusion
==========
In summary, phase randomization is a cornerstone assumption for many quantum communication protocols, and a tiny violation of such an assumption is fatal to the security of such protocols. However, here we demonstrate experimentally, with both commercial and homemade pulsed lasers, how easy it is for Eve to violate such a fundamental assumption in a practical setting. Additionally, besides the random phase, other parameters (e.g., intensity) of the source could also be changed. Our attack works for most DV-QKD protocols, and possibly for CV-QKD and other quantum information processing protocols (e.g., QCT and BQC). Thus our work constitutes an important step towards secure quantum information processing.
Acknowledgement
===============
We thank Z. Yuan and V. Makarov for helpful discussions. This work is supported by the National Natural Science Foundation of China, Grant No. 11304391. L.M.L is supported by the NCET program. H.-K. Lo is supported by NSERC. F. Xu is supported by the Office of Naval Research (ONR) and the Air Force Office of Scientific Research (AFOSR).
The scheme for Eve to foil Alice’s monitor devices {#appendix_a}
==================================================
Now we show that Alice's countermeasure of Fig. \[fig:attack\](b) of the main text (including an isolator, an optical filter, and a photodetector) cannot defeat our attack entirely.
*(i) Isolator.* In general, an optical isolator serves to prevent back-propagating photons from entering Alice's lab. However, owing to the finite isolation of practical isolators, this approach only reduces the probability that photons enter Alice's zone; it cannot eliminate it entirely. We perform a proof-of-principle experiment by inserting a 25 dB isolator after the output port of the signal laser ID300-1. The experimental results of Fig. \[fig:pro\_isolator\] of the main text show that the intensity distribution is still Gaussian-type rather than U-type when Eve uses a cw laser with a power of 0.6 mW; thus the phases of adjacent pulses can still be correlated. Although the isolation of some commercial isolators reaches 50 dB (and Alice can use two or more isolators in series to increase the isolation), this cannot totally foil our attack, because Eve can always increase the power of her control laser. Furthermore, other imperfections of practical isolators have been found in a recent paper [@Jain14].
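The power budget behind this argument is a one-liner; in the sketch below, the 50 dB figure and the implied input power are illustrative:

```python
def through_isolator(p_in_mw, isolation_db):
    """Optical power (mW) leaking backwards through an isolator."""
    return p_in_mw * 10 ** (-isolation_db / 10)

leaked = through_isolator(0.6, 25)   # our 0.6 mW test beam, 25 dB isolator
# To push the same power past a 50 dB isolator, Eve scales her input up:
needed = leaked * 10 ** (50 / 10)
print(leaked, needed)   # ~1.9e-3 mW leaks through; ~190 mW input beats 50 dB
```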
*(ii) Filter.* An optical frequency filter is often used by Alice to remove wavelength-dependent flaws, so that only light within a narrow band of frequencies can enter Alice's lab. However, since the wavelength of Eve's control laser is the same as that of Alice's signal laser in our attack, an optical frequency filter is not an effective countermeasure.
*(iii) Photodetector.* Alice can use both an optical power meter and a photodetector to monitor the intensity of light from the quantum channel, but the optical power meter measures only the average power of the light and can thus be foiled by an Eve who uses a pulsed laser. For example, Fig. 4 of the main text shows that a cw laser with an optical power of 0.6 mW is sufficient to correlate the phases of Alice's signal pulses. Now suppose that the repetition rate of the QKD system is 10 MHz (a 100 ns period) and Eve uses a pulsed control laser with a width of 100 ps. The duty cycle of Eve's pulses is then 100 ps/100 ns = 0.001, so the average optical power is reduced to 0.6 mW $\times$ 0.001 = 0.6 $\mu$W.
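The duty-cycle arithmetic can be sketched as:

```python
rep_rate = 10e6          # QKD repetition rate, Hz -> 100 ns period
pulse_width = 100e-12    # width of Eve's control pulses, s
peak_power = 0.6e-3      # W, enough to lock the phase (cf. Fig. 4)

duty_cycle = pulse_width * rep_rate    # 100 ps / 100 ns ~ 1e-3
avg_power = peak_power * duty_cycle    # what a power meter reports: ~0.6 uW
print(duty_cycle, avg_power)
```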
A classical photodetector with a discrimination voltage can be used to monitor the intensity of pulsed light. However, the classical photodetector can also be fooled, for the following two reasons.
First, a classical photodetector can be damaged by bright light so that it no longer works as expected. There are two kinds of classical photodetectors, one based on a PIN photodiode and the other on an APD, and both can be damaged by bright light [@Bugge14]. For example, the InGaAs-APD-based detectors from Thorlabs have maximal input powers of 10 mW (model: APD310) and 1 mW (model: APD110C), and the maximal input power of the InGaAs-PIN-based detector from Thorlabs (model: PDA8GS) is about 1 mW for cw light and 20 mW for 60 ms [@thorlabs].
Second, the finite bandwidth of the classical photodetector may worsen the monitoring results. We experimentally measure the amplitude of an electrical signal using an oscilloscope with various bandwidths (Fig. \[fig:pulsewidth\](a)). Furthermore, the theoretical amplitudes of an ideal Gaussian pulse passing through a linear time-invariant ideal low-pass filter are shown in Fig. \[fig:pulsewidth\](b)-(c). Generally speaking, when a signal pulse $f(t)$ passes through a linear time-invariant device, its amplitude function becomes $$\label{eq_gt}
g(t)=\int_{-\infty}^{\infty} G(\omega) F[f(t)]e^{i\omega t}d\omega,$$ where $F[\cdot]$ is the Fourier transformation and $G(\omega)$ is the frequency response function of the device. This clearly shows that a device with finite bandwidth filters out high-frequency components and reduces the amplitude of a signal pulse. For simplicity, we assume that the signal is a Gaussian pulse and the device is an ideal low-pass filter, that is, $$\label{eq_con}
\begin{split}
f(t)&=\exp[-\frac{t^2}{2\sigma^2}],\\
G(\omega)&=\begin{cases} 1& |\omega|\leq\omega_0\\ 0& |\omega| >\omega_0\end{cases}.
\end{split}$$ Here $\sigma$ is the standard deviation of the signal pulse $f(t)$; if the 3 dB width of $f(t)$ is denoted $\Delta t$, it is easy to check that $\Delta t=\sqrt{8\ln(2)}\sigma$. $\omega_0$ is the cutoff frequency of the ideal low-pass filter.
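For this Gaussian/ideal-low-pass pair, and assuming the usual $1/2\pi$ normalization of the inverse transform, the peak of the filtered pulse has the closed form $g(0)=\mathrm{erf}(\sigma\omega_0/\sqrt{2})$. The sketch below (our own numerical check, not code from the experiment) filters a 100 ps pulse at three assumed detector bandwidths and compares the result with this formula:

```python
import numpy as np
from math import erf, sqrt, log, pi

fwhm = 100e-12                          # 100 ps (3 dB width) Gaussian pulse
sigma = fwhm / sqrt(8 * log(2))         # Delta t = sqrt(8 ln 2) * sigma

t = np.linspace(-10e-9, 10e-9, 1 << 15)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * sigma**2))      # unit-amplitude pulse
F = np.fft.fft(f)
freq = np.fft.fftfreq(t.size, dt)       # Hz

results = {}
for bw in (1e9, 5e9, 40e9):             # assumed detector bandwidths
    g = np.fft.ifft(np.where(np.abs(freq) <= bw, F, 0)).real
    analytic = erf(sigma * 2 * pi * bw / sqrt(2))   # closed-form peak
    results[bw] = (g.max(), analytic)
    print(f"{bw/1e9:5.0f} GHz: peak {g.max():.3f} (erf model {analytic:.3f})")
```

In this idealized model a 1 GHz detector recovers only about a fifth of the true peak of the 100 ps pulse, which is why a sharp control pulse can slip under a fixed discrimination voltage.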
The theoretical amplitude of $g(t)$ is shown in Fig. 5(b)-(c) of the main text. The results clearly show that monitoring devices with finite bandwidth cannot faithfully report the actual amplitude of the input signal, and Eve can foil the monitoring devices with a sharp pulsed signal. Although the test of Fig. \[fig:pulsewidth\] is performed on an electrical signal, the results apply directly to a photodetector with finite bandwidth. For example, suppose that the gain and the discrimination voltage of the photodetector are $10^4$ V/W and 0.2 V, and that Eve uses a pulsed control light with a 3 dB width of 100 ps and a peak power of 100 $\mu$W. The expected output voltage of the photodetector would then be 1 V, much larger than the discrimination voltage of 0.2 V.
Fig. \[fig:waveform\] of the main text also shows that if the bandwidth of Alice's photodetector is high enough (e.g., $>5$ GHz), Eve can be discovered. (Generally speaking, the gain of a photodetector decreases as its bandwidth increases, but here we simply assume the gain is independent of the bandwidth.) However, if the bandwidth of the photodetector is limited (e.g., to 1 GHz), the actual output voltage is lower than the discrimination voltage of 0.2 V, and Alice cannot discover the existence of Eve. Note that Fig. \[fig:pro\] of the main text has shown that 100 $\mu$W is sufficient for Eve to break the phase randomization assumption. Furthermore, a recent paper shows other imperfections in a practical monitoring photodetector [@Sajeed14].
Therefore, the possible countermeasure of Fig. \[fig:attack\](b) of the main text can be fooled by Eve if the devices are not carefully configured. Furthermore, illumination by bright light changes not only the phase but also the pulse waveform, including its width, amplitude, and shape. Although we do not yet know how Eve can obtain more information by exploiting such a modified waveform, it remains possible for Eve to attack the QKD system in this way.
A simple discussion about Fig.\[fig:waveform\] {#appendix_b}
==============================================
Fig. \[fig:waveform\] of the main text clearly shows that when the signal laser is illuminated by bright light, the pulse shape is also changed. Generally speaking, these additional changes are helpful for both Eve and the legitimate parties: more imperfections can be exploited by Eve to spy on the final key, and more parameters can be monitored by Alice to discover the existence of Eve. But it is still possible for Eve to perform our attack.
Theoretically speaking, Eve can configure her attack so that the modification of the pulse shape does not increase the error rate between Alice and Bob. In fact, Eve can perform an intercept-and-resend attack while keeping the error rate below a reasonable value. For example, in a system with multiple laser diodes, she first measures the time shift of each laser diode to determine Alice's state; if the time shifts of the laser diodes are distinguishable (which is possible according to Fig. \[fig:waveform\]), Eve knows the state sent by Alice and can resend a perfect faked state to Bob according to her measurement result. Thus no additional error is introduced, and the legitimate parties cannot discover the existence of Eve by monitoring the error rate.
Therefore, the main battlefield for Alice and Eve is the monitoring devices, and both of them must be very careful in this cat-and-mouse game.
Alice may discover the existence of Eve by carefully monitoring the parameters of the signal laser. But since the changes in some parameters are tiny, advanced devices with high speed and bandwidth (e.g., photodetectors, analog-to-digital converters, or time-to-amplitude converters) are required, which may dramatically increase the technological challenge and cost for a practical Alice. For example, the time shift for the ID300 lasers is about 100 ps; thus, if Alice wanted to characterize the time shift of her pulses, the bandwidth and sample rate of her analog-to-digital converter should be larger than 40 GHz (generally speaking, at least four points are needed to recover a pulse). The bandwidth and sample rate would have to be even higher for the homemade lasers (see Fig. \[fig:waveform\] of the main text for HM-1 and HM-2), since much smaller changes are introduced.
For Eve's part, she should carefully configure her attack to foil Alice's monitoring devices. (1) Eve may carefully stabilize her control laser and match its optical frequency to that of Alice's signal laser, so that, apart from derandomizing the phase, only tiny changes are introduced to the pulse shape. Taking the homemade lasers (HM-1 and HM-2) as an example, Eve's light correlates the phases of the pulses (see Fig. \[fig:pro\](c) and (d) of the main text), but Fig. \[fig:waveform\](c) and (d) of the main text show that the changes of pulse shape are very tiny (at least, in contrast to ID300-1 and ID300-2, we do not find any obvious changes in the pulse shape using a photodetector with 40 GHz bandwidth and an oscilloscope with 33 GHz bandwidth and a sample rate of 80 GHz; thus, if Alice wants to discover the changed shape of HM-1 and HM-2, devices with even higher bandwidth and sample rate are required). (2) Eve may reduce the risk of being discovered by spying on only part of the final key. For example, it has been proven that a small fluctuation of the intensity dramatically reduces the secret key rate of the decoy-state BB84 protocol [@Wang09], so she can still obtain part of the final key by only slightly changing the intensity of Alice's signal laser. In fact, it has been shown that if the intensity of Alice's signal pulses fluctuates by 1%, 2%, and 3%, the final key rate is reduced by 11.86%, 23.91%, and 36.17%, respectively [@Wang09] (the simulation was performed with the experimental parameters of Ref. [@Schmitt07](b)).
Furthermore, generally speaking, the ability to change other parameters of an optical signal may actually benefit Eve more than Alice and Bob. This is because Eve could well be a spy or work for a national security agency such as the NSA; she thus has a much larger budget than Alice and Bob and is probably in a better position to exploit the imperfections that she has introduced in the quantum signal.
C. H. Bennett, G. Brassard. *International Conference on Computers, Systems and Signal Processing*, Bangalore, India. New York: IEEE. p.175-179 (1984).
H. K. Lo, H. F. Chau. *Science* **283**, 2050-2056 (1999).
P. W. Shor and J. Preskill. *Phys. Rev. Lett.* **85**, 441 (2000).
D. Gottesman, H. K. Lo, N. Lütkenhaus, J. Preskill. *Quantum Inf. Comput.* **4**, 325 (2004).
S. Wang, W. Chen, J. F. Guo, Z. Q. Yin, H. W. Li, Z. Zhou, G. C. Guo, and Z. F. Han, *Opt. Lett.* **37**, 1008-1010 (2012); Y. Liu, T. Y. Chen, J. Wang, W. Q. Cai, X. Wan, L. K. Chen, J. H. Wang, S. B. Liu, H. Liang, L. Yang, *et al.* *Opt. Express* **18**, 8587-8594 (2010); Z. L. Yuan, A. R. Dixon, J. F. Dynes, A. W. Sharpe, and A. J. Shields, *Appl. Phys. Lett.* **92**, 201104 (2008).
Y. Zhao, C. H. F. Fung, B. Qi, C. Chen, and H. K. Lo, *Phys. Rev. A*, **78**, 042333 (2008).
F. H. Xu, B. Qi, and H. K. Lo, *New J. Phys.*, **12**, 113026 (2010).
L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar, and V. Makarov, *Nat. Photonics*, **4**, 686 (2010).
I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar, C. Kurtsiefer, and V. Makarov, *Nat. Commun.*, **2**, 349 (2011).
S. H. Sun, M. S. Jiang, L. M. Liang. *Phys. Rev. A*, **83**, 062331 (2011).
N. Jain, C. Wittmann, L. Lydersen, C. Wiechers, D. Elser, C. Marquardt, V. Makarov, and G. Leuchs. *Phys. Rev. Lett.*, **107**, 110501 (2011).
H. Weier, H. Krauss, M. Rau, M. Fürst, S. Nauerth, and H. Weinfurter. *New J. Phys.*, **13**, 073024 (2011).
A. N. Bugge, S. Sauge, Aina Mardhiyah M. Ghazali, J. Skaar, L. Lydersen, and V. Makarov, *Phys. Rev. Lett.*, **112**, 070503 (2014).
X. C. Ma, S. H. Sun, M. S. Jiang, and L. M. Liang, *Phys. Rev. A*, **88**, 022339 (2013).
Z. L. Yuan, J. F. Dynes, and A. J. Shields. *Nat. Photon.*, **4**, 800 (2010); T. F. da Silva, G. B. Xavier, G. P. Temporao, and J. P. von der Weid. *Opt. Express*, **20**, 18911 (2012); T. F. da Silva, G. C. do Amaral, G. B. Xavier, G. P. Temporao, and J. P. von der Weid. *IEEE Journal of Selected Topics in Quant. Electron.*, Vol. **21**, Issue **3**, 6600309 (2015); C. C. W. Lim, N. Walenta, M. Legré, N. Gisin, and H. Zbinden. *ibid.*, Vol. **21**, Issue **3**, 6601305 (2015).
D. Mayers, and A. Yao. *FOCS ‘98 Proceedings of the 39th Annual Symposium on Foundations of Computer Science, Page 503 (1998)*.
A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, *Phys. Rev. Lett.*, **98**, 230501 (2007).
S. Pironio, A. Acín, N. Brunner, N. Gisin, S. Massar, and V. Scarani, *New J. Phys.*, **11**, 045021 (2009).
N. Gisin, S. Pironio, N. Sangouard. *Phys. Rev. Lett.*, **105**, 070501 (2010).
M. Curty, and T. Moroder. *Phys. Rev. A*, **84**, 010304(R) (2011).
H. K. Lo, M. Curty, B. Qi. *Phys. Rev. Lett.* **108**, 130503 (2012).
A. Rubenok, J. A. Slater, P. Chan, I. Lucio-Martinez, W. Tittel. *Phys. Rev. Lett.*, **111**, 130501 (2013); Y. Liu, T. Y. Chen, L. J. Wang, H. Liang, G. L. Shentu, J. Wang, K. Cui, H. L. Yin, N. L. Liu, L. Li, *et al.* *ibid.* **111**, 130502 (2013); Z. Y. Tang, Z. F. Liao, F. H. Xu, B. Qi, L. Qian, and H. K. Lo. *ibid.* **112**, 190503 (2014); Y. L. Tang, H. L. Yin, S. J. Chen, Y. Liu, W. J. Zhang, X. Jiang, L. Zhang, J. Wang, L. X. You, J. Y. Guan, *et al.* *ibid.* **113**, 190501 (2014); T. Ferreira da Silva, D. Vitoreti, G. B. Xavier, G. C. do Amaral, G. P. Temporao, J. P. von der Weid. *Phys. Rev. A*, **88**, 052303 (2013).
W. Y. Hwang. *Phys. Rev. Lett.* **91**, 057901 (2003); H. K. Lo, X. F. Ma, K. Chen. *ibid.* **94**, 230504 (2005); X. B. Wang. *ibid.* **94**, 230503 (2005).
K. Tamaki, M. Curty, G. Kato, H. -K. Lo, and K. Azuma. *Phys. Rev. A*, **90**, 052314 (2014).
V. Scarani, A. Acin, G. Ribordy, N. Gisin. *Phys. Rev. Lett.* **92**, 057901 (2004).
H. Takesue, E. Diamanti, T. Honjo, C. Langrock, M. M. Fejer, K. Inoue, and Y. Yamamoto, *New J. Phys.* **7**, 232 (2005).
A. Leverrier, and P. Grangier. *Phys. Rev. Lett.*, **102**, 180504 (2009).
A. Pappa, P. Jouguet, T. Lawson, A. Chailloux, M. Legré, P. Trinkler, I. Kerenidis, and E. Diamanti. *Nat. Commun.* **5**, 3717 (2014).
V. Dunjko, E. Kashefi and A. Leverrier. *Phys. Rev. Lett.* **108**, 200502 (2012).
Z. Cao, Z. Zhang, H. K. Lo, and X. F. Ma. *New J. Phys.* **17**, 053014 (2015).
H. K. Lo, and J. Preskill. *Quant. Inf. Comput.* **7(5)**, 431 (2007).
N. Gisin, S. Fasel, B. Kraus, H. Zbinden, and G. Ribordy, *Phys. Rev. A*, **73**, 022320 (2006).
N. Jain, B. Stiller, I. Khan, V. Makarov, C. Marquardt, and G. Leuchs. *IEEE J. Sel. Topics Quantum Elect.*, Vol. **21**, Issue **3**, 6600710 (2015).
T. Schmitt-Manderbach, H. Weier, M. Fürst, R. Ursin, F. Tiefenbacher, T. Scheidl, J. Perdigues, Z. Sodnik, C. Kurtsiefer, J. G. Rarity, *et al.*, *Phys. Rev. Lett.*, **98**, 010504 (2007); C. Z. Peng, J. Zhang, D. Yang, W. B. Gao, H. X. Ma, H. Y., *et al.*, *ibid.*, **98**, 010505 (2007).
Q. Wang, W. Chen, G. Xavier, M. Swillo, T. Zhang, S. Sauge, *et al.*, *Phys. Rev. Lett.*, **100**, 090501 (2008).
C. R. S, Williams, J. C. Salevan, X. Li, R. Roy, T. E. Murphy. *Opt. Express* **18**, 23584 (2010).
F. H. Xu, B. Qi, H. Xu, H. X. Zheng, H. K. Lo. *Opt. Express*, **20**, 12366 (2012).
Z. L. Yuan, M. Lucamarini, J. F. Dynes, *et al.* *Appl. Phys. Lett.*, **104**, 261112 (2014).
T. Kobayashi, A. Tomita, A. Okamoto. *Phys. Rev. A*, **90**, 032320 (2014).
$\text{http://www.idquantique.com}$
The minimal power of Eve’s control laser ($P_c$) depends on the parameters of both the signal and control lasers, such as the polarization, line width, isolation of the S-LDs, and so on.
S. H. Sun, M. Gao, M. S. Jiang, C. Y. Li, and L. M. Liang, *Phys. Rev. A*, **85**, 032304 (2012); Y. L. Tang, H. L. Yin, X. F. Ma, C. H. F. Fung, Y. Liu, H. L. Yong, *et al.* *ibid.* **88**, 022308 (2013).
S. Sajeed, I. Radchenko, S. Kaiser, J. -P. Bourgoin, A. Pappa, L. Monat, *et al.* *Phys. Rev. A*, **91**, 032326 (2015).
Y. Zhao, B. Qi, H. K. Lo. *Appl. Phys. Lett.*, **90**, 044106 (2007); S. H. Sun, L. M. Liang. *ibid.* **101**, 071107 (2012).
X. B. Wang, C. Z. Peng, J. Zhang, L. Yang, and J. W. Pan. *Phys. Rev. A*, **77**, 042311 (2008).
A. Mizutani, M. Curty, C. C. Wen Lim, N. Imoto, and K. Tamaki. arXiv:quant-ph/1504.08151 (2015).
The time shift appears because, when the laser cavity is seeded with an external field, the relaxation oscillations are damped and the first oscillation occurs earlier. The amount of the time shift is much larger for the two ID300 lasers than for the homemade lasers because the linewidth of the two ID300 lasers is larger than that of the homemade lasers; thus the probability that a photon is injected into the ID300 lasers is larger than for the homemade lasers.
The reason is that it takes time for the initial field to appear (owing to the finite spontaneous emission rate and the geometry of the laser chip). While the field builds up, there is little or no stimulated emission, yet the electrical pumping continues at the full rate. The population inversion then rises above its steady-state value and far overshoots it. The field in turn far overshoots its steady-state value, because for a short time the laser has much higher gain; the stronger field then depletes the inversion by stimulated emission to below the steady-state value, and so on: there are a few oscillations before the emission settles to a stable value. In the presence of a seed field in the cavity, however, the population inversion does not initially overshoot as high, because the emission stimulated by the seed field begins earlier.
$\text{http://www.thorlabschina.cn}$.
X. B. Wang, L. Yang, C. Z. Peng, and J. W. Pan, *New J. Phys.*, **11**, 075006 (2009).
---
abstract: 'Core-collapse supernovae produce elements between Fe and Ag depending on the properties of the ejected matter. Despite the fast progress in supernova simulations in the last decades, there are still uncertainties in the astrophysical conditions. In this paper we investigate the impact of astrophysical uncertainties on the nucleosynthesis. Since a systematic study based on trajectories from hydrodynamic simulations is computationally very expensive, we rely on a steady-state model. By varying the mass and radius of the proto-neutron star as well as the electron fraction in the steady-state model, we cover a wide range of astrophysical conditions. In our study, we find four abundance patterns which can be formed in neutron-rich neutrino-driven ejecta. This provides a unique template of trajectories that can be used to investigate the impact of nuclear physics input on the nucleosynthesis for representative astrophysical conditions. Furthermore, we link these four patterns to the neutron-to-seed and alpha-to-seed ratios at $T=3$ GK. Therefore, our results give a good overview of the potential nucleosynthesis evolution which can occur in a supernova simulation.'
author:
- 'J. Bliss'
- 'M. Witt'
- 'A. Arcones'
- 'F. Montes'
- 'J. Pereira'
bibliography:
- 'paper.bib'
title: 'Survey of astrophysical conditions in neutrino-driven supernova ejecta nucleosynthesis'
---
Introduction {#sec:introduction}
============
Core-collapse supernovae represent the death of massive stars ($M\gtrsim 8M_\odot$), lead to the birth of neutron stars and stellar black holes, and are the production site of many elements. They contribute about $1/3$ of the iron observed in our Galaxy, produce radioactive isotopes (e.g., $^{44}$Ti, $^{60}$Fe) whose decay has been observed [@Renaud.etal:2006; @Grebenev:2012; @Grefenstette.etal:2014; @Wallner.etal:2016], and synthesize heavy elements up to, probably, Ag/Cd [@Wanajo.etal:2011a; @Wanajo.etal:2013b]. In some rare extreme cases, where the explosion is driven by magnetic fields, even the heaviest elements may be produced by the r-process [@Winteler.etal:2012; @Nishimura.etal:2015; @Moesta.etal:2017; @Halevi.Moesta:2018]. The contribution of core-collapse supernovae to the chemical history of the universe needs to be studied based on self-consistent supernova simulations. This implies following the explosion and ejecta evolution for several seconds with three-dimensional simulations in general relativity, including detailed neutrino transport, and for several stellar progenitors. However, this is not possible today, even though new efforts have been reported in this direction [@Wanajo.etal:2011a; @Wanajo.etal:2013a; @Wanajo.etal:2013c; @Wanajo.etal:2017; @Harris.etal:2017; @Eichler.etal:2017].
In this paper, we focus on the production of elements between iron and silver in the neutrino-driven ejecta. We follow an approach complementary to the expensive simulations by using a steady-state wind model, which allows us to study the neutrino-driven ejecta. The steady-state wind model has proven very efficient in determining the conditions required for the r-process to occur in core-collapse supernovae [@Qian.Woosley:1996; @Hoffman.etal:1997; @Otsuki.etal:2000; @Thompson.etal:2001; @Wanajo.etal:2001]. We explore many combinations of electron fractions, neutron star masses, and radii. These are input parameters for the wind equations and lead to a broad range of values for the wind parameters, namely entropy, expansion time scale, and electron fraction. Here, we investigate neutron-rich conditions and find a typical charged-particle reaction process (sometimes also referred to as the alpha process) and weak r-process nucleosynthesis. Current simulations predict proton-rich ejecta after the explosion (e.g., [@Bruenn.etal:2016]). However, uncertainties in neutrino-matter interactions may slightly change this [@MartinezPinedo.etal:2012; @Roberts.etal:2012]. It has been found that a small amount of neutron-rich matter may still be ejected [@Wanajo.etal:2011a]. These ejecta are exposed to neutrinos only briefly and can be well described by our neutrino-driven wind model. Even if the amount of neutron-rich ejected matter is small, its contribution to the nucleosynthesis is very important because the mass fractions of elements heavier than iron are relatively high. In proton-rich conditions the ejected matter contains mainly alpha particles and protons, and therefore the mass fraction of heavy nuclei is very small [@Arcones.Bliss:2014; @Arcones.Montes:2011].
The paper is structured as follows. In Sect. \[sec:method\] the steady-state model and the trajectories are described. We explain and compare the different nucleosynthesis groups created under different astrophysical conditions in Sect. \[sec:results\_nuc\]. Finally, we summarize our results and conclude in Sect. \[sec:conclusions\].
Steady-state model and trajectories {#sec:method}
===================================
We resort to steady-state models that were very successful in finding the appropriate conditions to produce the r-process in core-collapse supernovae [@Qian.Woosley:1996; @Hoffman.etal:1997; @Cardall.Fuller:1997; @Otsuki.etal:2000; @Thompson.etal:2001; @Wanajo.etal:2001]. With such a model, one can explore all possible conditions found in current and future simulations, as it was done for the r-process. Moreover, the trajectories obtained here mimic not only neutrino-driven wind ejecta, but also neutrino-driven ejecta in general, even if these are not supersonic winds. Therefore, our study can also roughly account for early neutrino-driven ejecta.
The steady-state model used here follows [@Otsuki.etal:2000] and is shortly summarized for completeness. Steady-state models rely on the fact that, in the first few seconds after core collapse, the proto-neutron star mass, radius, and (anti)neutrino luminosities and energies change slowly [@Qian.Woosley:1996], so that time dependencies can be neglected. We have compared the results of our steady-state model to simulations and found that, given the appropriate input parameters, it is possible to reproduce the evolution of the wind. However, in simulations there are also hydrodynamical features (like the reverse shock) that cannot be captured by a simple steady-state model [@Arcones.etal:2007; @Arcones.Janka:2011]. In slightly neutron-rich winds, such hydrodynamical features have a small impact on the nucleosynthesis, in contrast to proton-rich conditions [@Wanajo.etal:2011b; @Arcones.etal:2012; @Arcones.Bliss:2014].
The basic equations of the steady-state wind in a spherically symmetric Schwarzschild geometry are $$\begin{gathered}
\dot{M} = 4\pi r^2 \rho\, v \, , \label{eq:ndw1} \\
v\, \frac{dv}{dr} = -\, \frac{1}{\rho_{\mathrm{tot}}+P}\frac{dP}{dr}\left(1+v^2-\frac{2 M_\mathrm{ns}}{r}\right) - \frac{M_\mathrm{ns}}{r^2} \, , \label{eq:ndw2} \\
\dot{q} = v\left(\frac{d\epsilon}{dr} - \frac{P}{\rho^2} \frac{d\rho}{dr}\right) \, , \label{eq:ndw3}\end{gathered}$$ where $\dot{M}$ is the constant mass outflow rate, $r$ is the distance from the center of the proto-neutron star, $\rho$ is the (baryon) mass density, $v$ is the radial velocity of the wind, $P$ the pressure, $\rho_{\mathrm{tot}} = \rho (1 + \epsilon)$ the total energy density with $\epsilon$ as the specific internal energy. Pressure and specific internal energy can be approximated as $$\begin{aligned}
P &=& \frac{11\pi^2}{180} T^4 + \frac{\rho}{m_\mathrm{N}} T \, , \label{eq:eos1} \\
\epsilon &=& \frac{11\pi^2}{60} \frac{T^4}{\rho} + \frac{3}{2}\frac{T}{m_\mathrm{N}} \, , \label{eq:eos2}\end{aligned}$$ assuming that matter is composed of non-relativistic nucleons, relativistic electrons and positrons, and photon radiation [@Otsuki.etal:2000]. The nucleon rest mass is denoted $m_\mathrm{N}$. Using this full set of equations, pressure, temperature, velocity, and density can be derived as functions of the distance from the center of the proto-neutron star, given its mass $M_\mathrm{ns}$, radius $R_\mathrm{ns}$, and the neutrino and antineutrino luminosities and energies.
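Eqs. \[eq:eos1\]–\[eq:eos2\] are straightforward to evaluate numerically. A minimal sketch, assuming natural units ($\hbar = c = k_B = 1$, so $T$ and $m_\mathrm{N}$ in MeV and $\rho$ in MeV$^4$ — a unit choice made here for illustration, not stated in the paper):

```python
import math

M_N = 939.0  # nucleon rest mass in MeV (natural units, hbar = c = k_B = 1)

def pressure(rho: float, T: float) -> float:
    """Eq. (eq:eos1): relativistic (photon + e+/e- pair) term plus an
    ideal non-relativistic nucleon gas.  rho and result in MeV^4, T in MeV."""
    return (11.0 * math.pi**2 / 180.0) * T**4 + rho * T / M_N

def specific_internal_energy(rho: float, T: float) -> float:
    """Eq. (eq:eos2): dimensionless specific internal energy."""
    return (11.0 * math.pi**2 / 60.0) * T**4 / rho + 1.5 * T / M_N
```

As a quick consistency check, the relativistic contributions satisfy $\rho\,\epsilon_{\mathrm{rel}} = 3P_{\mathrm{rel}}$, as required for a relativistic gas, since $11\pi^2/60 = 3 \times 11\pi^2/180$.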
The net heating rate from neutrino interactions with matter, $\dot{q}$, takes into account neutrino and antineutrino absorption on nucleons, electron and positron capture on nucleons, neutrino and antineutrino scattering off electrons and positrons, and neutrino-antineutrino annihilation into electron-positron pairs and its inverse (for more details see Eqs. (8)-(16) of [@Otsuki.etal:2000]). These reactions depend on the luminosities and energies of the electron neutrinos and antineutrinos and on a third neutrino flavour that accounts for muon and tau neutrinos and antineutrinos. These neutrino quantities are all input parameters of the steady-state model. Since varying all of them is too expensive, we use the electron fraction to constrain them. We assume $\dot Y_{\mathrm{e}} = 0$, negligible electron/positron captures, and an initial composition consisting mainly of neutrons and protons. Then $Y_{\mathrm{e}}$ follows: $$Y_{\mathrm{e}} = \left[ 1 + \frac{L^n_{\bar{\nu}_{\mathrm{e}}} \langle\sigma_{\bar{\nu}_{\mathrm{e}}p}\rangle}{L^n_{\nu_{\mathrm{e}}} \langle\sigma_{\nu_{\mathrm{e}}n}\rangle}\right]^{-1}, \label{eq:ye}$$ where $L_{\nu}^n=L_{\nu} / \langle E_{\nu}\rangle$ is the number luminosity, assumed to be the same for electron neutrinos and antineutrinos. The electron neutrino luminosity and energy are kept constant ($\langle E_{\nu_{\mathrm{e}}}\rangle =16.66$ MeV and $L_{\nu_{\mathrm{e}}}= 2 \cdot 10^{51}$ ergs/s [@Arcones.etal:2007]). The cross sections for electron neutrino absorption on neutrons ($\langle\sigma_{\nu_{\mathrm{e}}n}\rangle$) and electron antineutrino absorption on protons ($\langle\sigma_{\bar{\nu}_{\mathrm{e}}p}\rangle$) depend on the neutrino and antineutrino energies. Therefore, for a fixed $\langle E_{\nu_{\mathrm{e}}}\rangle$ and a given $Y_{\mathrm{e}}$, one can calculate the antineutrino energy from Eq. \[eq:ye\].
With this $\langle E_{\bar{\nu}_{\mathrm{e}}}\rangle$ and the assumption of equal number luminosities, $L_{\bar{\nu}_{\mathrm{e}}}$ is fixed. The electron fraction is the main nucleosynthesis parameter because it determines the initial composition. For a given $Y_{\mathrm{e}}$, the electron neutrino energy and luminosity have a small impact on the abundances due to the formation of alpha particles, which is not considered in Eq. \[eq:ye\]. Therefore, keeping the electron neutrino energy and luminosity constant is justified and allows us to use the electron fraction as an input parameter.
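To get a feel for Eq. \[eq:ye\], one can invert it for the mean antineutrino energy under a leading-order approximation for the absorption cross sections, $\langle\sigma_{\nu_{\mathrm{e}}n}\rangle \propto (\langle E_{\nu_{\mathrm{e}}}\rangle+\Delta)^2$ and $\langle\sigma_{\bar{\nu}_{\mathrm{e}}p}\rangle \propto (\langle E_{\bar{\nu}_{\mathrm{e}}}\rangle-\Delta)^2$, with $\Delta=1.293$ MeV the neutron-proton mass difference. This simplified cross-section dependence is an assumption made here for illustration; the full cross sections include further corrections:

```python
import math

DELTA = 1.293  # neutron-proton mass difference in MeV

def ye_from_energies(E_nu: float, E_nubar: float) -> float:
    """Eq. (eq:ye) with equal number luminosities, using the assumed
    leading-order cross sections sigma ~ (E + DELTA)^2 and (E - DELTA)^2."""
    r = (E_nubar - DELTA) ** 2 / (E_nu + DELTA) ** 2
    return 1.0 / (1.0 + r)

def antineutrino_energy(E_nu: float, Ye: float) -> float:
    """Invert ye_from_energies: mean antineutrino energy (MeV)
    that yields the target electron fraction."""
    return DELTA + (E_nu + DELTA) * math.sqrt(1.0 / Ye - 1.0)

# With <E_nu_e> = 16.66 MeV fixed as in the text, lower target Ye
# requires a harder antineutrino spectrum:
for Ye in (0.49, 0.45, 0.40):
    print(Ye, round(antineutrino_energy(16.66, Ye), 2))
```

The monotonic trend (more neutron-rich ejecta require hotter antineutrinos) holds independently of the exact cross-section prescription.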
The solutions of Eqs. \[eq:ndw1\]–\[eq:ndw3\] depend on the mass outflow rate [@Duncan.etal:1986]. For the critical mass outflow ($\dot{M}= \dot{M}_\mathrm{crit}$), the velocity reaches the speed of sound, corresponding to the *wind* (or supersonic) solution. The so-called *breeze* (or subsonic) solutions are found for $\dot{M} < \dot{M}_{\mathrm{crit}}$. If $\dot{M} > \dot{M}_{\mathrm{crit}}$, one gets unphysical solutions in which the mass outflow experiences an infinite acceleration. $\dot{M}_\mathrm{crit}$ depends on the neutron star and neutrino properties.
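The three regimes amount to a comparison against $\dot{M}_\mathrm{crit}$; schematically (with `mdot_crit` assumed to be determined beforehand from the neutron star and neutrino properties, and a relative tolerance standing in for the numerical matching of the critical solution):

```python
def classify_solution(mdot: float, mdot_crit: float, rtol: float = 1e-6) -> str:
    """Classify a steady-state solution by its mass outflow rate.

    mdot_crit is an input here, not computed: in practice it follows
    from the neutron star and neutrino properties.
    """
    if mdot > mdot_crit * (1.0 + rtol):
        return "unphysical"   # outflow would experience infinite acceleration
    if mdot >= mdot_crit * (1.0 - rtol):
        return "wind"         # supersonic: velocity reaches the sound speed
    return "breeze"           # subsonic solution
```

In a numerical implementation, $\dot{M}_\mathrm{crit}$ is typically found by bisecting on $\dot{M}$ until the integrated velocity profile passes smoothly through the sonic point.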
We vary the input of the steady-state equations to cover all possible conditions of the neutrino-driven ejecta. The ranges of neutron star masses and radii have been chosen taking into account current observational and theoretical constraints on neutron stars and neutron matter (see e.g., [@Lattimer.etal:2016]). The values of the input quantities are given in Tab. \[tab:wind\_input\] together with the values from [@Otsuki.etal:2000] and [@Thompson.etal:2001] for comparison. Here, we have focused on neutron-rich conditions because we want to explore the weak r-process and charged-particle reactions. By changing the (anti)neutrino luminosities, energies, and $Y_{\mathrm{e}}$, one can also investigate proton-rich conditions. Note that in Tab. \[tab:wind\_input\] our values partially overlap with those of [@Otsuki.etal:2000] and [@Thompson.etal:2001]; this implies that we also find some extreme cases that produce an r-process. However, we do not consider such extreme trajectories because their conditions are inconsistent with current supernova models.
  ------------------------------- -------------- --------------- ----------------
                                  This work      Otsuki          Thompson
  ------------------------------- -------------- --------------- ----------------
  $M_{\mathrm{ns}}/M_{\odot}$     $0.8 - 2$      $1.2 - 2$       $1.4 - 2$
  $R_{\mathrm{ns}}/\mathrm{km}$   $9 - 30$       $10$            $10 - 20.3$
  $Y_{\mathrm{e}}$                $0.4 - 0.49$   $0.43 - 0.46$   $0.45 - 0.495$
  ------------------------------- -------------- --------------- ----------------

  : Comparison between the input parameters of the steady-state models used in this study and those of [@Otsuki.etal:2000] and [@Thompson.etal:2001].[]{data-label="tab:wind_input"}
The evolution of the wind temperature and density as a function of time (obtained by converting the radial profiles to time using the wind velocity) is shown in Fig. \[Fig.:Impact\_MnsRnsLv\_TempDens\] for different combinations of $M_{\mathrm{ns}}$, $R_{\mathrm{ns}}$, and $Y_{\mathrm{e}}$ (see Tab. \[tab:wind\_input\]). The most compact proto-neutron star ($M_\mathrm{ns} = 2M_{\odot}$ and $R_\mathrm{ns}=9\,\mathrm{km}$) results in a faster drop of the temperature and density. The highest temperatures and densities are obtained for the largest proto-neutron star radius and lowest proto-neutron star mass. The width of each band is due to the variation of the electron fraction.
![Overview of temperature (top panel) and density evolution (bottom panel) of the steady-state trajectories included in the present study (grey lines). Extreme trajectories calculated with $R_{\mathrm{ns}}=30$ km, $M_{\mathrm{ns}}=0.8$ $M_{\odot}$ and $R_{\mathrm{ns}}=9.0$ km, $M_{\mathrm{ns}}=2.0$ $M_{\odot}$ are shown by the red and blue bands, respectively. The spread of the red and blue bands is due to the different electron fractions ($0.40 \leq Y_{\mathrm{e}} \leq 0.49$).[]{data-label="Fig.:Impact_MnsRnsLv_TempDens"}](Overview_Traj_Temp_R_M.pdf "fig:"){width="0.99\linewidth"}\
![Overview of temperature (top panel) and density evolution (bottom panel) of the steady-state trajectories included in the present study (grey lines). Extreme trajectories calculated with $R_{\mathrm{ns}}=30$ km, $M_{\mathrm{ns}}=0.8$ $M_{\odot}$ and $R_{\mathrm{ns}}=9.0$ km, $M_{\mathrm{ns}}=2.0$ $M_{\odot}$ are shown by the red and blue bands, respectively. The spread of the red and blue bands is due to the different electron fractions ($0.40 \leq Y_{\mathrm{e}} \leq 0.49$).[]{data-label="Fig.:Impact_MnsRnsLv_TempDens"}](Overview_Traj_Dens_R_M.pdf "fig:"){width="0.99\linewidth"}
Figure \[Fig.:Impact\_MnsRnsLv\_EntropyTimescale\] illustrates the dependence of the entropy and expansion time scale (defined as in [@Qian.Woosley:1996]) on $M_\mathrm{ns}$ and $R_\mathrm{ns}$, assuming $Y_{\mathrm{e}}=0.45$. We chose a reference case, i.e., $M_{\mathrm{ns}}=1.4$ $M_{\odot}$ and $R_{\mathrm{ns}}=10$ km. As already explained in many wind studies (e.g., [@Cardall.Fuller:1997; @Otsuki.etal:2000; @Thompson.etal:2001; @Wanajo.etal:2001]), the wind entropy increases and the expansion time scale decreases as the proto-neutron star mass increases. Moreover, larger proto-neutron star radii lead to smaller entropies and longer expansion time scales. Therefore, a more compact proto-neutron star (i.e., more mass and/or smaller radius) ejects slightly less material due to the larger binding ($M/R$). In such a case, entropies are higher and expansion time scales shorter due to the larger neutrino energy deposition that is necessary to unbind matter [@Cardall.Fuller:1997; @Wanajo.etal:2001].
![Impact of the proto-neutron star mass ($M_{\mathrm{ns}}$) and radius ($R_{\mathrm{ns}}$) on the entropy (solid lines) and expansion time scale (dashed lines). The electron fraction is constant $Y_{\mathrm{e}}=0.45$. In the upper panel the proto-neutron star radius is kept constant and equal to 10 km, in the bottom panel the proto-neutron star mass is constant and equal to 1.4 $M_\odot$[]{data-label="Fig.:Impact_MnsRnsLv_EntropyTimescale"}](Impact_Mass_on_EntropyTimescale.pdf "fig:"){width="1.\linewidth"}\
![Impact of the proto-neutron star mass ($M_{\mathrm{ns}}$) and radius ($R_{\mathrm{ns}}$) on the entropy (solid lines) and expansion time scale (dashed lines). The electron fraction is constant $Y_{\mathrm{e}}=0.45$. In the upper panel the proto-neutron star radius is kept constant and equal to 10 km, in the bottom panel the proto-neutron star mass is constant and equal to 1.4 $M_\odot$[]{data-label="Fig.:Impact_MnsRnsLv_EntropyTimescale"}](Impact_Radius_on_EntropyTimescale.pdf "fig:"){width="1.\linewidth"}
Characteristic nucleosynthesis patterns {#sec:results_nuc}
=======================================
We have calculated the nucleosynthesis for 2696 steady-state trajectories using the WinNET reaction network [@Winteler:2012; @Winteler.etal:2012]. In the network, we consider 4412 nuclei from H to Ir, including neutron- and proton-rich nuclei as well as stable ones. The reaction rates are taken from the JINA Reaclib V2.0 library [@Cyburt.etal:2010]. We use the same theoretical weak interaction rates and neutrino reactions on nucleons as in Ref. [@Froehlich.etal:2006]. We start the calculation of every nucleosynthesis trajectory at 10 GK and assume nuclear statistical equilibrium (NSE) down to 8 GK. Weak reactions are not in equilibrium, and thus we calculate their impact on $Y_{\mathrm{e}}$ during the whole evolution. At early times, when the temperature is still high, matter is close to the proto-neutron star and consists mainly of neutrons and protons (photons dissociate any nucleus that forms). As matter expands and the temperature decreases, alpha particles form and later combine, producing seed nuclei[^1]. The subsequent evolution strongly depends on entropy, expansion time scale, and $Y_{\mathrm{e}}$.
{width="0.9\linewidth"}\
{width="0.9\linewidth"}\
{width="0.9\linewidth"}\
{width="0.9\linewidth"}
For typical supernova conditions, we find four characteristic abundance patterns, produced either mainly during the NSE evolution phase or through charged particle reactions (CPR) after NSE. Figure \[fig:nuc\_types\] gives an overview of elemental abundances at different temperatures together with the final abundances for the different groups. The four nucleosynthesis groups are defined by their $Y_{\mathrm{n}}/Y_{\mathrm{seed}}$ and $Y_{\alpha}/Y_{\mathrm{seed}}$ at $T \approx 3$ GK, following a similar strategy as in [@Wanajo.etal:2017]. These ratios are shown for the different groups in Fig. \[fig:YnYseedvsYalphaYseed\], where every point corresponds to a single trajectory evolution for typical supernova conditions. The red line (limiting the phase space towards low ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and low ${Y_\alpha/Y_\mathrm{seed}}$) links those steady-state solutions based on the lowest $M_{\mathrm{ns}}$ and largest $R_{\mathrm{ns}}$. Below this line, there are almost no physical solutions of the wind equations (Eqs. \[eq:ndw1\]–\[eq:ndw3\]). The few physical solutions found are discarded because they are based on massive proto-neutron stars with small radii, and thus excluded by causality, or, in a few cases, because they are subsonic breeze solutions. Additional trajectories corresponding to the most compact proto-neutron star (Tab. \[tab:wind\_input\]) are shown by a blue line (upper right corner). The trajectories for the two limiting cases are shown with the same colours as in Fig. \[Fig.:Impact\_MnsRnsLv\_TempDens\]. We have not included possible solutions with ${Y_\mathrm{n}/Y_\mathrm{seed}}\gtrsim 100$, since such a high amount of neutrons is not found in current simulations of standard neutrino-driven supernova explosions. Solutions with high ${Y_\alpha/Y_\mathrm{seed}}$ are not shown either; these can be reached by increasing $Y_{\mathrm{e}}$ towards proton-rich conditions.
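The two ratios that define the groups can be extracted directly from a network snapshot at $T\approx 3$ GK. A minimal sketch, assuming abundances keyed by $(Z, A)$ and counting every nucleus heavier than helium as a seed (an illustrative cut; the precise seed definition is given in the footnote above):

```python
def seed_ratios(Y: dict) -> tuple:
    """Return (Yn/Yseed, Yalpha/Yseed) from abundances Y[(Z, A)].

    Seeds are taken here as all nuclei with Z > 2 -- an assumed,
    illustrative cut, not necessarily the exact definition used
    in the network calculations.
    """
    Y_n = Y.get((0, 1), 0.0)       # free neutrons
    Y_alpha = Y.get((2, 4), 0.0)   # alpha particles
    Y_seed = sum(y for (Z, _A), y in Y.items() if Z > 2)
    return Y_n / Y_seed, Y_alpha / Y_seed

# Toy snapshot at T ~ 3 GK (numbers made up purely for illustration):
snapshot = {(0, 1): 1e-2, (1, 1): 5e-2, (2, 4): 1e-1,
            (26, 56): 1e-3, (28, 58): 1e-3}
n_seed, a_seed = seed_ratios(snapshot)
```

A trajectory can then be assigned to one of the four groups by comparing these two numbers against the boundaries visible in Fig. \[fig:YnYseedvsYalphaYseed\].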
![Different nucleosynthesis patterns in the $Y_{\alpha}/Y_{\mathrm{seed}}-Y_{\mathrm{n}}/Y_{\mathrm{seed}}$ plane. The colors describing the different nucleosynthesis groups are the same as in Sects. \[sec:nse1\]–\[sec:cpr2\]. The red and blue lines mark the constraints of the proto-neutron star masses and radii used in the steady-state model on $Y_{\alpha}/Y_{\mathrm{seed}}$ and $Y_{\mathrm{n}}/Y_{\mathrm{seed}}$. The red (blue) line corresponds to $M_{\mathrm{ns}}=0.8$ $M_{\odot}$ ($M_{\mathrm{ns}}=2.0$ $M_{\odot}$) and $R_{\mathrm{ns}}=30$ km ($R_{\mathrm{ns}}=9$ km). Each chain represents a constant $Y_{\mathrm{e}}$.[]{data-label="fig:YnYseedvsYalphaYseed"}](YnYseedvsYalphaYseed.pdf){width="1.\linewidth"}
![Dependencies of the groups NSE1 (upper left panel), NSE2 (upper right panel), CPR1 (lower left panel), and CPR2 (lower right panel) on the proto-neutron star mass and radius. The proto-neutron star masses take only the values marked on the axis, from 0.8 to 2.0 $M_{\odot}$ in intervals of 0.2 $M_{\odot}$. The different colors indicate various $Y_{\mathrm{e}}$ ranges.[]{data-label="fig:MRYe"}](Mass_Radius_YE.pdf){width="1\linewidth"}
![Different nucleosynthesis groups depending on entropy and expansion timescale.[]{data-label="fig:StauYe"}](Entropy_Timescale_YE.pdf){width="1\linewidth"}
In the following we describe the nucleosynthesis of every group. In addition to Fig. \[fig:YnYseedvsYalphaYseed\], the dependencies of the groups on the proto-neutron star mass and radius, and on entropy and time scale, are shown in Fig. \[fig:MRYe\] and Fig. \[fig:StauYe\], respectively. In these figures, every panel corresponds to a nucleosynthesis group and the different colors indicate various ranges of electron fractions. In Fig. \[fig:MRYe\] the points from models with different $Y_{\mathrm{e}}$ are slightly shifted to avoid hiding them where they overlap; the proto-neutron star masses are shown in intervals of 0.2 $\mathrm{M_{\odot}}$.
NSE1 {#sec:nse1}
----
The trajectories that lead to NSE1 patterns are produced by low-mass, large-radius proto-neutron stars (Fig. \[fig:MRYe\]). Therefore, the proto-neutron stars are not very compact and the wind entropy is relatively low (Fig. \[fig:StauYe\]). The $Y_{\mathrm{e}}$, with values between 0.40–0.43, is low compared to supernova simulations. Still, these conditions can mimic some early ejecta that have been exposed to neutrinos only briefly. Moreover, in the early explosion phase the proto-neutron star is still less massive and its radius is large, as is the case for the trajectories of the group NSE1.
In NSE1, the initial nucleosynthesis evolution is characterized by the reaction sequence $\alpha(\alpha n,\gamma)^{9}\mathrm{Be}$ followed by $^{9}\mathrm{Be}(\alpha,n)^{12}\mathrm{C}$, which bypasses the 3-$\alpha$ reaction bottleneck [@Woosley.Hoffman:1992]. This group is similar to the one identified by [@Wanajo.etal:2017] as NSE. Due to the small ${Y_\mathrm{n}/Y_\mathrm{seed}}$, the nucleosynthesis path evolves near the valley of stability on the neutron-rich side. At $T \approx 6$ GK matter moves along the Ca-Zn region, and reaches $Z \sim 40$ at $T \approx 5$ GK (see Fig. \[fig:Flux\_NSE1\], top panel), where the most abundant elements are Fe and Ni (left panel, first row, Fig. \[fig:nuc\_types\]). Between $T \approx 5-3$ GK, there is only a redistribution of matter by a few charged particle reactions, as seen in the middle panel of Fig. \[fig:Flux\_NSE1\]. The nucleosynthesis path cannot extend beyond the neutron shell closure $N=50$ because of the small amount of free neutrons and alpha particles. The few alpha particles are not sufficient to recombine and fill the abundances for $Z=3-19$ at low temperatures (see first row, Fig. \[fig:nuc\_types\]). Consequently, the major abundance peaks are already formed around $T \approx 5$ GK, at the end of NSE, and the subsequent evolution does not significantly change the abundance pattern. Therefore, the abundance distribution of the NSE1 group is mainly determined by binding energies and partition functions, and not so much by specific reactions. Finally, during the decay to stability (Fig. \[fig:Flux\_NSE1\], bottom panel), the abundance pattern changes slightly. The final abundance pattern (right panel, first row, Fig. \[fig:nuc\_types\]) exhibits characteristic Ni (not for all trajectories), Zn, and Kr peaks. Elements heavier than $Z \approx 38$ are not synthesized.
![Nucleosynthesis evolution of the NSE1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom). The arrows indicate the flow of the different reactions. The abundances are shown by different colors and stable nuclei are displayed by black dots.[]{data-label="fig:Flux_NSE1"}](Flux_NSEone_5GK.pdf "fig:"){width="0.8\linewidth"}\
![Nucleosynthesis evolution of the NSE1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom). The arrows indicate the flow of the different reactions. The abundances are shown by different colors and stable nuclei are displayed by black dots.[]{data-label="fig:Flux_NSE1"}](Flux_NSEone_3GK.pdf "fig:"){width="0.8\linewidth"}\
![Nucleosynthesis evolution of the NSE1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom). The arrows indicate the flow of the different reactions. The abundances are shown by different colors and stable nuclei are displayed by black dots.[]{data-label="fig:Flux_NSE1"}](Flux_NSEone_2GK.pdf "fig:"){width="0.8\linewidth"}
NSE2 {#sec:nse2}
----
NSE2 patterns are obtained for a range of proto-neutron star compactnesses, but, as in NSE1, the patterns are still dominated by low-mass, large-radius proto-neutron stars (Fig. \[fig:MRYe\]). The range of possible entropies is larger than in NSE1 (Fig. \[fig:StauYe\]). The main difference, however, is that most of the trajectories have relatively high $Y_{\mathrm{e}}$, which results in very low ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and high ${Y_\alpha/Y_\mathrm{seed}}$ (Fig. \[fig:YnYseedvsYalphaYseed\]). Under such conditions, the nucleosynthesis path flows through the proton-rich side, as described below.
The final abundance pattern of the NSE2 group exhibits a characteristic peak at $Z=28$ and, for some trajectories, also at $Z=26$ and/or $Z=30$ (see second row, Fig. \[fig:nuc\_types\]). Elements heavier than $Z=30$ are only formed for $Y_{\mathrm{n}}/Y_{\mathrm{seed}} > 10^{-9}$. In contrast to NSE1, there are some changes as the temperature drops. As shown in Fig. \[fig:nuc\_types\] (second row, left panel), at $T \approx 5$ GK matter is accumulated mainly between $Z=22-30$ and the most abundant elements are Fe and Ni. The neutron abundances are very low, so the nucleosynthesis path moves away from the valley of stability on the proton-rich side via $(\mathrm{p},\gamma)$ and $(\mathrm{p},\mathrm{n})$ reactions (see Fig. \[fig:Flux\_NSE2\], top panel). For temperatures between $T \approx 4-3$ GK, matter is shifted from Fe to Ni by $(\mathrm{p},\gamma)$, $(\mathrm{p},\mathrm{n})$, and $(\alpha,\mathrm{p})$ reactions (see Fig. \[fig:Flux\_NSE2\], middle panel). Nickel and zinc act as bottlenecks in the nucleosynthesis evolution and are thus the most abundant elements. When the temperature drops below 2 GK, there is only a redistribution of matter (Fig. \[fig:Flux\_NSE2\], bottom panel).
![Evolution of the abundances of group NSE2 at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_NSE2"}](Flux_NSEtwo_5GK.pdf "fig:"){width="0.8\linewidth"}\
![Evolution of the abundances of group NSE2 at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_NSE2"}](Flux_NSEtwo_3GK.pdf "fig:"){width="0.8\linewidth"}\
![Evolution of the abundances of group NSE2 at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_NSE2"}](Flux_NSEtwo_2GK.pdf "fig:"){width="0.8\linewidth"}
CPR1 {#sec:cpr1}
----
The group CPR1 marks a transition from groups NSE1 or NSE2 to CPR2 (Fig. \[fig:YnYseedvsYalphaYseed\]). In this group, proto-neutron stars can be massive and several trajectories come from small-radius proto-neutron stars (Fig. \[fig:MRYe\]). The more compact proto-neutron stars result in higher entropies than in groups NSE1 and NSE2 (Fig. \[fig:StauYe\]).
For this group, the abundance evolution and final abundances are shown in the third row of Fig. \[fig:nuc\_types\]. The nucleosynthesis path proceeds through a series of $(\alpha,\mathrm{n})$ and $(\mathrm{p},\mathrm{n})$ reactions on the neutron-rich side of stability. As the temperature drops from $T \approx 6$ GK to $T \approx 5$ GK, the nucleosynthesis path moves from the Ca-Zn region to nuclei around $Z=39$, with some $(\alpha,\mathrm{n})$ and $(\mathrm{p},\mathrm{n})$ reactions frozen out (see Fig. \[fig:Flux\_CPR1\], top panel). At $T \approx 5$ GK, the most abundant elements are Fe, Ni, and nuclei at $N=50$ (left panel, third row, Fig. \[fig:nuc\_types\]). When the temperature decreases to $T \approx 4$ GK, matter is redistributed by $(\mathrm{p},\mathrm{n})$ and $(\mathrm{p},\gamma)$ reactions. Most abundant are Fe, Co, Ni, Cu, Zn, and nuclei at $N=50$. At $T=3$ GK, the path stays along stable nuclei and matter has accumulated at $N=50$ (Fig. \[fig:Flux\_CPR1\], middle panel) because the alpha abundance is not large enough to overcome the negative Q-values of the $(\alpha,\mathrm{n})$ reactions of those nuclei. However, the amount of alpha particles is still enough to increase the abundances for $Z=6-20$ via alpha-capture reactions (third row, Fig. \[fig:nuc\_types\]). At lower temperatures, there is only a redistribution of matter and decay to stability (Fig. \[fig:Flux\_CPR1\], bottom panel). The overall final abundance pattern has distinctive peaks at Ni, Zn, and Sr (right panel, third row, Fig. \[fig:nuc\_types\]). For some steady-state trajectories, there is also an abundance peak at Kr. Elements heavier than Zr are not formed due to the small $Y_{\alpha}/Y_{\mathrm{seed}}$ and the negative Q-values of some $(\alpha,\mathrm{n})$ reactions at $N=50$. Thus, the final abundances are mainly determined by the Q-values of $(\alpha,\mathrm{n})$ reactions at $N=50$ (see also [@Hoffman.etal:1996; @Wanajo:2006]).
![Abundance flows of the CPR1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR1"}](Flux_CPRone_5GK.pdf "fig:"){width="0.8\linewidth"}\
![Abundance flows of the CPR1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR1"}](Flux_CPRone_3GK.pdf "fig:"){width="0.8\linewidth"}\
![Abundance flows of the CPR1 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR1"}](Flux_CPRone_2GK.pdf "fig:"){width="0.8\linewidth"}
CPR2 {#sec:cpr2}
----
This is the group with the most extreme astrophysical conditions: some trajectories reach high entropies (Fig. \[fig:StauYe\]) and thus have a relatively high ${Y_\mathrm{n}/Y_\mathrm{seed}}$ (Fig. \[fig:YnYseedvsYalphaYseed\]). Most trajectories have small $Y_{\mathrm{e}}$. This group is therefore characterized by a nucleosynthesis evolution on the neutron-rich side, and the abundances can reach heavier elements than in the other groups. The conditions indicated by Fig. \[fig:MRYe\] and Fig. \[fig:StauYe\] can be found in some early, neutron-rich ejecta [@Wanajo.etal:2011a; @Wanajo.etal:2013a; @Wanajo.etal:2013c; @Wanajo.etal:2017], when the proto-neutron star is still large and not very massive, and perhaps also during the wind evolution if the conditions are neutron-rich.
Around $T \approx 6$ GK the nucleosynthesis path proceeds close to stability via alpha-capture reactions, especially $(\alpha,\mathrm{n})$ reactions. Most of the matter is accumulated between $Z \approx 20-30$. When the temperature decreases to $T \approx 5$ GK, the path has reached $Z=36$ (bottom row, Fig. \[fig:nuc\_types\]). The most abundant nuclei lie at the neutron shell closure $N=50$, away from the valley of stability (Fig. \[fig:Flux\_CPR2\], top panel). At $T \approx 4$ GK, there are no free protons left. Between $T \approx 4-3$ GK, the neutron and alpha abundances are large and the nucleosynthesis flow can overcome the negative Q-values of $(\alpha,\mathrm{n})$ reactions on $N=50$ nuclei, moving matter up to $Z \sim 42$ (Fig. \[fig:Flux\_CPR2\], middle panel). The most abundant elements are Kr, Rb, and Sr (see panel for 3 GK, bottom row, Fig. \[fig:nuc\_types\]). The substantial changes in the overall abundance pattern as the temperature decreases from 5 GK to 3 GK are remarkable. At $T=2$ GK, the most abundant elements do not change and the abundances are redistributed within isotopic chains (Fig. \[fig:Flux\_CPR2\], bottom panel). It is important to mention, however, that the abundance pattern for this group varies for different steady-state trajectories (i.e., different $Y_{\mathrm{n}}/Y_{\mathrm{seed}}$ and $Y_{\alpha}/Y_{\mathrm{seed}}$). The overall final abundance pattern exhibits peaks at Kr (differently pronounced for different steady-state trajectories) and Zr. We find various patterns for Kr, Rb, Sr, and Y. In comparison to the other nucleosynthesis groups, heavier elements are synthesized (see bottom row, right panel, Fig. \[fig:nuc\_types\]). In addition, the heaviest elements vary for different steady-state trajectories, and thus depend on $Y_{\alpha}/Y_{\mathrm{seed}}$ and $Y_{\mathrm{n}}/Y_{\mathrm{seed}}$.
![Flux diagram for the CPR2 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR2"}](Flux_CPRtwo_5GK.pdf "fig:"){width="0.8\linewidth"}\
![Flux diagram for the CPR2 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR2"}](Flux_CPRtwo_3GK.pdf "fig:"){width="0.8\linewidth"}\
![Flux diagram for the CPR2 group at $T \approx 5$ GK (top), $T \approx 3$ GK (middle), and $T \approx 2$ GK (bottom).[]{data-label="fig:Flux_CPR2"}](Flux_CPRtwo_2GK.pdf "fig:"){width="0.8\linewidth"}
In this group there is more variability in the patterns than in the other groups. However, the trajectories assigned to this group have in common that the nucleosynthesis evolves beyond $N=50$ and nuclei heavier than $Z \sim 40$ are formed. Moreover, only for the group CPR2 do individual reactions, especially $(\alpha,\mathrm{n})$ reactions, play a critical role in determining the abundances; combined with the fact that the reaction rates are rather uncertain [@Mohr:2016; @Pereira.Montes:2016; @Bliss.etal:2017], this leads to variations in the final abundances. In a Monte Carlo study [@Blissetal.inprep] we use representative abundances of group CPR2 to identify the most relevant $(\alpha,\mathrm{n})$ reactions.
Conclusions {#sec:conclusions}
===========
We have systematically studied the neutron-rich neutrino-driven wind based on a steady-state model. We have chosen the input parameters $M_{\mathrm{ns}}$, $R_{\mathrm{ns}}$, and $Y_{\mathrm{e}}$ in agreement with observations and theoretical calculations of neutron stars and supernovae. We have identified four characteristic nucleosynthesis patterns that can be separated by their ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and ${Y_\alpha/Y_\mathrm{seed}}$ values once the temperature in the outgoing mass shell has decreased to 3 GK.
The abundance distributions of the NSE1 and NSE2 groups are mainly determined during nuclear statistical equilibrium. The position of the nucleosynthesis path relative to the valley of stability differs between the two groups. Due to the small ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and ${Y_\alpha/Y_\mathrm{seed}}$, the distribution changes only slightly after the breakdown of NSE. The final abundances therefore depend on binding energies and partition functions rather than on specific reactions. The nucleosynthesis group CPR1 describes the transition from the groups NSE1 or NSE2 to the group CPR2. Charged-particle reactions redistribute the abundances after the end of NSE, but ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and ${Y_\alpha/Y_\mathrm{seed}}$ are not large enough to overcome the neutron shell closure $N=50$. The abundances are thus largely given by the Q-values of $(\alpha,\mathrm{n})$ reactions at $N=50$. The abundance patterns within a group are rather similar for different trajectories, indicating a comparable nucleosynthesis evolution. This is especially true for the groups NSE1, NSE2, and CPR1. In contrast, the abundance distributions (especially the heaviest elements) of group CPR2 vary for different ${Y_\mathrm{n}/Y_\mathrm{seed}}$ and ${Y_\alpha/Y_\mathrm{seed}}$. Individual charged-particle reactions can therefore critically influence the abundance evolution.
Our conclusions can be extended to neutrino-driven ejecta even if these are not supersonic. This work will therefore help to provide an overview of the nucleosynthesis in supernova models without detailed post-processing calculations. Typical trajectories and the corresponding abundances for each group are provided on our web site [nuc-astro.eu/](nuc-astro.eu/) under `Resources`. These can be used to compare to observations and to explore the impact of the nuclear physics input on the supernova nucleosynthesis.
Acknowledgments {#acknowledgments .unnumbered}
===============
J.B., M.W., and A.A. are supported by the Helmholtz-University Young Investigator grant No. VH-NG-825, Deutsche Forschungsgemeinschaft through SFB 1245, and ERC 677912 EUROPIUM. J.B. thanks the MGK of the SFB 1245 and the JINA Center for the Evolution of the Elements for the research stay at Michigan State University. F.M. and J.P. are supported by Michigan State University and the Facility for Rare Isotope Beams and were funded in part by the NSF under Contracts No. PHY-1102511 (NSCL) and PHY-1430152 (JINA Center for the Evolution of the Elements).
[^1]: Here, the seed abundance $Y_{\mathrm{seed}}$ is defined as the sum of the abundances of all nuclei heavier than helium.
$^{1,3}$[**Physics & Applied Mathematics Unit\
Indian Statistical Institute\
Calcutta-700 035, India\
e-mail : [email protected]\
e-mail : [email protected]** ]{}\
$^{2}$[**Center for Earth Observing and Space Research\
and School of Computational Sciences\
George Mason University\
Fairfax, VA 22030-4444\
USA.\
e-mail : [email protected]\
e-mail : [email protected]**]{}
[**Abstract**]{}
The linearity of the Hubble relationship (i.e., between $m$ and $z$) has been tested for galaxies and supernovae at low redshifts. We have studied this relationship for quasars using data taken from the Veron-Cetty catalogue (2003). The quasar data from the Veron-Cetty catalogue appear to be truncated. The data have been analyzed using various statistical methods suitable for truncated data. This analysis shows linearity (in $\log z$) of the Hubble law for very small $z$ but non-linearity at high redshift. This sheds new light not only on quasar astronomy but also on the cosmological debate.
**Introduction**
================
The general relationship between the distance of cosmic sources and the corresponding redshift allows one to establish important properties of the universe, which can be used to probe its spatial geometry and especially the underlying cosmological principles. One such relationship is the m-z relation between the apparent magnitude of a source and its redshift. Astronomers normally work with the apparent magnitude $m$ and the absolute magnitude $M$. The distance modulus is defined as the difference between $m$ and $M$ and is related to the redshift $z$ and the Hubble constant $H_0$. By analyzing observational data, Hubble formulated a law which states that galaxies appear to recede with a velocity $v$ proportional to their distance $d$ from the observer: $$v = H_0 d.$$ This is known as the Hubble law. This relation can also be derived from cosmological theory if the universe is assumed to be homogeneous and isotropic. Various authors$^{1}$ have discussed the limits of validity of the Hubble relation. In the above relation, the distance $d$ should be large enough that the recession velocity exceeds the radial component of the peculiar velocities: for example, these can be up to $1000\ km\ s^{-1}$ for galaxies inside clusters, which puts the restriction $d \geq 10h^{-1}$ Mpc. This means the redshift $z$ has to be much greater than $10^{-2}$. However, the distance should not be so large that the recession velocity exceeds the speed of light. Crudely speaking, one can use the above relation for $d << 300h^{-1}$ Mpc, or $z << 10^{-1}$. In this regime the distance is given by $d \simeq \frac{c z}{H_0} \simeq 3000\, z\, h^{-1}$ Mpc for $10^{-2} \leq z \leq 10^{-1}$, which may be considered a first-order approximation to the formula for the luminosity distance as a function of redshift $z$ in the Friedmann model. Two luminosities are relevant, the absolute and the apparent one. Normally astronomers work with the absolute magnitude $M$ and the apparent magnitude $m$; the quantity $m-M$ is known as the distance modulus and is related to the luminosity distance. Therefore, the relation between the distance modulus and $\log z$ is used to test the Hubble law. The linearity of the Hubble relationship has been tested for galaxies and supernovae at low redshifts$^{3}$. Statistical analyses have been carried out on various samples of galaxies. Sometimes the samples are constructed subjectively, and often they are taken from Abell's$^{2}$ catalogue, which assumes Hubble's law as a selection criterion. Hoessel et al.$^{3}$ took samples of 116 galaxies from the Abell catalogue which support the Hubble law. Segal and Nicoll$^{4}$ took infrared astronomical satellite galaxy samples$^{5}$ and predicted an alternative redshift-distance law, $z \sim r^p$ with $p = 1,2,3$. Other attempts$^{6}$ have been made with the IRAS $1.2$ Jy Redshift Survey. Efron and Petrosian$^{7}$ studied the viability of various statistical tests for truncated data in connection with redshift surveys of galaxies and quasars. From the plot of redshifts $z_i$ and log luminosities ($y_i$) for $210$ quasars, they found the data to be doubly truncated.
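As a quick numeric illustration of the low-redshift approximation above, the following sketch (with an assumed $H_0 = 70\ km\ s^{-1} Mpc^{-1}$, which is our choice and not a value quoted in the text) computes $d \simeq cz/H_0$ and the corresponding distance modulus $\mu = m - M = 5\log_{10}(d/\mathrm{Mpc}) + 25$:

```python
import math

C_KM_S = 2.998e5   # speed of light [km/s]
H0 = 70.0          # assumed Hubble constant [km/s/Mpc]

def low_z_distance_mpc(z):
    """First-order luminosity distance d ~ c z / H0, valid for z << 1."""
    return C_KM_S * z / H0

def distance_modulus(d_mpc):
    """mu = m - M = 5 log10(d / 10 pc) = 5 log10(d_Mpc) + 25."""
    return 5.0 * math.log10(d_mpc) + 25.0

d = low_z_distance_mpc(0.01)   # ~ 42.8 Mpc
mu = distance_modulus(d)       # ~ 33.2 mag
```

For $z = 0.01$, at the lower end of the quoted validity range, this gives $d \approx 43\,h^{-1}$ Mpc, consistent with the $d \geq 10h^{-1}$ Mpc restriction above.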
Here, truncation means that no information is available about a pair $(y_i,z_i)$ if it falls outside a region $R_i$: due to experimental constraints, the distribution of each $y_i$ is truncated to a known interval $R_i$ depending on $z_i$. Truncated data may arise in various experiments. McLaren et al.$^{8}$ analyzed experimental results on the red blood cell volume distribution which led to truncated data. In Section II we describe nonparametric methods to determine whether or not the apparent magnitudes $m$ are independent of the redshifts $z_i$ for the truncated data taken from the Veron-Cetty catalogue of quasars. The cosmological implications are discussed in Section III.
**Statistical Analysis of Quasar Data from the Veron-Cetty Catalogue**
===================================================================
One of the main issues in analyzing astronomical data is the following statistical question: is a sample of observed points $(z_i, m_i)$ in the truncated data set of a quasar survey consistent with the hypothesis ${\bf H_0}$ that $z_i$ and $m_i$ are statistically independent? Efron and Petrosian$^{7}$ investigated this issue in detail using a small sub-sample of quasar data. Like other redshift surveys, the Veron-Cetty catalogue$^{9}$ provides a pair of measurements $(z_i,m_i)$. Various types of observational biases are ignored. One of the most common biases is introduced by the limiting magnitude of the survey. We can write the data set as $(z_i,m_i)$ for $i = 1,2,3,\dots,n$ with $m_i\leq m_0$. The absolute magnitude $M_i$ can be estimated if we assume a particular cosmological model. The data set $(z_i,M_i)$ can then be re-expressed as satisfying a truncation relationship $$M_i \leq m_0 - 5 \log d + C,$$ where $C$ is a constant which can be set to zero. Efron et al. analyzed the data from a redshift survey of 492 galaxies; the magnitude limit $m_0 = 21.5$ of this survey leads to the truncation boundary $$M_i \leq 21.5 - 5 \log z_i.$$
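As a minimal illustration of such a truncation boundary, the sketch below checks whether a pair $(z, M)$ lies inside the observable region for the 492-galaxy survey quoted above ($m_0 = 21.5$); we take the log as $\log_{10}$, as is usual for the distance modulus, and the function name is ours:

```python
import math

M0 = 21.5  # survey magnitude limit quoted in the text

def observable(z, M):
    """True if (z, M) satisfies the truncation boundary
    M <= m0 - 5 log10(z) for the 492-galaxy survey."""
    return M <= M0 - 5.0 * math.log10(z)

# a bright galaxy (M = -20) at z = 0.05 passes the cut,
# while a faint one (M = 25) at z = 1 does not
inside = observable(0.05, -20.0)   # True
outside = observable(1.0, 25.0)    # False
```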
The scatter plot of $m$ vs. $\log z$ hints that there is truncation in the data. Here, truncation is used in the sense that the observations $(z_i,m_i)$ are observable only if some condition or mathematical relation is satisfied, say $\log z \geq a m + b$ for some $a$ and $b$. The scatter plot of the Veron-Cetty data suggests that there is at least one-sided truncation. Taking $a = 3/7$ and $b = -64/7$, we found that there are only $18$ data points among the $48683$ for which $\log z \geq 3/7\, m - 64/7$. We discard these $18$ data points, a number negligible compared to the size of the data set, and take $\log z \leq 3/7\, m - 64/7$ as the truncation relationship. In the next step we use the test of independence for truncated data as elaborated by Efron and Petrosian. Suppose the data consist of a random sample of $n$ pairs from a joint distribution, $${\rm data} = \{(x_i,y_i),\ i = 1,2,\dots,n\}.$$ For truncated data we assume that the pairs $(x,y)$ are observable only if they satisfy the truncation relationship $y \leq u(x)$, where $u(x)$ is a monotonic function of $x$. Following Efron and Petrosian we took $x = m$, $y = -\log z$, $u(x) = (-3/7)x + 64/7$. The test is to accept independence if $|t_w({\rm data})| \leq 1.96$, and we take the rejection probability of the permutation test to be approximately $0.05$. If we take $w_i = 1$ for all $i$, the test statistic is $|t_w({\rm data})| = 704.162$; with $$w_i = \frac{x_i-x_{\rm min}}{u_i - u_{\rm min}},$$ which yields the locally most powerful test, $|t_w({\rm data})| = 875.594$. In both cases the extremely large value of $t_w({\rm data})$ clearly rejects the hypothesis of independence. Here the $p$-value $\sim 0$, where the $p$-value is the maximum level of significance under which the null hypothesis (here, independence of the two variables) is accepted. So $z$ and $m$ are not independent.
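The exact weighted statistic $t_w$ is defined in Efron and Petrosian$^{7}$; as a rough sketch of the underlying idea, the code below implements a simplified, *unweighted* conditional-rank (Tsai-type) test for pairs observed only when $y_i \leq u(x_i)$. The risk-set construction, function name, and demo data are ours, not the paper's; the statistic is approximately $N(0,1)$ under independence, so $|t| > 1.96$ rejects at the $\sim 5\%$ level, matching the acceptance rule quoted above.

```python
import numpy as np

def tsai_rank_test(x, y, u):
    """Conditional-rank test of quasi-independence for pairs observed
    only when y_i <= u(x_i), u monotone.  Under independence the
    returned statistic is approximately standard normal.  This is a
    simplified unweighted variant, not Efron & Petrosian's t_w."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    ubound = u(x)                      # truncation limit of each point
    K, V = 0.0, 0.0
    for i in range(len(x)):
        # risk set: points j whose observable region contains y_i,
        # restricted to y_j <= y_i (point i itself is included)
        J = np.where((y <= y[i]) & (y[i] <= ubound))[0]
        n = len(J)
        if n < 2:
            continue                   # singleton sets carry no information
        r = 1 + np.sum(x[J] < x[i])    # rank of x_i within the risk set
        K += r - (n + 1) / 2.0         # observed minus expected rank
        V += (n * n - 1) / 12.0        # variance of a uniform rank
    return K / np.sqrt(V) if V > 0 else 0.0

# toy check: perfectly correlated (x, y) inside the allowed region is
# overwhelmingly rejected (large positive statistic)
x_demo = np.linspace(0.0, 6.0, 100)
y_demo = x_demo.copy()
u_line = lambda t: -(3.0 / 7.0) * t + 64.0 / 7.0   # boundary used in the paper
stat = tsai_rank_test(x_demo, y_demo, u_line)      # ≈ 14.7, rejects independence
```

In the paper's setting one would call the test with $x = m$, $y = -\log z$, and `u_line` as above.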
In the next step, we obtain the best fit to these data using regression analysis and then investigate the conditions under which we recover the Hubble relation. The scatter plot of $z$ vs. apparent magnitude $m$ is illustrated in Figure 1(a,b). Our regression analysis shows that we can use the following relation between $m$ and $z$: $$\log (m-12) = -4.528 + 16.542 {z}^{1/4} - 13.891 {z}^{1/2} + 3.884 {z}^{3/4}$$ for $z \in (0,7)$, i.e., for the whole range of $z$.
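Read with base-10 logs, the fit above gives unphysically large magnitudes, so in the sketch below we assume the log is a natural logarithm, which at $z=1$ yields $m \approx 19.4$, inside the observed range; this interpretation is our assumption, not stated in the text.

```python
import math

def m_of_z(z):
    """Mean apparent magnitude from the fitted relation
    log(m - 12) = -4.528 + 16.542 z^(1/4) - 13.891 z^(1/2) + 3.884 z^(3/4),
    assuming the log is natural."""
    expo = (-4.528 + 16.542 * z ** 0.25
            - 13.891 * z ** 0.5 + 3.884 * z ** 0.75)
    return 12.0 + math.exp(expo)

m1 = m_of_z(1.0)   # ≈ 19.4
```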
The following observation motivated us to analyze the data in a different manner. We observed that in the region $[0.2950, 2.9950]$, given the truncation, the conditional distribution of the variable $$\frac{(m -12)}{(f(z) -12)}$$ can be well approximated by a beta distribution with parameters $\alpha(z)$ and $\beta(z)$, which are functions of $z$, where $$f(z) = (7/3)\log(z) + 64/3.$$ We carried out this analysis for $z = 0.2950, 0.3050, 0.3150, \dots, 2.9950$, and used this information to calculate the expected value of $m$ given $z$ at these values of $z$. We then performed the usual regression analysis to find $E (m|z) = 19.484 + 0.886 \log (z) - 0.783{(\log(z))}^2$ for the region $[0.2950, 2.9950]$. We found the $95\%$ tolerance interval with coverage probability $0.95$ in a similar fashion. An $A\%$ tolerance interval with coverage probability $\alpha$ means that $A\%$ of future observations will fall in the interval with probability $\alpha$. In the specified region the interval is $(m_l(z), m_u(z))$ for given $z$, where
$$m_u = 16.8 + 7.6263 z - 4.162 z^2 + 0.80 z^3$$ and $$m_l = 12.51 + 5.576 z - 1.686 z^2.$$ They are shown in Fig. 2. For the region $(0, 0.2950)$ we use general regression techniques to obtain the prediction equation $m = 20.060 + 2.139\log z$ and the prediction interval
$$m = 20.060 + 2.139 \log z \pm 1.9631 \sqrt{0.4573}\,\sqrt{1.0122 - 0.1132 \log z + 0.2937 {(\log z)}^2}.$$ Here, the prediction interval means that, given $z$, the value of $m$ will fall in that interval with probability $0.95$.
**Possible Implications**
===========================
Our analysis of the data from the Veron-Cetty catalogue shows that the Hubble relation between $m$ and $\log z$ is valid for small $z$, i.e., for the range $z \in [0, 0.295]$. For higher values of $z$, we obtain a different relation from the regression analysis. Conventionally, the Hubble relation is explained by the Doppler mechanism for the shift of spectral lines. The deviation from the Hubble relation may thus be due to some other mechanism for the redshift. It may be pointed out that environmental effects on the quasars could be taken into consideration to explain this deviation. This kind of environmental effect has been modeled by a Doppler-like mechanism in the Dynamic Multiple Scattering (DMS) theory$^{10}$. DMS is essentially based on the idea of the correlation-induced mechanism discovered by Wolf$^{11}$. Finally, we have plotted another curve in Fig. 3, $V_{\rm eff} = V^* = m - M$ vs. $z$. This figure clearly indicates the existence of three different clusters of quasars. It is possible to identify these classes of quasars. A detailed study of these clusters and their implications will be presented in a subsequent paper.
References
==========
1. Coles P. and Lucchin F. (2002) [*Cosmology: The Origin and Evolution of Cosmic Structure*]{}, 2nd Edition, John Wiley & Sons, Ltd., p. 77.
2. Abell G.O. (1958) Astrophys. J. Suppl. [**3**]{}, 211-288.
3. Hoessel J.G., Gunn J.E. and Thuan T.X. (1980) Astrophys. J. [**241**]{}, 486-492.
4. Segal I.E. and Nicoll J.F. (1992) Proc. Nat. Acad. Sci. (USA) [**89**]{}, 11669-11672.
5. Saunders W., Rowan-Robinson M. et al. (1990) Mon. Not. R. Astron. Soc. [**242**]{}, 318-337.
6. Koranyi D.M. and Strauss M.A. (1996) Testing the Hubble Law with the IRAS 1.2 Jy Redshift Survey, http://xxx.lanl.gov/astro-ph/9610034.
7. Efron B. and Petrosian V. (1992) Astrophys. J. [**399**]{}, 345-352.
8. McLaren C., Wagstaff M., Brittenham G., Jacobs A. (1991) Biometrics [**47**]{}, 607-708.
9. Veron-Cetty M.P. and Veron P. (2001), 10th Edition.
10. Datta S., Roy S., Roy M. and Moles M. (1998) Phys. Rev. A [**58**]{}, 720;\
Roy S., Kafatos M. and Datta S. (1999) Phys. Rev. A [**60**]{}, 273.
11. Wolf E. and James D.F.V. (1996) Rep. Prog. Phys. [**59**]{}, 771.
Figure 1(a). Scatter plot of z vs. m\
Figure 1(b). Scatter plot of m vs. log(z)\
Figure 2. Regressions for m vs z.\
Figure 3. Scatter Plot of Veff vs z.